2 research outputs found

    Sparse Quantized Spectral Clustering

    Given a large data matrix, sparsifying, quantizing, and/or performing other entry-wise nonlinear operations can have numerous benefits, ranging from speeding up iterative algorithms for core numerical linear algebra problems to providing nonlinear filters to design state-of-the-art neural network models. Here, we exploit tools from random matrix theory to make precise statements about how the eigenspectrum of a matrix changes under such nonlinear transformations. In particular, we show that very little change occurs in the informative eigenstructure even under drastic sparsification/quantization, and consequently that very little downstream performance loss occurs with very aggressively sparsified or quantized spectral clustering. We illustrate how these results depend on the nonlinearity, we characterize a phase transition beyond which spectral clustering becomes possible, and we show when such nonlinear transformations can introduce spurious non-informative eigenvectors.
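
    As a rough illustration of the pipeline this abstract describes, here is a minimal sketch: spectral clustering on a similarity matrix whose entries have been simultaneously sparsified and quantized by an entry-wise nonlinearity. The toy data, the threshold tau, and the {-1, 0, +1} quantizer are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch (illustrative, not the paper's exact experiment):
# spectral clustering after aggressive entry-wise sparsification/quantization.
import numpy as np
from scipy.sparse.linalg import eigsh
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy two-cluster data: rows are observations.
X = np.vstack([rng.normal(-1.0, 1.0, size=(100, 50)),
               rng.normal(+1.0, 1.0, size=(100, 50))])

# Dense similarity (Gram) matrix.
K = X @ X.T / X.shape[1]

# Entry-wise nonlinearity: keep only the sign of entries whose magnitude
# exceeds a threshold, i.e. simultaneous sparsification + 1-bit quantization.
tau = 0.5  # illustrative threshold
Kq = np.sign(K) * (np.abs(K) > tau)
np.fill_diagonal(Kq, 0.0)  # drop the uninformative diagonal

# Spectral step: leading eigenvectors of the quantized similarity matrix.
vals, vecs = eigsh(Kq, k=2, which="LA")

# Cluster the rows of the eigenvector embedding.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vecs)
print(labels[:10], labels[-10:])
```

    Despite the 1-bit quantization, the leading eigenvectors of Kq retain the cluster structure, which is the phenomenon the abstract quantifies.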

    A kernel random matrix-based approach for sparse PCA

    In this paper, we present a random matrix approach to recover sparse principal components from n observations of p-dimensional vectors. Specifically, considering the large-dimensional setting where n, p → ∞ with p/n → c ∈ (0, ∞), and under Gaussian vector observations, we study kernel random matrices of the type f(Ĉ), where f is a three-times continuously differentiable function applied entry-wise to the sample covariance matrix Ĉ of the data. Then, assuming that the principal components are sparse, we show that taking f such that f(0) = f′(0) = 0 allows for powerful recovery of the principal components, thereby generalizing previous ideas involving more specific f functions such as the soft-thresholding function.
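
    A minimal sketch of the idea, under illustrative assumptions: a planted sparse spike, soft-thresholding (which satisfies f(0) = f′(0) = 0) as the entry-wise f, and an arbitrary threshold t. This is a sketch of the general recipe, not the authors' exact estimator or tuning.

```python
# Minimal sketch (illustrative assumptions): sparse PCA via an entry-wise
# function f with f(0) = f'(0) = 0 applied to the sample covariance matrix.
import numpy as np

rng = np.random.default_rng(0)
n, p = 400, 200

# Planted sparse principal component: only the first 10 coordinates are active.
u = np.zeros(p)
u[:10] = 1.0 / np.sqrt(10)
beta = 2.0  # spike strength (illustrative)

# Gaussian observations with a rank-one sparse spike, rows of X.
Z = rng.normal(size=(n, 1))
X = np.sqrt(beta) * Z @ u[None, :] + rng.normal(size=(n, p))

C_hat = X.T @ X / n  # sample covariance

# Entry-wise soft-thresholding: f(x) = sign(x) * max(|x| - t, 0),
# so f(0) = f'(0) = 0 as required.
t = 2.0 / np.sqrt(n)  # illustrative threshold
F = np.sign(C_hat) * np.maximum(np.abs(C_hat) - t, 0.0)

# The leading eigenvector of f(C_hat) aligns with the sparse direction u.
vals, vecs = np.linalg.eigh(F)
v = vecs[:, -1]
print("overlap |<v, u>| =", abs(v @ u))
```

    The thresholding kills the O(1/√n) noise entries of Ĉ while preserving the entries on the sparse support, which is why the f(0) = f′(0) = 0 condition drives the recovery.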