Information-theoretically Optimal Sparse PCA
Sparse Principal Component Analysis (PCA) is a dimensionality reduction
technique wherein one seeks a low-rank representation of a data matrix with
additional sparsity constraints on the obtained representation. We consider two
probabilistic formulations of sparse PCA: a spiked Wigner and spiked Wishart
(or spiked covariance) model. We analyze an Approximate Message Passing (AMP)
algorithm to estimate the underlying signal and show, in the high dimensional
limit, that the AMP estimates are information-theoretically optimal. As an
immediate corollary, our results demonstrate that the posterior expectation of
the underlying signal, which is often intractable to compute, can be obtained
using a polynomial-time scheme. Our results also effectively provide a
single-letter characterization of the sparse PCA problem.
Comment: 5 pages, 1 figure, conference
Phase Transitions in Sparse PCA
We study optimal estimation for sparse principal component analysis when the
number of non-zero elements is small but on the same order as the dimension of
the data. We employ the approximate message passing (AMP) algorithm and its
state evolution to analyze the information-theoretically minimal mean-squared
error and the error achieved by AMP in the limit of large sizes. For a special
case of rank one and large enough density of non-zeros, Deshpande and Montanari
[1] proved that AMP is asymptotically optimal. We show that both for low
density and for large rank the problem undergoes a series of phase transitions,
suggesting the existence of a region of parameters where estimation is
information-theoretically possible but AMP (and presumably every other
polynomial-time algorithm) fails. The analysis of the large-rank limit is
particularly instructive.
Comment: 6 pages, 3 figures
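The state evolution mentioned above reduces the high-dimensional AMP dynamics to a scalar recursion in the overlap q. The following Monte Carlo sketch iterates that recursion for an assumed sparse Rademacher prior (a common choice in this literature, not necessarily the paper's exact setup); the SNR and density values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, rho, n_mc = 2.0, 0.1, 200_000   # SNR, density of non-zeros, MC samples (assumed)
a = 1.0 / np.sqrt(rho)               # non-zero magnitude so that E[X^2] = 1

def posterior_mean(y, s):
    """E[X | y] for the scalar channel y = s*X + Z, sparse Rademacher prior."""
    w0 = (1 - rho) * np.exp(-0.5 * y ** 2)
    wp = (rho / 2) * np.exp(-0.5 * (y - s * a) ** 2)
    wm = (rho / 2) * np.exp(-0.5 * (y + s * a) ** 2)
    return a * (wp - wm) / (w0 + wp + wm)

# Draw prior and noise samples once; iterate q_{t+1} = E[X f(sqrt(lam*q_t) X + Z)]
x = rng.choice([0.0, a, -a], p=[1 - rho, rho / 2, rho / 2], size=n_mc)
z = rng.standard_normal(n_mc)
q = 0.01                             # small positive initialization
for _ in range(50):
    s = np.sqrt(lam * q)
    q = float(np.mean(x * posterior_mean(s * x + z, s)))

mse = 1.0 - q                        # asymptotic per-coordinate MSE reached by AMP
```

Sweeping `lam` at fixed `rho` (or the reverse) and watching where the fixed point jumps is how the phase transitions discussed in the abstract show up numerically.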
Mutual information for symmetric rank-one matrix estimation: A proof of the replica formula
Factorizing low-rank matrices has many applications in machine learning and
statistics. For probabilistic models in the Bayes optimal setting, a general
expression for the mutual information has been proposed using heuristic
statistical physics computations, and proven in a few specific cases. Here, we
show how to rigorously prove the conjectured formula for the symmetric rank-one
case. This allows us to express the minimal mean-squared error and to characterize
the detectability phase transitions in a large set of estimation problems
ranging from community detection to sparse PCA. We also show that for a large
set of parameters, an iterative algorithm called approximate message-passing is
Bayes optimal. There remains, however, a gap between what currently known
polynomial-time algorithms can do and what is expected information-theoretically.
Additionally, the proof technique has an interest of its own and exploits three
essential ingredients: the interpolation method introduced in statistical
physics by Guerra, the analysis of the approximate message-passing algorithm
and the theory of spatial coupling and threshold saturation in coding. Our
approach is generic and applicable to other open problems in statistical
estimation where heuristic statistical physics predictions are available.
Information-theoretic bounds and phase transitions in clustering, sparse PCA, and submatrix localization
We study the problem of detecting a structured, low-rank signal matrix
corrupted with additive Gaussian noise. This includes clustering in a Gaussian
mixture model, sparse PCA, and submatrix localization. Each of these problems
is conjectured to exhibit a sharp information-theoretic threshold, below which
the signal is too weak for any algorithm to detect. We derive upper and lower
bounds on these thresholds by applying the first and second moment methods to
the likelihood ratio between these "planted models" and null models where the
signal matrix is zero. Our bounds differ by at most a factor of root two when
the rank is large (in the clustering and submatrix localization problems, when
the number of clusters or blocks is large) or the signal matrix is very sparse.
Moreover, our upper bounds show that for each of these problems there is a
significant regime where reliable detection is information-theoretically
possible but where known algorithms such as PCA fail completely, since the
spectrum of the observed matrix is uninformative. This regime is analogous to
the conjectured 'hard but detectable' regime for community detection in sparse
graphs.
Comment: For sparse PCA and submatrix localization, we determine the
information-theoretic threshold exactly in the limit where the number of
blocks is large or the signal matrix is very sparse, based on a conditional
second moment method, closing the factor-of-root-two gap in the first version.
Submodular Load Clustering with Robust Principal Component Analysis
Traditional load analysis is facing challenges with the new electricity usage
patterns due to demand response as well as increasing deployment of distributed
generation, including photovoltaics (PV), electric vehicles (EV), and energy
storage systems (ESS). At the transmission level, despite irregular load
behaviors in different areas, highly aggregated load shapes still share similar
characteristics. Load clustering aims to discover such intrinsic patterns and
provide useful information to other load applications, such as load forecasting
and load modeling. This paper proposes an efficient submodular load clustering
method for transmission-level load areas. Robust principal component analysis
(R-PCA) first decomposes the annual load profiles into low-rank components
and sparse components to extract key features. A novel submodular cluster
center selection technique is then applied to determine the optimal cluster
centers through a constructed similarity graph. Following the selection results,
load areas are efficiently assigned to different clusters for further load
analysis and applications. Numerical results obtained from PJM load demonstrate
the effectiveness of the proposed approach.
Comment: Accepted by 2019 IEEE PES General Meeting, Atlanta, GA
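The low-rank-plus-sparse decomposition step can be illustrated with a standard Principal Component Pursuit ADMM solver on synthetic "load profile" data. This is a generic R-PCA sketch, not the paper's implementation; the toy matrix, parameter defaults, and iteration count are assumptions.

```python
import numpy as np

def shrink(X, tau):
    """Elementwise soft-thresholding."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: soft-threshold the spectrum of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * shrink(s, tau)) @ Vt

def rpca(M, n_iter=200):
    """Principal Component Pursuit by ADMM: split M into low-rank L + sparse S."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))        # standard PCP regularization weight
    mu = m * n / (4.0 * np.abs(M).sum())  # common penalty-parameter heuristic
    L, S, Y = np.zeros_like(M), np.zeros_like(M), np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)      # low-rank update
        S = shrink(M - L + Y / mu, lam / mu)   # sparse update
        Y = Y + mu * (M - L - S)               # dual ascent on the constraint
    return L, S

# Toy weekly "load profile" matrix: rank-2 base shapes plus 5% sparse spikes
rng = np.random.default_rng(0)
base = rng.standard_normal((52, 2)) @ rng.standard_normal((2, 24))
spikes = (rng.random((52, 24)) < 0.05) * 5.0 * rng.standard_normal((52, 24))
M = base + spikes
L, S = rpca(M)
rel_err = np.linalg.norm(L - base) / np.linalg.norm(base)
```

In the pipeline described above, the recovered low-rank and sparse components would then serve as the features fed to the submodular cluster-center selection step.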