In-network Sparsity-regularized Rank Minimization: Algorithms and Applications
Given a limited number of entries from the superposition of a low-rank matrix
plus the product of a known fat compression matrix times a sparse matrix,
recovery of the low-rank and sparse components is a fundamental task subsuming
compressed sensing, matrix completion, and principal components pursuit. This
paper develops algorithms for distributed sparsity-regularized rank
minimization over networks, when the nuclear- and $\ell_1$-norms are used as
surrogates to the rank and nonzero entry counts of the sought matrices,
respectively. While nuclear-norm minimization has well-documented merits when
centralized processing is viable, non-separability of the singular-value sum
challenges its distributed minimization. To overcome this limitation, an
alternative characterization of the nuclear norm is adopted which leads to a
separable, yet non-convex cost minimized via the alternating-direction method
of multipliers. The novel distributed iterations entail reduced-complexity
per-node tasks, and affordable message passing among single-hop neighbors.
Interestingly, upon convergence the distributed (non-convex) estimator provably
attains the global optimum of its centralized counterpart, regardless of
initialization. Several application domains are outlined to highlight the
generality and impact of the proposed framework. These include unveiling
traffic anomalies in backbone networks, predicting network-wide path latencies,
and mapping the RF ambiance using wireless cognitive radios. Simulations with
synthetic and real network data corroborate the convergence of the novel
distributed algorithm, and its centralized performance guarantees.Comment: 30 pages, submitted for publication on the IEEE Trans. Signal Proces
A D.C. Programming Approach to the Sparse Generalized Eigenvalue Problem
In this paper, we consider the sparse eigenvalue problem wherein the goal is
to obtain a sparse solution to the generalized eigenvalue problem. We achieve
this by constraining the cardinality of the solution to the generalized
eigenvalue problem and obtain sparse principal component analysis (PCA), sparse
canonical correlation analysis (CCA) and sparse Fisher discriminant analysis
(FDA) as special cases. Unlike the $\ell_1$-norm approximation to the
cardinality constraint, which previous methods have used in the context of
sparse PCA, we propose a tighter approximation that is related to the negative
log-likelihood of a Student's t-distribution. The problem is then framed as a
d.c. (difference of convex functions) program and is solved as a sequence of
convex programs by invoking the majorization-minimization method. The resulting
algorithm is proved to exhibit \emph{global convergence} behavior, i.e., for
any random initialization, the sequence (subsequence) of iterates generated by
the algorithm converges to a stationary point of the d.c. program. The
performance of the algorithm is empirically demonstrated on both sparse PCA
(finding few relevant genes that explain as much variance as possible in a
high-dimensional gene dataset) and sparse CCA (cross-language document
retrieval and vocabulary selection for music retrieval) applications.
Comment: 40 pages
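For intuition on the tighter surrogate: the cardinality penalty is approximated by $\sum_i \log(1 + |x_i|/\epsilon)/\log(1 + 1/\epsilon)$, and concavity of the logarithm lets each majorization-minimization step upper-bound it by a weighted $\ell_1$ norm at the current iterate. The short Python sketch below shows just this surrogate and the per-iteration weights; the function names and fixed $\epsilon$ are illustrative assumptions, and the full d.c. program with the generalized-eigenvalue structure is not reproduced here.

```python
import numpy as np

def card_surrogate(x, eps=1e-2):
    """Smooth stand-in for the cardinality ||x||_0; tightens as eps -> 0."""
    return np.sum(np.log1p(np.abs(x) / eps)) / np.log1p(1.0 / eps)

def mm_weights(x_k, eps=1e-2):
    """Weights of the l1 majorizer at iterate x_k, from the tangent bound
    log(1 + |x|/eps) <= log(1 + |x_k|/eps) + (|x| - |x_k|) / (eps + |x_k|)."""
    return 1.0 / (eps + np.abs(x_k))
```

Each iteration then solves a convex program with this weighted $\ell_1$ penalty in place of the cardinality constraint and refreshes the weights at the new iterate, yielding the sequence of convex programs the majorization-minimization method produces.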
Robust PCA as Bilinear Decomposition with Outlier-Sparsity Regularization
Principal component analysis (PCA) is widely used for dimensionality
reduction, with well-documented merits in various applications involving
high-dimensional data, including computer vision, preference measurement, and
bioinformatics. In this context, the fresh look advocated here carries over
benefits from variable selection and compressive sampling to robustify PCA
against outliers. A least-trimmed squares estimator of a low-rank bilinear
factor analysis model is shown to be closely related to one obtained from an
$\ell_0$-(pseudo)norm-regularized criterion encouraging sparsity in a matrix
explicitly modeling the outliers. This connection suggests robust PCA schemes
based on convex relaxation, which lead naturally to a family of robust
estimators encompassing Huber's optimal M-class as a special case. Outliers are
identified by tuning a regularization parameter, which amounts to controlling
sparsity of the outlier matrix along the whole robustification path of (group)
least-absolute shrinkage and selection operator (Lasso) solutions. Beyond its
neat ties to robust statistics, the developed outlier-aware PCA framework is
versatile enough to accommodate novel and scalable algorithms to: i) track the
low-rank signal subspace robustly, as new data are acquired in real time; and
ii) determine principal components robustly in (possibly) infinite-dimensional
feature spaces. Synthetic and real data tests corroborate the effectiveness of
the proposed robust PCA schemes, when used to identify aberrant responses in
personality assessment surveys, as well as unveil communities in social
networks, and intruders from video surveillance data.
Comment: 30 pages, submitted to IEEE Transactions on Signal Processing
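The estimator can be read off a data model of the form $Y = PQ^T + O + E$, where $PQ^T$ is the low-rank bilinear factor and $O$ collects the (sparse) outliers; under the convex $\ell_1$ relaxation of the $\ell_0$ pseudo-norm, the outlier update is entrywise soft-thresholding. The batch Python sketch below illustrates one such alternating scheme; the function names, update order, and fixed weights are assumptions for exposition, not the paper's online or kernelized algorithms.

```python
import numpy as np

def soft_threshold(X, tau):
    """Entrywise proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def outlier_aware_pca(Y, rank, lam_f, lam_o, n_iters=200):
    """Alternate over the bilinear factors (P, Q) and outliers O in
    min 0.5*||Y - P Q^T - O||_F^2
        + lam_f/2*(||P||_F^2 + ||Q||_F^2) + lam_o*||O||_1."""
    m, n = Y.shape
    rng = np.random.default_rng(0)
    P = rng.standard_normal((m, rank))
    Q = rng.standard_normal((n, rank))
    O = np.zeros_like(Y)
    I = np.eye(rank)
    for _ in range(n_iters):
        R = Y - O
        P = R @ Q @ np.linalg.inv(Q.T @ Q + lam_f * I)
        Q = R.T @ P @ np.linalg.inv(P.T @ P + lam_f * I)
        # Entries surviving the threshold are flagged as outliers.
        O = soft_threshold(Y - P @ Q.T, lam_o)
    return P, Q, O
```

Sweeping lam_o traces out the robustification path of (group) Lasso solutions described above, which is how the regularization parameter is tuned to identify outliers.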
Relaxed Majorization-Minimization for Non-smooth and Non-convex Optimization
We propose a new majorization-minimization (MM) method for non-smooth and
non-convex programs, which is general enough to include the existing MM
methods. Besides the local majorization condition, we only require that the
difference between the directional derivatives of the objective function and
its surrogate function vanishes as the number of iterations approaches
infinity, a very weak condition. Our method can therefore use a surrogate
function that directly approximates the non-smooth objective function. In
comparison, all the existing MM methods construct the surrogate function by
approximating the smooth component of the objective function. We apply our
relaxed MM methods to the robust matrix factorization (RMF) problem with
different regularizations, where our locally majorant algorithm shows
advantages over the state-of-the-art approaches for RMF. This is the first
algorithm for RMF ensuring, without extra assumptions, that any limit point of
the iterates is a stationary point.
Comment: AAAI16
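One concrete way to build a surrogate that directly approximates the non-smooth objective, as described above, is to smooth an $\ell_1$ RMF loss as $\sum_{ij}\sqrt{r_{ij}^2 + \mu_t}$ with $\mu_t \to 0$, so the gap between surrogate and objective (at most $\sqrt{\mu_t}$ per entry) vanishes along the iterations. The Python sketch below is a simplified gradient version under that assumption; the step size, the $\mu_t$ schedule, and the function name are illustrative, not the paper's locally majorant algorithm.

```python
import numpy as np

def rmf_relaxed_mm(Y, rank, n_iters=300, step=1e-3):
    """Robust matrix factorization min ||Y - U V^T||_1 via the smoothed
    surrogate sum sqrt(r^2 + mu_t), with mu_t decaying to zero."""
    m, n = Y.shape
    rng = np.random.default_rng(0)
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    for t in range(n_iters):
        mu = 1.0 / (t + 1)            # smoothing gap decays to zero
        R = Y - U @ V.T
        G = R / np.sqrt(R**2 + mu)    # gradient of sqrt(r^2 + mu) in r
        U = U + step * (G @ V)        # descend the surrogate in U
        R = Y - U @ V.T               # refresh residual after the U step
        G = R / np.sqrt(R**2 + mu)
        V = V + step * (G.T @ U)      # then in V
    return U, V
```

The decaying $\mu_t$ is what drives the surrogate's directional derivatives toward those of the $\ell_1$ objective, matching the weak condition the relaxed MM framework requires.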