In-network Sparsity-regularized Rank Minimization: Algorithms and Applications
Given a limited number of entries from the superposition of a low-rank matrix
plus the product of a known fat compression matrix times a sparse matrix,
recovery of the low-rank and sparse components is a fundamental task subsuming
compressed sensing, matrix completion, and principal components pursuit. This
paper develops algorithms for distributed sparsity-regularized rank
minimization over networks, when the nuclear and ℓ1-norms are used as
surrogates to the rank and nonzero entry counts of the sought matrices,
respectively. While nuclear-norm minimization has well-documented merits when
centralized processing is viable, non-separability of the singular-value sum
challenges its distributed minimization. To overcome this limitation, an
alternative characterization of the nuclear norm is adopted which leads to a
separable, yet non-convex cost minimized via the alternating-direction method
of multipliers. The novel distributed iterations entail reduced-complexity
per-node tasks, and affordable message passing among single-hop neighbors.
Interestingly, upon convergence the distributed (non-convex) estimator provably
attains the global optimum of its centralized counterpart, regardless of
initialization. Several application domains are outlined to highlight the
generality and impact of the proposed framework. These include unveiling
traffic anomalies in backbone networks, predicting networkwide path latencies,
and mapping the RF ambiance using wireless cognitive radios. Simulations with
synthetic and real network data corroborate the convergence of the novel
distributed algorithm, and its centralized performance guarantees.
Comment: 30 pages, submitted for publication in the IEEE Trans. Signal Process.
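The separable surrogate behind the distributed iterations is the well-known bilinear characterization of the nuclear norm, ‖X‖* = min over factorizations X = PQᵀ of (‖P‖²_F + ‖Q‖²_F)/2, whose Frobenius-norm cost splits across network nodes. A minimal numerical check (matrix sizes and the random seed are illustrative):

```python
import numpy as np

# Verify the separable nuclear-norm characterization:
#   ||X||_* = min_{P,Q : X = P Q^T} (||P||_F^2 + ||Q||_F^2) / 2,
# attained at P = U sqrt(S), Q = V sqrt(S) from the SVD X = U S V^T.

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 15))  # rank-3 matrix

U, s, Vt = np.linalg.svd(X, full_matrices=False)
nuclear = s.sum()                      # nuclear norm = sum of singular values

P = U * np.sqrt(s)                     # U diag(sqrt(s))
Q = Vt.T * np.sqrt(s)                  # V diag(sqrt(s))
separable = 0.5 * (np.linalg.norm(P, 'fro')**2 + np.linalg.norm(Q, 'fro')**2)

assert np.allclose(P @ Q.T, X)         # feasibility: X = P Q^T
assert np.isclose(separable, nuclear)  # Frobenius cost attains ||X||_*
```

Because the Frobenius norms decompose row-wise, each node can keep only its rows of P and Q, which is what makes the ADMM message passing single-hop.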
Recovery of Low-Rank Plus Compressed Sparse Matrices with Application to Unveiling Traffic Anomalies
Given the superposition of a low-rank matrix plus the product of a known fat
compression matrix times a sparse matrix, the goal of this paper is to
establish deterministic conditions under which exact recovery of the low-rank
and sparse components becomes possible. This fundamental identifiability issue
arises with traffic anomaly detection in backbone networks, and subsumes
compressed sensing as well as the timely low-rank plus sparse matrix recovery
tasks encountered in matrix decomposition problems. Leveraging the ability of
ℓ1- and nuclear norms to recover sparse and low-rank matrices, a convex
program is formulated to estimate the unknowns. Analysis and simulations
confirm that this convex program can recover the unknowns when the components
are sufficiently low rank and sparse, and the compression matrix satisfies an
isometry property when restricted to operate on sparse vectors.
When the low-rank, sparse, and compression matrices are drawn from certain
random ensembles, it is established that exact recovery is possible with high
probability. First-order algorithms are developed to solve the nonsmooth convex
optimization problem with provable iteration complexity guarantees. Insightful
tests with synthetic and real network data corroborate the effectiveness of the
novel approach in unveiling traffic anomalies across flows and time, and its
ability to outperform existing alternatives.
Comment: 38 pages, submitted to the IEEE Transactions on Information Theory
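A first-order scheme of the kind mentioned can be sketched for a penalized variant of the program, minimizing 0.5·‖Y − X − RA‖²_F + λx‖X‖* + λa‖A‖₁ by proximal-gradient steps (singular-value thresholding for the nuclear norm, soft-thresholding for the ℓ1 norm). The regularization weights, step size, and iteration count below are illustrative choices, not the paper's:

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: prox of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Entrywise soft-thresholding: prox of tau * ||.||_1."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def recover(Y, R, lam_x=1.0, lam_a=0.1, iters=200):
    """Proximal gradient for 0.5||Y - X - RA||_F^2 + lam_x||X||_* + lam_a||A||_1."""
    X = np.zeros_like(Y)
    A = np.zeros((R.shape[1], Y.shape[1]))
    L = 1.0 + np.linalg.norm(R, 2) ** 2   # Lipschitz bound for the joint gradient
    for _ in range(iters):
        G = X + R @ A - Y                 # gradient of the quadratic fit term
        X = svt(X - G / L, lam_x / L)
        A = soft(A - R.T @ G / L, lam_a / L)
    return X, A

rng = np.random.default_rng(1)
X0 = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 40))    # low rank
A0 = (rng.random((50, 40)) < 0.02) * rng.standard_normal((50, 40))  # sparse
R = rng.standard_normal((30, 50)) / np.sqrt(30)                     # fat compression
Xh, Ah = recover(X0 + R @ A0, R)
```

Each iteration costs one SVD plus two matrix products, consistent with the per-iteration complexity typical of such first-order methods.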
Robust PCA as Bilinear Decomposition with Outlier-Sparsity Regularization
Principal component analysis (PCA) is widely used for dimensionality
reduction, with well-documented merits in various applications involving
high-dimensional data, including computer vision, preference measurement, and
bioinformatics. In this context, the fresh look advocated here combines
benefits from variable selection and compressive sampling to robustify PCA
against outliers. A least-trimmed squares estimator of a low-rank bilinear
factor analysis model is shown to be closely related to that obtained from an
ℓ0-(pseudo)norm-regularized criterion encouraging sparsity in a matrix
explicitly modeling the outliers. This connection suggests robust PCA schemes
based on convex relaxation, which lead naturally to a family of robust
estimators encompassing Huber's optimal M-class as a special case. Outliers are
identified by tuning a regularization parameter, which amounts to controlling
sparsity of the outlier matrix along the whole robustification path of (group)
least-absolute shrinkage and selection operator (Lasso) solutions. Beyond its
neat ties to robust statistics, the developed outlier-aware PCA framework is
versatile enough to accommodate novel and scalable algorithms to: i) track the
low-rank signal subspace robustly, as new data are acquired in real time; and
ii) determine principal components robustly in (possibly) infinite-dimensional
feature spaces. Synthetic and real data tests corroborate the effectiveness of
the proposed robust PCA schemes, when used to identify aberrant responses in
personality assessment surveys, as well as unveil communities in social
networks, and intruders from video surveillance data.
Comment: 30 pages, submitted to IEEE Transactions on Signal Processing
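The bilinear outlier-aware model can be sketched as alternating minimization of 0.5·‖Y − PQᵀ − O‖²_F + λ‖O‖₁, where PQᵀ is the rank-r signal and the sparse matrix O captures outliers entrywise (the paper also treats group-sparse outliers and convex relaxations; the rank, λ, and iteration count here are illustrative):

```python
import numpy as np

def soft(M, tau):
    """Entrywise soft-thresholding: prox of tau * ||.||_1."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def robust_pca(Y, r, lam=0.5, iters=50, seed=0):
    """Block coordinate descent on 0.5||Y - P Q^T - O||_F^2 + lam ||O||_1."""
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((Y.shape[0], r))
    Q = rng.standard_normal((Y.shape[1], r))
    O = np.zeros_like(Y)
    obj = []
    for _ in range(iters):
        O = soft(Y - P @ Q.T, lam)                          # flag large residuals
        Q = np.linalg.lstsq(P, Y - O, rcond=None)[0].T      # LS update given P, O
        P = np.linalg.lstsq(Q, (Y - O).T, rcond=None)[0].T  # LS update given Q, O
        obj.append(0.5 * np.linalg.norm(Y - P @ Q.T - O)**2 + lam * np.abs(O).sum())
    return P, Q, O, obj

rng = np.random.default_rng(2)
L0 = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 30))     # clean signal
S0 = (rng.random((40, 30)) < 0.05) * (5.0 * rng.standard_normal((40, 30)))
P, Q, O, obj = robust_pca(L0 + S0, r=3)
```

Each block update solves its subproblem exactly, so the objective is monotonically non-increasing; sweeping λ along a grid traces the robustification path mentioned in the abstract, with larger λ declaring fewer entries outliers.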
A Parallel Best-Response Algorithm with Exact Line Search for Nonconvex Sparsity-Regularized Rank Minimization
In this paper, we propose a convergent parallel best-response algorithm with
the exact line search for the nondifferentiable nonconvex sparsity-regularized
rank minimization problem. On the one hand, it exhibits a faster convergence
than subgradient algorithms and block coordinate descent algorithms. On the
other hand, its convergence to a stationary point is guaranteed, while ADMM
algorithms only converge for convex problems. Furthermore, the exact line
search procedure in the proposed algorithm is performed efficiently in
closed form, avoiding the meticulous choice of stepsizes, which is a
common bottleneck in subgradient algorithms and successive convex approximation
algorithms. Finally, the proposed algorithm is numerically tested.
Comment: Submitted to IEEE ICASSP 201
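The closed-form line search idea can be illustrated on a composite objective F(x) = f(x) + g(x) with smooth f(x) = 0.5·‖Ax − b‖² and g(x) = λ‖x‖₁: along a direction d = Bx − x, convexity of g gives the upper bound F(x + t·d) ≤ f(x + t·d) + (1 − t)g(x) + t·g(x + d) for t in [0, 1], which is quadratic in t and minimized in closed form. The best-response map Bx below is a simple proximal-gradient point standing in for the paper's parallel per-block best responses, an illustrative simplification:

```python
import numpy as np

def soft(v, tau):
    """Entrywise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def solve(A, b, lam=0.1, iters=100):
    """Iterate x <- x + t*(Bx - x) with a closed-form stepsize t (no tuning)."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    g = lambda v: lam * np.abs(v).sum()
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        Bx = soft(x - grad / L, lam / L)   # stand-in best response
        d = Bx - x
        Ad = A @ d
        denom = Ad @ Ad
        if denom == 0:
            break                          # stationary: no descent direction left
        t = -(grad @ d + g(Bx) - g(x)) / denom   # minimizer of the quadratic bound
        t = min(max(t, 0.0), 1.0)
        x = x + t * d
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 80))
b = rng.standard_normal(50)
x = solve(A, b)
```

Since the surrogate upper-bounds F and matches it at t = 0, every accepted step decreases the true objective, which is the mechanism behind the guaranteed convergence claimed for the nonconvex case.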
Load curve data cleansing and imputation via sparsity and low rank
The smart grid vision is to build an intelligent power network with an
unprecedented level of situational awareness and controllability over its
services and infrastructure. This paper advocates statistical inference methods
to robustify power monitoring tasks against the outlier effects owing to faulty
readings and malicious attacks, as well as against missing data due to privacy
concerns and communication errors. In this context, a novel load cleansing and
imputation scheme is developed leveraging the low intrinsic-dimensionality of
spatiotemporal load profiles and the sparse nature of "bad data." A robust
estimator based on principal components pursuit (PCP) is adopted, which effects
a twofold sparsity-promoting regularization through an ℓ1-norm of the
outliers, and the nuclear norm of the nominal load profiles. Upon recasting the
non-separable nuclear norm into a form amenable to decentralized optimization,
a distributed (D-) PCP algorithm is developed to carry out the imputation and
cleansing tasks using networked devices comprising the so-termed advanced
metering infrastructure. If D-PCP converges and a qualification inequality is
satisfied, the novel distributed estimator provably attains the performance of
its centralized PCP counterpart, which has access to all networkwide data.
Computer simulations and tests with real load curve data corroborate the
convergence and effectiveness of the novel D-PCP algorithm.
Comment: 8 figures, submitted to IEEE Transactions on Smart Grid - Special issue on "Optimization methods and algorithms applied to smart grid"
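A centralized PCP-style cleansing-and-imputation step can be sketched as minimizing 0.5·‖M∘(X + O − Y)‖²_F + λx‖X‖* + λo‖O‖₁, where M is a 0/1 observation mask (missing meter readings), X the nominal low-rank load profiles, and O the sparse "bad data." The weights and iteration count are illustrative; the paper's D-PCP further splits this computation across networked metering devices:

```python
import numpy as np

def svt(Mat, tau):
    """Singular-value thresholding: prox of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(Mat, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(Mat, tau):
    """Entrywise soft-thresholding: prox of tau * ||.||_1."""
    return np.sign(Mat) * np.maximum(np.abs(Mat) - tau, 0.0)

def pcp_impute(Y, M, lam_x=1.0, lam_o=0.2, iters=200):
    """Proximal gradient on 0.5||M*(X+O-Y)||_F^2 + lam_x||X||_* + lam_o||O||_1."""
    X = np.zeros_like(Y)
    O = np.zeros_like(Y)
    L = 2.0                                  # Lipschitz bound for the joint gradient
    for _ in range(iters):
        G = M * (X + O - Y)                  # gradient lives on observed entries only
        X = svt(X - G / L, lam_x / L)
        O = soft(O - G / L, lam_o / L)
    return X, O                              # X fills in the missing readings

rng = np.random.default_rng(4)
L0 = rng.standard_normal((24, 4)) @ rng.standard_normal((4, 30))    # nominal loads
S0 = (rng.random((24, 30)) < 0.05) * (10 * rng.standard_normal((24, 30)))
M = (rng.random((24, 30)) < 0.8).astype(float)                      # ~20% missing
Y = M * (L0 + S0)
Xh, Oh = pcp_impute(Y, M)
```

The recovered X provides the cleansed load profile at observed entries and the imputed value at missing ones, while nonzeros of O flag suspected bad data.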