Generalized Sparse and Low-Rank Optimization for Ultra-Dense Networks
The ultra-dense network (UDN) is a promising technology for further evolving
wireless networks and meeting the diverse performance requirements of 5G networks.
With abundant access points, each with communication, computation and storage
resources, UDN brings unprecedented benefits, including significant improvement
in network spectral efficiency and energy efficiency, greatly reduced latency
to enable novel mobile applications, and the capability of providing massive
access for Internet of Things (IoT) devices. However, such great promises come
with formidable research challenges. To design and operate such complex
networks with various types of resources, efficient and innovative
methodologies will be needed. This motivates the recent introduction of highly
structured and generalizable models for network optimization. In this article,
we present some recently proposed large-scale sparse and low-rank frameworks
for optimizing UDNs, supported by various motivating applications. Special
attention is paid to algorithmic approaches for dealing with nonconvex objective
functions and constraints, as well as computational scalability.
Comment: This paper has been accepted by IEEE Communications Magazine, Special Issue on Heterogeneous Ultra Dense Networks
Towards Faster Rates and Oracle Property for Low-Rank Matrix Estimation
We present a unified framework for low-rank matrix estimation with nonconvex
penalties. We first prove that the proposed estimator attains a faster
statistical rate than the traditional low-rank matrix estimator with nuclear
norm penalty. Moreover, we rigorously show that under a certain condition on
the magnitude of the nonzero singular values, the proposed estimator enjoys
the oracle property (i.e., it exactly recovers the true rank of the matrix), besides
attaining a faster rate. As far as we know, this is the first work that
establishes the theory of low-rank matrix estimation with nonconvex penalties,
confirming the advantages of nonconvex penalties for matrix completion.
Numerical experiments on both synthetic and real world datasets corroborate our
theory.
Comment: 29 pages, 1 figure, 2 tables
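As a rough illustration of why nonconvex penalties can outperform the nuclear norm (a generic sketch, not this paper's estimator), the snippet below thresholds singular values with an MCP-style rule that zeroes small values but, unlike soft thresholding, leaves large singular values unshrunk; the matrix size, noise level, and parameters lam and gamma are illustrative assumptions.

```python
import numpy as np

def mcp_threshold(s, lam, gamma=2.0):
    # MCP proximal rule (unit step): zero below lam, shrink in (lam, gamma*lam],
    # pass large values through unshrunk -- this avoids the nuclear norm's bias.
    shrunk = np.maximum(s - lam, 0.0) * gamma / (gamma - 1.0)
    return np.where(s > gamma * lam, s, shrunk)

def nonconvex_svt(M, lam, gamma=2.0):
    # Apply the nonconvex threshold to the singular values of M
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * mcp_threshold(s, lam, gamma)) @ Vt

rng = np.random.default_rng(0)
truth = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))  # rank 3
noisy = truth + 0.1 * rng.standard_normal(truth.shape)
est = nonconvex_svt(noisy, lam=2.0)
print(np.linalg.matrix_rank(est))  # expected: 3, the true rank
```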
Harnessing Structures in Big Data via Guaranteed Low-Rank Matrix Estimation
Low-rank modeling plays a pivotal role in signal processing and machine
learning, with applications ranging from collaborative filtering, video
surveillance, medical imaging, to dimensionality reduction and adaptive
filtering. Many modern high-dimensional data and interactions thereof can be
modeled as lying approximately in a low-dimensional subspace or manifold,
possibly with additional structures, and properly exploiting this structure leads to
significant reductions in the cost of sensing, computation, and storage. In recent
years, there has been a plethora of progress in understanding how to exploit low-rank
structures using computationally efficient procedures in a provable manner,
including both convex and nonconvex approaches. On one side, convex relaxations
such as nuclear norm minimization often lead to statistically optimal
procedures for estimating low-rank matrices, where first-order methods are
developed to address the computational challenges; on the other side, there is
emerging evidence that properly designed nonconvex procedures, such as
projected gradient descent, often provide globally optimal solutions with a
much lower computational cost in many problems. This survey article will
provide a unified overview of these recent advances on low-rank matrix
estimation from incomplete measurements. Attention is paid to rigorous
characterization of the performance of these algorithms, and to problems where
the low-rank matrix has additional structural properties that require new
algorithmic designs and theoretical analysis.
Comment: To appear in IEEE Signal Processing Magazine
Estimating Differential Latent Variable Graphical Models with Applications to Brain Connectivity
Differential graphical models are designed to represent the difference
between the conditional dependence structures of two groups, and are thus of
particular interest for scientific investigation. Motivated by modern
applications, this manuscript considers an extended setting where each group is
generated by a latent variable Gaussian graphical model. Due to the existence
of latent factors, the differential network is decomposed into sparse and
low-rank components, both of which are symmetric indefinite matrices. We
estimate these two components simultaneously using a two-stage procedure: (i)
an initialization stage, which computes a simple, consistent estimator, and
(ii) a convergence stage, implemented using a projected alternating gradient
descent algorithm applied to a nonconvex objective, initialized using the
output of the first stage. We prove that, given this initialization, the
estimator converges linearly up to a nontrivial, minimax-optimal statistical
error. Experiments on synthetic and real data illustrate that the proposed
nonconvex procedure outperforms existing methods.
Comment: 60 pages
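The paper's two-stage procedure is more involved, but the core sparse-plus-low-rank decomposition it builds on can be sketched with a simple alternation (a GoDec-style toy, not the authors' method); the rank r and threshold lam are illustrative assumptions:

```python
import numpy as np

def soft(X, t):
    # Entrywise soft thresholding: prox of t * ||.||_1
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def best_rank(X, r):
    # Best rank-r approximation via truncated SVD
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def sparse_plus_lowrank(D, r, lam, iters=100):
    # Alternately fit the low-rank and sparse parts of D ~ L + S
    S = np.zeros_like(D)
    for _ in range(iters):
        L = best_rank(D - S, r)  # low-rank fit to the current residual
        S = soft(D - L, lam)     # sparse fit to the current residual
    return L, S
```

For a differential-network use case, D would be (an estimate of) the difference of the two groups' precision-related matrices, with S the sparse differential network and L the latent-factor component.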
Model-free Nonconvex Matrix Completion: Local Minima Analysis and Applications in Memory-efficient Kernel PCA
This work studies low-rank approximation of a positive semidefinite matrix
from partial entries via nonconvex optimization. We characterize how well a
local-minimum-based low-rank factorization approximates a fixed positive
semidefinite matrix without any assumptions on rank matching, the condition
number, or the eigenspace incoherence parameter. Furthermore, under certain
assumptions on rank-matching and well-boundedness of condition numbers and
eigenspace incoherence parameters, a corollary of our main theorem improves the
state-of-the-art sampling rate results for nonconvex matrix completion with no
spurious local minima in Ge et al. [2016, 2017]. In addition, we investigate
when the proposed nonconvex optimization results in accurate low-rank
approximations even in the presence of large condition numbers, large incoherence
parameters, or rank mismatching. We also propose to apply the nonconvex
optimization to memory-efficient Kernel PCA. Compared to the well-known
Nystr\"{o}m methods, numerical experiments indicate that the proposed nonconvex
optimization approach yields more stable results in both low-rank approximation
and clustering.Comment: Main theorem improve
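A minimal sketch of the underlying computation (under toy assumptions; the spectral initialization and conservative step size below are common defaults, not necessarily this paper's choices): factor the matrix as U U^T and run gradient descent on the observed entries only.

```python
import numpy as np

def psd_completion(A_obs, mask, r, iters=500):
    # Burer-Monteiro style: factor X = U U^T and fit the observed entries.
    # A_obs is the zero-filled observation; mask is a symmetric 0/1 pattern.
    p = mask.mean()                               # observed fraction
    w, V = np.linalg.eigh(A_obs / p)              # spectral initialization
    top = np.argsort(w)[-r:]
    U = V[:, top] * np.sqrt(np.clip(w[top], 0.0, None))
    step = 0.2 / max(w.max(), 1e-12)              # conservative step size
    for _ in range(iters):
        R = mask * (U @ U.T - A_obs)              # residual on observed entries
        U -= step * (R @ U) / p                   # gradient step (constants absorbed)
    return U

rng = np.random.default_rng(2)
B = rng.standard_normal((40, 3))
A = B @ B.T                                       # rank-3 PSD ground truth
upper = np.triu(rng.random((40, 40)) < 0.5)
mask = (upper | upper.T).astype(float)            # symmetric sampling pattern
U = psd_completion(A * mask, mask, r=3)
print(np.linalg.norm(U @ U.T - A) / np.linalg.norm(A))  # small relative error
```

In the memory-efficient kernel PCA setting, A would be the kernel matrix, of which only a subset of entries is ever computed or stored.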
Low-Rank Modeling and Its Applications in Image Analysis
Low-rank modeling generally refers to a class of methods that solve problems
by representing variables of interest as low-rank matrices. It has achieved
great success in various fields including computer vision, data mining, signal
processing and bioinformatics. Recently, much progress has been made in
theories, algorithms and applications of low-rank modeling, such as exact
low-rank matrix recovery via convex programming and matrix completion applied
to collaborative filtering. These advances have attracted more and more
attention to this topic. In this paper, we review recent advances in
low-rank modeling, the state-of-the-art algorithms, and related applications in
image analysis. We first give an overview of the concept of low-rank modeling
and challenging problems in this area. Then, we summarize the models and
algorithms for low-rank matrix recovery and illustrate their advantages and
limitations with numerical experiments. Next, we introduce a few applications
of low-rank modeling in the context of image analysis. Finally, we conclude
this paper with some discussion.
Comment: To appear in ACM Computing Surveys
Decomposition into Low-rank plus Additive Matrices for Background/Foreground Separation: A Review for a Comparative Evaluation with a Large-Scale Dataset
Recent research on problem formulations based on decomposition into low-rank
plus sparse matrices has shown this to be a suitable framework for separating moving
objects from the background. The most representative problem formulation is Robust
Principal Component Analysis (RPCA) solved via Principal Component Pursuit
(PCP), which decomposes a data matrix into a low-rank matrix and a sparse matrix.
However, similar robust implicit or explicit decompositions can be made in the
following problem formulations: Robust Non-negative Matrix Factorization
(RNMF), Robust Matrix Completion (RMC), Robust Subspace Recovery (RSR), Robust
Subspace Tracking (RST) and Robust Low-Rank Minimization (RLRM). The main goal
of these related problem formulations is to obtain, explicitly or implicitly, a
decomposition into a low-rank matrix plus additive matrices. In this context,
this work aims to initiate a rigorous and comprehensive review of these
problem formulations in robust subspace learning and tracking based on
decomposition into low-rank plus additive matrices, with the goal of testing and
ranking existing algorithms for background/foreground separation. For this, we first
provide a preliminary review of the recent developments in the different
problem formulations which allows us to define a unified view that we called
Decomposition into Low-rank plus Additive Matrices (DLAM). Then, we carefully
examine each method within each robust subspace learning/tracking framework,
detailing its decomposition, loss function, optimization problem, and solver.
Furthermore, we investigate whether incremental algorithms and real-time
implementations can be achieved for background/foreground separation. Finally,
experimental results on a large-scale dataset called Background Models
Challenge (BMC 2012) show the comparative performance of 32 different robust
subspace learning/tracking methods.
Comment: 121 pages, 5 figures, Computer Science Review, November 201
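For concreteness, here is a minimal PCP solver via a basic ADMM loop (a generic textbook sketch, not any specific method from this review); with video data one would stack vectorized frames as the columns of D, so L captures the static background and S the moving objects. The parameter defaults follow the common PCP choices but are assumptions here.

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding: prox of tau * nuclear norm
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    # Entrywise soft thresholding: prox of tau * l1 norm
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def pcp(D, iters=100):
    # min ||L||_* + lam * ||S||_1  subject to  L + S = D
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))                 # standard PCP weight
    mu = 0.25 * m * n / (np.abs(D).sum() + 1e-12)  # common penalty heuristic
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    Y = np.zeros_like(D)                           # dual variable
    for _ in range(iters):
        L = svt(D - S + Y / mu, 1.0 / mu)          # low-rank update
        S = soft(D - L + Y / mu, lam / mu)         # sparse update
        Y += mu * (D - L - S)                      # dual ascent on L + S = D
    return L, S
```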
SOFAR: large-scale association network learning
Many modern big data applications feature large scale in both the number of
responses and the number of predictors. Better statistical efficiency and scientific insights
can be enabled by understanding the large-scale response-predictor association
network structures via layers of sparse latent factors ranked by importance.
Yet sparsity and orthogonality have been two largely incompatible goals. To
accommodate both features, in this paper we suggest the method of sparse
orthogonal factor regression (SOFAR) via the sparse singular value
decomposition with orthogonality constrained optimization to learn the
underlying association networks, with broad applications to both unsupervised
and supervised learning tasks such as biclustering with sparse singular value
decomposition, sparse principal component analysis, sparse factor analysis, and
sparse vector autoregression analysis. Exploiting the framework of
convexity-assisted nonconvex optimization, we derive nonasymptotic error bounds
for the suggested procedure characterizing the theoretical advantages. The
statistical guarantees are powered by an efficient SOFAR algorithm with a
convergence guarantee. Both the computational and theoretical advantages of our
procedure are demonstrated with several simulation and real data examples.
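SOFAR itself solves an orthogonality-constrained problem; as a much simpler cousin (a sketch only, with hypothetical penalty levels lam_u and lam_v), a sparse rank-one SVD layer of the kind used in biclustering can be computed by alternating soft-thresholded power iterations:

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_rank1_svd(X, lam_u, lam_v, iters=100):
    # Alternating soft-thresholded power iterations for a sparse
    # rank-one approximation X ~ d * u v^T with sparse unit vectors u, v
    v = np.linalg.svd(X)[2][0]           # start from the leading right SV
    for _ in range(iters):
        u = soft(X @ v, lam_u)           # sparsify the left factor
        u /= max(np.linalg.norm(u), 1e-12)
        v = soft(X.T @ u, lam_v)         # sparsify the right factor
        v /= max(np.linalg.norm(v), 1e-12)
    d = u @ X @ v                        # scale of the rank-one layer
    return d, u, v
```

Subsequent layers can be extracted from the deflated matrix X - d * np.outer(u, v), giving the "layers of sparse latent factors ranked by importance" described above.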
Noisy Matrix Completion: Understanding Statistical Guarantees for Convex Relaxation via Nonconvex Optimization
This paper studies noisy low-rank matrix completion: given partial and noisy
entries of a large low-rank matrix, the goal is to estimate the underlying
matrix faithfully and efficiently. Arguably one of the most popular paradigms
to tackle this problem is convex relaxation, which achieves remarkable efficacy
in practice. However, the theoretical support of this approach is still far
from optimal in the noisy setting, falling short of explaining its empirical
success.
We make progress towards demystifying the practical efficacy of convex
relaxation vis-à-vis random noise. When the rank and the condition number of
the unknown matrix are bounded by a constant, we demonstrate that the convex
programming approach achieves near-optimal estimation errors, in terms of
the Euclidean loss, the entrywise loss, and the spectral norm loss, for a
wide range of noise levels. All of this is enabled by bridging convex
relaxation with the nonconvex Burer-Monteiro approach, a seemingly distinct
algorithmic paradigm that is provably robust against noise. More specifically,
we show that an approximate critical point of the nonconvex formulation serves
as an extremely tight approximation of the convex solution, thus allowing us to
transfer the desired statistical guarantees of the nonconvex approach to its
convex counterpart.
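The nonconvex formulation being bridged to is, in essence, gradient descent on a factored least-squares objective. Below is a minimal sketch (with the balancing regularizer that is standard in this literature; the spectral initialization, step size, and scaling are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def burer_monteiro_completion(M_obs, mask, r, iters=500):
    # Gradient descent on the factored objective
    #   f(X, Y) = 0.5 * ||mask * (X Y^T - M)||_F^2 / p
    #           + 0.125 * ||X^T X - Y^T Y||_F^2      (balancing term)
    p = mask.mean()                              # observed fraction
    U, s, Vt = np.linalg.svd(M_obs / p)          # spectral initialization
    X = U[:, :r] * np.sqrt(s[:r])
    Y = Vt[:r].T * np.sqrt(s[:r])
    step = 0.2 / s[0]                            # conservative step size
    for _ in range(iters):
        R = mask * (X @ Y.T - M_obs)             # residual on observed entries
        B = X.T @ X - Y.T @ Y                    # factor imbalance
        gX = (R @ Y) / p + 0.5 * X @ B           # gradient w.r.t. X
        gY = (R.T @ X) / p - 0.5 * Y @ B         # gradient w.r.t. Y
        X, Y = X - step * gX, Y - step * gY
    return X @ Y.T
```

The balancing term keeps the two factors on a comparable scale, which is one ingredient in showing that approximate critical points of this objective track the convex solution closely.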
Computation of the Maximum Likelihood estimator in low-rank Factor Analysis
Factor analysis, a classical multivariate statistical technique, is widely
used as a fundamental tool for dimensionality reduction in statistics,
econometrics, and data science. Estimation is often carried out via the Maximum
Likelihood (ML) principle, which seeks to maximize the likelihood under the
assumption that the positive definite covariance matrix can be decomposed as
the sum of a low rank positive semidefinite matrix and a diagonal matrix with
nonnegative entries. This leads to a challenging rank-constrained nonconvex
optimization problem. We reformulate the low-rank ML Factor Analysis problem as
a nonlinear nonsmooth semidefinite optimization problem, study various
structural properties of this reformulation and propose fast and scalable
algorithms based on difference of convex (DC) optimization. Our approach has
computational guarantees, gracefully scales to large problems, is applicable to
situations where the sample covariance matrix is rank deficient and adapts to
variants of the ML problem with additional constraints on the problem
parameters. Our numerical experiments demonstrate the significant advantages of
our approach over existing state-of-the-art approaches.
Comment: 22 pages, 4 figures
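Not the paper's DC algorithm, but a classical baseline for the same low-rank-plus-diagonal decomposition (iterated principal factors) alternates an eigenvalue truncation with a diagonal update; the rank r, data, and iteration count below are illustrative assumptions:

```python
import numpy as np

def principal_factor(S, r, iters=200):
    # Fit S ~ L + Psi with L PSD of rank <= r and Psi a nonnegative diagonal
    # (classical iterated-principal-factors heuristic, not the DC method).
    psi = np.diag(S).copy()
    for _ in range(iters):
        w, V = np.linalg.eigh(S - np.diag(psi))
        top = np.argsort(w)[-r:]
        L = (V[:, top] * np.clip(w[top], 0.0, None)) @ V[:, top].T
        psi = np.clip(np.diag(S - L), 0.0, None)   # keep uniquenesses >= 0
    return L, psi

rng = np.random.default_rng(4)
F = rng.standard_normal((20, 2))
S = F @ F.T + np.diag(rng.uniform(0.5, 1.5, 20))   # low-rank + diagonal truth
L, psi = principal_factor(S, r=2)
print(np.linalg.norm(S - L - np.diag(psi)))        # small residual
```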