Group sparse optimization via $\ell_{p,q}$ regularization
In this paper, we investigate a group sparse optimization problem via $\ell_{p,q}$ regularization in three aspects: theory, algorithm and application. In the theoretical aspect, by introducing a notion of group restricted eigenvalue condition, we establish an oracle property and a global recovery bound of order $O(\lambda^{2/(2-q)})$ for any point in a level set of the $\ell_{p,q}$ regularization problem, and by virtue of modern variational analysis techniques, we also provide a local analysis of recovery bound of order $O(\lambda^2)$ for a path of local minima. In the algorithmic aspect, we apply the well-known proximal gradient method to solve the $\ell_{p,q}$ regularization problems, either by analytically solving some specific $\ell_{p,q}$ regularization subproblems, or by using the Newton method to solve general $\ell_{p,q}$ regularization subproblems. In particular, we establish the linear convergence rate of the proximal gradient method for solving the $\ell_{1,q}$ regularization problem under some mild conditions. As a consequence, the linear convergence rate of the proximal gradient method for solving the usual $\ell_q$ regularization problem ($0 < q < 1$) is obtained.
Finally, in the aspect of application, we present some numerical results on both the simulated data and the real data in gene transcriptional regulation.
Comment: 48 pages, 7 figures
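The proximal step is explicit in the group lasso instance ($p = 2$, $q = 1$); the following minimal numpy sketch of the proximal gradient method is our own illustration under that assumption, not the authors' code:

```python
import numpy as np

def prox_group_l21(z, groups, tau):
    """Group soft-thresholding: the proximal map of tau * sum_g ||z_g||_2."""
    x = np.zeros_like(z)
    for g in groups:                      # groups: list of index arrays
        ng = np.linalg.norm(z[g])
        if ng > tau:
            x[g] = (1.0 - tau / ng) * z[g]
    return x

def proximal_gradient(A, b, groups, lam, n_iter=500):
    """Minimize 0.5 * ||A x - b||_2^2 + lam * sum_g ||x_g||_2."""
    L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = prox_group_l21(x - grad / L, groups, lam / L)
    return x
```

For $q < 1$ the proximal subproblem loses this closed form, which is where the analytic solutions and the Newton solver mentioned in the abstract come in.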
Compressive sensing: a paradigm shift in signal processing
We survey a new paradigm in signal processing known as "compressive sensing".
Contrary to old practices of data acquisition and reconstruction based on the
Shannon-Nyquist sampling principle, the new theory shows that it is possible to
reconstruct images or signals of scientific interest accurately and even
exactly from a number of samples which is far smaller than the desired
resolution of the image/signal, e.g., the number of pixels in the image. This
new technique draws from results in several fields of mathematics, including
algebra, optimization, probability theory, and harmonic analysis. We will
discuss some of the key mathematical ideas behind compressive sensing, as well
as its implications for other fields: numerical analysis, information theory,
theoretical computer science, and engineering.
Comment: A short survey of compressive sensing
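As a minimal sketch of the reconstruction idea: equality-constrained $\ell_1$ minimization (basis pursuit) recast as a linear program; the sizes and names below are our own illustration:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, s = 40, 100, 5                       # far fewer measurements than unknowns
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true                             # the compressive measurements

# Basis pursuit: min ||x||_1 s.t. A x = y, via the split x = u - v with u, v >= 0.
res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```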
Computation of sparse low degree interpolating polynomials and their application to derivative-free optimization
Interpolation-based trust-region methods are an important class of algorithms
for Derivative-Free Optimization which rely on locally approximating an
objective function by quadratic polynomial interpolation models, frequently
built from fewer points than there are basis components. Often, in practical
applications, the contribution of the problem variables to the objective
function is such that many pairwise correlations between variables are
negligible, implying, in the smooth case, a sparse structure in the Hessian
matrix. To be able to exploit Hessian sparsity, existing optimization
approaches require the knowledge of the sparsity structure. The goal of this
paper is to develop and analyze a method where the sparse models are
constructed automatically. The sparse recovery theory developed recently in the
field of compressed sensing characterizes conditions under which a sparse
vector can be accurately recovered from few random measurements. Such a
recovery is achieved by minimizing the l1-norm of a vector subject to the
measurement constraints. We suggest an approach for building sparse quadratic
polynomial interpolation models by minimizing the l1-norm of the entries of the
model Hessian subject to the interpolation conditions. We show that this
procedure recovers accurate models when the function Hessian is sparse, using
relatively few randomly selected sample points. Motivated by this result, we
developed a practical interpolation-based trust-region method using
deterministic sample sets and minimum l1-norm quadratic models. Our
computational results show that the new approach exhibits a promising numerical
performance both in the general case and in the sparse one.
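A hedged sketch of this construction (using cvxpy for the $\ell_1$ program; the function and variable names are ours, not the authors'):

```python
import numpy as np
import cvxpy as cp

def sparse_quadratic_model(D, f_vals):
    """Build m(x0 + d) = c + g.d + 0.5 d'Hd from interpolation data by
    minimizing the l1-norm of the Hessian entries subject to the
    interpolation conditions; rows of D are displacements d = y - x0."""
    n = D.shape[1]
    c, g = cp.Variable(), cp.Variable(n)
    H = cp.Variable((n, n), symmetric=True)
    constraints = [
        c + g @ d + 0.5 * cp.sum(cp.multiply(H, np.outer(d, d))) == fv
        for d, fv in zip(D, f_vals)
    ]
    cp.Problem(cp.Minimize(cp.norm1(cp.vec(H))), constraints).solve()
    return c.value, g.value, H.value
```

With fewer interpolation points than model unknowns the constraints are underdetermined, and the $\ell_1$ objective selects a sparse Hessian, mirroring the compressed sensing recovery guarantee discussed above.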
On the gap between RIP-properties and sparse recovery conditions
We consider the problem of recovering sparse vectors from underdetermined
linear measurements via $\ell_p$-constrained basis pursuit. Previous analyses of this problem based on generalized restricted isometry properties have suggested that two phenomena occur if $p \neq 2$. First, one may need substantially more than $s \log(en/s)$ measurements (optimal for $p = 2$) for uniform recovery of all $s$-sparse vectors. Second, the matrix that achieves recovery with the optimal number of measurements may not be Gaussian (as for $p = 2$). We present a new, direct analysis which shows that in fact neither of these phenomena occurs. Via a suitable version of the null space property we show that a standard Gaussian matrix provides $\ell_q/\ell_1$-recovery guarantees for $\ell_p$-constrained basis pursuit in the optimal measurement regime. Our result extends to several heavier-tailed measurement matrices. As an application, we show that one can obtain a consistent reconstruction from uniform scalar quantized measurements in the optimal measurement regime.
Scalable Algorithms for Tractable Schatten Quasi-Norm Minimization
The Schatten-p quasi-norm is usually used to replace the standard
nuclear norm in order to approximate the rank function more accurately.
However, existing Schatten-p quasi-norm minimization algorithms involve
singular value decomposition (SVD) or eigenvalue decomposition (EVD) in each
iteration, and thus may become very slow and impractical for large-scale
problems. In this paper, we first define two tractable Schatten quasi-norms,
i.e., the Frobenius/nuclear hybrid and bi-nuclear quasi-norms, and then prove
that they are in essence the Schatten-2/3 and 1/2 quasi-norms, respectively,
which lead to the design of very efficient algorithms that only need to update
two much smaller factor matrices. We also design two efficient proximal
alternating linearized minimization algorithms for solving representative
matrix completion problems. Finally, we provide the global convergence and
performance guarantees for our algorithms, which have better convergence
properties than existing algorithms. Experimental results on synthetic and
real-world data show that our algorithms are more accurate than the
state-of-the-art methods, and are orders of magnitude faster.
Comment: 16 pages, 5 figures. Appears in Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI), Phoenix, Arizona, USA, pp. 2016--2022, 2016
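For reference, the Schatten-p quasi-norm is a function of the singular values, which is exactly why per-iteration SVD dominates the cost; a small numpy illustration of the definition (our own, not the paper's code):

```python
import numpy as np

def schatten_p(X, p):
    """Schatten-p quasi-norm (0 < p < 1): (sum_i sigma_i(X)^p)^(1/p).
    Computing it directly needs a full SVD of X, the per-iteration
    bottleneck that the factored reformulations above avoid."""
    sigma = np.linalg.svd(X, compute_uv=False)
    return float((sigma ** p).sum() ** (1.0 / p))

X = np.random.default_rng(0).standard_normal((300, 200))
print(schatten_p(X, 0.5))   # Schatten-1/2, the case matched by the bi-nuclear quasi-norm
```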
Nonlinear Residual Minimization by Iteratively Reweighted Least Squares
We address the numerical solution of minimal norm residuals of {\it
nonlinear} equations in finite dimensions. We take inspiration from the problem
of finding a sparse vector solution by using greedy algorithms based on
iterative residual minimizations in the $\ell_p$-norm, for $1 \le p < 2$.
Due to the mild smoothness of the problem, especially for $p$ close to $1$, we develop
and analyze a generalized version of Iteratively Reweighted Least Squares
(IRLS). This simple and efficient algorithm solves optimization problems involving non-quadratic, possibly non-convex and non-smooth cost functions by transforming them into a sequence of ordinary least squares problems, which can be tackled more efficiently. While its
analysis has been developed in many contexts when the model equation is {\it
linear}, no results are provided in the {\it nonlinear} case. We address the
convergence and the rate of error decay of IRLS for nonlinear problems. The
convergence analysis is based on its reformulation as an alternating
minimization of an energy functional, whose variables are the competitors to
solutions of the intermediate reweighted least squares problems. Under specific
conditions of coercivity and local convexity, we are able to show convergence
of IRLS to minimizers of the nonlinear residual problem. For the case where local convexity is lacking, we propose an appropriate convexification. To
illustrate the theoretical results we conclude the paper with several numerical
experiments. We compare IRLS with standard Matlab functions for an easily
presentable example and numerically validate our theoretical results in the
more complicated framework of phase retrieval problems. Finally we examine the
recovery capability of the algorithm in the context of data corrupted by
impulsive noise, where the sparsification of the residual is desired.
Comment: 37 pages. arXiv admin note: text overlap with arXiv:0807.0575 by other authors
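To make the reweighting scheme concrete, here is a minimal IRLS sketch for the classical linear residual problem min_x ||Ax - b||_p^p with 1 <= p < 2; the paper's subject is the nonlinear generalization of this iteration, and the fixed smoothing parameter below is a common simplification, not the authors' exact rule:

```python
import numpy as np

def irls_lp_residual(A, b, p=1.0, n_iter=50, eps=1e-8):
    """Minimize ||A x - b||_p^p by solving a sequence of weighted least
    squares problems with weights w_i = (r_i^2 + eps)^{(p-2)/2}."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]     # start from the l2 solution
    for _ in range(n_iter):
        r = A @ x - b
        w = (r ** 2 + eps) ** ((p - 2.0) / 2.0)  # eps guards the p < 2 singularity
        Aw = A * w[:, None]                      # row-weighted copy of A
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)  # normal equations A'WA x = A'Wb
    return x
```

For p close to 1 the weights suppress large residuals, producing the sparsification of the residual that makes the method attractive under impulsive noise.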
Sparse Recovery of Positive Signals with Minimal Expansion
We investigate the sparse recovery problem of reconstructing a
high-dimensional non-negative sparse vector from lower dimensional linear
measurements. While much work has focused on dense measurement matrices, sparse
measurement schemes are crucial in applications, such as DNA microarrays and
sensor networks, where dense measurements are not practically feasible. One
possible construction uses the adjacency matrices of expander graphs, which
often leads to recovery algorithms much more efficient than $\ell_1$
minimization. However, to date, constructions based on expanders have required
very high expansion coefficients which can potentially make the construction of
such graphs difficult and the size of the recoverable sets small.
In this paper, we construct sparse measurement matrices for the recovery of
non-negative vectors, using perturbations of the adjacency matrix of an
expander graph with much smaller expansion coefficient. We present a necessary
and sufficient condition for $\ell_1$ optimization to successfully recover the
unknown vector and obtain expressions for the recovery threshold. For certain
classes of measurement matrices, this necessary and sufficient condition is
further equivalent to the existence of a "unique" vector in the constraint set,
which opens the door to alternative algorithms to $\ell_1$ minimization. We
further show that the minimal expansion we use is necessary for any graph for
which sparse recovery is possible and that therefore our construction is tight.
We also present a novel recovery algorithm that exploits expansion and is much faster than $\ell_1$ optimization. Finally, we demonstrate through theoretical bounds, as well as simulation, that our method is robust to noise and approximate sparsity.
Comment: 25 pages, submitted for publication
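A minimal sketch of the convex route for non-negative signals, under our own assumptions (a dense non-negative random matrix stands in for the paper's perturbed expander adjacency matrices):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n, s = 50, 120, 8
A = np.abs(rng.standard_normal((m, n)))   # stand-in for a perturbed 0/1 expander matrix
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.random(s) + 0.1
y = A @ x_true

# With non-negativity, l1 minimization is a plain LP: min 1'x s.t. Ax = y, x >= 0.
# When {x >= 0 : Ax = y} contains a unique point -- the phenomenon discussed
# above -- any objective recovers it, which motivates faster alternatives.
res = linprog(np.ones(n), A_eq=A, b_eq=y, bounds=(0, None))
print("recovery error:", np.linalg.norm(res.x - x_true))
```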
Robustness to unknown error in sparse regularization
Quadratically-constrained basis pursuit has become a popular device in sparse
regularization; in particular, in the context of compressed sensing. However,
the majority of theoretical error estimates for this regularizer assume an a
priori bound on the noise level, which is usually lacking in practice. In this
paper, we develop stability and robustness estimates which remove this
assumption. First, we introduce an abstract framework and show that robust
instance optimality of any decoder in the noise-aware setting implies stability
and robustness in the noise-blind setting. This is based on certain sup-inf
constants, referred to as quotients, closely related to the quotient property
of compressed sensing. We then apply this theory to prove the robustness of
quadratically-constrained basis pursuit under unknown error in the cases of
random Gaussian matrices and of random matrices with heavy-tailed rows, such as
random sampling matrices from bounded orthonormal systems. We illustrate our
results in several cases of practical importance, including subsampled Fourier
measurements and recovery of sparse polynomial expansions.
Comment: To appear in IEEE Transactions on Information Theory
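For concreteness, the decoder in question is quadratically-constrained basis pursuit; a generic cvxpy sketch (our own naming), where the noise bound eta is precisely the a priori knowledge whose absence the paper addresses:

```python
import cvxpy as cp

def qcbp(A, y, eta):
    """Quadratically-constrained basis pursuit:
    min ||x||_1  subject to  ||A x - y||_2 <= eta."""
    x = cp.Variable(A.shape[1])
    cp.Problem(cp.Minimize(cp.norm1(x)),
               [cp.norm(A @ x - y, 2) <= eta]).solve()
    return x.value
```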
Rank Awareness in Joint Sparse Recovery
In this paper we revisit the sparse multiple measurement vector (MMV) problem
where the aim is to recover a set of jointly sparse multichannel vectors from
incomplete measurements. This problem has received increasing interest as an
extension of the single channel sparse recovery problem which lies at the heart
of the emerging field of compressed sensing. However, the sparse approximation
problem has origins which include links to the field of array signal processing
where we find the inspiration for a new family of MMV algorithms based on the
MUSIC algorithm. We highlight the role of the rank of the coefficient matrix X
in determining the difficulty of the recovery problem. We derive the necessary
and sufficient conditions for the uniqueness of the sparse MMV solution, which indicate that the larger the rank of X, the less sparse X needs to be to ensure uniqueness. We also show that the larger the rank of X, the less the
computational effort required to solve the MMV problem through a combinatorial
search. In the second part of the paper we consider practical suboptimal
algorithms for solving the sparse MMV problem. We examine the rank awareness of
popular algorithms such as SOMP and mixed norm minimization techniques and show
them to be rank blind in terms of worst case analysis. We then consider a
family of greedy algorithms that are rank aware. The simplest such algorithm is
a discrete version of MUSIC and is guaranteed to recover the sparse vectors in
the full rank MMV case under mild conditions. We extend this idea to develop a
rank aware pursuit algorithm that naturally reduces to Order Recursive Matching
Pursuit (ORMP) in the single measurement case and also provides guaranteed
recovery in the full rank multi-measurement case. Numerical simulations
demonstrate that the rank aware algorithms are significantly better than
existing algorithms in dealing with multiple measurements.
Comment: 23 pages, 2 figures
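As a hedged sketch of the discrete-MUSIC idea in the full rank case (a minimal version of our own, not the authors' exact pursuit algorithm):

```python
import numpy as np

def mmv_music_support(A, Y, k):
    """Rank-aware support identification for the full rank MMV problem Y = A X:
    estimate the k-dimensional signal subspace from Y, then rank each column
    of A by how much of it lies in that subspace."""
    U, _, _ = np.linalg.svd(Y, full_matrices=False)
    Us = U[:, :k]                                   # signal subspace basis
    scores = np.linalg.norm(Us.T @ A, axis=0) / np.linalg.norm(A, axis=0)
    return np.sort(np.argsort(scores)[-k:])         # top-k candidate support

# Given the support S, the coefficients follow by least squares:
# X_S = np.linalg.lstsq(A[:, S], Y, rcond=None)[0]
```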
Performance Analysis of Joint-Sparse Recovery from Multiple Measurements and Prior Information via Convex Optimization
We address the problem of compressed sensing with multiple measurement
vectors associated with prior information in order to better reconstruct an
original sparse matrix signal. Mixed-norm minimization is used to emphasize the co-sparsity property and the similarity between the matrix signal and the prior
information. We then derive the necessary and sufficient condition of
successfully reconstructing the original signal and establish the lower and
upper bounds of required measurements such that the condition holds from the
perspective of conic geometry. Our bounds further indicate what prior information is helpful to improve the performance of CS. Experimental results validate the effectiveness of all our findings.