Variational Bayesian algorithm for quantized compressed sensing
Compressed sensing (CS) concerns the recovery of high-dimensional signals from
their low-dimensional linear measurements under a sparsity prior, and digital
quantization of the measurement data is inevitable in any practical
implementation of CS algorithms. In the existing literature, the quantization
error is typically modeled as additive noise, and the multi-bit and 1-bit
quantized CS problems are treated separately, using different procedures. In
this paper, a novel CS algorithm based on variational Bayesian inference is
presented, which unifies multi-bit and 1-bit CS processing and is applicable to
noiseless and noisy environments as well as unsaturated and saturated
quantizers. By
decoupling the quantization error from the measurement noise, the quantization
error is modeled as a random variable and estimated jointly with the signal
being recovered. Such a novel characterization of the quantization error
results in superior performance of the algorithm, which is demonstrated by
extensive simulations in comparison with state-of-the-art methods for both
multi-bit and 1-bit CS problems.
Comment: Accepted by IEEE Trans. Signal Processing. 10 pages, 6 figures.
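
As a concrete illustration of the 1-bit regime discussed above, the following
sketch recovers a unit-norm sparse signal from sign-quantized Gaussian
measurements via binary iterative hard thresholding (BIHT), a standard 1-bit CS
baseline. It is not the variational Bayesian algorithm of the paper, and the
problem sizes and step size tau are arbitrary illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 256, 512, 8        # signal length, number of measurements, sparsity

    # Ground-truth k-sparse signal, normalized to unit norm
    # (sign measurements carry no amplitude information).
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    x_true /= np.linalg.norm(x_true)

    A = rng.standard_normal((m, n))
    y = np.sign(A @ x_true)      # 1-bit quantized measurements

    def biht(A, y, k, iters=200, tau=1.0):
        """Binary iterative hard thresholding: a standard 1-bit CS baseline."""
        m, n = A.shape
        x = np.zeros(n)
        for _ in range(iters):
            # Gradient step on the sign-consistency residual.
            x = x + (tau / m) * (A.T @ (y - np.sign(A @ x)))
            # Hard thresholding: keep the k largest-magnitude entries.
            x[np.argsort(np.abs(x))[:-k]] = 0.0
        return x / max(np.linalg.norm(x), 1e-12)

    x_hat = biht(A, y, k)
    print("cosine similarity with ground truth:", float(x_hat @ x_true))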
Optimization with Sparsity-Inducing Penalties
Sparse estimation methods are aimed at using or obtaining parsimonious
representations of data or models. They were first dedicated to linear variable
selection, but numerous extensions have now emerged, such as structured sparsity
or kernel selection. It turns out that many of the related estimation problems
can be cast as convex optimization problems by regularizing the empirical risk
with appropriate non-smooth norms. The goal of this paper is to present, from a
general perspective, optimization tools and techniques dedicated to such
sparsity-inducing penalties. We cover proximal methods, block-coordinate
descent, reweighted-$\ell_2$ penalized techniques, working-set and homotopy
methods, as well as non-convex formulations and extensions, and provide an
extensive set of experiments to compare various algorithms from a computational
point of view.
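
As a minimal sketch of the proximal methods surveyed here, the snippet below
implements ISTA (proximal gradient descent) for $\ell_1$-penalized least
squares; soft thresholding is the proximity operator of the $\ell_1$ norm. The
data and the regularization weight lam are illustrative choices, not from the
paper.

    import numpy as np

    def soft_threshold(v, t):
        """Proximity operator of t * ||.||_1 (elementwise soft thresholding)."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(A, b, lam, iters=500):
        """Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            grad = A.T @ (A @ x - b)         # gradient of the smooth part
            x = soft_threshold(x - grad / L, lam / L)
        return x

    rng = np.random.default_rng(1)
    A = rng.standard_normal((100, 300))
    x_true = np.zeros(300)
    x_true[:5] = 3.0
    b = A @ x_true + 0.01 * rng.standard_normal(100)
    x_hat = ista(A, b, lam=1.0)
    print("nonzero entries in the estimate:", int(np.sum(np.abs(x_hat) > 1e-8)))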
Conic Optimization Theory: Convexification Techniques and Numerical Algorithms
Optimization is at the core of control theory and appears in several areas of
this field, such as optimal control, distributed control, system
identification, robust control, state estimation, model predictive control and
dynamic programming. The recent advances in various topics of modern
optimization have also been revamping the area of machine learning. Motivated
by the crucial role of optimization theory in the design, analysis, control and
operation of real-world systems, this tutorial paper offers a detailed overview
of some major advances in this area, namely conic optimization and its emerging
applications. First, we discuss the importance of conic optimization in
different areas. Then, we explain seminal results on the design of hierarchies
of convex relaxations for a wide range of nonconvex problems. Finally, we study
different numerical algorithms for large-scale conic optimization problems.
Comment: 18 pages.
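
To make the convexification idea concrete, the sketch below solves the
classical SDP relaxation of the nonconvex max-cut problem, replacing the
rank-one matrix x x^T by a unit-diagonal positive semidefinite matrix; this is
the first level of the relaxation hierarchies discussed in the tutorial. The
choice of the cvxpy modeling library and the small example graph are ours, for
illustration only.

    import numpy as np
    import cvxpy as cp

    # Max-cut: maximize sum_{i<j} w_ij * (1 - x_i * x_j) / 2 over x in {-1, 1}^n.
    # Relaxation: replace the rank-one matrix x x^T by a PSD matrix X with unit
    # diagonal, which yields a tractable semidefinite program.
    W = np.array([[0., 1., 2., 0.],
                  [1., 0., 1., 1.],
                  [2., 1., 0., 3.],
                  [0., 1., 3., 0.]])
    n = W.shape[0]

    X = cp.Variable((n, n), PSD=True)          # stands in for x x^T
    objective = cp.Maximize(cp.sum(cp.multiply(W, 1 - X)) / 4)
    constraints = [cp.diag(X) == 1]            # x_i^2 = 1 becomes X_ii = 1
    problem = cp.Problem(objective, constraints)
    problem.solve()                            # any SDP-capable solver works

    print("SDP upper bound on the max-cut value:", problem.value)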
A Generalized Newton Method for Subgradient Systems
This paper proposes and develops a new Newton-type algorithm to solve
subdifferential inclusions defined by subgradients of extended-real-valued
prox-regular functions. The proposed algorithm is formulated in terms of the
second-order subdifferential of such functions that enjoys extensive calculus
rules and can be efficiently computed for broad classes of extended-real-valued
functions. Based on this and on metric regularity and subregularity properties
of subgradient mappings, we establish verifiable conditions ensuring
well-posedness of the proposed algorithm and its local superlinear convergence.
The obtained results are also new for the class of equations defined by
continuously differentiable functions with Lipschitzian derivatives
($\mathcal{C}^{1,1}$ functions), which is the underlying case of our
consideration. The developed algorithm for prox-regular functions is
formulated in terms of proximal mappings and Moreau envelopes.
Besides numerous illustrative examples and comparison with known algorithms for
$\mathcal{C}^{1,1}$ functions and generalized equations, the paper presents
applications of the proposed algorithm to the practically important class of
Lasso problems arising in statistics and machine learning.
Comment: 35 pages.
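
The Lasso application can be illustrated with a common Newton-type
construction: apply Newton steps to the proximal-gradient fixed-point residual
of the Lasso, using an element of the generalized (Clarke) Jacobian of the
soft-thresholding operator. This sketch is not the paper's
second-order-subdifferential algorithm; the problem data and tolerances are
illustrative assumptions.

    import numpy as np

    def soft(v, t):
        """Proximity operator of t * ||.||_1."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def lasso_newton(A, b, lam, iters=30):
        """Newton-type iterations on the fixed-point residual
        F(x) = x - prox_{step*lam*||.||_1}(x - step * grad f(x)), whose zeros
        are exactly the solutions of min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
        n = A.shape[1]
        AtA, Atb = A.T @ A, A.T @ b
        step = 1.0 / np.linalg.norm(A, 2) ** 2
        x = np.zeros(n)
        # Warm start with a few proximal-gradient steps: the Newton-type
        # iteration below is only guaranteed to converge locally.
        for _ in range(20):
            x = soft(x - step * (AtA @ x - Atb), step * lam)
        for _ in range(iters):
            u = x - step * (AtA @ x - Atb)       # forward (gradient) point
            F = x - soft(u, step * lam)          # fixed-point residual
            if np.linalg.norm(F) < 1e-10:
                break
            # One element of the generalized Jacobian of F: the soft-threshold
            # derivative is 1 where |u_i| > step*lam and 0 elsewhere.
            d = (np.abs(u) > step * lam).astype(float)
            J = np.eye(n) - d[:, None] * (np.eye(n) - step * AtA)
            x = x - np.linalg.solve(J, F)        # Newton step
        return x

    rng = np.random.default_rng(2)
    A = rng.standard_normal((80, 40))
    x_true = np.zeros(40)
    x_true[:3] = 2.0
    b = A @ x_true + 0.01 * rng.standard_normal(80)
    x_hat = lasso_newton(A, b, lam=0.5)
    print("fitting residual norm:", float(np.linalg.norm(A @ x_hat - b)))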
A Proximal Approach for a Class of Matrix Optimization Problems
In recent years, there has been a growing interest in mathematical models
leading to the minimization, in a symmetric matrix space, of a Bregman
divergence coupled with a regularization term. We address problems of this type
within a general framework where the regularization term is split into two
parts, one being a spectral function while the other is arbitrary. A
Douglas-Rachford approach is proposed to address such problems, and a list of
proximity operators is provided, allowing us to consider various choices for
the fit-to-data
functional and for the regularization term. Numerical experiments show the
validity of this approach for solving convex optimization problems encountered
in the context of sparse covariance matrix estimation. Based on our theoretical
results, an algorithm is also proposed for noisy graphical lasso where a
precision matrix has to be estimated in the presence of noise. The nonconvexity
of the resulting objective function is handled with a majorization-minimization
approach, i.e., by building a sequence of convex surrogates and solving the
inner optimization subproblems via the aforementioned Douglas-Rachford
procedure. We establish conditions for the convergence of this iterative scheme
and we illustrate its good numerical performance with respect to
state-of-the-art approaches.
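
The splitting described in this abstract can be illustrated on a simple
instance: minimize a quadratic fit-to-data term plus an elementwise $\ell_1$
penalty over the positive semidefinite cone via Douglas-Rachford iterations.
The PSD indicator plays the role of the spectral part (its proximity operator
clips negative eigenvalues), while the remaining terms admit a closed-form
elementwise proximity operator. The instance and parameters below are our
illustrative assumptions, not the paper's.

    import numpy as np

    def prox_psd(V):
        """Proximity operator of the indicator of the PSD cone: a spectral
        function, evaluated by clipping negative eigenvalues."""
        w, U = np.linalg.eigh((V + V.T) / 2)
        return (U * np.maximum(w, 0.0)) @ U.T

    def prox_fit_l1(V, S, lam, gamma):
        """Proximity operator of gamma * (0.5*||X - S||_F^2 + lam*||X||_1),
        available in closed form (an elementwise soft thresholding)."""
        W = (V + gamma * S) / (1.0 + gamma)
        t = gamma * lam / (1.0 + gamma)
        return np.sign(W) * np.maximum(np.abs(W) - t, 0.0)

    def douglas_rachford(S, lam, gamma=1.0, iters=500):
        """DR splitting for min_X i_PSD(X) + 0.5*||X - S||_F^2 + lam*||X||_1."""
        Z = np.zeros_like(S)
        for _ in range(iters):
            X = prox_psd(Z)
            Y = prox_fit_l1(2 * X - Z, S, lam, gamma)
            Z = Z + Y - X
        return prox_psd(Z)

    rng = np.random.default_rng(3)
    B = rng.standard_normal((6, 6))
    S = (B + B.T) / 2                 # noisy symmetric "data" matrix
    X_hat = douglas_rachford(S, lam=0.3)
    print("smallest eigenvalue:", float(np.linalg.eigvalsh(X_hat).min()))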