Iterative Log Thresholding
Sparse reconstruction approaches using the re-weighted l1-penalty have been
shown, both empirically and theoretically, to provide a significant improvement
in recovering sparse signals in comparison to the l1-relaxation. However,
numerical optimization of such penalties involves solving problems with
l1-norms in the objective many times. Using the direct link between reweighted
l1-penalties and the concave log-regularizer for sparsity, we derive a simple
prox-like algorithm for the log-regularized formulation. The proximal splitting
step of the algorithm has a closed-form solution, and we call the algorithm
'log-thresholding' in analogy to soft thresholding for the l1-penalty.
We establish convergence results, and demonstrate that log-thresholding
provides more accurate sparse reconstructions compared to both soft and hard
thresholding. Furthermore, the approach can be directly extended to
optimization over matrices with penalty for rank (i.e. the nuclear norm penalty
and its re-weighted version), where we suggest a singular-value
log-thresholding approach.
Comment: 5 pages, 4 figures
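To make the operator concrete: per coordinate, the step solves min_x 0.5(x - y)^2 + lam*log(|x| + eps), whose stationarity condition is a quadratic in |x|, so the minimizer is available in closed form. Below is a minimal Python sketch of such a log-thresholding step and the resulting proximal-gradient loop; the function names, step-size choice, and exact thresholding rule are our illustration under the abstract's description, not code from the paper.

```python
import numpy as np

def log_threshold(y, lam, eps):
    """Closed-form proximal step for the concave log penalty.

    Per coordinate, solves min_x 0.5*(x - y)**2 + lam*log(|x| + eps)
    by comparing the objective at x = 0 with the larger stationary
    point of the quadratic x**2 + (eps - |y|)*x + (lam - eps*|y|) = 0.
    A sketch; the paper's exact thresholding rule may differ.
    """
    y = np.asarray(y, dtype=float)
    a = np.abs(y)
    disc = (a + eps) ** 2 - 4.0 * lam
    root = np.where(disc >= 0, 0.5 * ((a - eps) + np.sqrt(np.maximum(disc, 0.0))), 0.0)
    # keep the stationary point only where it beats x = 0
    f_root = 0.5 * (root - a) ** 2 + lam * np.log(root + eps)
    f_zero = 0.5 * a ** 2 + lam * np.log(eps)
    keep = (disc >= 0) & (f_root < f_zero)
    return np.where(keep, np.sign(y) * root, 0.0)

def ilt(A, b, lam, eps, n_iter=200):
    """Iterative log-thresholding for 0.5*||Ax - b||^2 + lam*sum(log(|x|+eps)).

    A proximal-gradient loop with step 1/L, where L bounds ||A||_2^2.
    """
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = log_threshold(x - A.T @ (A @ x - b) / L, lam / L, eps)
    return x
```

Note how the operator behaves like a hybrid of soft and hard thresholding: small inputs are set exactly to zero, while large inputs are shrunk by an amount that vanishes as the input grows.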
Sequential Compressed Sensing
Compressed sensing allows perfect recovery of sparse signals (or signals
sparse in some basis) using only a small number of random measurements.
Existing results in compressed sensing literature have focused on
characterizing the achievable performance by bounding the number of samples
required for a given level of signal sparsity. However, using these bounds to
minimize the number of samples requires a priori knowledge of the sparsity of
the unknown signal, or of the decay structure for near-sparse signals.
Furthermore, there are some popular recovery methods for which no such bounds
are known.
In this paper, we investigate an alternative scenario where observations are
available in sequence. For any recovery method, this means that there is now a
sequence of candidate reconstructions. We propose a method to estimate the
reconstruction error directly from the samples themselves, for every candidate
in this sequence. This estimate is universal in the sense that it is based only
on the measurement ensemble, and not on the recovery method or any assumed
level of sparsity of the unknown signal. With these estimates, one can now stop
observations as soon as there is reasonable certainty of either exact or
sufficiently accurate reconstruction. They also provide a way to obtain
"run-time" guarantees for recovery methods that otherwise lack a priori
performance bounds.
We investigate both continuous (e.g. Gaussian) and discrete (e.g. Bernoulli)
random measurement ensembles, both for exactly sparse and general near-sparse
signals, and with both noisy and noiseless measurements.
Comment: to appear in IEEE Transactions on Special Topics in Signal Processing
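A minimal sketch of the idea for the Gaussian, noiseless case: if the held-out rows have i.i.d. N(0,1) entries, the mean squared residual on those rows is an unbiased estimate of the squared reconstruction error, whatever recovery method produced the candidate. The solver below (OMP, with the sparsity assumed known only for this stand-in) and all names are illustrative, not the paper's code.

```python
import numpy as np

def error_estimate(A_new, y_new, x_hat):
    """Mean squared residual on held-out measurements.

    For rows a ~ N(0, I) and noiseless y = a @ x, we have
    E[(a @ x_hat - y)^2] = ||x_hat - x||^2, so this estimates the
    squared reconstruction error without knowing the recovery method
    or the sparsity level of the unknown signal.
    """
    r = A_new @ x_hat - y_new
    return float(np.mean(r ** 2))

def omp(A, y, k):
    """Orthogonal matching pursuit; a stand-in for any recovery method.

    The sparsity k is assumed known here only for this stand-in solver;
    the error estimate itself does not need it.
    """
    idx, r = [], y.copy()
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        r = y - A[:, idx] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[idx] = coef
    return x_hat

# Sequential loop: each new batch first checks the current candidate
# reconstruction, then is folded into the measurement set.
rng = np.random.default_rng(0)
n, k = 200, 5
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = np.empty((0, n)); y = np.empty(0)
for t in range(60):
    A_new = rng.standard_normal((5, n))
    y_new = A_new @ x                       # noiseless Gaussian ensemble
    if A.shape[0] >= k:
        x_hat = omp(A, y, k)
        if error_estimate(A_new, y_new, x_hat) < 1e-12:
            break                           # confident in exact recovery
    A = np.vstack([A, A_new]); y = np.concatenate([y, y_new])
```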
Lagrangian Relaxation for MAP Estimation in Graphical Models
We develop a general framework for MAP estimation in discrete and Gaussian
graphical models using Lagrangian relaxation techniques. The key idea is to
reformulate an intractable estimation problem as one defined on a more
tractable graph, but subject to additional constraints. Relaxing these
constraints gives a tractable dual problem, one defined by a thin graph, which
is then optimized by an iterative procedure. When this iterative optimization
leads to a consistent estimate, one which also satisfies the constraints, then
it corresponds to an optimal MAP estimate of the original model. Otherwise
there is a "duality gap", and we obtain a bound on the optimal solution.
Thus, our approach combines convex optimization with dynamic programming
techniques applicable to thin graphs. The popular tree-reweighted max-product
(TRMP) method may be seen as solving a particular class of such relaxations,
where the intractable graph is relaxed to a set of spanning trees. We also
consider relaxations to a set of small induced subgraphs, thin subgraphs (e.g.
loops), and a connected tree obtained by "unwinding" cycles. In addition, we
propose a new class of multiscale relaxations that introduce "summary"
variables. The potential benefits of such generalizations include: reducing or
eliminating the "duality gap" in hard problems, reducing the number of
Lagrange multipliers in the dual problem, and accelerating convergence of the
iterative optimization procedure.
Comment: 10 pages, presented at the 45th Allerton Conference on Communication, Control and Computing; to appear in the proceedings
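As an illustration of the "unwinding" relaxation on a toy model: duplicate one node of a 3-cycle to obtain a chain, relax the equality between the two copies with a Lagrange multiplier, and minimize the resulting dual bound by subgradient steps. The potentials and step sizes below are our own illustrative choices (picked so that the relaxation happens to be tight), and brute force stands in for the dynamic programming that thin graphs admit.

```python
import itertools
import numpy as np

# Toy binary MRF on the 3-cycle 0-1-2-0 (MAP = maximize total score).
unary = {0: np.array([0.0, 1.2]), 1: np.array([0.0, -0.7]), 2: np.array([0.0, -0.7])}
attr = np.array([[0.5, -0.5], [-0.5, 0.5]])   # attractive coupling on every edge

def chain_map(lam):
    """MAP on the chain obtained by 'unwinding' the cycle at node 0.

    Node 0 is duplicated into x0 and x0c, breaking the cycle into the
    chain 0-1-2-0c; the equality x0 == x0c is relaxed with multiplier
    lam. Brute force replaces dynamic programming on this tiny model.
    """
    best, arg = -np.inf, None
    for x0, x1, x2, x0c in itertools.product([0, 1], repeat=4):
        s = (unary[0][x0] + unary[1][x1] + unary[2][x2]
             + attr[x0, x1] + attr[x1, x2] + attr[x2, x0c]
             + lam * (x0 - x0c))
        if s > best:
            best, arg = s, (x0, x1, x2, x0c)
    return best, arg

# Subgradient descent on the dual: every chain_map value upper-bounds the
# true MAP value, and a consistent maximizer certifies optimality.
lam, bound = 0.0, np.inf
for t in range(100):
    dual, (x0, x1, x2, x0c) = chain_map(lam)
    bound = min(bound, dual)
    if x0 == x0c:
        print("optimal MAP:", (x0, x1, x2), "value:", dual)
        break
    lam -= 0.5 / (t + 1) * (x0 - x0c)   # subgradient step toward agreement
else:
    print("duality gap; best upper bound:", bound)
```

With a frustrated cycle (e.g. one repulsive edge) the same loop can terminate with a duality gap, in which case the best dual value is still a valid bound on the MAP value, as the abstract describes.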
Convex Total Least Squares
We study the total least squares (TLS) problem, which generalizes least
squares regression by allowing measurement errors in both dependent and
independent variables. TLS is widely used in applied fields including
computer vision, system identification and econometrics. The special case
when all dependent and independent variables have the same level of
uncorrelated Gaussian noise, known as ordinary TLS, can be solved by singular
value decomposition (SVD). However, SVD cannot solve many important practical
TLS problems with realistic noise structure, such as varying measurement
noise, known structure on the errors, or large outliers requiring robust
error-norms. To solve such problems, we develop convex relaxation approaches
for a general class of structured TLS (STLS). We show, both theoretically and
experimentally, that while the plain nuclear norm relaxation incurs large
approximation errors for STLS, the re-weighted nuclear norm approach is very
effective, and achieves better accuracy on challenging STLS problems than
popular non-convex solvers. We describe a fast solution based on an augmented
Lagrangian formulation, and apply our approach to an important class of
biological problems that use population average measurements to infer
cell-type- and physiological-state-specific expression levels that are very
hard to measure directly.
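A minimal sketch of a plain nuclear-norm relaxation in this spirit, using cvxpy: the TLS requirement that the corrected matrix [A + dA, b + db] be rank-deficient is relaxed to a nuclear-norm penalty, and x is then read off the (approximate) null space. The penalty weight and recovery step are illustrative, and this is the unweighted relaxation that the abstract says can be loose; the paper's preferred re-weighted variant would iterate such a program with data-dependent weight matrices.

```python
import cvxpy as cp
import numpy as np

# Synthetic TLS instance: errors in both A and b.
rng = np.random.default_rng(0)
m, n = 30, 4
A0 = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
A = A0 + 0.05 * rng.standard_normal((m, n))        # noisy regressors
b = A0 @ x_true + 0.05 * rng.standard_normal(m)    # noisy responses

E = cp.Variable((m, n + 1))                        # corrections [dA, db]
Z = np.column_stack([A, b]) + E
# rank(Z) <= n is relaxed to a nuclear-norm penalty on Z
prob = cp.Problem(cp.Minimize(cp.sum_squares(E) + 2.0 * cp.normNuc(Z)))
prob.solve()

# Recover x from the (approximate) null space of the corrected matrix:
# [A+dA, b+db] @ [x; -1] ~= 0, so the smallest right singular vector v
# gives x = -v[:n] / v[n].
_, _, Vt = np.linalg.svd(np.column_stack([A, b]) + E.value)
v = Vt[-1]
x_hat = -v[:n] / v[n]
```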
A Statistical Interpretation of the Maximum Subarray Problem
Maximum subarray is a classical problem in computer science that, given an
array of numbers, asks for a contiguous subarray with the largest sum. We
focus on its use for a noisy statistical problem of localizing an interval with
a mean different from background. While a naive application of maximum subarray
fails at this task, both a penalized and a constrained version can succeed. We
show that the penalized version can be derived for common exponential family
distributions, in a manner similar to the change-point detection literature,
and we interpret the resulting optimal penalty value. The failure of the naive
formulation is then explained by an analysis of the estimated interval
boundaries. Experiments further quantify the effect of deviating from the
optimal penalty. We also relate the penalized and constrained formulations and
show that the solutions to the former lie on the convex hull of the solutions
to the latter.
Comment: 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing. 5 pages, 7 figures
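One natural reading of the penalized formulation, sketched below: subtract a per-element penalty lam from every entry and run Kadane's classical O(n) maximum-subarray scan on the shifted values, so the optimizer trades interval length against elevated mean. The per-element form of the penalty and all names are our illustration, not necessarily the paper's exact formulation.

```python
import numpy as np

def penalized_max_subarray(x, lam):
    """Kadane's algorithm on the shifted scores x[i] - lam.

    Maximizes sum_{i=s..e} (x[i] - lam) over contiguous intervals,
    i.e. a maximum-subarray problem with a per-element penalty lam.
    Returns (score, (s, e)); an empty interval is encoded as (0.0, None).
    """
    best, best_span = 0.0, None
    cur, start = 0.0, 0
    for i, v in enumerate(x):
        cur += v - lam
        if cur <= 0.0:
            cur, start = 0.0, i + 1      # reset: prefix can't help
        elif cur > best:
            best, best_span = cur, (start, i)
    return best, best_span

# Example: roughly localize an interval whose mean exceeds the background.
rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, 200)
data[60:90] += 1.0                       # elevated-mean interval
print(penalized_max_subarray(data, lam=0.5))
```

The naive formulation corresponds to lam = 0, where nothing counterbalances interval length, which is consistent with the abstract's observation that it fails at the localization task.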
A Multichannel Spatial Compressed Sensing Approach for Direction of Arrival Estimation
The final publication is available at http://link.springer.com/chapter/10.1007%2F978-3-642-15995-4_57. Funding: EPSRC Leadership Fellowship EP/G007144/1; EPSRC Platform Grant EP/045235/1; EU FET-Open Project FP7-ICT-225913 "SMALL".