An iterative algorithm for sparse and constrained recovery with applications to divergence-free current reconstructions in magneto-encephalography
We propose an iterative algorithm for the minimization of an $\ell^1$-norm
penalized least squares functional under additional linear constraints. The
algorithm is fully explicit: it uses only matrix multiplications with the three
matrices present in the problem (in the linear constraint, in the data misfit
part, and in the penalty term of the functional). None of the three matrices
needs to be invertible. Convergence is proven in a finite-dimensional setting. We apply the
algorithm to a synthetic problem in magneto-encephalography where it is used
for the reconstruction of divergence-free current densities subject to a
sparsity promoting penalty on the wavelet coefficients of the current
densities. We discuss the effects of imposing zero divergence and of imposing
joint sparsity (of the vector components of the current density) on the current
density reconstruction.
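The explicit scheme described here builds on iterative soft-thresholding. As orientation, here is a minimal NumPy sketch of the unconstrained core, plain ISTA for $\min_x \tfrac12\|Kx-y\|^2 + \lambda\|x\|_1$; the names K, y, and lam are ours, and the paper's full algorithm additionally enforces the linear constraint and penalizes wavelet coefficients, which this sketch omits.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding, the proximal map of t*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(K, y, lam, n_iter=200):
    """Plain ISTA for min_x 0.5*||K x - y||^2 + lam*||x||_1.

    Uses only matrix-vector products with K and K.T; K need not be
    invertible. Step size 1/L with L an upper bound on ||K||_2^2.
    """
    L = np.linalg.norm(K, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(K.shape[1])
    for _ in range(n_iter):
        grad = K.T @ (K @ x - y)         # gradient of the data misfit
        x = soft_threshold(x - grad / L, lam / L)
    return x
```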
Linear inverse problems with noise: primal and primal-dual splitting
In this paper, we propose two algorithms for solving linear inverse problems
when the observations are corrupted by noise. A data fidelity term (the
negative log-likelihood) is introduced to reflect the statistics of the noise
(e.g. Gaussian or Poisson). As a prior, the images to restore are assumed to be
positive and sparsely represented in a dictionary of waveforms.
Piecing together the data fidelity and the prior terms, the solution to the
inverse problem is cast as the minimization of a non-smooth convex functional.
We establish the well-posedness of the optimization problem, characterize the
corresponding minimizers, and solve it by means of primal and primal-dual
proximal splitting algorithms originating from the field of non-smooth convex
optimization theory. Experimental results on deconvolution, inpainting, and
denoising, with comparisons to prior methods, are also reported.
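A common workhorse for such non-smooth convex problems is the Chambolle-Pock primal-dual iteration. The sketch below shows it for the Gaussian-noise instance with a positivity constraint and analysis sparsity; this is a generic illustration under our own naming (H, Phi, lam), not necessarily the paper's exact scheme.

```python
import numpy as np

def primal_dual(H, Phi, y, lam, n_iter=300):
    """Chambolle-Pock-style primal-dual splitting (a generic sketch under
    a Gaussian-noise assumption, not necessarily the paper's scheme) for

        min_{x >= 0} 0.5*||H x - y||^2 + lam*||Phi.T x||_1.

    H is the forward operator, Phi a synthesis dictionary; both enter
    only through matrix-vector products.
    """
    m, n = H.shape
    L2 = np.linalg.norm(H, 2) ** 2 + np.linalg.norm(Phi, 2) ** 2
    tau = sigma = 0.9 / np.sqrt(L2)      # step sizes: tau*sigma*||K||^2 < 1
    x = np.zeros(n); x_bar = x.copy()
    u = np.zeros(m)                      # dual variable for the data term
    v = np.zeros(Phi.shape[1])           # dual variable for the sparsity term
    for _ in range(n_iter):
        # dual proximal steps
        u = (u + sigma * (H @ x_bar - y)) / (1.0 + sigma)
        v = np.clip(v + sigma * (Phi.T @ x_bar), -lam, lam)
        # primal step: gradient update + projection onto x >= 0
        x_new = np.maximum(x - tau * (H.T @ u + Phi @ v), 0.0)
        x_bar = 2.0 * x_new - x          # over-relaxation
        x = x_new
    return x
```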
Flexible Multi-layer Sparse Approximations of Matrices and Applications
The computational cost of many signal processing and machine learning
techniques is often dominated by the cost of applying certain linear operators
to high-dimensional vectors. This paper introduces an algorithm aimed at
reducing the complexity of applying linear operators in high dimension by
approximately factorizing the corresponding matrix into a few sparse factors.
The approach relies on recent advances in non-convex optimization. It is first
explained and analyzed in detail, and then demonstrated experimentally on
various problems, including dictionary learning for image denoising and the
approximation of large matrices arising in inverse problems.
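In the spirit of this approach, a toy two-factor version can be written as alternating projected (hard-thresholded) gradient steps, PALM-style. Everything below (the function names, the choice of two factors, the per-factor sparsity budgets k1 and k2) is our illustrative assumption rather than the paper's algorithm.

```python
import numpy as np

def hard_threshold(M, k):
    """Keep the k largest-magnitude entries of M, zero out the rest."""
    out = np.zeros_like(M)
    if k > 0:
        idx = np.unravel_index(np.argsort(np.abs(M), axis=None)[-k:], M.shape)
        out[idx] = M[idx]
    return out

def two_factor_sparse_approx(A, r, k1, k2, n_iter=500, seed=0):
    """Toy PALM-style alternating scheme approximating A by S1 @ S2,
    with S1 (m x r) and S2 (r x n) constrained to k1 / k2 nonzeros.
    A sketch of the general idea only, not the paper's algorithm.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    S1 = hard_threshold(rng.standard_normal((m, r)), k1)
    S2 = hard_threshold(rng.standard_normal((r, n)), k2)
    for _ in range(n_iter):
        # projected gradient step on S1 (S2 fixed), then on S2 (S1 fixed)
        R = S1 @ S2 - A
        t1 = 1.0 / (np.linalg.norm(S2, 2) ** 2 + 1e-12)
        S1 = hard_threshold(S1 - t1 * (R @ S2.T), k1)
        R = S1 @ S2 - A
        t2 = 1.0 / (np.linalg.norm(S1, 2) ** 2 + 1e-12)
        S2 = hard_threshold(S2 - t2 * (S1.T @ R), k2)
    return S1, S2
```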
Beyond convergence rates: Exact recovery with Tikhonov regularization with sparsity constraints
The Tikhonov regularization of linear ill-posed problems with an $\ell^1$
penalty is considered. We recall results for linear convergence rates and
results on exact recovery of the support. Moreover, we derive conditions for
exact support recovery that are applicable in particular to ill-posed
problems, where other conditions, e.g. those based on the so-called coherence
or the restricted isometry property, are usually not satisfied. The obtained
results also show that the regularized solutions converge not only in the
$\ell^1$-norm but also in the vector space $\ell^0$ (when considered as the
strict inductive limit of the spaces $\mathbb{R}^n$ as $n$ tends to infinity).
Additionally, the relations between different conditions for exact support
recovery and linear convergence rates are investigated.
The applicability of the obtained results is illustrated with an imaging
example from digital holography: one may check a priori whether the
experimental setup guarantees exact recovery with Tikhonov regularization with
sparsity constraints.
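For reference, the functional in question is the $\ell^1$-penalized Tikhonov (lasso-type) functional, and one classical sufficient condition for exact support recovery in the spirit of those discussed above is the Fuchs/Tropp exact recovery condition; the formulation below is the standard one, not necessarily the paper's own statement.

```latex
% l^1-penalized Tikhonov functional for A x = y with noisy data y^\delta
T_\alpha(x) = \tfrac{1}{2}\,\lVert A x - y^\delta \rVert_2^2
              + \alpha\,\lVert x \rVert_1 .

% Classical sufficient condition (Fuchs/Tropp-type) for recovery of the
% support I, with A_I the columns of A indexed by I and A_I^\dagger its
% pseudo-inverse -- a standard formulation, not the paper's own statement:
\max_{j \notin I}\, \lVert A_I^\dagger a_j \rVert_1 < 1 .
```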