Image Reconstruction in Optical Interferometry
This tutorial paper describes the problem of image reconstruction from
interferometric data with a particular focus on the specific problems
encountered at optical (visible/IR) wavelengths. The challenging issues in
image reconstruction from interferometric data are introduced in the general
framework of inverse problem approach. This framework is then used to describe
existing image reconstruction algorithms in radio interferometry and the new
methods specifically developed for optical interferometry.
Comment: accepted for publication in IEEE Signal Processing Magazine.
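The inverse-problem framework mentioned above amounts to minimizing a data-fidelity term plus a regularization term. As a minimal, hypothetical sketch (not one of the algorithms surveyed in the paper), a Tikhonov-regularized least-squares reconstruction solved by gradient descent:

```python
import numpy as np

def tikhonov_reconstruct(A, y, lam=1e-3, n_iter=500):
    """Gradient descent on 0.5*||A x - y||^2 + 0.5*lam*||x||^2."""
    L = np.linalg.norm(A, 2) ** 2 + lam   # Lipschitz constant of the gradient
    step = 1.0 / L                        # safe fixed step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y) + lam * x
        x -= step * grad
    return x

# toy forward model standing in for the interferometric measurement operator
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = rng.standard_normal(10)
y = A @ x_true                            # noiseless measurements
x_hat = tikhonov_reconstruct(A, y)
```

Real interferometric imaging replaces the quadratic regularizer with edge-preserving or entropy priors, but the fidelity-plus-regularization structure is the same.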
Maximum Entropy Vector Kernels for MIMO system identification
Recent contributions have framed linear system identification as a
nonparametric regularized inverse problem. Relying on ℓ2-type
regularization which accounts for the stability and smoothness of the impulse
response to be estimated, these approaches have been shown to be competitive
w.r.t. classical parametric methods. In this paper, adopting Maximum Entropy
arguments, we derive a new penalty induced by a vector-valued
kernel; to do so we exploit the structure of the Hankel matrix, thus
controlling at the same time complexity, measured by the McMillan degree,
stability and smoothness of the identified models. As a special case we recover
the nuclear norm penalty on the squared block Hankel matrix. In contrast with
previous literature on reweighted nuclear norm penalties, our kernel is
described by a small number of hyper-parameters, which are iteratively updated
through marginal likelihood maximization; constraining the structure of the
kernel acts as a (hyper)regularizer which helps controlling the effective
degrees of freedom of our estimator. To optimize the marginal likelihood we
adapt a Scaled Gradient Projection (SGP) algorithm which is proved to be
significantly computationally cheaper than other first and second order
off-the-shelf optimization methods. The paper also contains an extensive
comparison with many state-of-the-art methods on several Monte-Carlo studies,
which confirms the effectiveness of our procedure.
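The role of the block Hankel matrix above can be made concrete: for a linear system, the rank of the block Hankel matrix built from the impulse response equals the McMillan degree, which is why a nuclear-norm (sum of singular values) penalty on it controls model complexity. A small illustrative sketch with a first-order SISO example (not the paper's estimator):

```python
import numpy as np

def block_hankel(h, rows):
    """Stack impulse-response samples h[0], h[1], ... (each p x m)
    into a block Hankel matrix with the given number of block rows."""
    cols = len(h) - rows + 1
    return np.block([[h[i + j] for j in range(cols)] for i in range(rows)])

def nuclear_norm(M):
    """Sum of singular values."""
    return np.linalg.svd(M, compute_uv=False).sum()

# impulse response of a first-order SISO system h[k] = a^k (McMillan degree 1)
a = 0.5
h = [np.array([[a ** k]]) for k in range(8)]
H = block_hankel(h, rows=4)              # 4 x 5 Hankel matrix
print(np.linalg.matrix_rank(H))          # 1: rank matches the McMillan degree
```

Penalizing nuclear_norm(H) therefore biases the identified impulse response toward low McMillan degree.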
Structured Sparsity: Discrete and Convex approaches
Compressive sensing (CS) exploits sparsity to recover sparse or compressible
signals from dimensionality reducing, non-adaptive sensing mechanisms. Sparsity
is also used to enhance interpretability in machine learning and statistics
applications: While the ambient dimension is vast in modern data analysis
problems, the relevant information therein typically resides in a much lower
dimensional space. However, many solutions proposed nowadays do not leverage
the true underlying structure. Recent results in CS extend the simple sparsity
idea to more sophisticated {\em structured} sparsity models, which describe the
interdependency between the nonzero components of a signal, allowing us to
increase the interpretability of the results and leading to better recovery
performance. In order to better understand the impact of structured sparsity,
in this chapter we analyze the connections between the discrete models and
their convex relaxations, highlighting their relative advantages. We start with
the general group sparse model and then elaborate on two important special
cases: the dispersive and the hierarchical models. For each, we present the
models in their discrete nature, discuss how to solve the ensuing discrete
problems and then describe convex relaxations. We also consider more general
structures as defined by set functions and present their convex proxies.
Further, we discuss efficient optimization solutions for structured sparsity
problems and illustrate structured sparsity in action via three applications.
Comment: 30 pages, 18 figures.
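One concrete building block behind the convex group-sparse models mentioned above is the proximal operator of the group-lasso penalty, which shrinks each group's norm and zeroes out weak groups entirely. A minimal sketch for non-overlapping groups (an illustration, not code from the chapter):

```python
import numpy as np

def prox_group_lasso(x, groups, tau):
    """Proximal operator of tau * sum_g ||x_g||_2 for non-overlapping groups:
    block soft-thresholding of each group."""
    out = np.zeros_like(x)
    for g in groups:
        ng = np.linalg.norm(x[g])
        if ng > tau:
            out[g] = (1.0 - tau / ng) * x[g]   # shrink the surviving group
    return out

x = np.array([3.0, 4.0, 0.1, -0.1])
groups = [np.array([0, 1]), np.array([2, 3])]
z = prox_group_lasso(x, groups, tau=1.0)
# group 1 (norm 5) is shrunk by factor 0.8; group 2 (norm ~0.14 < tau) is zeroed
print(z)
```

Embedded in a proximal-gradient loop, this operator solves the convex relaxation of the group-sparse recovery problem.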
Performance bounds for expander-based compressed sensing in Poisson noise
This paper provides performance bounds for compressed sensing in the presence
of Poisson noise using expander graphs. The Poisson noise model is appropriate
for a variety of applications, including low-light imaging and digital
streaming, where the signal-independent and/or bounded noise models used in the
compressed sensing literature are no longer applicable. In this paper, we
develop a novel sensing paradigm based on expander graphs and propose a MAP
algorithm for recovering sparse or compressible signals from Poisson
observations. The geometry of the expander graphs and the positivity of the
corresponding sensing matrices play a crucial role in establishing the bounds
on the signal reconstruction error of the proposed algorithm. We support our
results with experimental demonstrations of reconstructing average packet
arrival rates and instantaneous packet counts at a router in a communication
network, where the arrivals of packets in each flow follow a Poisson process.
Comment: revised version; accepted to IEEE Transactions on Signal Processing.
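The sensing matrices in question are adjacency matrices of left-regular bipartite graphs: binary, sparse, and non-negative, so the measurements stay valid Poisson rates. A toy sketch of such a matrix and a Poisson packet-count measurement (illustrative only; the expansion property holds only with high probability for this random construction):

```python
import numpy as np

def sparse_binary_sensing_matrix(m, n, d, rng):
    """Adjacency matrix of a random left-d-regular bipartite graph:
    each of the n columns has exactly d ones (an expander w.h.p.)."""
    A = np.zeros((m, n))
    for j in range(n):
        A[rng.choice(m, size=d, replace=False), j] = 1.0
    return A

rng = np.random.default_rng(1)
A = sparse_binary_sensing_matrix(m=20, n=50, d=3, rng=rng)

rates = np.zeros(50)
rates[[4, 17, 31]] = [50.0, 80.0, 30.0]   # sparse average packet-arrival rates
y = rng.poisson(A @ rates)                # Poisson counts; A >= 0 keeps rates valid
```

A Gaussian sensing matrix would produce negative "rates", which is exactly why positivity of the expander adjacency matrix matters in this noise model.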
Rapid deconvolution of low-resolution time-of-flight data using Bayesian inference
The deconvolution of low-resolution time-of-flight data has numerous advantages, including the ability to extract additional information from the experimental data. We augment the well-known Lucy-Richardson deconvolution algorithm using various Bayesian prior distributions and show that a prior of second-differences of the signal outperforms the standard Lucy-Richardson algorithm, accelerating the rate of convergence by more than a factor of four, while preserving the peak amplitude ratios of a similar fraction of the total peaks. A novel stopping criterion and boosting mechanism are implemented to ensure that these methods converge to a similar final entropy and that local minima are avoided. Improvement by a factor of two in mass resolution allows more accurate quantification of the spectra. The general method is demonstrated in this paper through the deconvolution of fragmentation peaks of the 2,5-dihydroxybenzoic acid matrix and the benzyltriphenylphosphonium thermometer ion, following femtosecond ultraviolet laser desorption.
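For reference, the baseline Lucy-Richardson iteration that the paper augments multiplies the current estimate by the back-projected ratio of the data to the reblurred estimate. A plain 1-D sketch without the Bayesian priors, stopping criterion, or boosting described above:

```python
import numpy as np

def richardson_lucy(y, psf, n_iter=200):
    """Standard Lucy-Richardson deconvolution for 1-D non-negative signals."""
    psf = psf / psf.sum()                     # normalize the point-spread function
    psf_flipped = psf[::-1]
    x = np.full_like(y, y.mean())             # flat non-negative initialization
    for _ in range(n_iter):
        blurred = np.convolve(x, psf, mode="same")
        ratio = y / np.maximum(blurred, 1e-12)         # avoid division by zero
        x = x * np.convolve(ratio, psf_flipped, mode="same")
    return x

# two Gaussian-blurred peaks; the iteration sharpens them back toward deltas
truth = np.zeros(100)
truth[30] = 10.0
truth[60] = 5.0
psf = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
y = np.convolve(truth, psf / psf.sum(), mode="same")
x_hat = richardson_lucy(y, psf)
```

The multiplicative update keeps the estimate non-negative by construction, which is the property the paper's prior-augmented variants also preserve.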
Nearly optimal minimax estimator for high-dimensional sparse linear regression
We present estimators for a well studied statistical estimation problem: the
estimation for the linear regression model with soft sparsity constraints
(ℓq constraint with 0 < q ≤ 1) in the high-dimensional setting. We first
present a family of estimators, called the projected nearest neighbor estimator
and show, by using results from Convex Geometry, that such estimator is within
a logarithmic factor of the optimal for any design matrix. Then by utilizing a
semi-definite programming relaxation technique developed in [SIAM J. Comput. 36
(2007) 1764-1776], we obtain an approximation algorithm for computing the
minimax risk for any such estimation task and also a polynomial time nearly
optimal estimator for the important case of the ℓ1 sparsity constraint. Such
results were only known before for special cases, despite decades of studies on
this problem. We also extend the method to the adaptive case when the parameter
radius is unknown.
Comment: Published at http://dx.doi.org/10.1214/13-AOS1141 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org).
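Projection-based estimators of this kind rely on Euclidean projections onto sparsity balls. For the q = 1 case the projection has a well-known sort-based closed form (in the style of Duchi et al.); a sketch for illustration, not the paper's estimator:

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection of v onto the l1 ball of the given radius,
    via the sort-and-threshold method."""
    if np.abs(v).sum() <= radius:
        return v.copy()                       # already inside the ball
    u = np.sort(np.abs(v))[::-1]              # magnitudes, descending
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(v) + 1) > css - radius)[0][-1]
    theta = (css[k] - radius) / (k + 1)       # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

w = project_l1_ball(np.array([3.0, -1.0, 0.5]), radius=2.0)
print(np.abs(w).sum())   # 2.0: the projection lands on the ball's boundary
```

The resulting point is the closest vector to v whose ℓ1 norm does not exceed the radius, which is the elementary operation behind constrained sparse estimators.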
A sparsity-driven approach for joint SAR imaging and phase error correction
Image formation algorithms in a variety of applications have explicit or implicit dependence on a mathematical model of the observation process. Inaccuracies in the observation model may cause various degradations and artifacts in the reconstructed images. The application of interest in this paper is synthetic aperture radar (SAR) imaging, which particularly suffers from motion-induced model errors. These types of errors result in phase errors in SAR data which cause defocusing of the reconstructed images. Particularly focusing on imaging of fields that admit a sparse representation, we propose a sparsity-driven method for joint SAR imaging and phase error correction. Phase error correction is performed during the image formation process. The problem is set up as an optimization problem in a nonquadratic regularization-based framework. The method involves an iterative algorithm, each iteration of which consists of consecutive steps of image formation and model error correction. Experimental results show the effectiveness of the approach for various types of phase errors, as well as the improvements it provides over existing techniques for model error compensation in SAR.
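The alternation described above can be caricatured with a much-simplified hypothetical model in which each measurement carries an unknown phase error: with the image fixed, the optimal phases have a closed form, and with the phases fixed, the image is a least-squares problem. A sketch under those assumptions (plain least squares instead of the paper's nonquadratic, sparsity-driven regularization):

```python
import numpy as np

def joint_autofocus(A, y, n_outer=20):
    """Alternating minimization of ||diag(exp(j*phi)) @ A @ x - y||^2
    over the image x and per-measurement phase errors phi."""
    phi = np.zeros(len(y))
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_outer):
        # image step: least squares with the current phase estimate removed
        x = np.linalg.lstsq(A, np.exp(-1j * phi) * y, rcond=None)[0]
        # phase step: closed form, aligns the model with the data per sample
        phi = np.angle(y * np.conj(A @ x))
    return x, phi

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 10)) + 1j * rng.standard_normal((40, 10))
x_true = np.zeros(10, dtype=complex)
x_true[[2, 7]] = [1.0, 0.5j]                  # sparse scene
phi_true = rng.uniform(-0.5, 0.5, size=40)    # small unknown phase errors
y = np.exp(1j * phi_true) * (A @ x_true)

res_plain = np.linalg.norm(A @ np.linalg.lstsq(A, y, rcond=None)[0] - y)
x_hat, phi_hat = joint_autofocus(A, y)
res_joint = np.linalg.norm(np.exp(1j * phi_hat) * (A @ x_hat) - y)
```

Both steps exactly minimize the shared objective, so the residual is monotonically non-increasing; the joint estimate fits the phase-corrupted data far better than image formation alone.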
Correntropy Maximization via ADMM - Application to Robust Hyperspectral Unmixing
In hyperspectral images, some spectral bands suffer from low signal-to-noise
ratio due to noisy acquisition and atmospheric effects, thus requiring robust
techniques for the unmixing problem. This paper presents a robust supervised
spectral unmixing approach for hyperspectral images. The robustness is achieved
by writing the unmixing problem as the maximization of the correntropy
criterion subject to the most commonly used constraints. Two unmixing problems
are derived: the first problem considers the fully-constrained unmixing, with
both the non-negativity and sum-to-one constraints, while the second one deals
with non-negativity and a sparsity-promoting constraint on the abundances. The
corresponding optimization problems are solved efficiently using an alternating
direction method of multipliers (ADMM) approach. Experiments on synthetic and
real hyperspectral images validate the performance of the proposed algorithms
for different scenarios, demonstrating that the correntropy-based unmixing is
robust to outlier bands.
Comment: 23 pages.
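Maximizing the correntropy criterion is commonly handled by half-quadratic optimization, which reduces to iteratively reweighted least squares: each band's weight is a Gaussian kernel of its current residual, so outlier bands are down-weighted exponentially. A simplified sketch (clipping for non-negativity rather than the paper's ADMM; endmembers and parameters are made up for illustration):

```python
import numpy as np

def correntropy(r, sigma):
    """Correntropy of a residual vector: mean Gaussian kernel of the residuals."""
    return np.mean(np.exp(-r ** 2 / (2 * sigma ** 2)))

def robust_unmix(E, y, sigma=0.5, n_iter=30):
    """Half-quadratic maximization of the correntropy criterion via
    iteratively reweighted least squares."""
    a = np.linalg.lstsq(E, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - E @ a
        w = np.exp(-r ** 2 / (2 * sigma ** 2))   # per-band weights
        sw = np.sqrt(w)
        a = np.linalg.lstsq(sw[:, None] * E, sw * y, rcond=None)[0]
        a = np.maximum(a, 0.0)                   # non-negativity (clipping)
    return a / a.sum()                           # sum-to-one

rng = np.random.default_rng(3)
E = rng.uniform(0.0, 1.0, size=(50, 3))          # 50 bands, 3 endmember spectra
a_true = np.array([0.6, 0.3, 0.1])
y = E @ a_true
y[[5, 20]] += 2.0                                # two corrupted (outlier) bands
a_hat = robust_unmix(E, y)
score = correntropy(y - E @ a_hat, sigma=0.5)    # near 48/50: outliers excluded
```

A plain least-squares fit would be dragged toward the two corrupted bands; the correntropy weights effectively remove them from the fit.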