Non-uniform spline recovery from small degree polynomial approximation
We investigate the sparse spikes deconvolution problem onto spaces of
algebraic polynomials. Our framework encompasses the measure reconstruction
problem from a combination of noiseless and noisy moment measurements. We study
a TV-norm regularization procedure to localize the support and estimate the
weights of a target discrete measure in this framework. Furthermore, we derive
quantitative bounds on the support recovery and the amplitude errors under a
Chebyshev-type minimal separation condition on its support. Incidentally, we
study the localization of the knots of non-uniform splines when a Gaussian
perturbation of their inner-products with a known polynomial basis is observed
(i.e. a small degree polynomial approximation is known) and the boundary
conditions are known. We prove that the knots can be recovered in a grid-free
manner using semidefinite programming.
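In standard Beurling-LASSO notation (ours for illustration, not necessarily the paper's), the TV-norm regularization procedure above reads:

```latex
\min_{\mu \in \mathcal{M}([-1,1])} \; \frac{1}{2}\,\lVert y - \Phi\mu \rVert_2^2
  + \lambda\,\lvert \mu\rvert([-1,1]),
\qquad
(\Phi\mu)_k = \int_{-1}^{1} P_k(t)\,\mathrm{d}\mu(t), \quad k = 0,\dots,n,
```

where $\lvert\mu\rvert$ denotes the total-variation norm of the measure $\mu$ and $(P_k)$ is the known polynomial basis; a Chebyshev-type minimal separation condition lower-bounds the distance between support points of the target measure in the $\arccos$ metric.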
Monotonicity preserving approximation of multivariate scattered data
This paper describes a new method of monotone interpolation and smoothing of multivariate scattered data. It is based on the assumption that the function to be approximated is Lipschitz continuous. The method provides the optimal approximation in the worst-case scenario and tight error bounds. Smoothing of noisy data subject to monotonicity constraints is converted into a quadratic programming problem. Estimation of the unknown Lipschitz constant from the data by sample splitting and cross-validation is described. An extension of the method to locally Lipschitz functions is presented.
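In the univariate special case, L2 smoothing under a monotonicity constraint reduces to isotonic regression, which the pool-adjacent-violators algorithm solves exactly. A minimal sketch (PAV is a standard algorithm, not the paper's multivariate Lipschitz method):

```python
import numpy as np

def pav(y):
    """Pool Adjacent Violators: L2 projection of y onto nondecreasing sequences."""
    y = np.asarray(y, dtype=float)
    means = np.empty(y.size)   # running block means
    sizes = np.empty(y.size)   # running block sizes
    n = 0                      # number of active blocks
    for v in y:
        m, s = v, 1.0
        # merge backwards while monotonicity is violated
        while n > 0 and means[n - 1] >= m:
            m = (means[n - 1] * sizes[n - 1] + m * s) / (sizes[n - 1] + s)
            s += sizes[n - 1]
            n -= 1
        means[n], sizes[n] = m, s
        n += 1
    return np.repeat(means[:n], sizes[:n].astype(int))
```

For example, `pav([1, 3, 2, 4])` pools the violating pair into its mean, giving `[1, 2.5, 2.5, 4]`.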
LASSO ISOtone for High Dimensional Additive Isotonic Regression
Additive isotonic regression attempts to determine the relationship between a
multi-dimensional observation variable and a response, under the constraint
that the estimate is the additive sum of univariate component effects that are
monotonically increasing. In this article, we present a new method for such
regression called LASSO Isotone (LISO). LISO adapts ideas from sparse linear
modelling to additive isotonic regression. Thus, it is viable in many
situations with high dimensional predictor variables, where selection of
significant versus insignificant variables is required. We suggest an
algorithm involving a modification of the backfitting algorithm CPAV. We give a
numerical convergence result, and finally examine some of its properties
through simulations. We also suggest some possible extensions that improve
performance, and allow calculation to be carried out when the direction of the
monotonicity is unknown.
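Plain additive isotonic backfitting, on which LISO builds, can be sketched as follows (a hedged illustration: the l1/soft-thresholding modification that gives LISO its sparsity is omitted, and the data are a toy example):

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(1)
n, d = 300, 5
X = rng.uniform(0, 1, (n, d))
# only the first two predictors are active, both monotonically increasing
y = 2 * X[:, 0] + np.sqrt(X[:, 1]) + 0.1 * rng.standard_normal(n)

f = np.zeros((n, d))                 # fitted additive components
for _ in range(20):                  # backfitting sweeps
    for j in range(d):
        r = y - y.mean() - f.sum(axis=1) + f[:, j]   # partial residual
        fit = IsotonicRegression().fit_transform(X[:, j], r)
        f[:, j] = fit - fit.mean()   # center each component
yhat = y.mean() + f.sum(axis=1)
```

Each sweep refits one component to the partial residual under the monotonicity constraint; LISO replaces this inner fit with a soft-thresholded variant so that inactive components shrink exactly to zero.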
Bayesian Estimation for Continuous-Time Sparse Stochastic Processes
We consider continuous-time sparse stochastic processes from which we have
only a finite number of noisy/noiseless samples. Our goal is to estimate the
noiseless samples (denoising) and the signal in-between (interpolation
problem).
By relying on tools from the theory of splines, we derive the joint a priori
distribution of the samples and show how this probability density function can
be factorized. The factorization enables us to tractably implement the maximum
a posteriori and minimum mean-square error (MMSE) criteria as two statistical
approaches for estimating the unknowns. We compare the derived statistical
methods with well-known techniques for the recovery of sparse signals, such as
the $\ell_1$-norm and Log ($\ell_1$-$\ell_0$ relaxation) regularization
methods. The simulation results show that, under certain conditions, the
performance of the regularization techniques can be very close to that of the
MMSE estimator.
Comment: To appear in IEEE TS
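As a minimal illustration of the l1-regularization baseline the abstract compares against: for a Laplace prior $p(x)\propto e^{-|x|/b}$ and additive Gaussian noise of variance $\sigma^2$, the MAP denoiser of a sample $y = x + n$ is soft-thresholding with threshold $\sigma^2/b$ (a standard fact, not the paper's spline-based derivation):

```python
import numpy as np

def map_laplace_denoise(y, sigma2, b):
    """MAP estimate of x from y = x + N(0, sigma2) under a Laplace(0, b) prior.

    argmin_x (y - x)^2 / (2 * sigma2) + |x| / b  =>  soft-threshold at sigma2/b.
    """
    t = sigma2 / b
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)
```

For instance, with `sigma2 = 1.0` and `b = 2.0` the threshold is 0.5, so `[2.0, -0.1, 0.6]` maps to `[1.5, 0.0, 0.1]`; the MMSE estimator under the same model is instead a smooth shrinkage without an exact zero region.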
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory are a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one stop shop toward the understanding of the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of $\ell_2$-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
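The forward-backward proximal splitting scheme of item (iii), specialized to the LASSO $\min_x \frac{1}{2}\lVert Ax-y\rVert^2 + \lambda\lVert x\rVert_1$, can be sketched as follows (sizes, seed, step, and $\lambda$ are illustrative toy choices, not from the chapter):

```python
import numpy as np

# Forward-backward (proximal gradient / ISTA) for
#   min_x 0.5 * ||A x - y||^2 + lam * ||x||_1
# on a toy compressed-sensing instance.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]   # 3-sparse ground truth
y = A @ x_true
lam = 0.01
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
x = np.zeros(100)
for _ in range(2000):
    x = x - (A.T @ (A @ x - y)) / L                       # forward: gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0)   # backward: l1 prox
```

The iterate identifies the low-dimensional model (here, the support of the sparse signal) after finitely many steps, which is the manifold-identification behavior the chapter analyzes.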
Convex-Set–Constrained Sparse Signal Recovery: Theory and Applications
Convex-set constrained sparse signal reconstruction facilitates flexible measurement models and accurate recovery. The objective function that we wish to minimize is the sum of a convex differentiable data-fidelity term (the negative log-likelihood, NLL) and a convex regularization term. We apply sparse signal regularization where the signal belongs to a closed convex set within the closure of the domain of the NLL. Signal sparsity is imposed using the l1-norm penalty on the signal's linear transform coefficients.
First, we present a projected Nesterov's proximal-gradient (PNPG) approach that employs a projected Nesterov's acceleration step with restart and a duality-based inner iteration to compute the proximal mapping. We propose an adaptive step-size selection scheme to obtain a good local majorizing function of the NLL and reduce the time spent backtracking. We present an integrated derivation of the momentum acceleration and proofs of O(k^(-2)) objective function convergence rate and convergence of the iterates, which account for adaptive step size, inexactness of the iterative proximal mapping, and the convex-set constraint. The tuning of PNPG is largely application independent. Tomographic and compressed-sensing reconstruction experiments with Poisson generalized linear and Gaussian linear measurement models demonstrate the performance of the proposed approach.
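The core iteration can be sketched as follows (a hedged illustration: a fixed step 1/L and an exact prox replace the adaptive step size, restart, and duality-based inner iteration described above, and all problem data are toy choices):

```python
import numpy as np

# Projected Nesterov proximal-gradient sketch for
#   min_x 0.5 * ||A x - y||^2 + lam * ||x||_1   s.t.  x in C = {x >= 0}.
rng = np.random.default_rng(2)
A = rng.standard_normal((50, 120)) / np.sqrt(50)
x_true = np.zeros(120)
x_true[[10, 60, 90]] = [1.2, 0.7, 2.0]   # nonnegative ground truth
y = A @ x_true
lam = 0.01
L = np.linalg.norm(A, 2) ** 2
x = np.zeros(120)
x_prev = x.copy()
t = 1.0
for _ in range(1500):
    t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    z = x + ((t - 1.0) / t_next) * (x - x_prev)   # Nesterov extrapolation
    w = z - (A.T @ (A @ z - y)) / L               # gradient step
    # for C = nonnegative orthant, the prox of lam*||.||_1 + indicator_C
    # is simply one-sided soft-thresholding:
    w = np.maximum(w - lam / L, 0.0)
    x_prev, x, t = x, w, t_next
```

The momentum sequence gives the O(k^(-2)) objective rate; the projection keeps every iterate feasible for the convex-set constraint.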
We then address the problem of upper-bounding the regularization constant for the convex-set-constrained sparse signal recovery problem behind the PNPG framework. This bound defines the maximum influence the regularization term has on the signal recovery. We formulate an optimization problem for finding these bounds when the regularization term can be globally minimized and develop an alternating direction method of multipliers (ADMM) type method for their computation. Simulation examples show that the derived and empirical bounds match.
Finally, we show an application of the PNPG framework to X-ray computed tomography (CT) and outline a method for sparse image reconstruction from Poisson-distributed polychromatic X-ray CT measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious mean measurement-model parameterization, we first rewrite the measurement equation by changing the integral variable from photon energy to mass attenuation, which allows us to combine the variations brought by the unknown incident spectrum and mass attenuation into a single unknown mass-attenuation spectrum function; the resulting measurement equation has the Laplace integral form. We apply a block coordinate-descent algorithm that alternates between an NPG image reconstruction step and a limited-memory BFGS with box constraints (L-BFGS-B) iteration for updating mass-attenuation spectrum parameters. Our NPG-BFGS algorithm is the first physical-model based image reconstruction method for simultaneous blind sparse image reconstruction and mass-attenuation spectrum estimation from polychromatic measurements. Real X-ray CT reconstruction examples demonstrate the performance of the proposed blind scheme.
Algorithms for Learning Sparse Additive Models with Interactions in High Dimensions
A function $f: \mathbb{R}^d \rightarrow \mathbb{R}$ is a Sparse Additive
Model (SPAM), if it is of the form $f(\mathbf{x}) = \sum_{l \in \mathcal{S}} \phi_l(x_l)$ where $\mathcal{S} \subset [d]$, $|\mathcal{S}| \ll d$. Assuming the $\phi_l$'s and $\mathcal{S}$ to be unknown, there exists extensive work
for estimating $f$ from its samples. In this work, we consider a generalized
version of SPAMs, that also allows for the presence of a sparse number of
second order interaction terms. For some $\mathcal{S}_1 \subset [d]$, $\mathcal{S}_2 \subset \binom{[d]}{2}$, with $|\mathcal{S}_1| \ll d$, $|\mathcal{S}_2| \ll d^2$, the function $f$ is now assumed to be of the form:
$\sum_{p \in \mathcal{S}_1} \phi_p(x_p) + \sum_{(l,l') \in \mathcal{S}_2} \phi_{(l,l')}(x_l, x_{l'})$. Assuming we have the
freedom to query $f$ anywhere in its domain, we derive efficient algorithms
that provably recover $\mathcal{S}_1, \mathcal{S}_2$ with finite sample bounds.
Our analysis covers the noiseless setting where exact samples of $f$ are
obtained, and also extends to the noisy setting where the queries are corrupted
with noise. For the noisy setting in particular, we consider two noise models,
namely: i.i.d. Gaussian noise and arbitrary but bounded noise. Our main methods
for identification of $\mathcal{S}_1, \mathcal{S}_2$ essentially rely on estimation of sparse
Hessian matrices, for which we provide two novel compressed sensing based
schemes. Once $\mathcal{S}_1, \mathcal{S}_2$ are known, we show how the
individual components $\phi_p$, $\phi_{(l,l')}$ can be estimated via
additional queries of $f$, with uniform error bounds. Lastly, we provide
simulation results on synthetic data that validate our theoretical findings.
Comment: To appear in Information and Inference: A Journal of the IMA. Made
the following changes after the review process: (a) Corrected typos throughout the
text. (b) Corrected choice of sampling distribution in Section 5, see eqs.
(5.2), (5.3). (c) More detailed comparison with existing work in Section 8.
(d) Added Section B in appendix on roots of cubic equations.
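A toy version of the interaction-identification idea in the abstract: estimate entries of the sparse Hessian of $f$ by finite differences of point queries, then threshold to find interacting pairs. The paper uses compressed-sensing schemes rather than this exhaustive loop; the function, dimension, and step size below are purely illustrative:

```python
import numpy as np

def f(x):  # unknown to the learner; accessed only through point queries
    return np.sin(x[0]) + x[1] ** 2 + 0.8 * x[2] * x[3]

d, h = 5, 1e-3
x0 = 0.3 * np.ones(d)
H = np.zeros((d, d))
for i in range(d):
    for j in range(i + 1, d):
        ei, ej = np.eye(d)[i] * h, np.eye(d)[j] * h
        # mixed second difference approximates d^2 f / (dx_i dx_j)
        H[i, j] = (f(x0 + ei + ej) - f(x0 + ei) - f(x0 + ej) + f(x0)) / h ** 2
pairs = {(i, j) for i in range(d) for j in range(i + 1, d) if abs(H[i, j]) > 0.1}
```

Purely additive terms cancel exactly in the mixed difference, so only the genuine interaction pair (here, coordinates 2 and 3) survives the threshold.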