Sampling in the Analysis Transform Domain
Many signal and image processing applications have benefited remarkably from the fact that the underlying signals reside in a low-dimensional subspace. One of the main models for such low dimensionality is sparsity. Within this framework there are two main options for sparse modeling: the synthesis model and the analysis model, of which the first is considered the standard paradigm and has received far more research attention. In the synthesis model, signals are assumed to have a sparse representation under a given dictionary. In the analysis approach, by contrast, sparsity is measured in the coefficients obtained by applying a certain transformation, the analysis dictionary, to the signal. Though several algorithms with some supporting theory have been developed for the analysis framework, they are outnumbered by those proposed for the synthesis methodology.
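The distinction between the two models can be sketched in a few lines of NumPy (dictionary, operator and dimensions are illustrative, not taken from the paper): a synthesis-sparse signal is a combination of a few dictionary atoms, while an analysis-sparse signal is typically dense itself but has few non-zeros after applying an analysis operator such as a finite-difference matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 128, 5                     # signal dim, atoms, sparsity (illustrative)

# Synthesis model: x = D @ alpha with alpha sparse.
D = rng.standard_normal((n, m))          # hypothetical overcomplete dictionary
alpha = np.zeros(m)
alpha[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
x_syn = D @ alpha                        # dense signal, sparse representation

# Analysis model: Omega @ x is sparse.  With Omega the 1-D finite-difference
# operator, piecewise-constant signals have sparse analysis coefficients.
Omega = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]       # (n-1) x n difference matrix
x_ana = np.repeat(rng.standard_normal(4), n // 4)  # piecewise-constant signal
```

The paper's setting takes Omega to be a frame or the two-dimensional finite-difference operator; the one-dimensional operator above is only the simplest instance of the same idea.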
Given that the analysis dictionary is either a frame or the two-dimensional finite difference operator, we propose a new sampling scheme for signals from the analysis model that allows them to be recovered from their samples using any existing algorithm from the synthesis model. The advantage of this new sampling strategy is that it makes the existing synthesis methods, together with their theory, available for signals from the analysis framework as well.
Comment: 13 pages, 2 figures
Analysis of general weights in weighted ℓ1−2 minimization through applications
Weighted ℓ1−2 minimization has recently attracted attention due to its ability to deal with highly coherent matrices. Despite the availability of stable recovery guarantees, some issues remain unaddressed in the literature: (i) an analytical proof of convergence for the solver of the weighted ℓ1−2 minimization, and (ii) a detailed analysis of the relevance of general weights to applications. While establishing the convergence of the solver for the weighted ℓ1−2 minimization, we demonstrate the significance of general weights, w ∈ (0,1), empirically through several applications, including the reconstruction of magnetic resonance images. In particular, we show that general weights become significant when we have neither fully accurate nor fully corrupt information about the support of the signal to be reconstructed from its linear measurements. We conclude by discussing a numerical scheme that chooses the partial support and the weights iteratively.
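As a rough illustration of where the weights enter, the following forward-backward sketch minimises a weighted ℓ1−ℓ2 objective, 0.5‖Ax−b‖² + λ(‖w⊙x‖₁ − ‖x‖₂), with the concave −‖x‖₂ term linearised. This is a generic stand-in, not necessarily the solver analysed in the paper, and all parameter values are illustrative.

```python
import numpy as np

def weighted_l1_l2(A, b, w, lam=0.02, iters=3000):
    """Forward-backward sketch for min 0.5*||A x - b||^2
    + lam * (||w * x||_1 - ||x||_2).  The concave -||x||_2 part is
    linearised (gradient x / ||x||); the weighted l1 part is handled by
    soft-thresholding.  Generic stand-in with illustrative parameters."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2     # step size <= 1/L for the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)               # gradient of the data-fit term
        nx = np.linalg.norm(x)
        if nx > 0:
            g -= lam * x / nx               # gradient of -lam * ||x||_2
        z = x - t * g
        x = np.sign(z) * np.maximum(np.abs(z) - t * lam * w, 0.0)  # weighted shrink
    return x
```

Entries of w below one on an estimated support penalise those coefficients less, which is how partial support information enters the reconstruction.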
Choose your path wisely: gradient descent in a Bregman distance framework
We propose an extension of a special form of gradient descent, known in the literature as linearised Bregman iteration, to a larger class of non-convex functions. We replace the classical (squared) two-norm metric in the gradient descent setting with a generalised Bregman distance based on a proper, convex and lower semi-continuous function. The algorithm's global convergence is proven for functions that satisfy the Kurdyka-Łojasiewicz property. Examples illustrate that features of different scale are introduced throughout the iteration, transitioning from coarse to fine. This coarse-to-fine behaviour with respect to scale makes it possible to recover solutions of non-convex optimisation problems that are superior to those obtained with conventional gradient descent, or even with projected and proximal gradient descent. The effectiveness of the linearised Bregman iteration in combination with early stopping is illustrated for the applications of parallel magnetic resonance imaging, blind deconvolution and image classification with neural networks.
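For the special case that motivates the extension, a minimal sketch of the classical linearised Bregman iteration for sparse recovery (parameter values and dimensions are illustrative, not taken from the paper):

```python
import numpy as np

def linearised_bregman(A, f, mu=5.0, delta=1.0, iters=8000):
    """Linearised Bregman iteration sketch: approximately solves
    min mu*||u||_1 + 1/(2*delta)*||u||_2^2  subject to  A u = f.
    Step size chosen conservatively; mu and delta are illustrative."""
    t = 1.0 / (delta * np.linalg.norm(A, 2) ** 2)
    v = np.zeros(A.shape[1])
    u = np.zeros(A.shape[1])
    for _ in range(iters):
        v -= t * (A.T @ (A @ u - f))                              # gradient step
        u = delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)  # soft shrinkage
    return u
```

Large entries of u tend to emerge before small ones, which is the coarse-to-fine behaviour the abstract refers to and why early stopping can act as a regulariser.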
Sharp Time-Data Tradeoffs for Linear Inverse Problems
In this paper we characterize sharp time-data tradeoffs for optimization
problems used for solving linear inverse problems. We focus on the minimization
of a least-squares objective subject to a constraint defined as the sub-level
set of a penalty function. We present a unified convergence analysis of the
gradient projection algorithm applied to such problems. We sharply characterize
the convergence rate associated with a wide variety of random measurement
ensembles in terms of the number of measurements and structural complexity of
the signal with respect to the chosen penalty function. The results apply to
both convex and nonconvex constraints, demonstrating that a linear convergence
rate is attainable even though the least squares objective is not strongly
convex in these settings. When specialized to Gaussian measurements our results
show that such linear convergence occurs when the number of measurements is
merely 4 times the minimal number required to recover the desired signal at all
(a.k.a. the phase transition). We also achieve a slower but geometric rate of
convergence precisely above the phase transition point. Extensive numerical results suggest that the derived rates exactly match the empirical performance.
Generalized averaged Gaussian quadrature and applications
A simple numerical method for constructing the optimal generalized averaged Gaussian quadrature formulas will be presented. These formulas exist in many cases in which real positive Gauss-Kronrod formulas do not exist, and can be used as an adequate alternative in order to estimate the error of a Gaussian rule. We also investigate the conditions under which the optimal averaged Gaussian quadrature formulas and their truncated variants are internal.
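The error-estimation idea can be illustrated with plain Gauss-Legendre rules: compare an n-point rule against a higher-order companion rule and take the difference as an error estimate. Here the companion is simply the (n+1)-point Gauss rule; averaged Gaussian or Gauss-Kronrod formulas play this role in practice, and the integrand below is illustrative.

```python
import numpy as np

def gauss_legendre(f, n):
    """n-point Gauss-Legendre approximation of the integral of f over [-1, 1]."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    return weights @ f(nodes)

# Heuristic error estimate: difference between a rule and a finer companion.
f = lambda x: np.exp(x) * np.cos(3 * x)   # smooth test integrand (illustrative)
q5 = gauss_legendre(f, 5)
q6 = gauss_legendre(f, 6)
err_est = abs(q6 - q5)                    # estimates the error of the 5-point rule
```

Averaged Gaussian formulas serve the same purpose while reusing structure of the underlying Gauss rule, and, as the abstract notes, they exist in many cases where real positive Gauss-Kronrod formulas do not.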
MS FT-2-2 7 Orthogonal polynomials and quadrature: Theory, computation, and applications
Quadrature rules find many applications in science and engineering. Their analysis is a classical area of applied mathematics and continues to attract considerable attention. This seminar brings together speakers with expertise in a large variety of quadrature rules. The aim of the seminar is to provide an overview of recent developments in the analysis of quadrature rules. The computation of error estimates and novel applications are also described.