Constrained Overcomplete Analysis Operator Learning for Cosparse Signal Modelling
We consider the problem of learning a low-dimensional signal model from a
collection of training samples. The mainstream approach would be to learn an
overcomplete dictionary to provide good approximations of the training samples
using sparse synthesis coefficients. This widely used sparse model has a less
well-known counterpart, in analysis form, called the cosparse analysis model. In
this new model, signals are characterised by their parsimony in a transformed
domain using an overcomplete (linear) analysis operator. We propose to learn an
analysis operator from a training corpus using a constrained optimisation
framework based on L1 optimisation. The reason for introducing a constraint in
the optimisation framework is to exclude trivial solutions. Although there is
no definitive answer as to which constraint is most appropriate, we
investigate some conventional constraints from the model adaptation literature
and adopt the uniformly normalised tight frame (UNTF) for this purpose. We then derive a
practical learning algorithm based on projected subgradients and the
Douglas-Rachford splitting technique, and demonstrate its ability to robustly
recover a ground-truth analysis operator when provided with a clean training
set of sufficient size. We also learn an analysis operator for images from
noisy cosparse signals, which is a more realistic experimental setting. As
the derived optimisation problem is not a convex program, we often find a local
minimum using such variational methods. Some local optimality conditions are
derived for two different settings, providing preliminary theoretical support
for the well-posedness of the learning problem under appropriate conditions.
Comment: 29 pages, 13 figures, accepted to be published in TS
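The constrained L1 formulation described above can be illustrated with a projected-subgradient loop. The sketch below is not the authors' exact Douglas-Rachford scheme: the UNTF projection is approximated by a common alternating-projection heuristic (the exact projection has no closed form), and all function names and step sizes are illustrative.

```python
import numpy as np

def project_untf(Omega, n_iter=50):
    """Approximate projection onto uniformly normalised tight frames (UNTF):
    alternate the closed-form scaled-tight-frame projection with row
    normalisation. A heuristic; the exact projection has no closed form."""
    p, n = Omega.shape
    for _ in range(n_iter):
        # nearest scaled tight frame: equalise the singular values
        U, _, Vt = np.linalg.svd(Omega, full_matrices=False)
        Omega = np.sqrt(p / n) * (U @ Vt)
        # renormalise each row (analysis atom) to unit length
        Omega = Omega / np.linalg.norm(Omega, axis=1, keepdims=True)
    return Omega

def learn_operator(X, p, n_iter=200, step=1e-2, seed=0):
    """Projected-subgradient sketch of  min ||Omega X||_1  s.t. Omega in UNTF."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    Omega = project_untf(rng.standard_normal((p, n)))
    for _ in range(n_iter):
        G = np.sign(Omega @ X) @ X.T      # subgradient of the l1 objective
        Omega = project_untf(Omega - step * G)
    return Omega
```

Each iterate ends with a row normalisation, so the returned operator has unit-norm rows by construction; tightness holds only approximately.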
Analysis Operator Learning and Its Application to Image Reconstruction
Exploiting a priori known structural information lies at the core of many
image reconstruction methods that can be stated as inverse problems. The
synthesis model, which assumes that images can be decomposed into a linear
combination of very few atoms of some dictionary, is now a well established
tool for the design of image reconstruction algorithms. An interesting
alternative is the analysis model, where the signal is multiplied by an
analysis operator and the outcome is assumed to be sparse. This approach
has only recently gained increasing interest. The quality of reconstruction
methods based on an analysis model depends strongly on the choice of a
suitable operator.
In this work, we present an algorithm for learning an analysis operator from
training images. Our method is based on an ℓp-norm minimization on the
set of full-rank matrices with normalized columns. We carefully introduce the
employed conjugate gradient method on manifolds, and explain the underlying
geometry of the constraints. Moreover, we compare our approach to
state-of-the-art methods for image denoising, inpainting, and single image
super-resolution. Our numerical results show competitive performance of our
general approach in all presented applications compared to the specialized
state-of-the-art techniques.
Comment: 12 pages, 7 figures
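The geometry of the unit-column constraint can be sketched with a plain Riemannian gradient step on the oblique manifold. For brevity this uses gradient descent and a smoothed sparsity surrogate rather than the paper's conjugate gradient method, omits the full-rank condition, and all names are illustrative.

```python
import numpy as np

def smoothed_l1_grad(Z, eps=1e-3):
    # gradient of sum_ij sqrt(z_ij^2 + eps), a smooth sparsity surrogate
    return Z / np.sqrt(Z * Z + eps)

def oblique_retract_step(Omega, G, step):
    """One Riemannian gradient step on the oblique manifold (unit-norm
    columns): project the Euclidean gradient onto the tangent space,
    take a step, then retract by renormalising the columns."""
    # remove the radial component of the gradient, column by column
    G_t = G - Omega * np.sum(Omega * G, axis=0, keepdims=True)
    Omega = Omega - step * G_t
    return Omega / np.linalg.norm(Omega, axis=0, keepdims=True)

def learn_operator_oblique(S, p, n_iter=100, step=1e-3, seed=0):
    """Minimise a smoothed sparsity measure of Omega @ S over unit-column Omega."""
    rng = np.random.default_rng(seed)
    n = S.shape[0]
    Omega = rng.standard_normal((p, n))
    Omega = Omega / np.linalg.norm(Omega, axis=0, keepdims=True)
    for _ in range(n_iter):
        G = smoothed_l1_grad(Omega @ S) @ S.T
        Omega = oblique_retract_step(Omega, G, step)
    return Omega
```

The tangent-space projection followed by renormalisation is the standard retraction on this manifold; a conjugate gradient method additionally combines successive tangent directions via vector transport.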
Fundamental performance limits for ideal decoders in high-dimensional linear inverse problems
This paper focuses on characterizing the fundamental performance limits that
can be expected from an ideal decoder given a general model, i.e., a general
subset of "simple" vectors of interest. First, we extend the so-called notion
of instance optimality of a decoder to settings where one only wishes to
reconstruct some part of the original high dimensional vector from a
low-dimensional observation. This covers practical settings such as medical
imaging of a region of interest, or audio source separation when one is only
interested in estimating the contribution of a specific instrument to a musical
recording. We define instance optimality relatively to a model much beyond the
traditional framework of sparse recovery, and characterize the existence of an
instance optimal decoder in terms of joint properties of the model and the
considered linear operator. Noiseless and noise-robust settings are both
considered. We show, somewhat surprisingly, that the existence of noise-aware
instance optimal decoders for all noise levels implies the existence of a
noise-blind decoder. A consequence of our results is that for models that are
rich enough to contain an orthonormal basis, the existence of an L2/L2 instance
optimal decoder is only possible when the linear operator is not substantially
dimension-reducing. This covers well-known cases (sparse vectors, low-rank
matrices) as well as a number of seemingly new situations (structured sparsity
and sparse inverse covariance matrices for instance). We exhibit an
operator-dependent norm which, under a model-specific generalization of the
Restricted Isometry Property (RIP), always yields a feasible instance
optimality property. This norm can be upper bounded by an atomic norm relative
to the considered model.
Comment: To appear in IEEE Transactions on Information Theory
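In this framework, noise-robust instance optimality and its null-space-style characterisation can be written schematically as follows (a paraphrase with generic constants and norms, not the paper's exact statements):

```latex
% Noise-robust instance optimality of a decoder \Delta for operator M and
% model set \Sigma, with d(x,\Sigma) = \inf_{z \in \Sigma} \|x - z\|:
\|\Delta(Mx + e) - x\| \;\le\; C\, d(x, \Sigma) + C' \|e\|,
  \qquad \forall x,\ \forall e.
% The existence of such a decoder is tied to a generalised Null Space
% Property on the secant set \Sigma - \Sigma = \{x - y : x, y \in \Sigma\}:
\|z\| \;\le\; C\, d(z, \Sigma - \Sigma)
  \qquad \text{for all } z \in \ker M.
```

Intuitively, the kernel of M must avoid the secant set of the model: two model vectors that M cannot distinguish would otherwise defeat any decoder.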
An Analysis Dictionary Learning Algorithm under a Noisy Data Model with Orthogonality Constraint
Two common problems are often encountered in analysis dictionary learning (ADL) algorithms. The first is that the original clean signals used to learn the dictionary are assumed to be known, when in practice they must be estimated from noisy measurements. This leads to a computationally slow optimization process and potentially unreliable estimates when the noise level is high, as exemplified by the Analysis K-SVD (AK-SVD) algorithm. The second is the trivial solution for the dictionary, for example the null dictionary matrix that a learning algorithm may return, as discussed for the learning overcomplete sparsifying transform (LOST) algorithm. Here we propose a novel optimization model and an iterative algorithm to learn the analysis dictionary, in which we directly employ the observed data to compute an approximate analysis sparse representation of the original signals (leading to a fast optimization procedure) and enforce an orthogonality constraint in the optimization criterion to avoid trivial solutions. Experiments demonstrate the competitive performance of the proposed algorithm compared with three baselines, namely the AK-SVD, LOST, and NAAOLA algorithms.
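The two ingredients above (codes computed directly from the observed data, and an orthogonality constraint ruling out the null operator) can be sketched as an alternating loop. This is an illustrative reconstruction, not the authors' algorithm: it uses hard thresholding for the codes and a polar-decomposition update that yields the closest row-orthonormal operator to the code/data correlation, and it assumes p <= n so orthonormal rows are possible.

```python
import numpy as np

def hard_threshold(Z, k):
    # keep the k largest-magnitude entries in each column, zero the rest
    T = np.zeros_like(Z)
    idx = np.argsort(np.abs(Z), axis=0)[-k:, :]
    np.put_along_axis(T, idx, np.take_along_axis(Z, idx, axis=0), axis=0)
    return T

def adl_orthogonal(Y, p, k, n_iter=30, seed=0):
    """Alternate between (i) sparse analysis codes computed directly from
    the observed (possibly noisy) data Y and (ii) an operator update with
    orthonormal rows. Row-orthonormality excludes the trivial Omega = 0."""
    rng = np.random.default_rng(seed)
    n = Y.shape[0]
    # initialise with orthonormal rows (p x n, requires p <= n)
    Omega = np.linalg.qr(rng.standard_normal((n, p)))[0].T
    for _ in range(n_iter):
        U = hard_threshold(Omega @ Y, k)   # codes from the data, no clean-signal estimate
        # orthogonal polar factor of U Y^T: the closest row-orthonormal
        # operator to the code/data correlation in Frobenius norm
        W, _, Vt = np.linalg.svd(U @ Y.T, full_matrices=False)
        Omega = W @ Vt
    return Omega
```

Because the codes come straight from Y, each iteration costs only a product, a threshold, and one small SVD, which reflects the fast optimization procedure the abstract describes.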