Constrained Overcomplete Analysis Operator Learning for Cosparse Signal Modelling
We consider the problem of learning a low-dimensional signal model from a
collection of training samples. The mainstream approach would be to learn an
overcomplete dictionary to provide good approximations of the training samples
using sparse synthesis coefficients. This famous sparse model has a less
well-known counterpart, in analysis form, called the cosparse analysis model. In
this new model, signals are characterised by their parsimony in a transformed
domain using an overcomplete (linear) analysis operator. We propose to learn an
analysis operator from a training corpus using a constrained optimisation
framework based on L1 optimisation. The reason for introducing a constraint in
the optimisation framework is to exclude trivial solutions. Although there is
no definitive answer as to which constraint is most relevant, we investigate
some conventional constraints from the model-adaptation literature and use
the uniformly normalised tight frame (UNTF) for this purpose. We then derive a
practical learning algorithm, based on projected subgradients and the
Douglas-Rachford splitting technique, and demonstrate its ability to robustly
recover a ground-truth analysis operator when provided with a clean training
set of sufficient size. We also find an analysis operator for images, using
some noisy cosparse signals, which is indeed a more realistic experiment. As
the derived optimisation problem is not a convex program, we often find a local
minimum using such variational methods. Some local optimality conditions are
derived for two different settings, providing preliminary theoretical support
for the well-posedness of the learning problem under appropriate conditions.Comment: 29 pages, 13 figures, accepted to be published in TS
Learning incoherent dictionaries for sparse approximation using iterative projections and rotations
This work was supported by the Queen Mary University of London School Studentship, the EU FET-Open project
FP7-ICT-225913-SMALL (Sparse Models, Algorithms and Learning for Large-scale data), and a Leadership Fellowship from the UK
Engineering and Physical Sciences Research Council (EPSRC).
Learning Dictionaries with Bounded Self-Coherence
Sparse coding in learned dictionaries has been established as a successful
approach for signal denoising, source separation and solving inverse problems
in general. A dictionary learning method adapts an initial dictionary to a
particular signal class by iteratively computing an approximate factorization
of a training data matrix into a dictionary and a sparse coding matrix. The
learned dictionary is characterized by two properties: the coherence of the
dictionary to observations of the signal class, and the self-coherence of the
dictionary atoms. A high coherence to the signal class enables the sparse
coding of signal observations with a small approximation error, while a low
self-coherence of the atoms guarantees atom recovery and a more rapid residual
error decay rate for the sparse coding algorithm. The two goals of high signal
coherence and low self-coherence are typically in conflict, therefore one seeks
a trade-off between them, depending on the application. We present a dictionary
learning method with an effective control over the self-coherence of the
trained dictionary, enabling a trade-off between maximizing the sparsity of
codings and approximating an equiangular tight frame.
Comment: 4 pages, 2 figures; IEEE Signal Processing Letters, vol. 19, no. 12, 201
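The self-coherence notion in this abstract can be sketched numerically. The snippet below (an assumption-laden illustration, not the paper's learning method) computes the mutual coherence of a random unit-norm dictionary as the largest off-diagonal Gram-matrix entry, and compares it to the Welch bound, which an equiangular tight frame attains with equality.

```python
import numpy as np

rng = np.random.default_rng(0)

def mutual_coherence(D):
    """Largest absolute inner product between distinct unit-norm atoms."""
    G = D.T @ D                      # Gram matrix of the dictionary
    np.fill_diagonal(G, 0.0)        # ignore each atom's product with itself
    return float(np.max(np.abs(G)))

n, K = 16, 32                        # signal dimension, number of atoms
D = rng.standard_normal((n, K))
D /= np.linalg.norm(D, axis=0)       # normalise atoms to unit length

mu = mutual_coherence(D)
# Welch bound: the lowest coherence any n x K unit-norm frame can achieve,
# attained exactly by an equiangular tight frame.
welch = np.sqrt((K - n) / (n * (K - 1)))

print(mu >= welch)  # True: a random dictionary sits above the bound
```

This is the quantity a bounded-self-coherence learning method would drive down toward the Welch bound while still fitting the training data; the trade-off the abstract describes is between lowering `mu` and keeping the coherence to the signal class high.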
Slepian functions and their use in signal estimation and spectral analysis
It is a well-known fact that mathematical functions that are timelimited (or
spacelimited) cannot be simultaneously bandlimited (in frequency). Yet the
finite precision of measurement and computation unavoidably bandlimits our
observation and modeling of scientific data, and we often only have access to, or
are only interested in, a study area that is temporally or spatially bounded.
In the geosciences we may be interested in spectrally modeling a time series
defined only on a certain interval, or we may want to characterize a specific
geographical area observed using an effectively bandlimited measurement device.
It is clear that analyzing and representing scientific data of this kind will
be facilitated if a basis of functions can be found that are "spatiospectrally"
concentrated, i.e. "localized" in both domains at the same time. Here, we give
a theoretical overview of one particular approach to this "concentration"
problem, as originally proposed for time series by Slepian and coworkers, in
the 1960s. We show how this framework leads to practical algorithms and
statistically performant methods for the analysis of signals and their power
spectra in one and two dimensions, and on the surface of a sphere.
Comment: Submitted to the Handbook of Geomathematics, edited by Willi Freeden,
Zuhair M. Nashed and Thomas Sonar, and to be published by Springer Verlag
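The discrete form of the concentration problem this abstract surveys can be computed directly: the Slepian tapers (discrete prolate spheroidal sequences) are the eigenvectors of a bandlimiting kernel matrix, and the eigenvalues measure each taper's energy concentration in the band [-W, W]. A minimal numpy sketch, with M and W chosen only for illustration:

```python
import numpy as np

def slepian_tapers(M, W):
    """Discrete prolate spheroidal sequences via the bandlimiting kernel.

    Eigenvectors of A[m, n] = sin(2*pi*W*(m - n)) / (pi*(m - n)),
    with diagonal 2*W, are the Slepian tapers; the eigenvalues are
    their spectral concentrations in the band [-W, W].
    """
    idx = np.arange(M)
    diff = idx[:, None] - idx[None, :]
    with np.errstate(divide="ignore", invalid="ignore"):
        A = np.sin(2 * np.pi * W * diff) / (np.pi * diff)
    np.fill_diagonal(A, 2 * W)           # limit of the kernel at m == n
    vals, vecs = np.linalg.eigh(A)
    order = np.argsort(vals)[::-1]       # most concentrated taper first
    return vals[order], vecs[:, order]

concentrations, tapers = slepian_tapers(M=64, W=0.1)
# Roughly 2*M*W of the eigenvalues are near 1 (well-concentrated tapers),
# after which they fall off sharply toward 0.
print(concentrations[0])  # close to 1
```

Averaging periodogram estimates over the first few well-concentrated tapers gives the multitaper spectral estimator associated with this framework; the sharp eigenvalue fall-off is what limits how many useful tapers exist for a given time-bandwidth product.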