Constrained Overcomplete Analysis Operator Learning for Cosparse Signal Modelling
We consider the problem of learning a low-dimensional signal model from a
collection of training samples. The mainstream approach would be to learn an
overcomplete dictionary to provide good approximations of the training samples
using sparse synthesis coefficients. This famous sparse model has a less
well-known counterpart, in analysis form, called the cosparse analysis model. In
this new model, signals are characterised by their parsimony in a transformed
domain using an overcomplete (linear) analysis operator. We propose to learn an
analysis operator from a training corpus using a constrained optimisation
framework based on L1 optimisation. The reason for introducing a constraint in
the optimisation framework is to exclude trivial solutions. Although there is
no definitive answer as to which constraint is the most relevant, we
investigate some conventional constraints from the model adaptation field and
use the uniformly normalised tight frame (UNTF) for this purpose. We then derive a
practical learning algorithm, based on projected subgradients and the
Douglas-Rachford splitting technique, and demonstrate its ability to robustly
recover a ground-truth analysis operator when provided with a clean training
set of sufficient size. We also learn an analysis operator for images from
noisy cosparse signals, which is a more realistic experiment. As
the derived optimisation problem is not a convex program, we often find a local
minimum using such variational methods. Some local optimality conditions are
derived for two different settings, providing preliminary theoretical support
for the well-posedness of the learning problem under appropriate conditions.Comment: 29 pages, 13 figures, accepted to be published in TS
The achievable performance of convex demixing
Demixing is the problem of identifying multiple structured signals from a
superimposed, undersampled, and noisy observation. This work analyzes a general
framework, based on convex optimization, for solving demixing problems. When
the constituent signals follow a generic incoherence model, this analysis leads
to precise recovery guarantees. These results admit an attractive
interpretation: each signal possesses an intrinsic degrees-of-freedom
parameter, and demixing can succeed if and only if the dimension of the
observation exceeds the total degrees of freedom present in the observation.
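The convex-demixing setup can be made concrete in a small sketch (our own illustration under assumed sizes and sparsities, not the paper's general framework): a spike-sparse component plus a DCT-sparse component are separated from their sum by minimizing the total l1 norm, cast as a linear program.

```python
import numpy as np
from scipy.fft import dct
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n = 64
# Orthonormal DCT basis: highly incoherent with the identity basis.
D = dct(np.eye(n), norm="ortho", axis=0)

x0 = np.zeros(n); x0[[5, 40]] = [3.0, -2.0]   # sparse in the identity basis
u0 = np.zeros(n); u0[10] = 4.0                # sparse in the DCT basis
y = x0 + D @ u0                               # superimposed observation

# Demix: minimize ||x||_1 + ||u||_1  s.t.  x + D u = y.
# LP in split variables (x+, x-, u+, u-) >= 0.
c = np.ones(4 * n)
A_eq = np.hstack([np.eye(n), -np.eye(n), D, -D])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:2 * n]
u_hat = res.x[2 * n:3 * n] - res.x[3 * n:]

print(np.max(np.abs(x_hat - x0)), np.max(np.abs(u_hat - u0)))
```

Here the observation has dimension 64 while the two components carry only a few degrees of freedom in total, so demixing succeeds exactly; shrinking the observation below the total degrees of freedom makes it fail, matching the interpretation above.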
Phase Transitions in Sparse PCA
We study optimal estimation for sparse principal component analysis when the
number of non-zero elements is small but on the same order as the dimension of
the data. We employ the approximate message passing (AMP) algorithm and its
state evolution to analyze the information-theoretically minimal mean-squared
error and the error achieved by AMP in the limit of large system sizes. For the
special case of rank one and a large enough density of non-zeros, Deshpande and
Montanari [1] proved that AMP is asymptotically optimal. We show that both for
low density and for large rank the problem undergoes a series of phase
transitions, suggesting the existence of a region of parameters where
estimation is information-theoretically possible but where AMP (and presumably
every other polynomial-time algorithm) fails. The analysis of the large-rank
limit is particularly
instructive.
Comment: 6 pages, 3 figures
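To make the estimation problem tangible (a hedged sketch of the problem setup only, using a simple truncated power method rather than the paper's AMP analysis; all sizes and the signal-to-noise ratio lambda are our own choices), one can plant a sparse rank-one spike in a symmetric noise matrix and recover it:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, lam = 500, 25, 3.0            # dimension, sparsity, spike strength

v = np.zeros(n)
v[rng.choice(n, size=k, replace=False)] = 1.0 / np.sqrt(k)  # unit-norm k-sparse spike

W = rng.normal(size=(n, n)) / np.sqrt(n)
W = (W + W.T) / np.sqrt(2)          # symmetric (Wigner-like) noise, spectral norm ~2
Y = lam * np.outer(v, v) + W        # spiked observation

# Spectral initialization, then truncated power iterations:
# multiply by Y, keep the k largest-magnitude entries, renormalize.
x = np.linalg.eigh(Y)[1][:, -1].copy()
for _ in range(20):
    x = Y @ x
    x[np.argsort(np.abs(x))[:-k]] = 0.0
    x /= np.linalg.norm(x)

overlap = abs(x @ v)                # close to 1 when lam is above threshold
print(f"overlap = {overlap:.3f}")
```

With lam well above the spectral threshold this succeeds; the abstract's point is that for low density or large rank there are regimes where estimation remains information-theoretically possible yet AMP, and presumably all polynomial-time algorithms, fail.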
On Verifiable Sufficient Conditions for Sparse Signal Recovery via ℓ1 Minimization
We propose novel necessary and sufficient conditions for a sensing matrix to
be "s-good" - to allow for exact ℓ1-recovery of sparse signals with s nonzero
entries when no measurement noise is present. Then we express the error
bounds for imperfect ℓ1-recovery (nonzero measurement noise, a nearly
s-sparse signal, a near-optimal solution of the optimization problem yielding
the ℓ1-recovery) in terms of the characteristics underlying these
conditions. Further, we demonstrate (and this is the principal result of the
paper) that these characteristics, although difficult to evaluate, lead to
verifiable sufficient conditions for exact sparse ℓ1-recovery and to
efficiently computable upper bounds on those s for which a given sensing
matrix is s-good. We also establish instructive links between our approach
and the basic concepts of the Compressed Sensing theory, like Restricted
Isometry or Restricted Eigenvalue properties.
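Exact ℓ1-recovery from noiseless measurements can be sketched as a linear program (our own minimal illustration; the matrix sizes and the use of SciPy's generic LP solver are assumptions, not the paper's verification machinery). For a generic Gaussian sensing matrix of this size, ℓ1 minimization recovers a 3-sparse signal exactly:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
m, n, s = 30, 60, 3
A = rng.normal(size=(m, n)) / np.sqrt(m)     # Gaussian sensing matrix

x0 = np.zeros(n)
x0[rng.choice(n, size=s, replace=False)] = 3 * rng.normal(size=s)
b = A @ x0                                   # noiseless measurements

# Basis pursuit: min ||x||_1 s.t. A x = b, with x = u - w and u, w >= 0.
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]

print(np.max(np.abs(x_hat - x0)))            # ~0: exact recovery
```

The abstract's contribution is orthogonal to running such a program: it gives efficiently computable certificates that a *given* matrix A is s-good, i.e. that this LP recovers every s-sparse signal, not just a random instance.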
Reliable recovery of hierarchically sparse signals for Gaussian and Kronecker product measurements
We propose and analyze a solution to the problem of recovering a block sparse
signal with sparse blocks from linear measurements. Such problems naturally
emerge inter alia in the context of mobile communication, in order to meet the
scalability and low complexity requirements of massive antenna systems and
massive machine-type communication. We introduce a new variant of the Hard
Thresholding Pursuit (HTP) algorithm referred to as HiHTP. We provide both a
proof of convergence and a recovery guarantee for noisy Gaussian measurements
that exhibit an improved asymptotic scaling in terms of the sampling complexity
in comparison with the usual HTP algorithm. Furthermore, hierarchically sparse
signals and Kronecker product structured measurements naturally arise together
in a variety of applications. We establish the efficient reconstruction of
hierarchically sparse signals from Kronecker product measurements using the
HiHTP algorithm. Additionally, we provide analytical results that connect our
recovery conditions to generalized coherence measures. Again, our recovery
results exhibit substantial improvement in the asymptotic sampling complexity
scaling over the standard setting. Finally, we validate in numerical
experiments that for hierarchically sparse signals, HiHTP performs
significantly better than HTP.
Comment: 11+4 pages, 5 figures. V3: Incomplete funding information corrected
and minor typos corrected. V4: Change of title and additional author Axel
Flinth. Included new results on Kronecker product measurements and relations
of HiRIP to hierarchical coherence measures. Improved presentation of general
hierarchically sparse signals and correction of minor typo
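The hierarchical thresholding step at the core of this kind of algorithm can be sketched as follows (a hedged, self-contained illustration of the hierarchical sparsity structure; the surrounding pursuit loop, function name, and parameters are our own, not the paper's exact specification): within each block keep the sigma largest-magnitude entries, score blocks by the energy of those entries, and retain the s best blocks.

```python
import numpy as np

def hierarchical_threshold(x, num_blocks, s, sigma):
    """Keep the best sigma entries inside each of the best s blocks."""
    blocks = x.reshape(num_blocks, -1)
    out = np.zeros_like(blocks)
    # Indices of the sigma largest-magnitude entries per block.
    idx = np.argsort(np.abs(blocks), axis=1)[:, -sigma:]
    kept = np.take_along_axis(blocks, idx, axis=1)
    energy = np.sum(kept ** 2, axis=1)        # per-block score
    for b in np.argsort(energy)[-s:]:         # s highest-energy blocks survive
        out[b, idx[b]] = blocks[b, idx[b]]
    return out.reshape(-1)

x = np.array([0.1, 5.0, 0.2, 0.0,   # block 0: one dominant entry
              0.3, 0.1, 0.2, 0.1,   # block 1: small everywhere
              4.0, 0.1, 3.0, 0.2])  # block 2: two dominant entries
y = hierarchical_threshold(x, num_blocks=3, s=2, sigma=2)
print(y)
```

Blocks 0 and 2 survive with their two largest entries each, while block 1 is zeroed entirely. This (s, sigma)-structured projection is what distinguishes the hierarchical variant from plain hard thresholding over all entries, and it is what yields the improved sampling-complexity scaling reported above.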