7,652 research outputs found
Projection-Based and Look Ahead Strategies for Atom Selection
In this paper, we improve iterative greedy search algorithms in which atoms
are selected serially, i.e., one by one over iterations. For
serial atom selection, we devise two new schemes to select an atom from a set
of potential atoms in each iteration. The two new schemes lead to two new
algorithms. For both algorithms, the set of potential atoms in each iteration
is found using a standard matched filter. In the first scheme, we
propose an orthogonal projection strategy that selects an atom from the set of
potential atoms. Then, for the second scheme, we propose a look-ahead strategy
such that the selection of an atom in the current iteration accounts for its
effect on future iterations. The look-ahead strategy, however, requires more
computation. To achieve a trade-off between performance and
complexity, we use the two new schemes in cascade and develop a third new
algorithm. Through experimental evaluations, we compare the proposed algorithms
with existing greedy search and convex relaxation algorithms.
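The serial selection-with-projection idea lends itself to a compact sketch. The following numpy snippet is a minimal illustration, not the authors' code: a matched filter shortlists a few candidate atoms each iteration, and an orthogonal-projection rule picks the candidate minimizing the post-projection residual. Function names and the shortlist size n_candidates are illustrative assumptions.

```python
import numpy as np

def select_by_projection(A, y, support, candidates):
    """Pick the candidate whose addition to the support minimizes
    the residual after orthogonal projection onto the trial span."""
    best_j, best_res = candidates[0], np.inf
    for j in candidates:
        trial = support + [j]
        x, *_ = np.linalg.lstsq(A[:, trial], y, rcond=None)
        res = np.linalg.norm(y - A[:, trial] @ x)
        if res < best_res:
            best_j, best_res = j, res
    return best_j

def greedy_projection_pursuit(A, y, k, n_candidates=3):
    """Serial greedy search: a matched filter shortlists candidate
    atoms; the projection rule selects one atom per iteration."""
    support, residual = [], y.copy()
    x = np.zeros(0)
    for _ in range(k):
        corr = np.abs(A.T @ residual)      # matched filter
        corr[support] = -np.inf            # exclude chosen atoms
        candidates = np.argsort(corr)[-n_candidates:].tolist()
        support.append(select_by_projection(A, y, support, candidates))
        x, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x
    return support, x
```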
Greed is good: algorithmic results for sparse approximation
This article presents new results on using a greedy algorithm, orthogonal matching pursuit (OMP), to solve the sparse approximation problem over redundant dictionaries. It provides a sufficient condition under which both OMP and Donoho's basis pursuit (BP) paradigm can recover the optimal representation of an exactly sparse signal. It leverages this theory to show that both OMP and BP succeed for every sparse input signal from a wide class of dictionaries. These quasi-incoherent dictionaries offer a natural generalization of incoherent dictionaries, and the cumulative coherence function is introduced to quantify the level of incoherence. This analysis unifies all the recent results on BP and extends them to OMP. Furthermore, the paper develops a sufficient condition under which OMP can identify atoms from an optimal approximation of a nonsparse signal. From there, it argues that OMP is an approximation algorithm for the sparse problem over a quasi-incoherent dictionary. That is, for every input signal, OMP calculates a sparse approximant whose error is only a small factor worse than the minimal error that can be attained with the same number of terms.
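As a concrete reading of the cumulative coherence function, here is a small numpy sketch (a hypothetical helper, assuming a dictionary with unit-norm columns) that computes mu_1(m) as the largest total correlation between any single atom and m other atoms.

```python
import numpy as np

def cumulative_coherence(D, m):
    """Cumulative coherence mu_1(m) of a dictionary D with unit-norm
    columns: the largest sum of absolute inner products between any
    one atom and any m distinct other atoms."""
    G = np.abs(D.T @ D)                  # absolute Gram matrix
    np.fill_diagonal(G, 0.0)             # ignore self inner products
    top_m = np.sort(G, axis=1)[:, -m:]   # m largest per atom
    return float(top_m.sum(axis=1).max())
```

With this in hand, Tropp's sufficient condition, that OMP and BP recover every m-sparse signal whenever mu_1(m) + mu_1(m-1) < 1, can be checked numerically for a given dictionary.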
Multi-task additive models with shared transfer functions based on dictionary learning
Additive models form a widely used class of regression models that
represent the relation between covariates and response variables as the sum of
low-dimensional transfer functions. Besides flexibility and accuracy, a key
benefit of these models is their interpretability: the transfer functions
provide visual means for inspecting the models and identifying domain-specific
relations between inputs and outputs. However, in large-scale problems
involving the prediction of many related tasks, learning additive models
independently results in a loss of interpretability and can cause overfitting
when training data is scarce. We introduce a novel multi-task learning approach
which provides a corpus of accurate and interpretable additive models for a
large number of related forecasting tasks. Our key idea is to share transfer
functions across models in order to reduce the model complexity and ease the
exploration of the corpus. We establish a connection with sparse dictionary
learning and propose a new efficient fitting algorithm which alternates between
sparse coding and transfer function updates. The former step is solved via an
extension of Orthogonal Matching Pursuit, whose properties are analyzed using a
novel recovery condition which extends existing results in the literature. The
latter step is addressed using a traditional dictionary update rule.
Experiments on real-world data demonstrate that our approach compares favorably
to baseline methods while yielding an interpretable corpus of models, revealing
structure among the individual tasks and being more robust when training data
is scarce. Our framework therefore extends the well-known benefits of additive
models to common regression settings possibly involving thousands of tasks.
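The alternation the abstract describes follows the classical dictionary-learning template. The numpy sketch below is a generic stand-in under that assumption: OMP-based sparse coding of every task against a shared dictionary alternates with a least-squares dictionary update. It omits the paper's transfer-function parametrization and its extended recovery analysis; all names are illustrative.

```python
import numpy as np

def omp(D, y, k):
    """Plain Orthogonal Matching Pursuit: greedily pick k atoms of D."""
    support, r = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ r))))
        c, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        r = y - D[:, support] @ c
    coef = np.zeros(D.shape[1])
    coef[support] = c
    return coef

def fit_shared_dictionary(Y, n_atoms, k, n_iters=20, seed=0):
    """Alternate OMP sparse coding of every task (columns of Y)
    with a least-squares update of the shared dictionary."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iters):
        X = np.column_stack([omp(D, y, k) for y in Y.T])
        D = Y @ X.T @ np.linalg.pinv(X @ X.T)   # least-squares update
        norms = np.linalg.norm(D, axis=0)
        D /= np.where(norms > 0, norms, 1.0)    # renormalize atoms
    return D, X
```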
Improving A*OMP: Theoretical and Empirical Analyses With a Novel Dynamic Cost Model
Best-first search has recently been utilized for compressed sensing (CS) by
the A* orthogonal matching pursuit (A*OMP) algorithm. In this work, we
concentrate on theoretical and empirical analyses of A*OMP. We present a
restricted isometry property (RIP) based general condition for exact recovery
of sparse signals via A*OMP. In addition, we develop online guarantees which
promise improved recovery performance with the residue-based termination
instead of the sparsity-based one. We demonstrate the recovery capabilities of
A*OMP with extensive recovery simulations using the adaptive-multiplicative
(AMul) cost model, which effectively compensates for the path length
differences in the search tree. The presented results, involving phase
transitions for different nonzero element distributions as well as recovery
rates and average error, reveal not only the superior recovery accuracy of
A*OMP, but also the improvements with the residue-based termination and the
AMul cost model. Comparison of the run times indicates the speed-up provided by
the AMul cost model. We also demonstrate a hybrid of OMP and A*OMP to accelerate
the search further. Finally, we run A*OMP on a sparse image to illustrate its
recovery performance for more realistic coefficient distributions.
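To make the best-first search concrete, here is a toy A*-style pursuit in Python: a priority queue holds partial supports scored by a residual norm with a simple multiplicative discount alpha**(K - path length), so short paths can compete with long ones. This discount is only a crude stand-in for the adaptive-multiplicative (AMul) cost model analyzed in the paper, and all parameter names are illustrative assumptions.

```python
import heapq
import numpy as np

def astar_pursuit(A, y, K, branches=3, alpha=0.8, max_expansions=200):
    """Toy best-first (A*-style) matching pursuit over partial supports."""
    def ls_residual(S):
        c, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        return y - A[:, S] @ c

    heap = [(float(np.linalg.norm(y)), [])]
    best_support, best_res = None, np.inf
    for _ in range(max_expansions):
        if not heap:
            break
        _, S = heapq.heappop(heap)             # expand cheapest path
        r = ls_residual(S) if S else y
        if len(S) == K:                        # complete path: record best
            if np.linalg.norm(r) < best_res:
                best_support, best_res = S, float(np.linalg.norm(r))
            continue
        corr = np.abs(A.T @ r)
        corr[S] = -np.inf                      # do not reselect atoms
        for j in np.argsort(corr)[-branches:]: # branch on best matches
            S_new = S + [int(j)]
            cost = float(np.linalg.norm(ls_residual(S_new)))
            cost *= alpha ** (K - len(S_new))  # path-length discount
            heapq.heappush(heap, (cost, S_new))
    return best_support, best_res
```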
Signal Space CoSaMP for Sparse Recovery with Redundant Dictionaries
Compressive sensing (CS) has recently emerged as a powerful framework for
acquiring sparse signals. The bulk of the CS literature has focused on the case
where the acquired signal has a sparse or compressible representation in an
orthonormal basis. In practice, however, there are many signals that cannot be
sparsely represented or approximated using an orthonormal basis, but that do
have sparse representations in a redundant dictionary. Standard results in CS
can sometimes be extended to handle this case provided that the dictionary is
sufficiently incoherent or well-conditioned, but these approaches fail to
address the case of a truly redundant or overcomplete dictionary. In this paper
we describe a variant of the iterative recovery algorithm CoSaMP for this more
challenging setting. We utilize the D-RIP, a condition on the sensing matrix
analogous to the well-known restricted isometry property. In contrast to prior
work, the method and analysis are "signal-focused"; that is, they are oriented
around recovering the signal rather than its dictionary coefficients. Under the
assumption that we have a near-optimal scheme for projecting vectors in signal
space onto the model family of candidate sparse signals, we provide provable
recovery guarantees. Developing a practical algorithm that can provably compute
the required near-optimal projections remains a significant open problem, but
we include simulation results using various heuristics that empirically exhibit
superior performance to traditional recovery algorithms.
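The signal-focused iteration can be sketched as follows, with a greedy OMP-style routine standing in for the near-optimal signal-space projection the analysis assumes (precisely the open problem the abstract mentions). This is a sketch under that heuristic assumption, not the paper's definitive algorithm; names and parameters are illustrative.

```python
import numpy as np

def sparse_project(D, v, s):
    """Heuristic signal-space projection: OMP-style selection of s
    atoms of D approximating v (stand-in for a near-optimal projection)."""
    support, r = [], v.copy()
    for _ in range(s):
        support.append(int(np.argmax(np.abs(D.T @ r))))
        c, *_ = np.linalg.lstsq(D[:, support], v, rcond=None)
        r = v - D[:, support] @ c
    return support

def signal_space_cosamp(Phi, D, y, k, n_iters=10):
    """CoSaMP-style iteration oriented around the signal x = D a
    rather than its dictionary coefficients."""
    T, a = [], np.zeros(0)
    for _ in range(n_iters):
        x = D[:, T] @ a if T else np.zeros(D.shape[0])
        r = y - Phi @ x                       # measurement residual
        # Identify candidates from the signal proxy Phi^T r, merge with T.
        Omega = sorted(set(sparse_project(D, Phi.T @ r, 2 * k)) | set(T))
        b, *_ = np.linalg.lstsq(Phi @ D[:, Omega], y, rcond=None)
        # Prune in signal space: project the estimate back to k atoms.
        T = sparse_project(D, D[:, Omega] @ b, k)
        a, *_ = np.linalg.lstsq(Phi @ D[:, T], y, rcond=None)
    return D[:, T] @ a
```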