The generalized Lasso with non-linear observations
We study the problem of signal estimation from non-linear observations when
the signal belongs to a low-dimensional set buried in a high-dimensional space.
A rough heuristic often used in practice postulates that non-linear
observations may be treated as noisy linear observations, and thus the signal
may be estimated using the generalized Lasso. This is appealing because of the
abundance of efficient, specialized solvers for this program. Just as noise may
be diminished by projecting onto the lower-dimensional space, the error incurred
by modeling non-linear observations as linear ones is greatly reduced when the
signal structure is exploited in the reconstruction. We allow general
signal structure, only assuming that the signal belongs to some set K in R^n.
We consider the single-index model of non-linearity. Our theory allows the
non-linearity to be discontinuous, not one-to-one and even unknown. We assume a
random Gaussian model for the measurement matrix, but allow the rows to have an
unknown covariance matrix. As special cases of our results, we recover
near-optimal theory for noisy linear observations, and also give the first
theoretical accuracy guarantee for 1-bit compressed sensing with unknown
covariance matrix of the measurement vectors.

Comment: 21 pages.
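To make the heuristic concrete, here is a minimal sketch (not the paper's code): take the 1-bit case f(t) = sign(t), a sparse signal so that the set K is an l1-ball, an i.i.d. Gaussian measurement matrix, and scikit-learn's Lasso as the generalized-Lasso solver. All parameter choices are illustrative assumptions.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, m, s = 500, 200, 5                   # ambient dim, measurements, sparsity

# Unit-norm s-sparse signal (sparsity plays the role of the set K).
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
x_true /= np.linalg.norm(x_true)

A = rng.standard_normal((m, n))         # Gaussian measurement matrix
y = np.sign(A @ x_true)                 # non-linear (1-bit) observations

# Treat y as if it were noisy *linear* data and solve the Lasso anyway.
x_hat = Lasso(alpha=0.05, max_iter=10_000).fit(A, y).coef_
x_hat /= max(np.linalg.norm(x_hat), 1e-12)

# Up to scale, x_hat should correlate strongly with x_true.
print("correlation:", abs(x_hat @ x_true))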
New Guarantees for Blind Compressed Sensing
Blind Compressed Sensing (BCS) is an extension of Compressed Sensing (CS)
where the optimal sparsifying dictionary is assumed to be unknown and subject
to estimation (in addition to the CS sparse coefficients). Since the emergence
of BCS, dictionary learning, a.k.a. sparse coding, has been studied as a matrix
factorization problem where its sample complexity, uniqueness and
identifiability have been addressed thoroughly. However, despite the strong
connections between BCS and sparse coding, recent results from the sparse
coding literature have not been exploited within the context of BCS. In
particular, prior BCS efforts have focused on learning constrained, complete
dictionaries, which limits their scope and utility. In this paper,
we develop new theoretical bounds for perfect recovery for the general
unconstrained BCS problem. These unconstrained BCS bounds cover the case of
overcomplete dictionaries, and hence, they go well beyond the existing BCS
theory. Our perfect recovery results integrate the combinatorial theories of
sparse coding with some of the recent results from low-rank matrix recovery. In
particular, we propose an efficient CS measurement scheme that results in
practical recovery bounds for BCS. Moreover, we discuss the performance of BCS
under polynomial-time sparse coding algorithms.

Comment: To appear in the 53rd Annual Allerton Conference on Communication,
Control and Computing, University of Illinois at Urbana-Champaign, IL, USA,
2015.
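The alternating structure of unconstrained BCS can be sketched as follows, under heavy simplifications that are assumptions of this illustration, not the authors' algorithm: a single shared Gaussian sensing matrix, hard-thresholded least squares standing in for a proper sparse coder, and a pseudoinverse dictionary update. Note that with a single sensing matrix the dictionary step is underdetermined, which is exactly the issue the paper's measurement scheme and recovery bounds address.

import numpy as np

rng = np.random.default_rng(1)
n, m, p, N, s = 64, 32, 80, 400, 3   # signal dim, measurements, atoms, signals, sparsity

# Ground truth: overcomplete dictionary and s-sparse codes.
D_true = rng.standard_normal((n, p))
D_true /= np.linalg.norm(D_true, axis=0)
C_true = np.zeros((p, N))
for j in range(N):
    C_true[rng.choice(p, s, replace=False), j] = rng.standard_normal(s)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # shared sensing matrix (a simplification)
Y = Phi @ D_true @ C_true                        # compressive measurements

# Alternate: sparse-code against the effective dictionary Phi @ D, then
# update D by (pseudoinverse) least squares. Both steps are crude
# stand-ins for the combinatorial/low-rank machinery in the paper.
D = rng.standard_normal((n, p))
D /= np.linalg.norm(D, axis=0)
for _ in range(30):
    C = np.linalg.lstsq(Phi @ D, Y, rcond=None)[0]
    small = np.argsort(-np.abs(C), axis=0)[s:]   # keep top-s entries per column
    np.put_along_axis(C, small, 0.0, axis=0)
    D = np.linalg.lstsq(Phi, Y @ np.linalg.pinv(C), rcond=None)[0]
    D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)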
Blind Compressed Sensing Over a Structured Union of Subspaces
This paper addresses the problem of simultaneous signal recovery and
dictionary learning based on compressive measurements. Multiple signals are
analyzed jointly, with multiple sensing matrices, under the assumption that the
unknown signals come from a union of a small number of disjoint subspaces. This
problem is important, for instance, in image inpainting applications, in which
the multiple signals are constituted by (incomplete) image patches taken from
the overall image. This work extends standard dictionary learning and
block-sparse dictionary optimization by considering compressive measurements
(e.g., incomplete data). Previous work on blind compressed sensing is also
generalized by using multiple sensing matrices and relaxing some of the
restrictions on the learned dictionary. Drawing on results developed in the
context of matrix completion, it is proven that both the dictionary and signals
can be recovered with high probability from compressed measurements. The
solution is unique up to block permutations and invertible linear
transformations of the dictionary atoms. The recovery is contingent on the
number of measurements per signal and the number of signals being sufficiently
large; bounds are derived for these quantities. In addition, this paper
presents a computationally practical algorithm that performs dictionary
learning and signal recovery, and establishes conditions for its convergence to
a local optimum. Experimental results for image inpainting demonstrate the
capabilities of the method.
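The union-of-subspaces setup can be illustrated with a toy sketch (an assumption-laden illustration, not the paper's algorithm): each patch lies in the span of exactly one block of a dictionary, is observed through its own per-patch mask (a distinct sensing matrix per signal), and is recovered by selecting the block whose span best fits the observed entries. Here the dictionary is taken as known, whereas the paper learns it jointly with the signals.

import numpy as np

rng = np.random.default_rng(2)
n, B, d, N = 36, 4, 6, 300              # patch dim, blocks, block size, patches

# Block dictionary; each patch lives in the span of exactly one block.
D = rng.standard_normal((n, B * d))
D /= np.linalg.norm(D, axis=0)
blocks = rng.integers(0, B, size=N)
X = np.stack([D[:, b*d:(b+1)*d] @ rng.standard_normal(d) for b in blocks], axis=1)

masks = rng.random((n, N)) < 0.5        # per-patch mask: ~50% of pixels observed
Y = np.where(masks, X, 0.0)             # incomplete (inpainting-style) data

# Recover each patch: pick the block whose span best explains the
# observed entries, then extrapolate to the missing ones.
X_hat = np.zeros_like(X)
for i in range(N):
    obs = masks[:, i]
    best, best_res = None, np.inf
    for b in range(B):
        Db = D[:, b*d:(b+1)*d]
        c, *_ = np.linalg.lstsq(Db[obs], Y[obs, i], rcond=None)
        res = np.linalg.norm(Db[obs] @ c - Y[obs, i])
        if res < best_res:
            best_res, best = res, Db @ c
    X_hat[:, i] = best
print("relative error:", np.linalg.norm(X_hat - X) / np.linalg.norm(X))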
Dynamic Compressive Sensing of Time-Varying Signals via Approximate Message Passing
In this work the dynamic compressive sensing (CS) problem of recovering
sparse, correlated, time-varying signals from sub-Nyquist, non-adaptive, linear
measurements is explored from a Bayesian perspective. While a handful of
Bayesian dynamic CS algorithms have been proposed in the literature, the
ability to perform inference on high-dimensional problems in a
computationally efficient manner remains elusive. In response, we propose a
probabilistic dynamic CS signal model that captures both amplitude and support
correlation structure, and describe an approximate message passing algorithm
that performs soft signal estimation and support detection with a computational
complexity that is linear in all problem dimensions. The algorithm, DCS-AMP,
can perform either causal filtering or non-causal smoothing, and is capable of
learning model parameters adaptively from the data through an
expectation-maximization learning procedure. We provide numerical evidence that
DCS-AMP performs within 3 dB of oracle bounds on synthetic data under a variety
of operating conditions. We further describe the result of applying DCS-AMP to
two real dynamic CS datasets, as well as a frequency estimation task, to
bolster our claim that DCS-AMP is capable of offering state-of-the-art
performance and speed on real-world high-dimensional problems.

Comment: 32 pages, 7 figures.
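For intuition about the linear per-iteration cost of AMP-style recovery, here is a bare-bones sketch (not DCS-AMP itself, which uses a richer probabilistic support/amplitude model, message passing, and EM parameter learning): soft-thresholding AMP run frame by frame, with each frame warm-started from the previous estimate as a crude proxy for temporal correlation. All parameters are illustrative.

import numpy as np

def soft(v, t):
    # Soft-thresholding operator.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def amp(A, y, x0, iters=30, alpha=1.5):
    # Soft-thresholding AMP; cost per iteration is O(m*n).
    m, _ = A.shape
    x, z = x0.copy(), y - A @ x0
    for _ in range(iters):
        tau = alpha * np.linalg.norm(z) / np.sqrt(m)   # residual-based threshold
        x = soft(x + A.T @ z, tau)
        z = y - A @ x + z * (np.count_nonzero(x) / m)  # Onsager correction
    return x

rng = np.random.default_rng(3)
n, m, s, T = 400, 150, 10, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
support = rng.choice(n, s, replace=False)

x, x_hat = np.zeros(n), np.zeros(n)
for t in range(T):
    x[support] = 0.95 * x[support] + 0.3 * rng.standard_normal(s)  # slow variation
    y = A @ x + 0.01 * rng.standard_normal(m)
    x_hat = amp(A, y, x_hat)            # warm start carries temporal information
    print(f"frame {t}: rel. error {np.linalg.norm(x_hat - x) / np.linalg.norm(x):.3f}")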