Analyzing Least Squares and Kalman Filtered Compressed Sensing
In recent work, we studied the problem of causally reconstructing time
sequences of spatially sparse signals, with unknown and slowly time-varying
sparsity patterns, from a limited number of linear "incoherent" measurements.
We proposed a solution called Kalman Filtered Compressed Sensing (KF-CS). The
key idea is to run a reduced order KF only for the current signal's estimated
nonzero coefficients' set, while performing CS on the Kalman filtering error to
estimate new additions, if any, to the set. KF may be replaced by Least Squares
(LS) estimation and we call the resulting algorithm LS-CS. In this work, (a) we
bound the error in performing CS on the LS error and (b) we obtain the
conditions under which the KF-CS (or LS-CS) estimate converges to that of a
genie-aided KF (or LS), i.e., the KF (or LS) which knows the true nonzero sets.
Comment: Proc. IEEE Intl. Conf. Acous. Speech Sig. Proc. (ICASSP), 200
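As a toy sketch of the reduced-order idea (illustrative sizes and values, a plain least-squares fit in place of the KF, and not the authors' code):

```python
import numpy as np

# Toy LS-CS sketch: estimate only the coefficients on the current support
# estimate T (reduced-order LS), then form the residual. Up to a projection,
# the residual is a measurement of the still-undetected coefficients, so the
# CS step that follows faces a much sparser problem than recovering x from y
# directly. Sizes, seed, and values here are assumptions.

rng = np.random.default_rng(0)
n, m = 100, 50
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random "incoherent" matrix

x = np.zeros(n)
x[2], x[5] = 2.0, 0.3      # index 5 is a new, not-yet-detected addition
y = A @ x

T = [2]                    # current support estimate (misses index 5)
x_ls = np.zeros(n)
x_ls[T] = np.linalg.lstsq(A[:, T], y, rcond=None)[0]   # reduced-order LS

resid = y - A @ x_ls       # CS would now be run on this residual
```

Since `y = 2.0*A[:, 2] + 0.3*A[:, 5]`, the residual equals `0.3*A[:, 5]` projected off the span of `A[:, 2]`, so its norm is at most `0.3*||A[:, 5]||`: the CS step sees a small, effectively 1-sparse problem.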
LS-CS-residual (LS-CS): Compressive Sensing on Least Squares Residual
We consider the problem of recursively and causally reconstructing time
sequences of sparse signals (with unknown and time-varying sparsity patterns)
from a limited number of noisy linear measurements. The sparsity pattern is
assumed to change slowly with time. The idea of our proposed solution,
LS-CS-residual (LS-CS), is to replace compressed sensing (CS) on the
observation by CS on the least squares (LS) residual computed using the
previous estimate of the support. We bound CS-residual error and show that when
the number of available measurements is small, the bound is much smaller than
that on CS error if the sparsity pattern changes slowly enough. We also obtain
conditions for "stability" of LS-CS over time for a signal model that allows
support additions and removals, and that allows coefficients to gradually
increase (decrease) until they reach a constant value (become zero). By
"stability", we mean that the number of misses and extras in the support
estimate remain bounded by time-invariant values (in turn implying a
time-invariant bound on LS-CS error). The concept is meaningful only if the
bounds are small compared to the support size. Numerical experiments backing
our claims are shown.
Comment: Accepted (with mandatory minor revisions) to IEEE Trans. Signal Processing. 12 pages, 5 figure
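The recursive add/remove structure can be sketched as follows (a toy problem; a greedy trial-LS stands in for the CS-on-residual step, and the thresholds `1e-6` and `1e-3` are assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, steps = 16, 10, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)

def ls_on(T, y):
    """Reduced-order LS: solve only for the coefficients indexed by T."""
    xh = np.zeros(n)
    if T:
        xh[T] = np.linalg.lstsq(A[:, T], y, rcond=None)[0]
    return xh

# Slowly varying sparse sequence: index 7 is gradually added while
# index 3 is eventually removed (illustrative values).
X = np.zeros((steps, n))
X[:, 3] = [2.0, 2.0, 2.0, 0.0]
X[:, 7] = [0.0, 0.5, 1.0, 1.5]

support_est, errs = [3], []
for t in range(steps):
    y = A @ X[t]
    x_hat = ls_on(support_est, y)
    resid = y - A @ x_hat
    if np.linalg.norm(resid) > 1e-6:          # residual reveals an addition
        cands = [i for i in range(n) if i not in support_est]
        best = min(cands, key=lambda i: np.linalg.norm(
            y - A[:, support_est + [i]]
              @ np.linalg.lstsq(A[:, support_est + [i]], y, rcond=None)[0]))
        support_est = sorted(support_est + [best])
        x_hat = ls_on(support_est, y)
    errs.append(np.linalg.norm(x_hat - X[t]))
    support_est = [i for i in support_est if abs(x_hat[i]) > 1e-3]  # removals
```

On this toy sequence the support estimate tracks both the addition (index 7) and the removal (index 3), and the per-step error stays at numerical zero, which is the flavor of time-invariant "stability" the abstract refers to.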
Exact Reconstruction Conditions for Regularized Modified Basis Pursuit
In this correspondence, we obtain exact recovery conditions for regularized
modified basis pursuit (reg-mod-BP) and discuss when the obtained conditions
are weaker than those for modified-CS or for basis pursuit (BP). The discussion
is also supported by simulation comparisons. Reg-mod-BP provides a solution to
the sparse recovery problem when both an erroneous estimate of the signal's
support and an erroneous estimate of the signal values on that support are
available.
Comment: 17 page
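For orientation, the three convex programs being compared admit the following standard formulations (the notation, T for the support estimate, \hat\mu_T for the value estimate on it, and \rho for the regularization radius, is my assumption based on common usage in this line of work, since the abstract's own symbols did not survive extraction):

```latex
% Assumed notation: T = support estimate, \hat\mu_T = value estimate on T,
% \rho \ge 0 = regularization radius.
\begin{aligned}
\text{BP:}         &\quad \min_x \|x\|_1        &&\text{s.t. } y = Ax\\
\text{mod-CS:}     &\quad \min_x \|x_{T^c}\|_1  &&\text{s.t. } y = Ax\\
\text{reg-mod-BP:} &\quad \min_x \|x_{T^c}\|_1  &&\text{s.t. } y = Ax,\ \|x_T - \hat\mu_T\|_\infty \le \rho
\end{aligned}
```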
Dynamic Sparse State Estimation Using ℓ1-ℓ1 Minimization: Adaptive-rate Measurement Bounds, Algorithms and Applications
We propose a recursive algorithm for estimating time-varying signals from a few linear measurements. The signals are assumed sparse, with unknown support, and are described by a dynamical model. In each iteration, the algorithm solves an ℓ1-ℓ1 minimization problem and estimates the number of measurements that it has to take at the next iteration. These estimates are computed based on recent theoretical results for ℓ1-ℓ1 minimization. We also provide sufficient conditions for perfect signal reconstruction at each time instant as a function of an algorithm parameter. The algorithm exhibits high performance in compressive tracking on a real video sequence, as shown in our experimental results.
Index Terms: State estimation, sparsity, background subtraction, motion estimation, online algorithm
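A small numeric illustration of why the ℓ1-ℓ1 objective ||x||_1 + β||x − w||_1 (with w the prediction from the dynamical model) helps; all sizes, β, and the perfect prediction w are toy assumptions. With a one-dimensional null space, the entire feasible set {x : y = Ax} can be scanned directly:

```python
import numpy as np

# l1-l1 objective with prior estimate w. Toy setup: m = n - 1 so the
# feasible set y = A x is a line x_true + t*v, with v spanning null(A),
# and we can check where the objective attains its minimum on that line.

rng = np.random.default_rng(2)
n, m = 6, 5
A = rng.standard_normal((m, n))
x_true = np.array([0.0, 1.5, 0.0, -0.8, 0.0, 0.0])
y = A @ x_true
w = x_true.copy()          # perfect prediction, for clarity
beta = 1.0

# null-space direction: right singular vector of the smallest singular value
v = np.linalg.svd(A)[2][-1]

def obj(x):
    return np.abs(x).sum() + beta * np.abs(x - w).sum()

ts = np.linspace(-1.0, 1.0, 201)
vals = [obj(x_true + t * v) for t in ts]
```

With β = 1 and an accurate prediction, no feasible point beats the true signal (by the triangle inequality, ||x + tv||_1 + ||tv||_1 ≥ ||x||_1), which is the mechanism behind the reduced measurement bounds the paper exploits.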