Multiple pattern classification by sparse subspace decomposition
A robust classification method is developed on the basis of sparse subspace
decomposition. The method decomposes a mixture of subspaces of unlabeled data
(queries) into as few class subspaces as possible. Each query is classified
into the class whose subspace contributes significantly to the decomposed
subspace, and multiple queries from different classes can be classified
simultaneously into their respective classes. A practical greedy algorithm for
the sparse subspace decomposition is designed for the classification. The
present method achieves a high recognition rate and robust performance by
exploiting joint sparsity.
Comment: 8 pages, 3 figures, 2nd IEEE International Workshop on Subspace
Methods, Workshop Proceedings of ICCV 200
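The subspace-contribution idea behind this abstract can be illustrated with a minimal sketch: fit an orthonormal basis per class and assign a query to the class whose subspace leaves the smallest projection residual. This is an illustrative simplification, not the paper's greedy decomposition algorithm; the function names and the residual criterion are assumptions.

```python
import numpy as np

def class_bases(class_data, dim):
    # Hypothetical helper: orthonormal basis for each class subspace,
    # obtained from the leading left singular vectors of the class data.
    bases = {}
    for label, X in class_data.items():
        U, _, _ = np.linalg.svd(X, full_matrices=False)
        bases[label] = U[:, :dim]
    return bases

def classify(query, bases):
    # Assign the query to the class whose subspace leaves the smallest
    # projection residual (a simple proxy for "significant contribution").
    best, best_res = None, np.inf
    for label, U in bases.items():
        res = np.linalg.norm(query - U @ (U.T @ query))
        if res < best_res:
            best, best_res = label, res
    return best
```

In this toy form, each query is handled independently; the joint, simultaneous treatment of multiple queries is what the sparse decomposition in the paper adds.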
Joint Block-Sparse Recovery Using Simultaneous BOMP/BOLS
We consider greedy algorithms for the joint recovery of high-dimensional
sparse signals based on the block multiple measurement vector (BMMV) model in
compressed sensing (CS). To this end, we first put forth two versions of
simultaneous block orthogonal least squares (S-BOLS) as baselines within the
OLS framework. Their cornerstone is to sequentially check and select the
support block that minimizes the residual power. We then develop a parallel
performance analysis for the existing simultaneous block orthogonal matching
pursuit (S-BOMP) and the two proposed S-BOLS algorithms. It indicates that,
under conditions based on the mutual incoherence property (MIP) and the
decaying magnitude structure of the nonzero blocks of the signal, the
algorithms select all the significant blocks before possibly choosing incorrect
ones. In addition, we consider the problem of sufficient data volume for
reliable recovery and provide MIP-based bounds on it in closed form. Together,
these results highlight the key role of the block structure in addressing the
weak-sparsity issue, i.e., the scenario where the overall sparsity is too
large. The derived theoretical results are also valid for conventional
block-greedy algorithms and non-block algorithms, obtained by setting the
number of measurement vectors and the block length to 1, respectively.
Comment: This work has been submitted to the IEEE for possible publication
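The block-greedy selection step described here can be sketched as a simplified S-BOMP: score each column block by its correlation with the current residual across all measurement vectors, pick the best block, and re-fit by least squares. This is an illustrative simplification under assumed conventions (consecutive column blocks, Frobenius-norm scoring), not the paper's exact algorithm or its S-BOLS variants.

```python
import numpy as np

def s_bomp(A, Y, block_len, n_blocks_to_select):
    # Simplified simultaneous block OMP sketch. Assumptions: the columns of
    # A are grouped into consecutive blocks of length block_len, and Y
    # stacks the multiple measurement vectors as its columns.
    m, n = A.shape
    n_blocks = n // block_len
    residual = Y.copy()
    support = []
    for _ in range(n_blocks_to_select):
        # Score each unselected block by the Frobenius norm of its
        # correlation with the current residual.
        scores = []
        for b in range(n_blocks):
            if b in support:
                scores.append(-np.inf)
                continue
            Ab = A[:, b * block_len:(b + 1) * block_len]
            scores.append(np.linalg.norm(Ab.T @ residual))
        support.append(int(np.argmax(scores)))
        # Re-estimate coefficients on all selected blocks by least squares,
        # then update the residual (the "orthogonal" step of OMP).
        cols = np.concatenate(
            [np.arange(b * block_len, (b + 1) * block_len) for b in support])
        X, *_ = np.linalg.lstsq(A[:, cols], Y, rcond=None)
        residual = Y - A[:, cols] @ X
    return sorted(support)
```

An S-BOLS variant would instead score each candidate block by the residual power remaining after adding it, which is costlier per iteration but can select differently.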
Compressed Sensing and Parallel Acquisition
Parallel acquisition systems arise in various applications in order to
moderate problems caused by insufficient measurements in single-sensor systems.
These systems allow simultaneous data acquisition across multiple sensors,
alleviating such problems by providing more overall measurements. In this work
we consider the combination of compressed sensing with parallel acquisition. We
establish the theoretical improvements of such systems by providing recovery
guarantees for which, subject to appropriate conditions, the number of
measurements required per sensor decreases linearly with the total number of
sensors. Throughout, we consider two different sampling scenarios -- distinct
(corresponding to independent sampling in each sensor) and identical
(corresponding to dependent sampling between sensors) -- and a general
mathematical framework that allows for a wide range of sensing matrices (e.g.,
subgaussian random matrices, subsampled isometries, random convolutions, and
random Toeplitz matrices). We also consider not just the standard sparse signal
model, but also the so-called sparse-in-levels signal model, which includes
sparse and distributed signals as well as clustered sparse signals. As our
results show, optimal recovery guarantees for both distinct and identical
sampling are possible under much broader conditions on the so-called sensor
profile matrices (which characterize environmental conditions between a source
and the sensors) for the sparse-in-levels model than for the sparse model. To
verify our recovery guarantees we provide numerical results showing phase
transitions for a number of different multi-sensor environments.
Comment: 43 pages, 4 figures
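The distinct-sampling scenario can be sketched as stacking per-sensor measurement matrices into one compressed-sensing system, after which any standard sparse solver applies; a plain OMP on the stacked system illustrates how measurements from several sensors combine. The helper names and the orthonormal test construction are assumptions for the sketch, not the paper's framework or its sensor profile matrices.

```python
import numpy as np

def stack_sensors(sensor_mats):
    # Distinct-sampling model: each sensor c measures with its own matrix
    # Phi_c; the overall CS system stacks the per-sensor measurements.
    return np.vstack(sensor_mats)

def omp(A, y, s):
    # Plain orthogonal matching pursuit, run for s iterations.
    residual = y.copy()
    support = []
    for _ in range(s):
        scores = np.abs(A.T @ residual)
        scores[support] = -np.inf  # never reselect an index
        support.append(int(np.argmax(scores)))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x
```

In the identical-sampling scenario every sensor would share one sampling matrix, with per-sensor variation entering only through the sensor profile matrices; the stacking step is otherwise the same.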