Compressive Classification
This paper derives fundamental limits associated with compressive
classification of Gaussian mixture source models. In particular, we offer an
asymptotic characterization of the behavior of the (upper bound to the)
misclassification probability associated with the optimal Maximum-A-Posteriori
(MAP) classifier that depends on quantities that are dual to the concepts of
diversity gain and coding gain in multi-antenna communications. The diversity,
which is shown to determine the rate at which the probability of
misclassification decays in the low noise regime, is shown to depend on the
geometry of the source, the geometry of the measurement system and their
interplay. The measurement gain, which represents the counterpart of the coding
gain, is also shown to depend on geometrical quantities. It is argued that the
diversity order and the measurement gain also offer an optimization criterion
to perform dictionary learning for compressive classification applications.
Comment: 5 pages, 3 figures, submitted to the 2013 IEEE International Symposium on Information Theory (ISIT 2013).
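The MAP rule above can be illustrated numerically. The following is a minimal sketch under assumed parameters (two zero-mean Gaussian classes on random rank-2 subspaces and a random Gaussian measurement matrix; all sizes are illustrative, not from the paper): the classifier picks the class maximizing the log-likelihood of the measurement y under N(0, A Σ_k Aᵀ + σ²I).

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 20, 8, 2        # ambient dimension, measurements, subspace rank (illustrative)
sigma2 = 1e-4             # low-noise regime

# Two zero-mean Gaussian classes supported on distinct random low-rank subspaces.
Us = [rng.standard_normal((n, r)) for _ in range(2)]
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random compressive measurement matrix

def map_classify(y):
    """MAP rule for equiprobable classes: argmax_k log N(y; 0, A Sigma_k A^T + sigma2 I)."""
    scores = []
    for U in Us:
        C = A @ (U @ U.T) @ A.T + sigma2 * np.eye(m)
        _, logdet = np.linalg.slogdet(C)
        scores.append(-0.5 * logdet - 0.5 * y @ np.linalg.solve(C, y))
    return int(np.argmax(scores))

# Empirical accuracy over compressive measurements of samples from both classes.
trials, correct = 200, 0
for _ in range(trials):
    k = int(rng.integers(2))
    x = Us[k] @ rng.standard_normal(r)
    y = A @ x + np.sqrt(sigma2) * rng.standard_normal(m)
    correct += (map_classify(y) == k)
acc = correct / trials
```

In the low-noise regime the error here is governed by how the class subspaces look after projection through A, which matches the abstract's point that diversity depends on the source geometry, the measurement geometry, and their interplay.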
Dimension-adaptive bounds on compressive FLD Classification
Efficient dimensionality reduction by random projections (RP) has gained popularity, so the learning guarantees achievable in RP spaces are of great interest. In the finite-dimensional setting, it has been shown for the compressive Fisher Linear Discriminant (FLD) classifier that, for good generalisation, the required target dimension grows only as the log of the number of classes and is not adversely affected by the number of projected data points. However, these bounds depend on the dimensionality d of the original data space. In this paper we give further guarantees that remove d from the bounds under certain regularity conditions on the data density structure. In particular, if the data density does not fill the ambient space, then the error of compressive FLD is independent of the ambient dimension and depends only on a notion of 'intrinsic dimension'.
Compressively Sensed Image Recognition
Compressive Sensing (CS) theory asserts that sparse signal reconstruction is
possible from a small number of linear measurements. Although CS enables
low-cost linear sampling, it requires non-linear and costly reconstruction.
Recent literature works show that compressive image classification is possible
in CS domain without reconstruction of the signal. In this work, we introduce a
DCT-based method that extracts binary discriminative features directly from CS
measurements. These CS measurements can be obtained by using (i) a random or a
pseudo-random measurement matrix, or (ii) a measurement matrix whose elements
are learned from the training data to optimize the given classification task.
We further introduce feature fusion by concatenating Bag of Words (BoW)
representation of our binary features with one of the two state-of-the-art
CNN-based feature vectors. We show that our fused feature outperforms the
state-of-the-art in both cases.
Comment: 6 pages, submitted/accepted, EUVIP 201
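The feature-extraction step, taking binary DCT-domain features directly from CS measurements without reconstruction, can be sketched as follows. This is a hedged illustration using option (i) from the abstract (a random measurement matrix); the DCT basis construction and all sizes are assumptions for the sketch, not the paper's exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 256, 64   # signal length and number of CS measurements (illustrative)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # option (i): random measurement matrix

# DCT-II basis applied to the measurement vector (illustrative construction).
idx = np.arange(m)
D = np.cos(np.pi * np.outer(idx, idx + 0.5) / m)

def binary_features(x):
    """Binarize the DCT coefficients of the compressive measurements y = Phi x."""
    y = Phi @ x          # linear CS sampling, no reconstruction of x
    return (D @ y > 0).astype(np.uint8)

f = binary_features(rng.standard_normal(n))   # one binary descriptor per signal
```

Descriptors like this can then be aggregated, for example into a Bag-of-Words histogram as in the abstract, before fusion with CNN features.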
Compressive Raman imaging with spatial frequency modulated illumination
We report a line scanning imaging modality of compressive Raman technology
with spatial frequency modulated illumination using a single pixel detector. We
demonstrate the imaging and classification of three different chemical species
at line scan rates of 40 Hz.
Binary Linear Classification and Feature Selection via Generalized Approximate Message Passing
For the problem of binary linear classification and feature selection, we
propose algorithmic approaches to classifier design based on the generalized
approximate message passing (GAMP) algorithm, recently proposed in the context
of compressive sensing. We are particularly motivated by problems where the
number of features greatly exceeds the number of training examples, but where
only a few features suffice for accurate classification. We show that
sum-product GAMP can be used to (approximately) minimize the classification
error rate and max-sum GAMP can be used to minimize a wide variety of
regularized loss functions. Furthermore, we describe an
expectation-maximization (EM)-based scheme to learn the associated model
parameters online, as an alternative to cross-validation, and we show that
GAMP's state-evolution framework can be used to accurately predict the
misclassification rate. Finally, we present a detailed numerical study to
confirm the accuracy, speed, and flexibility afforded by our GAMP-based
approaches to binary linear classification and feature selection.
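The full GAMP iterations are beyond a short sketch, but the problem regime the abstract targets (features greatly exceeding training examples, with only a few informative features) and the objective that max-sum GAMP addresses (a sparsity-regularized classification loss) can be illustrated with a plainly named stand-in: L1-regularized logistic regression solved by proximal gradient descent (ISTA). The sizes and synthetic data below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, s = 100, 1000, 5      # examples, features, informative features (p >> n, illustrative)

# Synthetic labels driven by a sparse weight vector, as in the motivating setting.
w_true = np.zeros(p)
w_true[:s] = 2.0
X = rng.standard_normal((n, p)) / np.sqrt(n)
y = np.where(X @ w_true + 0.1 * rng.standard_normal(n) > 0, 1.0, -1.0)

def sparse_logistic(X, y, lam=0.01, step=1.0, iters=500):
    """Proximal gradient (ISTA) on mean logistic loss + lam * ||w||_1."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(iters):
        z = np.clip(y * (X @ w), -50.0, 50.0)           # clip to avoid overflow in exp
        grad = -X.T @ (y / (1.0 + np.exp(z))) / n       # gradient of mean logistic loss
        w = w - step * grad
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # soft threshold
    return w

w = sparse_logistic(X, y)
acc = float(np.mean(np.sign(X @ w) == y))   # training accuracy
```

ISTA is only a stand-in here: GAMP's appeal in the paper is that it handles such regularized losses at scale and that its state evolution predicts the misclassification rate, which a generic proximal method does not provide.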
