Collaborative Representation based Classification for Face Recognition
By coding a query sample as a sparse linear combination of all training
samples and then classifying it by evaluating which class leads to the minimal
coding residual, sparse representation based classification (SRC) yields
promising results for robust face recognition. It is widely believed that the
l1-norm sparsity constraint on the coding coefficients plays a key role in the
success of SRC, while its use of all training samples to collaboratively
represent the query sample is largely overlooked. In this paper we discuss how
SRC works, and show that the collaborative representation mechanism used in
SRC is much more crucial to its success in face classification. SRC is a
special case of collaborative representation based classification (CRC), which has
various instantiations by applying different norms to the coding residual and
coding coefficient. More specifically, the l1 or l2 norm characterization of
coding residual is related to the robustness of CRC to outlier facial pixels,
while the l1 or l2 norm characterization of coding coefficient is related to
the degree of discrimination of facial features. Extensive experiments were
conducted to verify the face recognition accuracy and efficiency of CRC with
different instantiations.Comment: It is a substantial revision of a previous conference paper (L.
Zhang, M. Yang, et al. "Sparse Representation or Collaborative
Representation: Which Helps Face Recognition?" in ICCV 2011
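To make the collaborative representation idea concrete, here is a minimal sketch of the l2/l2 instantiation (regularized least squares coding followed by class-wise residual comparison). The function name, default regularization weight, and numerical guard are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def crc_classify(X, labels, q, lam=1e-3):
    """Classify query q by collaborative representation under l2 norms.

    X      : (d, n) matrix whose columns are ALL training samples (all classes)
    labels : length-n numpy array of class labels for the columns of X
    q      : length-d query sample
    lam    : l2 regularization weight (hypothetical default)
    """
    # Collaborative coding: regularized least squares over the whole dictionary
    n = X.shape[1]
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ q)

    best_class, best_score = None, np.inf
    for c in np.unique(labels):
        idx = labels == c
        # Class-specific reconstruction residual, normalized by coefficient energy
        resid = np.linalg.norm(q - X[:, idx] @ alpha[idx])
        score = resid / (np.linalg.norm(alpha[idx]) + 1e-12)
        if score < best_score:
            best_class, best_score = c, score
    return best_class
```

Because the coding matrix depends only on the training set, the projection (X^T X + lam I)^{-1} X^T can be precomputed once, which is what makes the l2 instantiation fast at query time.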
Linear Spatial Pyramid Matching Using Non-Convex and Non-Negative Sparse Coding for Image Classification
Recently, sparse coding has been highly successful in image classification,
mainly due to its ability to incorporate the sparsity of the image
representation. In this paper, we propose an improved sparse coding model based
on linear spatial pyramid matching (SPM) and Scale Invariant Feature Transform
(SIFT) descriptors. The novelty lies in the non-convexity and non-negativity
constraints added simultaneously to the sparse coding model. Our numerical
experiments show that the improved approach using non-convex and non-negative
sparse coding is superior to the original ScSPM [1] on several typical
databases.
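The abstract does not spell out the exact non-convex penalty, so as an illustration of the non-negativity side only, the following hedged sketch codes a descriptor over a dictionary with projected ISTA (an l1 penalty plus a projection onto the non-negative orthant); all names and parameters are assumptions.

```python
import numpy as np

def nonneg_sparse_code(D, x, lam=0.1, n_iter=200):
    """Sparse code x over dictionary D (columns are atoms) subject to a >= 0,
    via projected ISTA on: min_a 0.5*||x - D a||^2 + lam*||a||_1, a >= 0."""
    L = np.linalg.norm(D, ord=2) ** 2      # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the smooth data term
        # gradient step, l1 shrinkage, then non-negative projection in one move
        a = np.maximum(a - (grad + lam) / L, 0.0)
    return a
```

In an SPM pipeline, each SIFT descriptor in each pyramid cell would be coded this way and the codes max-pooled per cell before the linear classifier.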
Supersparse Linear Integer Models for Optimized Medical Scoring Systems
Scoring systems are linear classification models that only require users to
add, subtract and multiply a few small numbers in order to make a prediction.
These models are in widespread use by the medical community, but are difficult
to learn from data because they need to be accurate and sparse, have coprime
integer coefficients, and satisfy multiple operational constraints. We present
a new method for creating data-driven scoring systems called a Supersparse
Linear Integer Model (SLIM). SLIM scoring systems are built by solving an
integer program that directly encodes measures of accuracy (the 0-1 loss) and
sparsity (the l0-seminorm) while restricting coefficients to coprime
integers. SLIM can seamlessly incorporate a wide range of operational
constraints related to accuracy and sparsity, and can produce highly tailored
models without parameter tuning. We provide bounds on the testing and training
accuracy of SLIM scoring systems, and present a new data reduction technique
that can improve scalability by eliminating a portion of the training data
beforehand. Our paper includes results from a collaboration with the
Massachusetts General Hospital Sleep Laboratory, where SLIM was used to create
a highly tailored scoring system for sleep apnea screening.
Comment: This version reflects our findings on SLIM as of January 2016
(arXiv:1306.5860 and arXiv:1405.4047 are out-of-date). The final published
version of this article is available at http://www.springerlink.co
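A hedged sketch of the flavor of this integer program, written with PuLP and the bundled CBC solver: the 0-1 loss is encoded with big-M indicator constraints, the l0-seminorm with binary indicator variables, and coefficients are bounded integers. The coprimality restriction, intercept handling, and operational constraints of the actual SLIM formulation are omitted, and the names and constants (C0, Lmax, M) are assumptions.

```python
import numpy as np
import pulp

def slim_fit(X, y, C0=0.01, Lmax=5, M=None):
    """Sketch of a SLIM-style integer program (hypothetical parameter names).

    X : (n, d) feature matrix, y : labels in {-1, +1}.
    Minimizes (# training errors) + C0 * (# nonzero coefficients)
    over integer coefficients w_j in [-Lmax, Lmax].
    """
    n, d = X.shape
    if M is None:
        # big-M large enough to deactivate any margin constraint when z_i = 1
        M = 1 + Lmax * np.abs(X).sum(axis=1).max()
    prob = pulp.LpProblem("slim_sketch", pulp.LpMinimize)
    w = [pulp.LpVariable(f"w{j}", -Lmax, Lmax, cat="Integer") for j in range(d)]
    s = [pulp.LpVariable(f"s{j}", cat="Binary") for j in range(d)]  # s_j = [w_j != 0]
    z = [pulp.LpVariable(f"z{i}", cat="Binary") for i in range(n)]  # z_i = 0-1 loss
    prob += pulp.lpSum(z) + C0 * pulp.lpSum(s)
    for i in range(n):
        # z_i is forced to 1 whenever sample i violates the unit margin
        prob += int(y[i]) * pulp.lpSum(
            w[j] * float(X[i, j]) for j in range(d)) >= 1 - M * z[i]
    for j in range(d):
        # link each coefficient to its l0 indicator variable
        prob += w[j] <= Lmax * s[j]
        prob += w[j] >= -Lmax * s[j]
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return np.array([pulp.value(v) for v in w])
```

Operational constraints of the kind the abstract mentions (e.g., "use at most 5 features") would simply be added as further linear constraints on s, which is why no parameter tuning is needed to enforce them.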
Similarity Learning for Provably Accurate Sparse Linear Classification
In recent years, the crucial importance of metrics in machine learning
algorithms has led to increasing interest in optimizing distance and
similarity functions. Most state-of-the-art methods focus on learning
Mahalanobis distances (which must satisfy a positive semi-definiteness
constraint) for use in a local k-NN algorithm. However, no theoretical
link is established between the learned metrics and their performance in
classification. In this paper, we make use of the formal framework of good
similarities introduced by Balcan et al. to design an algorithm for learning a
non-PSD linear similarity optimized in a nonlinear feature space, which is then
used to build a global linear classifier. We show that our approach has uniform
stability and derive a generalization bound on the classification error.
Experiments performed on various datasets confirm the effectiveness of our
approach compared to state-of-the-art methods and provide evidence that (i) it
is fast, (ii) robust to overfitting, and (iii) produces very sparse classifiers.
Comment: Appears in Proceedings of the 29th International Conference on
Machine Learning (ICML 2012).
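In the framework of Balcan et al., a good similarity yields a global linear classifier over similarity scores to a set of landmark points. The hedged sketch below shows only that second stage, assuming a learned (possibly non-PSD) bilinear similarity matrix A is already available; using l1-regularized logistic regression in place of the paper's own learning step is an illustrative substitution that likewise encourages sparse classifiers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def similarity_map(X, landmarks, A):
    """Map each sample to its (possibly non-PSD) bilinear similarities
    K_A(x, l) = x^T A l against a set of landmark points."""
    return X @ A @ landmarks.T              # shape: (n_samples, n_landmarks)

def fit_global_linear_classifier(X_tr, y_tr, landmarks, A, C=1.0):
    # The l1 penalty stands in for the sparsity-inducing step of the paper
    # (an assumption, not its exact formulation).
    F = similarity_map(X_tr, landmarks, A)
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=C)
    clf.fit(F, y_tr)
    return clf
```

The point of the two-stage design is that A need not be PSD: all the classifier requires is that similarity scores to landmarks be discriminative features.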
Binary Linear Classification and Feature Selection via Generalized Approximate Message Passing
For the problem of binary linear classification and feature selection, we
propose algorithmic approaches to classifier design based on the generalized
approximate message passing (GAMP) algorithm, recently proposed in the context
of compressive sensing. We are particularly motivated by problems where the
number of features greatly exceeds the number of training examples, but where
only a few features suffice for accurate classification. We show that
sum-product GAMP can be used to (approximately) minimize the classification
error rate and max-sum GAMP can be used to minimize a wide variety of
regularized loss functions. Furthermore, we describe an
expectation-maximization (EM)-based scheme to learn the associated model
parameters online, as an alternative to cross-validation, and we show that
GAMP's state-evolution framework can be used to accurately predict the
misclassification rate. Finally, we present a detailed numerical study to
confirm the accuracy, speed, and flexibility afforded by our GAMP-based
approaches to binary linear classification and feature selection.
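As a rough illustration of the sum-product recursion with a probit output channel, here is a hedged numerical sketch. The Gaussian input denoiser is a placeholder (a sparsity-inducing Bernoulli-Gaussian prior would replace it for feature selection), no damping or adaptive stepping is included, and all names are assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.stats import norm

def probit_moments(y, p, v):
    """Posterior mean/variance of z ~ N(p, v) given y = sign(z + w), w ~ N(0,1)."""
    c = y * p / np.sqrt(1.0 + v)
    ratio = norm.pdf(c) / np.maximum(norm.cdf(c), 1e-12)
    z_mean = p + y * v * ratio / np.sqrt(1.0 + v)
    z_var = v - (v ** 2 / (1.0 + v)) * ratio * (c + ratio)
    return z_mean, z_var

def gamp_probit(A, y, sx2=1.0, n_iter=25):
    """Sum-product GAMP for labels y in {-1,+1} and design matrix A (n, d),
    with a Gaussian N(0, sx2) weight prior as a placeholder denoiser."""
    n, d = A.shape
    A2 = A ** 2
    xhat, taux = np.zeros(d), np.full(d, sx2)
    shat = np.zeros(n)
    for _ in range(n_iter):
        taup = A2 @ taux                  # output-side variances
        phat = A @ xhat - taup * shat     # Onsager-corrected estimate of z = A x
        z0, tauz = probit_moments(y, phat, taup)
        shat = (z0 - phat) / taup
        taus = (1.0 - tauz / taup) / taup
        taur = 1.0 / (A2.T @ taus)        # input-side variances
        rhat = xhat + taur * (A.T @ shat) # pseudo-observation of the weights
        xhat = rhat * sx2 / (sx2 + taur)  # Gaussian-prior denoiser (shrinkage)
        taux = sx2 * taur / (sx2 + taur)
    return xhat
```

Each iteration costs only matrix-vector products with A and A^2, which is what makes GAMP attractive when the number of features greatly exceeds the number of training examples.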
Scalable Greedy Algorithms for Transfer Learning
In this paper we consider the binary transfer learning problem, focusing on
how to select and combine sources from a large pool to yield a good performance
on a target task. Constraining our scenario to the real world, we do not
assume direct access to the source data, but rather employ the source hypotheses
trained from them. We propose an efficient algorithm that selects relevant
source hypotheses and feature dimensions simultaneously, building on the
literature on the best subset selection problem. Our algorithm achieves
state-of-the-art results on three computer vision datasets, substantially
outperforming both transfer learning and popular feature selection baselines in
a small-sample setting. We also present a randomized variant that achieves the
same results with a computational cost independent of the number of source
hypotheses and feature dimensions. We also theoretically prove that, under
reasonable assumptions on the source hypotheses, our algorithm can learn
effectively from few examples.
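A hedged sketch of the greedy flavor, assuming the source hypotheses have already been evaluated on the target training set: forward selection over the resulting prediction matrix, in the spirit of the best-subset-selection literature (essentially orthogonal matching pursuit). The function name, least-squares loss, and refitting scheme are illustrative assumptions; the paper's actual algorithm and its randomized variant differ in detail.

```python
import numpy as np

def greedy_source_selection(H, y, k):
    """Forward greedy selection of k source hypotheses (OMP-style sketch).

    H : (n, m) matrix; column j holds source hypothesis j's predictions
        on the n target training samples.
    y : length-n target labels in {-1, +1}.
    """
    selected = []
    residual = y.astype(float)
    beta = np.zeros(0)
    for _ in range(k):
        # pick the hypothesis most correlated with the current residual
        scores = np.abs(H.T @ residual)
        scores[selected] = -np.inf        # never re-select a chosen column
        j = int(np.argmax(scores))
        selected.append(j)
        # refit a least-squares combination on the selected subset
        S = H[:, selected]
        beta, *_ = np.linalg.lstsq(S, y, rcond=None)
        residual = y - S @ beta
    return selected, beta
```

Because only the n-by-m prediction matrix is touched, the procedure never needs the source data themselves, matching the hypothesis-transfer setting described above.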