16,983 research outputs found
On the Sample Complexity of Predictive Sparse Coding
The goal of predictive sparse coding is to learn a representation of examples
as sparse linear combinations of elements from a dictionary, such that a
learned hypothesis linear in the new representation performs well on a
predictive task. Predictive sparse coding algorithms have recently demonstrated
impressive performance on a variety of supervised tasks, but their
generalization properties have not been studied. We establish the first
generalization error bounds for predictive sparse coding, covering two
settings: 1) the overcomplete setting, where the number of features k exceeds
the original dimensionality d; and 2) the high or infinite-dimensional setting,
where only dimension-free bounds are useful. Both learning bounds intimately
depend on stability properties of the learned sparse encoder, as measured on
the training sample. Consequently, we first present a fundamental stability
result for the LASSO, a result characterizing the stability of the sparse codes
with respect to perturbations to the dictionary. In the overcomplete setting,
we present an estimation error bound that decays as \tilde{O}(sqrt(d k/m)) with
respect to d and k. In the high or infinite-dimensional setting, we show a
dimension-free bound that is \tilde{O}(sqrt(k^2 s / m)) with respect to k and
s, where s is an upper bound on the number of non-zeros in the sparse code for
any training data point.
Comment: Sparse Coding Stability Theorem from version 1 has been relaxed
considerably using a new notion of coding margin. Old Sparse Coding Stability
Theorem still in new version, now as Theorem 2. Presentation of all proofs
simplified/improved considerably. Paper reorganized. Empirical analysis
showing new coding margin is non-trivial on real dataset
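The LASSO stability property described above can be probed numerically: encode an example under a dictionary and under a slightly perturbed dictionary, then compare the resulting sparse codes. A minimal sketch using ISTA (proximal gradient descent for the LASSO); the dimensions, regularization weight `lam`, and perturbation size `eps` are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def lasso_encode(D, x, lam, n_iter=500):
    """Sparse code argmin_z 0.5*||Dz - x||^2 + lam*||z||_1, via ISTA."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1/L, L = Lipschitz const. of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = z - step * (D.T @ (D @ z - x))                      # gradient step
        z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0)  # soft threshold
    return z

rng = np.random.default_rng(0)
d, k = 10, 20                              # overcomplete: k > d
D = rng.standard_normal((d, k))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
x = rng.standard_normal(d)

z = lasso_encode(D, x, lam=0.5)
eps = 1e-3                                 # small dictionary perturbation
z_pert = lasso_encode(D + eps * rng.standard_normal((d, k)), x, lam=0.5)

print("nonzeros:", int(np.count_nonzero(np.abs(z) > 1e-8)))
print("code change under perturbation:", float(np.linalg.norm(z - z_pert)))
```

The ratio of code change to perturbation size is the quantity the stability theorems control; codes with a larger coding margin change less.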
Classification and Geometry of General Perceptual Manifolds
Perceptual manifolds arise when a neural population responds to an ensemble
of sensory signals associated with different physical features (e.g.,
orientation, pose, scale, location, and intensity) of the same perceptual
object. Object recognition and discrimination require classifying the
manifolds in a manner that is insensitive to variability within a manifold. How
neuronal systems give rise to invariant object classification and recognition
is a fundamental problem in brain theory as well as in machine learning. Here
we study the ability of a readout network to classify objects from their
perceptual manifold representations. We develop a statistical mechanical theory
for the linear classification of manifolds with arbitrary geometry, revealing a
remarkable relation to the mathematics of conic decomposition. Novel
geometrical measures of manifold radius and manifold dimension are introduced,
which can explain the classification capacity for manifolds of various
geometries. The general theory is demonstrated on a number of representative
manifolds, including L2 ellipsoids prototypical of strictly convex manifolds,
L1 balls representing polytopes consisting of finite sample points, and
orientation manifolds which arise from neurons tuned to respond to a continuous
angle variable, such as object orientation. The effects of label sparsity on
the classification capacity of manifolds are elucidated, revealing a scaling
relation between label sparsity and manifold radius. Theoretical predictions
are corroborated by numerical simulations using recently developed algorithms
to compute maximum margin solutions for manifold dichotomies. Our theory and
its extensions provide a powerful and rich framework for applying statistical
mechanics of linear classification to data arising from neuronal responses to
object stimuli, as well as to artificial deep networks trained for object
recognition tasks.
Comment: 24 pages, 12 figures, Supplementary Material
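The manifold dichotomies studied above can be illustrated with point-cloud manifolds: assign each cloud a random label and test whether a single linear readout classifies every point of every cloud with its manifold's label. A minimal sketch; the perceptron separability heuristic, the sizes, and the cloud radius are illustrative assumptions, not the paper's replica-theory calculation.

```python
import numpy as np

def separable(manifolds, labels, max_updates=5000):
    """Heuristic check: run a perceptron over all points, each point
    inheriting its manifold's label; True if a separating w is found."""
    X = np.vstack(manifolds)
    y = np.repeat(labels, [m.shape[0] for m in manifolds])
    w = np.zeros(X.shape[1])
    for _ in range(max_updates):
        bad = np.where(y * (X @ w) <= 0)[0]
        if bad.size == 0:
            return True                    # every point correctly classified
        w += y[bad[0]] * X[bad[0]]         # classic perceptron update
    return False                           # no separator found within budget

rng = np.random.default_rng(1)
n, p, pts = 50, 20, 5      # ambient dim, number of manifolds, points per manifold
trials = 30
hits = 0
for _ in range(trials):
    # each "manifold": a small cloud (radius ~0.1) around a random center
    clouds = [rng.standard_normal(n) + 0.1 * rng.standard_normal((pts, n))
              for _ in range(p)]
    hits += separable(clouds, rng.choice([-1, 1], size=p))
print(f"separable fraction at load p/n = {p/n:.2f}: {hits/trials:.2f}")
```

Sweeping the load p/n and the cloud radius traces out the capacity curves the theory predicts: larger manifold radius and dimension reduce the number of manifolds a given readout dimension can separate.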
Supersparse Linear Integer Models for Optimized Medical Scoring Systems
Scoring systems are linear classification models that only require users to
add, subtract and multiply a few small numbers in order to make a prediction.
These models are in widespread use by the medical community, but are difficult
to learn from data because they need to be accurate and sparse, have coprime
integer coefficients, and satisfy multiple operational constraints. We present
a new method for creating data-driven scoring systems called a Supersparse
Linear Integer Model (SLIM). SLIM scoring systems are built by solving an
integer program that directly encodes measures of accuracy (the 0-1 loss) and
sparsity (the \ell_0-seminorm) while restricting coefficients to coprime
integers. SLIM can seamlessly incorporate a wide range of operational
constraints related to accuracy and sparsity, and can produce highly tailored
models without parameter tuning. We provide bounds on the testing and training
accuracy of SLIM scoring systems, and present a new data reduction technique
that can improve scalability by eliminating a portion of the training data
beforehand. Our paper includes results from a collaboration with the
Massachusetts General Hospital Sleep Laboratory, where SLIM was used to create
a highly tailored scoring system for sleep apnea screening.
Comment: This version reflects our findings on SLIM as of January 2016
(arXiv:1306.5860 and arXiv:1405.4047 are out-of-date). The final published
version of this article is available at http://www.springerlink.co
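The SLIM objective described above, 0-1 loss plus a sparsity penalty over small coprime integer coefficients, can be illustrated on a toy problem by exhaustive search rather than an integer program. The dataset, coefficient range, and penalty weight C0 below are illustrative assumptions, not the paper's formulation or data.

```python
import itertools
from math import gcd
import numpy as np

# toy 2-feature dataset (illustrative; not the sleep apnea data)
X = np.array([[1, 2], [2, 1], [3, 3], [-1, -2], [-2, -1], [-3, -3]])
y = np.array([1, 1, 1, -1, -1, -1])

def slim_objective(w, C0=0.01):
    zero_one = np.mean(y * (X @ w) <= 0)        # 0-1 loss
    return zero_one + C0 * np.count_nonzero(w)  # + l0 penalty ("number of terms")

# exhaustive search over small coprime integer coefficient vectors
candidates = [np.array(w) for w in itertools.product(range(-5, 6), repeat=2)
              if gcd(abs(w[0]), abs(w[1])) == 1]  # coprimality (excludes (0, 0))
best = min(candidates, key=slim_objective)
print("coefficients:", best.tolist(), "objective:", round(slim_objective(best), 4))
```

On this toy data the search returns a one-term model: the penalty makes any extra nonzero coefficient cost more than it saves in 0-1 loss, which is exactly the accuracy-sparsity trade-off the integer program encodes at scale.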