Large Margin Image Set Representation and Classification
In this paper, we propose a novel image set representation and classification
method based on maximizing the margin of image sets. The margin of an image set
is defined as the difference between its distance to the nearest image set of a
different class and its distance to the nearest image set of the same class. By
modeling the image sets with both their image samples and their affine hull
models, and maximizing the margins of the image sets, the learning of the image
set representation parameters is formulated as a minimization problem, which is
solved by an expectation-maximization (EM) strategy with accelerated proximal
gradient (APG) optimization in an iterative algorithm. To classify a given test
image set, we assign it to the class that provides the largest margin.
Experiments on two video-sequence-based face recognition applications
demonstrate that the proposed method significantly outperforms state-of-the-art
image set classification methods in terms of both effectiveness and efficiency.
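As a minimal illustrative sketch (not the paper's learned representation), the margin-based decision rule can be expressed in a few lines of Python. The minimum pairwise Euclidean distance below stands in for the paper's sample/affine-hull set-to-set distance, and the names set_distance and classify_by_margin are hypothetical:

    import numpy as np

    def set_distance(A, B):
        # Minimum pairwise Euclidean distance between two image sets
        # (n_i x d arrays); a simple stand-in for the affine-hull distance.
        diffs = A[:, None, :] - B[None, :, :]
        return np.sqrt((diffs ** 2).sum(axis=2)).min()

    def classify_by_margin(sets, labels, T):
        # Assign test set T to the class with the largest margin:
        # nearest other-class distance minus nearest same-class distance.
        best_margin, best_class = -np.inf, None
        for c in set(labels):
            d_same = min(set_distance(T, S) for S, l in zip(sets, labels) if l == c)
            d_diff = min(set_distance(T, S) for S, l in zip(sets, labels) if l != c)
            if d_diff - d_same > best_margin:
                best_margin, best_class = d_diff - d_same, c
        return best_class

The sketch omits the representation-parameter learning (the EM strategy with APG optimization) that the paper performs before applying this decision rule.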
Noise-adaptive Margin-based Active Learning and Lower Bounds under Tsybakov Noise Condition
We present a simple noise-robust margin-based active learning algorithm for
finding homogeneous (passing through the origin) linear separators, and analyze
its error convergence when labels are corrupted by noise. We show that when the
noise satisfies the Tsybakov low noise condition (Mammen, Tsybakov, and others
1999; Tsybakov 2004), the algorithm adapts to the unknown level of noise and
achieves the optimal statistical rate up to poly-logarithmic factors. We also
derive lower bounds for margin-based active learning algorithms under the
Tsybakov noise condition (TNC) in the membership query synthesis scenario
(Angluin 1988). Our result implies lower bounds for the stream-based selective
sampling scenario (Cohn 1990) under TNC for some fairly simple data
distributions. Quite surprisingly, we show that the sample complexity cannot be
improved even if the underlying data distribution is as simple as the uniform
distribution on the unit ball. Our proof involves the construction of a
well-separated hypothesis set on the d-dimensional unit ball along with
carefully designed label distributions satisfying the Tsybakov noise condition.
Our analysis might provide insights for other forms of lower bounds as well.
Comment: 16 pages, 2 figures. An abridged version to appear in the Thirtieth
AAAI Conference on Artificial Intelligence (AAAI), held in Phoenix, AZ, USA in
2016.
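For intuition, a generic margin-based active learning loop for a homogeneous linear separator is sketched below: it queries labels only inside a shrinking band around the current hyperplane. This is a hedged sketch; the band-halving schedule, the refit step, and the label-noise model are illustrative assumptions, not the paper's noise-adaptive algorithm or its TNC analysis.

    import numpy as np

    rng = np.random.default_rng(0)

    def margin_based_active_learn(query_label, d=5, rounds=8, batch=200, b0=1.0):
        w = rng.standard_normal(d)
        w /= np.linalg.norm(w)
        band = b0
        for _ in range(rounds):
            # Draw candidates from the unit sphere and keep only those
            # falling inside the current margin band around the hyperplane.
            X = rng.standard_normal((10 * batch, d))
            X /= np.linalg.norm(X, axis=1, keepdims=True)
            Xq = X[np.abs(X @ w) <= band][:batch]
            y = np.array([query_label(x) for x in Xq])   # noisy labels
            # Refit on the queried points (a crude stand-in for the
            # algorithm's constrained empirical risk minimization).
            w_new = Xq.T @ y
            if np.linalg.norm(w_new) > 0:
                w = w_new / np.linalg.norm(w_new)
            band /= 2                                    # shrink the band
        return w

    # Hypothetical noise model: label flips are most likely near the
    # decision boundary, loosely mimicking a low-noise condition.
    w_star = np.ones(5) / np.sqrt(5)
    def query_label(x, alpha=4.0):
        s = 1.0 if x @ w_star >= 0 else -1.0
        p_flip = 0.5 * np.exp(-alpha * abs(x @ w_star))
        return -s if rng.random() < p_flip else s

    print("alignment with w*:", margin_based_active_learn(query_label) @ w_star)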
On the Sample Complexity of Predictive Sparse Coding
The goal of predictive sparse coding is to learn a representation of examples
as sparse linear combinations of elements from a dictionary, such that a
learned hypothesis that is linear in the new representation performs well on a
predictive task. Predictive sparse coding algorithms have recently demonstrated
impressive performance on a variety of supervised tasks, but their
generalization properties have not been studied. We establish the first
generalization error bounds for predictive sparse coding, covering two
settings: 1) the overcomplete setting, where the number of features k exceeds
the original dimensionality d; and 2) the high- or infinite-dimensional
setting, where only dimension-free bounds are useful. Both learning bounds
depend intimately on stability properties of the learned sparse encoder, as
measured on the training sample. Consequently, we first present a fundamental
stability result for the LASSO, characterizing the stability of the sparse
codes with respect to perturbations of the dictionary. In the overcomplete
setting, we present an estimation error bound that decays as
$\tilde{O}(\sqrt{dk/m})$ with respect to d and k. In the high- or
infinite-dimensional setting, we show a dimension-free bound that is
$\tilde{O}(\sqrt{k^2 s/m})$ with respect to k and s, where s is an upper bound
on the number of non-zeros in the sparse code for any training data point.
Comment: The Sparse Coding Stability Theorem from version 1 has been relaxed
considerably using a new notion of coding margin. The old Sparse Coding
Stability Theorem remains in the new version, now as Theorem 2. The
presentation of all proofs has been simplified and improved considerably, and
the paper has been reorganized. An empirical analysis shows the new coding
margin is non-trivial on a real dataset.
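To make the stability notion concrete, the sketch below computes LASSO sparse codes by ISTA and measures how the code of a fixed point moves when the dictionary is perturbed. The regularization weight, iteration count, and perturbation scale are arbitrary illustrative choices, not values from the paper.

    import numpy as np

    def lasso_code(D, x, lam=0.1, iters=500):
        # ISTA for z = argmin 0.5 * ||x - D z||^2 + lam * ||z||_1.
        L = np.linalg.norm(D, 2) ** 2    # Lipschitz constant of the gradient
        z = np.zeros(D.shape[1])
        for _ in range(iters):
            z = z - D.T @ (D @ z - x) / L                           # gradient step
            z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-threshold
        return z

    rng = np.random.default_rng(0)
    d, k = 20, 50                        # overcomplete setting: k > d
    D = rng.standard_normal((d, k))
    D /= np.linalg.norm(D, axis=0)       # unit-norm dictionary atoms
    x = rng.standard_normal(d)

    z = lasso_code(D, x)
    z_pert = lasso_code(D + 1e-3 * rng.standard_normal((d, k)), x)
    print("code change under perturbation:", np.linalg.norm(z - z_pert))
    print("non-zeros s:", int(np.count_nonzero(np.abs(z) > 1e-8)))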