Improved Vapnik Cervonenkis bounds
We give a new proof of VC bounds where we avoid the use of symmetrization and
use a shadow sample of arbitrary size. We also improve on the variance term.
This results in better constants, as shown on numerical examples. Moreover, our
bounds still hold for independent, non-identically distributed random variables.
Keywords: Statistical learning theory, PAC-Bayesian theorems, VC dimension
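For orientation, a classical VC deviation bound of the kind these results improve can be stated as below; the constants and exact form vary across references, and sharpening them is precisely the point of the result above. Here F is the class of functions, h its VC dimension, n the sample size, P the true distribution, P_n the empirical one, and S_F the growth function. This is a textbook statement, not the improved bound of the paper.

```latex
% One textbook form of the classical Vapnik-Chervonenkis inequality
% (constants differ across references); S_F denotes the growth function
% and h the VC dimension of the class F.
P\Bigl(\sup_{f \in \mathcal{F}} \bigl| P_n f - P f \bigr| > \varepsilon\Bigr)
  \;\le\; 4\, S_{\mathcal{F}}(2n)\, e^{-n\varepsilon^2/8},
\qquad\text{with}\qquad
S_{\mathcal{F}}(2n) \;\le\; \Bigl(\frac{2en}{h}\Bigr)^{h} \quad (2n \ge h).
```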
Toric grammars: a new statistical approach to natural language modeling
We propose a new statistical model for computational linguistics. Rather than
trying to estimate directly the probability distribution of a random sentence
of the language, we define a Markov chain on finite sets of sentences with many
finite recurrent communicating classes and define our language model as the
invariant probability measures of the chain on each recurrent communicating
class. This Markov chain, which we call a communication model, randomly
recombines at each step the set of sentences forming its current state, using some
grammar rules. When the grammar rules are fixed and known in advance instead of
being estimated on the fly, we can prove supplementary mathematical properties.
In particular, we can prove in this case that all states are recurrent states,
so that the chain defines a partition of its state space into finite recurrent
communicating classes. We show that our approach is a decisive departure from
Markov models at the sentence level and discuss its relationships with Context
Free Grammars. Although the toric grammars we use are closely related to
Context Free Grammars, the way we generate the language from the grammar is
qualitatively different. Our communication model has two purposes. On the one
hand, it is used to define indirectly the probability distribution of a random
sentence of the language. On the other hand, it can serve as a (crude) model of
language transmission from one speaker to another through the communication of a
(large) set of sentences.
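The abstract describes the communication model only informally, so the following sketch is purely illustrative: it keeps a finite set of tokenised sentences as the chain's state and, at each step, recombines two of them around a shared word. The recombination rule, function names and example sentences are hypothetical stand-ins, not the actual split and merge rules of toric grammars.

```python
import random

def recombination_step(state, rng=random):
    """One toy transition of a 'communication model': pick two sentences in
    the current state and, if they share a word, swap their tails after a
    shared word.  Placeholder rule only; real toric grammar rules are richer."""
    sentences = list(state)
    if len(sentences) < 2:
        return state
    s, t = rng.sample(sentences, 2)
    shared = set(s) & set(t)
    if not shared:
        return state  # this pair cannot be recombined
    w = rng.choice(sorted(shared))
    i, j = s.index(w), t.index(w)
    new_s = s[:i + 1] + t[j + 1:]
    new_t = t[:j + 1] + s[i + 1:]
    new_state = set(sentences)
    new_state.discard(s)
    new_state.discard(t)
    new_state.update({new_s, new_t})
    return frozenset(new_state)

# Toy usage: iterate the chain on a small set of tokenised sentences.
state = frozenset({("the", "dog", "runs", "fast"), ("the", "cat", "sleeps")})
for _ in range(10):
    state = recombination_step(state)
print(sorted(state))
```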
Robust linear least squares regression
We consider the problem of robustly predicting as well as the best linear
combination of d given functions in least squares regression, and variants of
this problem including constraints on the parameters of the linear combination.
For the ridge estimator and the ordinary least squares estimator, and their
variants, we provide new risk bounds of order d/n without logarithmic factor,
unlike some standard results, where n is the size of the training data. We
also provide a new estimator with better deviations in the presence of
heavy-tailed noise. It is based on truncating differences of losses in a
min--max framework and satisfies a risk bound both in expectation and in
deviations. The surprising feature common to these results is that exponential
deviations are achieved without any exponential moment condition on the output
distribution. All risk bounds are obtained through a PAC-Bayesian
analysis on truncated differences of losses. Experimental results strongly back
up our truncated min--max estimator.
Comment: Published at http://dx.doi.org/10.1214/11-AOS918 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org). arXiv admin note: significant text overlap
with arXiv:0902.173
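A minimal sketch of the truncation idea, under stated assumptions: the influence function psi below (identity near zero, logarithmic in the tails), the rescaling parameter lam and the crude grid-based min-max search are illustrative choices, not the exact estimator or tuning analysed in the paper.

```python
import numpy as np

def psi(x):
    # Monotone soft truncation of the identity: psi(x) ~ x near 0 and grows
    # only logarithmically, so a single heavy-tailed loss difference cannot
    # dominate the criterion (illustrative choice of influence function).
    return np.where(x >= 0,
                    np.log1p(x + 0.5 * x ** 2),
                    -np.log1p(-x + 0.5 * x ** 2))

def truncated_criterion(theta, theta_prime, X, y, lam):
    # Truncated, rescaled sum of differences of squared losses between a
    # candidate theta and an adversarial comparison point theta_prime.
    diff = (y - X @ theta) ** 2 - (y - X @ theta_prime) ** 2
    return np.sum(psi(lam * diff)) / lam

def min_max_estimate(X, y, candidates, lam=0.1):
    # Crude min-max over a finite candidate set: keep the candidate whose
    # worst-case truncated criterion against all other candidates is smallest.
    worst = [max(truncated_criterion(th, tp, X, y, lam) for tp in candidates)
             for th in candidates]
    return candidates[int(np.argmin(worst))]

# Toy usage on a hypothetical sample with heavy-tailed noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([1.0, -2.0]) + rng.standard_t(df=2, size=200)
grid = [np.array([a, b]) for a in np.linspace(0.0, 2.0, 9)
        for b in np.linspace(-3.0, -1.0, 9)]
print(min_max_estimate(X, y, grid))
```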
Challenging the empirical mean and empirical variance: a deviation study
We present new M-estimators of the mean and variance of real-valued random
variables, based on PAC-Bayes bounds. We analyze the non-asymptotic minimax
properties of the deviations of those estimators for sample distributions
having either a bounded variance or a bounded variance and a bounded kurtosis.
Under those weak hypotheses, allowing for heavy-tailed distributions, we show
that the worst-case deviations of the empirical mean are suboptimal. Indeed, we
prove that for any confidence level, there is some M-estimator whose
deviations are of the same order as the deviations of the empirical mean of a
Gaussian statistical sample, even when the statistical sample is instead
heavy-tailed. Experiments reveal that these new estimators perform even better
than predicted by our bounds, showing deviation quantile functions uniformly
lower at all probability levels than the empirical mean for non-Gaussian sample
distributions as simple as the mixture of two Gaussian measures.
Comment: Second version presents an improved variance estimate in section 4.
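A minimal sketch of this kind of mean M-estimator, under assumptions: the influence function psi below is one common concrete choice compatible with the constraints used in this line of work, and the scale alpha is a simple plug-in computed from the empirical variance and the confidence level; neither is claimed to be the paper's exact construction.

```python
import numpy as np

def psi(x):
    # Monotone influence function: behaves like the identity near zero but
    # only logarithmically in the tails (one common concrete choice).
    return np.where(x >= 0,
                    np.log1p(x + 0.5 * x ** 2),
                    -np.log1p(-x + 0.5 * x ** 2))

def m_estimate_mean(x, delta=0.05, n_iter=60):
    """Robust mean estimate: the root in theta of sum_i psi(alpha*(x_i - theta)),
    located by bisection (the sum is decreasing in theta)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    # Plug-in scale; a known variance bound would be used here if available.
    alpha = np.sqrt(2.0 * np.log(1.0 / delta) / (n * max(x.var(), 1e-12)))
    lo, hi = x.min(), x.max()
    for _ in range(n_iter):
        mid = 0.5 * (lo + hi)
        if np.sum(psi(alpha * (x - mid))) > 0.0:
            lo = mid  # root lies above mid
        else:
            hi = mid  # root lies at or below mid
    return 0.5 * (lo + hi)

# Toy comparison on a hypothetical heavy-tailed sample.
rng = np.random.default_rng(1)
sample = rng.standard_t(df=2.5, size=500)
print("empirical mean:", sample.mean(), "M-estimate:", m_estimate_mean(sample))
```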
Pac-Bayesian Supervised Classification: The Thermodynamics of Statistical Learning
This monograph deals with adaptive supervised classification, using tools
borrowed from statistical mechanics and information theory, stemming from the
PAC-Bayesian approach pioneered by David McAllester and applied to a conception
of statistical learning theory forged by Vladimir Vapnik. Using convex analysis
on the set of posterior probability measures, we show how to get local measures
of the complexity of the classification model involving the relative entropy of
posterior distributions with respect to Gibbs posterior measures. We then
discuss relative bounds, comparing the generalization error of two
classification rules, showing how the margin assumption of Mammen and Tsybakov
can be replaced with some empirical measure of the covariance structure of the
classification model. We show how to associate to any posterior distribution an
effective temperature relating it to the Gibbs prior distribution with the same
level of expected error rate, and how to estimate this effective temperature
from data, resulting in an estimator whose expected error rate converges at the
best possible power of the sample size, adaptively, under any margin and
parametric complexity assumptions. We describe and study an
alternative selection scheme based on relative bounds between estimators, and
present a two-step localization technique which can handle the selection of a
parametric model from a family of such models. We show how to extend systematically
all the results obtained in the inductive setting to transductive learning, and
use this to improve Vapnik's generalization bounds, extending them to the case
when the sample is made of independent non-identically distributed pairs of
patterns and labels. Finally we review briefly the construction of Support
Vector Machines and show how to derive generalization bounds for them,
measuring the complexity either through the number of support vectors or
through the value of the transductive or inductive margin.
Comment: Published at http://dx.doi.org/10.1214/074921707000000391 in the IMS
Lecture Notes Monograph Series
(http://www.imstat.org/publications/lecnotes.htm) by the Institute of
Mathematical Statistics (http://www.imstat.org).
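As a reminder of the objects referred to above (standard definitions and a generic bound, not the monograph's localized results): writing pi for the prior, rho for a posterior, r(f) and R(f) for the empirical and true error rates of a classifier f, n for the sample size and beta for an inverse temperature, the Gibbs posterior and a McAllester-type PAC-Bayesian bound read, in one common form (constants vary across versions):

```latex
% Gibbs posterior at inverse temperature beta, built from the prior pi
% and the empirical error rate r(f).
\pi_{\exp(-\beta r)}(df) \;\propto\; \exp\bigl(-\beta\, r(f)\bigr)\, \pi(df).

% McAllester-type PAC-Bayesian bound: with probability at least 1 - \delta,
% simultaneously for every posterior distribution \rho,
\mathbb{E}_{f \sim \rho}\, R(f) \;\le\; \mathbb{E}_{f \sim \rho}\, r(f)
  \;+\; \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln\frac{2\sqrt{n}}{\delta}}{2n}}.
```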