Estimating conditional quantiles with the help of the pinball loss
The so-called pinball loss for estimating conditional quantiles is a
well-known tool in both statistics and machine learning. So far, however,
little work has been done to quantify the efficiency of this tool for
nonparametric approaches. We fill this gap by establishing inequalities that
describe how close approximate pinball risk minimizers are to the corresponding
conditional quantile. These inequalities, which hold under mild assumptions on
the data-generating distribution, are then used to establish so-called variance
bounds, which recently turned out to play an important role in the statistical
analysis of (regularized) empirical risk minimization approaches. Finally, we
use both types of inequalities to establish an oracle inequality for support
vector machines that use the pinball loss. The resulting learning rates are
minimax optimal under standard regularity assumptions on the conditional
quantile.

Comment: Published at http://dx.doi.org/10.3150/10-BEJ267 in Bernoulli
(http://isi.cbs.nl/bernoulli/) by the International Statistical
Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
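As background for the pinball loss named above, here is a minimal numerical sketch (illustrative, not from the paper; the toy data are assumptions) showing that the empirical pinball-risk minimizer over constants recovers the empirical tau-quantile, which is the one-dimensional version of the conditional-quantile property the abstract builds on.

```python
# A minimal sketch (illustrative, not from the paper): the pinball loss
# for quantile level tau, and a grid-search check that its empirical risk
# minimizer over constants is the empirical tau-quantile.
import numpy as np

def pinball_loss(y, t, tau):
    """Pinball loss: tau*(y - t) if y >= t, else (1 - tau)*(t - y)."""
    r = y - t
    return np.where(r >= 0, tau * r, (tau - 1) * r)

rng = np.random.default_rng(0)
y = rng.normal(size=10_000)   # toy sample; any distribution works here
tau = 0.9

grid = np.linspace(-3, 3, 2001)
risks = np.array([pinball_loss(y, t, tau).mean() for t in grid])

print(grid[risks.argmin()])    # approximate pinball-risk minimizer ...
print(np.quantile(y, tau))     # ... close to the empirical 0.9-quantile
```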
Anisotropic oracle inequalities in noisy quantization
The effect of errors in variables in quantization is investigated. We prove
general exact and non-exact oracle inequalities with fast rates for an
empirical minimization based on a noisy sample $Z_i = X_i + \epsilon_i$,
$i = 1, \ldots, n$, where the $X_i$ are i.i.d. with density $f$ and the
$\epsilon_i$ are i.i.d. with density $\eta$. These rates depend on the geometry
of the density $f$ and on the asymptotic behaviour of the characteristic
function of $\eta$.
This general study can be applied to the problem of $k$-means clustering with
noisy data. For this purpose, we introduce a deconvolution $k$-means stochastic
minimization which reaches fast rates of convergence under Pollard's standard
regularity assumptions.

Comment: 30 pages. arXiv admin note: text overlap with arXiv:1205.141
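To make the errors-in-variables setup concrete, here is a minimal sketch under assumed toy densities (it is not the paper's deconvolution $k$-means estimator) of the noisy sample $Z_i = X_i + \epsilon_i$ and of how plain $k$-means on the noisy data biases the codebook, which is the effect the deconvolution step is designed to correct.

```python
# A minimal sketch of the errors-in-variables setup Z_i = X_i + eps_i.
# The densities below are assumed toy choices, and plain Lloyd k-means is
# used only to show the bias that a deconvolution k-means estimator
# (not implemented here) is designed to remove.
import numpy as np

rng = np.random.default_rng(1)
n, k = 5_000, 2
X = np.concatenate([rng.normal(-1.0, 0.3, n // 2),   # density f: two clusters
                    rng.normal(+1.0, 0.3, n // 2)])
eps = rng.normal(0.0, 1.0, n)                        # density eta: noise
Z = X + eps                                          # observed noisy sample

def lloyd(data, k, iters=50):
    """Plain one-dimensional Lloyd k-means iterations."""
    c = rng.choice(data, size=k, replace=False)
    for _ in range(iters):
        labels = np.abs(data[:, None] - c[None, :]).argmin(axis=1)
        c = np.array([data[labels == j].mean() for j in range(k)])
    return np.sort(c)

print(lloyd(X, k))  # centers close to the true means -1 and +1
print(lloyd(Z, k))  # centers biased away from the true means by the noise
```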
Simultaneous adaptation to the margin and to complexity in classification
We consider the problem of adaptation to the margin and to complexity in
binary classification. We suggest an exponential weighting aggregation scheme.
We use this aggregation procedure to construct classifiers which adapt
automatically to margin and complexity. Two main examples are worked out in
which adaptivity is achieved in frameworks proposed by Steinwart and Scovel
[Learning Theory. Lecture Notes in Comput. Sci. 3559 (2005) 279--294. Springer,
Berlin; Ann. Statist. 35 (2007) 575--607] and Tsybakov [Ann. Statist. 32 (2004)
135--166]. Adaptive schemes, like ERM or penalized ERM, usually involve a
minimization step. This is not the case for our procedure.

Comment: Published at http://dx.doi.org/10.1214/009053607000000055 in the
Annals of Statistics (http://www.imstat.org/aos/) by the Institute of
Mathematical Statistics (http://www.imstat.org).
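For intuition about the aggregation step, here is a minimal sketch of an exponential weighting scheme (illustrative only: the dictionary of threshold classifiers and the temperature T are assumptions, not the paper's construction). Each candidate classifier is weighted in proportion to exp(-n * empirical risk / T), and the aggregate predicts by weighted vote, with no minimization step.

```python
# A minimal sketch of exponential weighting aggregation (illustrative: the
# dictionary of threshold classifiers and the temperature T are assumptions,
# not the paper's construction). Each classifier gets weight proportional to
# exp(-n * empirical 0-1 risk / T); the aggregate is their weighted vote.
import numpy as np

rng = np.random.default_rng(2)
n = 1_000
x = rng.uniform(-1, 1, n)
y = np.where(x + 0.1 * rng.normal(size=n) > 0, 1, -1)  # noisy labels in {-1, +1}

thresholds = np.linspace(-1, 1, 41)                # dictionary f_t(x) = sign(x - t)
preds = np.sign(x[:, None] - thresholds[None, :])  # shape (n, len(thresholds))
risks = (preds != y[:, None]).mean(axis=0)         # empirical 0-1 risk of each f_t

T = 1.0                                            # temperature (assumed)
logits = -n * risks / T
w = np.exp(logits - logits.max())                  # numerically stable weights
w /= w.sum()

def aggregate(x_new):
    """Weighted vote over the dictionary; note: no minimization step."""
    p = np.sign(x_new[:, None] - thresholds[None, :])
    return np.sign(p @ w)

print(aggregate(np.array([-0.5, 0.02, 0.7])))      # weighted-vote predictions
```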