3,737 research outputs found
Spatial aggregation of local likelihood estimates with applications to classification
This paper presents a new method for spatially adaptive local (constant)
likelihood estimation which applies to a broad class of nonparametric models,
including the Gaussian, Poisson and binary response models. The main idea of
the method is, given a sequence of local likelihood estimates (``weak''
estimates), to construct a new aggregated estimate whose pointwise risk is of
the order of the smallest risk among all ``weak'' estimates. We also propose a new
approach to selecting the parameters of the procedure by prescribing the
behavior of the resulting estimate in the simple parametric situation. We
establish a number of important theoretical results concerning
the optimality of the aggregated estimate. In particular, our ``oracle'' result
claims that its risk is, up to some logarithmic multiplier, equal to the
smallest risk for the given family of estimates. The performance of the
procedure is illustrated by application to the classification problem. A
numerical study demonstrates its reasonable performance in simulated and
real-life examples.
Comment: Published at http://dx.doi.org/10.1214/009053607000000271 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
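In schematic form, the ``oracle'' property can be stated as a pointwise risk bound; the notation below is illustrative (ours, not the paper's), with $\hat\theta(x)$ the aggregated estimate, $\tilde\theta_1(x),\dots,\tilde\theta_K(x)$ the ``weak'' local likelihood estimates, $\theta(x)$ the true parameter and $r>0$ a risk power:

    \[
      \mathbb{E}\bigl|\hat\theta(x)-\theta(x)\bigr|^{r}
        \;\le\; C\,(\log K)^{r}\,
        \min_{1\le k\le K}\,\mathbb{E}\bigl|\tilde\theta_{k}(x)-\theta(x)\bigr|^{r},
    \]

with the logarithmic factor playing the role of the ``multiplier'' mentioned in the abstract: the usual price for adapting to the best member of a finite family.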
Multiclass Learning with Simplex Coding
In this paper we discuss a novel framework for multiclass learning, defined
by a suitable coding/decoding strategy, namely the simplex coding, that allows
a relaxation approach commonly used in binary classification to be generalized
to multiple classes. In this framework, a relaxation error analysis can be
developed without constraints on the considered hypothesis class. Moreover, we
show that in this setting it is possible to derive the first provably
consistent regularized method with training/tuning complexity that is
independent of the number of classes. Tools from convex analysis are introduced
that can be used beyond the scope of this paper.
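The simplex code itself is easy to construct: the K classes are identified with the vertices of a regular simplex in R^(K-1), i.e., K unit vectors with pairwise inner product -1/(K-1), and decoding assigns a predicted vector to the class of the nearest vertex. A minimal NumPy sketch (function names are ours, not from the paper):

    import numpy as np

    def simplex_code(K):
        """Rows are K unit vectors in R^(K-1) with pairwise inner
        product -1/(K-1): the vertices of a regular simplex."""
        # Orthonormal basis of the hyperplane orthogonal to the all-ones
        # vector, obtained by putting that direction first in a QR step.
        M = np.hstack([np.ones((K, 1)), np.eye(K)[:, :K - 1]])
        Q, _ = np.linalg.qr(M)
        B = Q[:, 1:]                        # K x (K-1); B @ B.T = I - J/K
        return B / np.linalg.norm(B, axis=1, keepdims=True)

    def decode(C, scores):
        """Nearest-vertex decoding: pick the class whose code vector has
        the largest inner product with the predicted vector."""
        return np.argmax(scores @ C.T, axis=1)

    C = simplex_code(4)
    print(np.round(C @ C.T, 3))             # 1 on the diagonal, -1/3 elsewhere

For K = 2 this reduces to the usual +1/-1 coding of binary classification, which is what makes the relaxation argument carry over.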
Simultaneous adaptation to the margin and to complexity in classification
We consider the problem of adaptation to the margin and to complexity in
binary classification. We suggest an exponential weighting aggregation scheme.
We use this aggregation procedure to construct classifiers which adapt
automatically to margin and complexity. Two main examples are worked out in
which adaptivity is achieved in frameworks proposed by Steinwart and Scovel
[Learning Theory. Lecture Notes in Comput. Sci. 3559 (2005) 279--294. Springer,
Berlin; Ann. Statist. 35 (2007) 575--607] and Tsybakov [Ann. Statist. 32 (2004)
135--166]. Adaptive schemes, like ERM or penalized ERM, usually involve a
minimization step. This is not the case for our procedure.
Comment: Published at http://dx.doi.org/10.1214/009053607000000055 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
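As a rough sketch of what an exponential weighting scheme looks like (the temperature parameter and the use of plain empirical risks are our simplifications, not the paper's exact procedure):

    import numpy as np

    def exponential_weights(emp_risks, n, temperature=1.0):
        """Weights w_j proportional to exp(-n * R_hat_j / T) over a finite
        family of classifiers with empirical risks emp_risks on n points."""
        a = -n * np.asarray(emp_risks) / temperature
        a -= a.max()                        # numerical stabilization
        w = np.exp(a)
        return w / w.sum()

    def aggregate(classifiers, weights, X):
        """Convex combination of the +/-1 predictions, thresholded at 0."""
        votes = np.array([clf(X) for clf in classifiers])   # (M, n_test)
        return np.sign(weights @ votes)

Note that, as the abstract points out, no minimization step is involved: the weights are computed in closed form from the empirical risks.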
Generalized Bhattacharyya and Chernoff upper bounds on Bayes error using quasi-arithmetic means
Bayesian classification labels observations based on given prior information,
namely class a priori and class-conditional probabilities. Bayes' risk is the
minimum expected classification cost, achieved by Bayes' test, the optimal
decision rule. When no cost is incurred for correct classification and unit
cost is charged for misclassification, Bayes' test reduces to the maximum a
posteriori decision rule, and Bayes' risk simplifies to Bayes' error, the
probability of error. Since calculating this probability of error is often
intractable, several techniques have been devised to bound it with closed-form
formula, introducing thereby measures of similarity and divergence between
distributions like the Bhattacharyya coefficient and its associated
Bhattacharyya distance. The Bhattacharyya upper bound can further be tightened
using the Chernoff information that relies on the notion of best error
exponent. In this paper, we first express Bayes' risk using the total variation
distance on scaled distributions. We then elucidate and extend the
Bhattacharyya and the Chernoff upper bound mechanisms using generalized
weighted means. We provide as a byproduct novel notions of statistical
divergences and affinity coefficients. We illustrate our technique by deriving
new upper bounds for the univariate Cauchy and the multivariate
$t$-distributions, and show experimentally that those bounds are not too
distant from the computationally intractable Bayes' error.
Comment: 22 pages, includes R code. To appear in Pattern Recognition Letters.
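The bounding mechanism rests on the inequality min(a, b) <= a^alpha * b^(1-alpha) for alpha in [0, 1], applied under the integral defining Bayes' error: alpha = 1/2 gives the Bhattacharyya bound, and optimizing over alpha gives the tighter Chernoff bound. A numerical sketch for two univariate Gaussians (illustrative densities and priors of our choosing; the paper's closed-form Cauchy and $t$-distribution bounds are not reproduced here):

    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import minimize_scalar
    from scipy.stats import norm

    # Illustrative class-conditional densities and priors.
    p = norm(0.0, 1.0).pdf
    q = norm(2.0, 1.5).pdf
    w1, w2 = 0.4, 0.6

    def affinity(alpha):
        """c_alpha = integral of p^alpha * q^(1-alpha); alpha = 1/2 gives
        the Bhattacharyya coefficient."""
        val, _ = quad(lambda x: p(x)**alpha * q(x)**(1 - alpha),
                      -np.inf, np.inf)
        return val

    # Exact Bayes' error (tractable here by one-dimensional quadrature).
    bayes_err, _ = quad(lambda x: min(w1 * p(x), w2 * q(x)), -np.inf, np.inf)

    # Bhattacharyya bound, then Chernoff bound by optimizing alpha.
    bhat = np.sqrt(w1 * w2) * affinity(0.5)
    cher = minimize_scalar(lambda a: w1**a * w2**(1 - a) * affinity(a),
                           bounds=(1e-6, 1 - 1e-6), method="bounded")
    print(f"Bayes error {bayes_err:.4f} <= Chernoff {cher.fun:.4f}"
          f" <= Bhattacharyya {bhat:.4f}")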
Fast learning rates for plug-in classifiers
It has been recently shown that, under the margin (or low noise) assumption,
there exist classifiers attaining fast rates of convergence of the excess Bayes
risk, that is, rates faster than $n^{-1/2}$. The work on this subject has
suggested the following two conjectures: (i) the best achievable fast rate is
of the order $n^{-1}$, and (ii) the plug-in classifiers generally converge more
slowly than the classifiers based on empirical risk minimization. We show that
both conjectures are not correct. In particular, we construct plug-in
classifiers that can achieve not only fast, but also super-fast rates, that is,
rates faster than $n^{-1}$. We establish minimax lower bounds showing that the
obtained rates cannot be improved.
Comment: Published at http://dx.doi.org/10.1214/009053606000001217 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
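A plug-in classifier simply substitutes a nonparametric estimate of the regression function eta(x) = P(Y = 1 | X = x) into the Bayes rule 1{eta(x) >= 1/2}. A minimal sketch with a k-nearest-neighbor estimate (the paper analyzes local polynomial estimators; k-NN is used here purely for illustration):

    import numpy as np

    def plugin_knn(X_train, y_train, X_test, k=15):
        """Plug-in rule: estimate eta(x) = P(Y=1 | X=x) by the average
        label of the k nearest neighbors, then threshold at 1/2."""
        X_train = np.atleast_2d(X_train)
        y_train = np.asarray(y_train)
        preds = np.empty(len(X_test), dtype=int)
        for i, x in enumerate(np.atleast_2d(X_test)):
            dist = np.linalg.norm(X_train - x, axis=1)
            eta_hat = y_train[np.argsort(dist)[:k]].mean()
            preds[i] = int(eta_hat >= 0.5)
        return preds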
Bandwidth choice for nonparametric classification
It is shown that, for kernel-based classification with univariate
distributions and two populations, optimal bandwidth choice has a dichotomous
character. If the two densities cross at just one point, where their curvatures
have the same signs, then minimum Bayes risk is achieved using bandwidths which
are an order of magnitude larger than those which minimize pointwise estimation
error. On the other hand, if the curvature signs are different, or if there are
multiple crossing points, then bandwidths of conventional size are generally
appropriate. The range of different modes of behavior is narrower in
multivariate settings. There, the optimal size of bandwidth is generally the
same as that which is appropriate for pointwise density estimation. These
properties motivate empirical rules for bandwidth choice.
Comment: Published at http://dx.doi.org/10.1214/009053604000000959 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
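In the kernel-based setting the abstract refers to, one estimates the two class-conditional densities with a kernel density estimator and classifies by comparing the prior-weighted estimates; the bandwidth h is the quantity whose optimal order is at issue. A univariate sketch with a Gaussian kernel (kernel and interface are our choices, for illustration):

    import numpy as np

    def kde(x, sample, h):
        """Gaussian kernel density estimate at the points x (1-D array),
        from the observed sample, with bandwidth h."""
        u = (x[:, None] - sample[None, :]) / h
        return np.exp(-0.5 * u**2).sum(axis=1) / (
            len(sample) * h * np.sqrt(2 * np.pi))

    def kernel_classify(x, sample0, sample1, h, prior1=0.5):
        """Assign to population 1 where prior1 * f1_hat >= prior0 * f0_hat;
        the single bandwidth h is the tuning parameter under study."""
        f0 = kde(x, sample0, h)
        f1 = kde(x, sample1, h)
        return (prior1 * f1 >= (1 - prior1) * f0).astype(int)

The paper's point is that the h minimizing the misclassification risk of such a rule need not be of the same order as the h that is optimal for estimating the densities themselves.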