Direct Ensemble Estimation of Density Functionals
Estimating density functionals of analog sources is an important problem in
statistical signal processing and information theory. Traditionally, estimating
these quantities requires either making parametric assumptions about the
underlying distributions or using nonparametric density estimation followed by
integration. In this paper we introduce a direct nonparametric approach which
bypasses the need for density estimation by using the error rates of k-NN
classifiers as data-driven basis functions that can be combined to estimate a
range of density functionals. However, this method is subject to a non-trivial
bias that dramatically slows the rate of convergence in higher dimensions. To
overcome this limitation, we develop an ensemble method for estimating the
value of the basis function which, under some minor constraints on the
smoothness of the underlying distributions, achieves the parametric rate of
convergence regardless of data dimension.
Comment: 5 pages
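The basis-function idea lends itself to a short illustration. Below is a minimal Python sketch, assuming scikit-learn is available: it computes cross-validated k-NN error rates over several values of k and combines them with a weight vector. The uniform weights and the grid of k values are placeholders of my own; the paper derives weights matched to a target functional and adds the ensemble bias correction described above, neither of which this sketch reproduces.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def knn_error_rates(X0, X1, ks, cv=5):
    """Cross-validated k-NN error rates, used as data-driven basis values."""
    X = np.vstack([X0, X1])
    y = np.concatenate([np.zeros(len(X0)), np.ones(len(X1))])
    errs = []
    for k in ks:
        acc = cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=cv)
        errs.append(1.0 - acc.mean())
    return np.array(errs)

def functional_estimate(X0, X1, ks, weights):
    """Combine the error-rate basis with fixed weights (placeholder scheme)."""
    return weights @ knn_error_rates(X0, X1, ks)

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(500, 3))   # sample from the first density
X1 = rng.normal(0.5, 1.0, size=(500, 3))   # sample from the second density
ks = [1, 3, 5, 9, 15]
weights = np.full(len(ks), 1.0 / len(ks))  # placeholder uniform weights
print(functional_estimate(X0, X1, ks, weights))
```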
Nonparametrically consistent depth-based classifiers
We introduce a class of depth-based classification procedures that are of a
nearest-neighbor nature. Depth, after symmetrization, indeed provides the
center-outward ordering that is necessary and sufficient to define nearest
neighbors. Like all their depth-based competitors, the resulting classifiers
are affine-invariant, hence in particular are insensitive to unit changes.
Unlike the former, however, the latter achieve Bayes consistency under
virtually any absolutely continuous distribution - a concept we call
nonparametric consistency, to stress the difference from the stronger universal
consistency of the standard NN classifiers. We investigate the finite-sample
performances of the proposed classifiers through simulations and show that they
outperform affine-invariant nearest-neighbor classifiers obtained through an
obvious standardization construction. We illustrate the practical value of our
classifiers on two real data examples. Finally, we briefly discuss the possible
uses of our depth-based neighbors in other inference problems.
Comment: Published at http://dx.doi.org/10.3150/13-BEJ561 in the Bernoulli
(http://isi.cbs.nl/bernoulli/) by the International Statistical
Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm)
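To make the depth-based neighbor idea concrete, here is a minimal Python sketch under simplifying assumptions: it uses Mahalanobis depth, the simplest affine-invariant depth, and symmetrizes the sample about the query point so that depth induces the center-outward ordering mentioned above. The paper's construction allows general depth functions and carries consistency guarantees this toy version does not claim.

```python
import numpy as np

def mahalanobis_depth(points, sample):
    """Mahalanobis depth of each point w.r.t. an empirical sample."""
    mu = sample.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(sample, rowvar=False))
    d = points - mu
    md2 = np.einsum('ij,jk,ik->i', d, cov_inv, d)  # squared Mahalanobis dist
    return 1.0 / (1.0 + md2)

def depth_knn_predict(x, X, y, k=5):
    """Classify x by majority vote among its k depth-based neighbors.

    Symmetrize the sample about x (append the reflected points 2x - X_i),
    then rank the original points by depth in the symmetrized sample:
    deeper points are closer to x in the center-outward ordering.
    """
    sym = np.vstack([X, 2 * x - X])
    depths = mahalanobis_depth(X, sym)
    nbrs = np.argsort(depths)[-k:]       # the k deepest points
    return np.bincount(y[nbrs]).argmax()

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.repeat([0, 1], 100)
print(depth_knn_predict(np.array([1.8, 2.1]), X, y, k=7))  # expect class 1
```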
Unexpected properties of bandwidth choice when smoothing discrete data for constructing a functional data classifier
The data functions that are studied in the course of functional data analysis
are assembled from discrete data, and the level of smoothing that is used is
generally that which is appropriate for accurate approximation of the
conceptually smooth functions that were not actually observed. Existing
literature shows that this approach is effective, and even optimal, when using
functional data methods for prediction or hypothesis testing. However, in the
present paper we show that this approach is not effective in classification
problems. There, a useful rule of thumb is that undersmoothing is often
desirable, but there are several surprising qualifications to that rule.
First, the effect of smoothing the training data can be more significant than
that of smoothing the new data set to be classified; second, undersmoothing is
not always the right approach, and in fact in some cases using a relatively
large bandwidth can be more effective; and third, these perverse results are
the consequence of very unusual properties of error rates, expressed as
functions of smoothing parameters. For example, the orders of magnitude of
optimal smoothing parameter choices depend on the signs and sizes of terms in
an expansion of error rate, and those signs and sizes can vary dramatically
from one setting to another, even for the same classifier.
Comment: Published at http://dx.doi.org/10.1214/13-AOS1158 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org)
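The smoothing-then-classifying pipeline the paper studies can be sketched in a few lines. In the Python sketch below (all names are illustrative, not from the paper), a Nadaraya-Watson smoother with a Gaussian kernel turns discretely observed curves into functions on a grid, and a nearest-centroid rule classifies new curves; training and new data receive separate bandwidths h_train and h_new, echoing the finding that the two can matter very differently.

```python
import numpy as np

def nw_smooth(t_grid, t_obs, y_obs, h):
    """Nadaraya-Watson smoother with a Gaussian kernel and bandwidth h."""
    w = np.exp(-0.5 * ((t_grid[:, None] - t_obs[None, :]) / h) ** 2)
    return (w @ y_obs) / w.sum(axis=1)

def centroid_classify(new_curves, train_curves, labels, t_grid, t_obs,
                      h_train, h_new):
    """Smooth train/new curves (possibly with different bandwidths), then
    assign each new curve to the class with the nearest mean curve."""
    sm_train = np.array([nw_smooth(t_grid, t_obs, c, h_train)
                         for c in train_curves])
    sm_new = np.array([nw_smooth(t_grid, t_obs, c, h_new)
                       for c in new_curves])
    centroids = np.array([sm_train[labels == g].mean(axis=0)
                          for g in np.unique(labels)])
    dists = ((sm_new[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

# Toy demo: noisy sine vs. cosine curves observed at 30 time points.
rng = np.random.default_rng(2)
t_obs = np.linspace(0, 1, 30)
t_grid = np.linspace(0, 1, 100)
train = np.array(
    [np.sin(2 * np.pi * t_obs) + 0.3 * rng.normal(size=30) for _ in range(40)]
    + [np.cos(2 * np.pi * t_obs) + 0.3 * rng.normal(size=30) for _ in range(40)])
labels = np.repeat([0, 1], 40)
new = np.sin(2 * np.pi * t_obs) + 0.3 * rng.normal(size=30)
print(centroid_classify([new], train, labels, t_grid, t_obs,
                        h_train=0.05, h_new=0.05))  # try varying the bandwidths
```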
On accuracy of PDF divergence estimators and their applicability to representative data sampling
Generalisation error estimation is an important issue in machine learning. Cross-validation, traditionally used for this purpose, requires building multiple models and repeating the whole procedure many times in order to produce reliable error estimates. It is, however, possible to accurately estimate the error using only a single model, if the training and test data are chosen appropriately. This paper investigates the possibility of using various probability density function divergence measures for the purpose of representative data sampling. As it turns out, the first difficulty one needs to deal with is estimation of the divergence itself. In contrast to other publications on this subject, the experimental results provided in this study show that in many cases accurate estimation is not possible unless samples consisting of thousands of instances are used. Exhaustive experiments on divergence-guided representative data sampling have been performed using 26 publicly available benchmark datasets and 70 PDF divergence estimators, and their results have been analysed and discussed.
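As one concrete example of a PDF divergence estimator of the kind benchmarked here, the Python sketch below implements the k-nearest-neighbor estimator of Kullback-Leibler divergence due to Wang, Kulkarni and Verdú (2009) and uses it to score how representative a candidate subsample is of the remaining data (a divergence near zero suggests a representative split). This is just one of many possible estimators, and, in line with the study's findings, its accuracy on small samples should not be taken for granted.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_kl_divergence(X, Y, k=5):
    """k-NN estimator of KL(P||Q); X ~ P (n x d), Y ~ Q (m x d)."""
    n, d = X.shape
    m = Y.shape[0]
    # Query k+1 neighbours within X: each point's nearest neighbour is itself.
    rho = cKDTree(X).query(X, k=k + 1)[0][:, -1]   # k-th NN distance in X
    nu = cKDTree(Y).query(X, k=k)[0]               # k-th NN distance in Y
    if k > 1:
        nu = nu[:, -1]
    return (d / n) * np.sum(np.log(nu / rho)) + np.log(m / (n - 1))

# Score a random split for representativeness: small |divergence| is good.
rng = np.random.default_rng(3)
data = rng.normal(size=(2000, 4))
idx = rng.permutation(len(data))
sample, rest = data[idx[:500]], data[idx[500:]]
print(knn_kl_divergence(sample, rest, k=5))
```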