Fast DD-classification of functional data
A fast nonparametric procedure for classifying functional data is introduced.
It consists of a two-step transformation of the original data plus a classifier
operating on a low-dimensional hypercube. The functional data are first mapped
into a finite-dimensional location-slope space and then transformed by a
multivariate depth function into the DD-plot, which is a subset of the unit
hypercube. This transformation yields a new notion of depth for functional
data. Three alternative depth functions are employed for this, as well as two
rules for the final classification on the DD-plot. The resulting classifier has
to be cross-validated over a small range of parameters only, which is
restricted by a Vapnik-Chervonenkis bound. The entire methodology involves no
smoothing techniques, is completely nonparametric, and achieves
Bayes optimality under standard distributional settings. It is robust,
efficiently computable, and has been implemented in an R environment.
Applicability of the new approach is demonstrated by simulations as well as a
benchmark study.
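The two-step pipeline lends itself to a compact implementation. Below is a
minimal Python sketch under simplifying assumptions: curves observed on a
common uniform grid, a 2-D location-slope representation, and the Mahalanobis
depth standing in for the depth notions the paper actually employs (its
implementation is in R; all names here are illustrative):

```python
import numpy as np

def location_slope(curves, grid):
    """Map each discretized curve to (average level, average slope).

    A minimal 2-D version of the location-slope transform; `curves` is an
    (n, T) array of functions evaluated on a common uniform grid.
    """
    span = grid[-1] - grid[0]
    level = curves.mean(axis=1)                    # average location
    slope = (curves[:, -1] - curves[:, 0]) / span  # average derivative
    return np.column_stack([level, slope])

def mahalanobis_depth(points, sample):
    """Mahalanobis depth of each row of `points` w.r.t. `sample`."""
    mu = sample.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(sample, rowvar=False))
    diff = points - mu
    return 1.0 / (1.0 + np.einsum('ij,jk,ik->i', diff, cov_inv, diff))

def dd_coordinates(points, class0, class1):
    """DD-plot coordinates: depth of each point w.r.t. the two classes."""
    return np.column_stack([mahalanobis_depth(points, class0),
                            mahalanobis_depth(points, class1)])
```

A final classification rule, e.g. kNN, then operates on these DD-plot
coordinates in the unit square.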
Adaptivity to Noise Parameters in Nonparametric Active Learning
This work addresses various open questions in the theory of active learning
for nonparametric classification. Our contributions are both statistical and
algorithmic:
- We establish new minimax rates for active learning under common noise
conditions (recalled below). These rates display interesting transitions, due
to the interaction between noise smoothness and margin, that are not present
in the passive setting. Some such transitions were previously conjectured but
remained unconfirmed.
- We present a generic algorithmic strategy for adaptivity to unknown noise
smoothness and margin; our strategy achieves optimal rates in many general
situations. Furthermore, unlike previous work, we avoid the need for adaptive
confidence sets, resulting in strictly milder distributional requirements.
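For reference, the noise conditions in question are usually formalized as
Hölder smoothness of the regression function together with a Tsybakov-type
margin condition; a sketch in standard notation (the paper's exact assumptions
may differ):

```latex
\[
  |\eta(x) - \eta(x')| \;\le\; L \,\lVert x - x'\rVert^{\alpha}
  \qquad \text{(smoothness of } \eta(x) = \mathbb{P}(Y = 1 \mid X = x)\text{)},
\]
\[
  \mathbb{P}_X\bigl(0 < |\eta(X) - \tfrac{1}{2}| \le t\bigr) \;\le\; C\, t^{\beta}
  \quad \text{for all } t > 0
  \qquad \text{(margin condition with exponent } \beta \ge 0\text{)}.
\]
```

Under these two conditions the passive minimax rate for the excess risk is of
order $n^{-\alpha(1+\beta)/(2\alpha+d)}$, the benchmark against which the
active rates are compared.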
The DD-classifier in the functional setting
The Maximum Depth classifier was the first attempt to use data depths instead of
multivariate raw data to construct a classification rule. Recently, the
DD-classifier has overcome several serious limitations of the Maximum Depth
classifier, but some issues still remain. This paper is devoted to extending the
DD-classifier in three ways: first, to overcome its limitation when more than
two groups are involved; second, to apply standard classification methods
(such as kNN, linear or quadratic classifiers, and recursive partitioning) to
DD-plots, to obtain useful insights through the diagnostics of these methods;
and third, to integrate different sources of information
(data depths or multivariate functional data) in a unified way in the
classification procedure. Moreover, since the DD-classifier approach is
especially useful in the functional framework, an enhanced review of several
functional data depths is also given in the paper. A simulation study and
applications to some classical real datasets are provided, showing the power
of the new proposal.
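To make the multi-group extension concrete, here is a minimal Python sketch
(the names are illustrative, not the paper's API): each observation is mapped
to its vector of depths with respect to the G training groups, and any
off-the-shelf classifier then operates on the resulting G-dimensional DD-plot.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def depth_features(X, X_train, y_train, depth_fn):
    """Map each row of X to its vector of depths w.r.t. each training group.

    With G groups this yields points in [0, 1]^G, a generalized DD-plot;
    `depth_fn(points, sample)` may be any depth function, e.g. the
    Mahalanobis depth sketched earlier.
    """
    groups = np.unique(y_train)
    return np.column_stack([depth_fn(X, X_train[y_train == g])
                            for g in groups])

# Hypothetical usage, assuming X_train, y_train, X_test and depth_fn exist:
# Z_train = depth_features(X_train, X_train, y_train, depth_fn)
# clf = KNeighborsClassifier(n_neighbors=5).fit(Z_train, y_train)
# y_hat = clf.predict(depth_features(X_test, X_train, y_train, depth_fn))
```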
Nonparametrically consistent depth-based classifiers
We introduce a class of depth-based classification procedures that are of a
nearest-neighbor nature. Depth, after symmetrization, indeed provides the
center-outward ordering that is necessary and sufficient to define nearest
neighbors. Like all their depth-based competitors, the resulting classifiers
are affine-invariant, hence in particular are insensitive to unit changes.
Unlike those competitors, however, the proposed classifiers achieve Bayes
consistency under virtually any absolutely continuous distribution, a concept
we call nonparametric consistency to stress the difference with the stronger
universal consistency of the standard kNN classifiers. We investigate the
finite-sample
performances of the proposed classifiers through simulations and show that they
outperform affine-invariant nearest-neighbor classifiers obtained through an
obvious standardization construction. We illustrate the practical value of our
classifiers on two real data examples. Finally, we briefly discuss the possible
uses of our depth-based neighbors in other inference problems.
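The symmetrization idea is easy to sketch. The paper works with general depth
functions (e.g. halfspace depth); the Mahalanobis depth below merely keeps the
illustration short while remaining affine-invariant, and all names are
illustrative:

```python
import numpy as np

def mahalanobis_depth(points, sample):
    """Mahalanobis depth of each row of `points` w.r.t. `sample`."""
    mu = sample.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(sample, rowvar=False))
    diff = points - mu
    return 1.0 / (1.0 + np.einsum('ij,jk,ik->i', diff, cov_inv, diff))

def depth_based_neighbors(x, sample, k):
    """Indices of the k depth-based neighbors of the query point x.

    Symmetrize the sample about x by appending the reflected points
    2x - X_i, then rank the original points by their depth in the
    augmented sample: the deepest points are the nearest in the
    center-outward ordering centered at x.
    """
    augmented = np.vstack([sample, 2 * x - sample])
    depths = mahalanobis_depth(sample, augmented)
    return np.argsort(depths)[::-1][:k]
```

A majority vote among the labels of the returned neighbors then yields the
depth-based classifier.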