One-Class Classification: Taxonomy of Study and Review of Techniques
One-class classification (OCC) algorithms aim to build classification models
when the negative class is absent, poorly sampled, or not well defined. This
situation constrains the learning of effective classifiers, since the class
boundary must be defined from knowledge of the positive class alone. The OCC
problem has been considered and applied under many research themes, such as
outlier/novelty detection and concept learning. In this paper we present a
unified view of the general problem of OCC by presenting a taxonomy of study
for OCC problems, which is based on the availability of training data,
algorithms used and the application domains applied. We further delve into each
of the categories of the proposed taxonomy and present a comprehensive
literature review of the OCC algorithms, techniques and methodologies with a
focus on their significance, limitations and applications. We conclude our
paper by discussing some open research problems in the field of OCC and present
our vision for future research.
Comment: 24 pages + 11 pages of references, 8 figures
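The OCC setting described above can be illustrated with a minimal sketch: a toy detector that learns a boundary from positive-class samples only and rejects points outside it. The distance-to-mean rule, the `quantile` parameter, and the function names below are illustrative assumptions, not a method from the paper; practical OCC algorithms (e.g. one-class SVMs) learn far more flexible boundaries.

```python
# Toy one-class classifier: fit a spherical boundary around the positive
# class (the only class available) and flag anything outside it.
import math

def fit_occ(positives, quantile=0.95):
    """Learn a centre and radius from positive-class samples alone."""
    dim = len(positives[0])
    centre = [sum(p[i] for p in positives) / len(positives) for i in range(dim)]
    dists = sorted(math.dist(p, centre) for p in positives)
    # Radius covering roughly `quantile` of the training points.
    radius = dists[min(int(quantile * len(dists)), len(dists) - 1)]
    return centre, radius

def predict_occ(model, x):
    """True if x is accepted as belonging to the positive class."""
    centre, radius = model
    return math.dist(x, centre) <= radius

# Usage: train on positive samples only, then score new points.
train = [(0.0, 0.1), (0.1, 0.0), (-0.1, 0.05), (0.05, -0.1), (0.0, 0.0)]
model = fit_occ(train)
print(predict_occ(model, (0.05, 0.05)))  # near the positive class: accepted
print(predict_occ(model, (5.0, 5.0)))    # far away: rejected as outlier
```

The key point the sketch makes concrete is that both training and the decision rule use only the positive class; no negative examples enter at any stage.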
Fast DD-classification of functional data
A fast nonparametric procedure for classifying functional data is introduced.
It consists of a two-step transformation of the original data plus a classifier
operating on a low-dimensional hypercube. The functional data are first mapped
into a finite-dimensional location-slope space and then transformed by a
multivariate depth function into the DD-plot, which is a subset of the unit
hypercube. This transformation yields a new notion of depth for functional
data. Three alternative depth functions are employed for this, as well as two
rules for the final classification on the DD-plot. The resulting classifier has
to be cross-validated over a small range of parameters only, which is
restricted by a Vapnik-Chervonenkis bound. The entire methodology involves no
smoothing techniques, is completely nonparametric, and makes it possible to
achieve Bayes optimality under standard distributional settings. It is robust,
efficiently computable, and has been implemented in an R environment.
Applicability of the new approach is demonstrated by simulations as well as a
benchmark study.
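The DD-plot idea in the abstract above can be sketched in a few lines: each observation is mapped to the pair of its depths with respect to the two training classes, a point in the unit square, and classified there. The elementary "Euclidean depth" D(x) = 1/(1 + ||x − mean||) and the max-depth rule used below are simplifying assumptions for illustration; the paper's location-slope embedding of functional data and its three depth functions are more refined.

```python
# Sketch of DD-plot classification: map x to (D(x | class 1), D(x | class 2)),
# a point in the unit square, and classify it there with the max-depth rule.
import math

def euclidean_depth(x, sample):
    """Elementary depth: 1 at the sample mean, decreasing outward."""
    dim = len(x)
    mean = [sum(s[i] for s in sample) / len(sample) for i in range(dim)]
    return 1.0 / (1.0 + math.dist(x, mean))

def dd_classify(x, class1, class2):
    """Place x in the DD-plot and apply the max-depth rule."""
    d1 = euclidean_depth(x, class1)  # depth w.r.t. class 1
    d2 = euclidean_depth(x, class2)  # depth w.r.t. class 2
    return 1 if d1 >= d2 else 2

# Usage: two well separated toy classes in the plane.
class1 = [(0.0, 0.0), (0.2, 0.1), (-0.1, 0.1)]
class2 = [(2.0, 2.0), (2.1, 1.9), (1.9, 2.2)]
print(dd_classify((0.1, 0.0), class1, class2))  # deeper in class 1
print(dd_classify((2.0, 2.1), class1, class2))  # deeper in class 2
```

Whatever depth function is plugged in, the classifier only ever sees the low-dimensional depth coordinates, which is what keeps the cross-validation cheap.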
The Inverse Bagging Algorithm: Anomaly Detection by Inverse Bootstrap Aggregating
For data sets populated by a very well modeled process and by another process
of unknown probability density function (PDF), it is desirable, when
manipulating the fraction of the unknown process (either enhancing or
suppressing it), to avoid modifying the kinematic distributions of the well
modeled one. A bootstrap technique is used to identify sub-samples rich in the
well modeled process, and each event is classified according to the frequency
with which it appears in such sub-samples. Comparisons with general MVA
algorithms will be shown, as well as a study of the asymptotic properties of
the method, making use of a public domain data set that models a typical search
for new physics as performed at hadronic colliders such as the Large Hadron
Collider (LHC).
Comment: 8 pages, 5 figures. Proceedings of the XIIth Quark Confinement and
Hadron Spectrum conference, 28/8-2/9 2016, Thessaloniki, Greece
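The inverse-bagging idea above can be sketched as follows: repeatedly bootstrap the mixed sample, keep the sub-samples whose test statistic looks most like the well modeled (background) process, and score each event by how often it lands in those background-like sub-samples. The one-dimensional data, the statistic (sample mean compared to a known background mean of 0), and the `keep_frac` parameter are all illustrative assumptions, not the paper's setup.

```python
# Sketch of inverse bootstrap aggregating: events that rarely appear in
# background-like bootstrap sub-samples are candidates for the unknown process.
import random

def inverse_bagging_scores(data, n_boot=500, keep_frac=0.3, seed=1):
    rng = random.Random(seed)
    n = len(data)
    draws = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        # Assumed statistic: distance of the sub-sample mean from the
        # known background mean (0 here). Small = background-like.
        stat = abs(sum(data[i] for i in idx) / n)
        draws.append((stat, set(idx)))
    draws.sort(key=lambda t: t[0])           # most background-like first
    kept = draws[: int(keep_frac * n_boot)]
    counts = [0] * n
    for _, idx in kept:
        for i in idx:
            counts[i] += 1
    # High score = often part of background-rich sub-samples.
    return [c / len(kept) for c in counts]

# Usage: background around 0, a few "signal" events around 4.
rng = random.Random(0)
data = [rng.gauss(0, 1) for _ in range(40)] + [rng.gauss(4, 1) for _ in range(5)]
scores = inverse_bagging_scores(data)
bkg_avg = sum(scores[:40]) / 40
sig_avg = sum(scores[40:]) / 5
print(bkg_avg > sig_avg)  # signal events appear less often in bkg-like draws
```

The design choice worth noting is that events are never classified directly; their score is an aggregate over many sub-sample memberships, which is what the "inverse" in inverse bagging refers to.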