Oversampling for Imbalanced Learning Based on K-Means and SMOTE
Learning from class-imbalanced data continues to be a common and challenging
problem in supervised learning as standard classification algorithms are
designed to handle balanced class distributions. While different strategies
exist to tackle this problem, methods which generate artificial data to achieve
a balanced class distribution are more versatile than modifications to the
classification algorithm. Such techniques, called oversamplers, modify the
training data, allowing any classifier to be used with class-imbalanced
datasets. Many algorithms have been proposed for this task, but most are
complex and tend to generate unnecessary noise. This work presents a simple and
effective oversampling method based on k-means clustering and SMOTE
oversampling, which avoids the generation of noise and effectively overcomes
imbalances between and within classes. Empirical results of extensive
experiments with 71 datasets show that training data oversampled with the
proposed method improves classification results. Moreover, k-means SMOTE
consistently outperforms other popular oversampling methods. An implementation
is made available in the Python programming language.
Comment: 19 pages, 8 figures
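The clustering-then-oversampling idea described above can be sketched briefly. This is a minimal illustration, not the paper's implementation (a maintained version exists in the imbalanced-learn ecosystem as `KMeansSMOTE`); the cluster count, minority-share threshold, and sample counts below are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_smote(X, y, minority_label, n_clusters=5, n_new=20, seed=0):
    """Sketch of k-means SMOTE: cluster, filter clusters, interpolate."""
    rng = np.random.default_rng(seed)
    clusters = KMeans(n_clusters=n_clusters, n_init=10,
                      random_state=seed).fit_predict(X)
    synthetic = []
    for c in range(n_clusters):
        Xc = X[(clusters == c) & (y == minority_label)]
        share = len(Xc) / max(1, (clusters == c).sum())
        # Only oversample clusters dominated by the minority class, so
        # synthetic points stay in safe regions and avoid generating noise
        # near the majority class (illustrative 0.5 threshold).
        if share < 0.5 or len(Xc) < 2:
            continue
        for _ in range(n_new):
            a, b = Xc[rng.choice(len(Xc), 2, replace=False)]
            # SMOTE-style interpolation between two minority neighbours.
            synthetic.append(a + rng.random() * (b - a))
    if not synthetic:
        return X, y
    X_new = np.vstack([X, synthetic])
    y_new = np.concatenate([y, np.full(len(synthetic), minority_label)])
    return X_new, y_new
```

Because the interpolation is confined to minority-dominated clusters, the method addresses both between-class imbalance (more minority samples overall) and within-class imbalance (each qualifying cluster is filled out separately).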
OLÉ: Orthogonal Low-rank Embedding, A Plug and Play Geometric Loss for Deep Learning
Deep neural networks trained using a softmax layer at the top and the
cross-entropy loss are ubiquitous tools for image classification. Yet, this
does not naturally enforce intra-class similarity nor inter-class margin of the
learned deep representations. To simultaneously achieve these two goals,
different solutions have been proposed in the literature, such as the pairwise
or triplet losses. However, such solutions carry the extra task of selecting
pairs or triplets, and the extra computational burden of computing and learning
for many combinations of them. In this paper, we propose a plug-and-play loss
term for deep networks that explicitly reduces intra-class variance and
enforces inter-class margin simultaneously, in a simple and elegant geometric
manner. For each class, the deep features are collapsed into a learned linear
subspace, or union of them, and inter-class subspaces are pushed to be as
orthogonal as possible. Our proposed Orthogonal Low-rank Embedding (OLÉ) does
not require carefully crafting pairs or triplets of samples for training, and
works standalone as a classification loss, being the first reported deep metric
learning framework of its kind. Because of the improved margin between features
of different classes, the resulting deep networks generalize better and are
more discriminative and robust. We demonstrate improved classification
performance in general object recognition, plugging the proposed loss term into
existing off-the-shelf architectures. In particular, we show the advantage of
the proposed loss in the small data/model scenario, and we significantly
advance the state-of-the-art on the Stanford STL-10 benchmark.
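The geometric objective described above can be sketched numerically. The nuclear-norm formulation below is an assumption consistent with the abstract's description (per-class features collapsed into low-rank subspaces, subspaces pushed toward mutual orthogonality), not necessarily the paper's exact loss; `delta` is an illustrative margin floor.

```python
import numpy as np

def ole_loss(features, labels, delta=1.0):
    """Sketch of an orthogonal low-rank embedding loss (assumed form)."""
    # Intra-class term: nuclear norm of each class's feature matrix, which
    # is small when that class's features lie in a low-rank subspace.
    intra = sum(max(delta, np.linalg.norm(features[labels == c], 'nuc'))
                for c in np.unique(labels))
    # Inter-class term: nuclear norm of all features together, which is
    # large when the class subspaces are mutually orthogonal.
    inter = np.linalg.norm(features, 'nuc')
    # Minimizing (intra - inter) collapses classes and separates subspaces.
    return (intra - inter) / len(features)
```

When every class lies in its own subspace and the subspaces are orthogonal, the two terms cancel and the loss reaches its minimum; overlapping class subspaces make the intra term exceed the inter term.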
A False Acceptance Error Controlling Method for Hyperspherical Classifiers
Controlling false acceptance errors is of critical importance in many pattern recognition applications, including signature and speaker verification problems. Toward this goal, this paper presents two post-processing methods to improve the performance of hyperspherical classifiers in rejecting patterns from unknown classes. The first method uses a self-organizational approach to design minimum-radius hyperspheres, reducing the redundancy of the class region defined by the hyperspherical classifiers. The second method removes additional redundant class regions from the hyperspheres by using a clustering technique to generate a number of smaller hyperspheres. Simulation and experimental results demonstrate that by removing redundant regions, these two post-processing methods can reduce the false acceptance error without significantly increasing the false rejection error.
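The rejection behaviour of a hyperspherical classifier can be sketched as follows: a pattern is accepted only if it falls inside at least one class hypersphere, so shrinking the radii (as the post-processing methods above do by removing redundant regions) directly tightens the acceptance regions. This is a minimal illustration of the classifier family, not the paper's algorithms.

```python
import numpy as np

def hypersphere_classify(x, centers, radii, labels):
    """Classify x by the nearest enclosing hypersphere; reject otherwise."""
    d = np.linalg.norm(centers - x, axis=1)
    inside = d <= radii
    if not inside.any():
        # Rejection: x lies outside every class region, so it is treated as
        # an unknown-class pattern rather than force-assigned to a class.
        return None
    # Among the hyperspheres containing x, pick the nearest center.
    return labels[np.argmin(np.where(inside, d, np.inf))]
```

Smaller radii lower the false acceptance error (fewer unknown patterns land inside a sphere) at the cost of a higher false rejection error, which is the trade-off the two post-processing methods aim to manage.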
Scalable Solutions for Automated Single Pulse Identification and Classification in Radio Astronomy
Data collection for scientific applications is increasing exponentially and
is forecasted to soon reach peta- and exabyte scales. Applications which
process and analyze scientific data must be scalable and focus on execution
performance to keep pace. In the field of radio astronomy, in addition to
increasingly large datasets, tasks such as the identification of transient
radio signals from extrasolar sources are computationally expensive. We present
a scalable approach to radio pulsar detection written in Scala that
parallelizes candidate identification to take advantage of in-memory task
processing using Apache Spark on a YARN distributed system. Furthermore, we
introduce a novel automated multiclass supervised machine learning technique
that we combine with feature selection to reduce the time required for
candidate classification. Experimental testing on a Beowulf cluster with 15
data nodes shows that the parallel implementation of the identification
algorithm offers a speedup of up to 5X over a similar multithreaded
implementation. Further, we show that the combination of automated multiclass
classification and feature selection speeds up the execution performance of the
RandomForest machine learning algorithm by an average of 54% with less than a
2% average reduction in the algorithm's ability to correctly classify pulsars.
The generalizability of these results is demonstrated by using two real-world
radio astronomy data sets.
Comment: In Proceedings of the 47th International Conference on Parallel Processing (ICPP 2018). ACM, New York, NY, USA, Article 11, 11 pages
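The feature-selection-plus-RandomForest pattern the abstract describes can be sketched briefly. The selector and parameter values below are illustrative stand-ins, not the paper's method (which is implemented in Scala on Apache Spark rather than in scikit-learn): the point is that reducing the feature set before training shrinks the forest's input and speeds up classification with a small accuracy cost.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline

def make_pulsar_classifier(k_features=8, n_trees=100, seed=0):
    """Sketch: feature selection feeding a RandomForest classifier."""
    return make_pipeline(
        # Keep only the k features most associated with the class label
        # (ANOVA F-score), reducing the work each tree must do.
        SelectKBest(f_classif, k=k_features),
        RandomForestClassifier(n_estimators=n_trees, random_state=seed),
    )
```

In the paper's setting, the candidate features come from the single-pulse identification stage; here any tabular (features, label) pair can be fit with `make_pulsar_classifier().fit(X, y)`.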