Heartbeat Anomaly Detection using Adversarial Oversampling
Cardiovascular diseases are among the most common causes of death in the
world. Prevention, knowledge of previous cases in the family, and early
detection are the best strategies to reduce this mortality. Different machine
learning approaches to automatic diagnosis have been proposed for this task. As
in most health problems, class imbalance is predominant in this problem and
affects the performance of automated solutions. In this paper, we address the
classification of heartbeat images into different cardiovascular disease
categories. We propose a two-dimensional Convolutional Neural Network for
classification after using an InfoGAN architecture to generate synthetic images
for the under-represented classes. We call this proposal Adversarial
Oversampling and compare it with classical oversampling methods such as SMOTE,
ADASYN, and RandomOversampling. The results show that the proposed approach
improves classifier performance for the minority classes without harming
performance on the balanced classes.
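The core balancing step of such an approach can be sketched in a few lines: each minority class is topped up to the majority count using a generator. As a minimal, dependency-free illustration (Gaussian jitter stands in for the trained InfoGAN generator; all names and data here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy imbalanced dataset: class 0 is the majority, classes 1 and 2 are minorities.
X = np.vstack([rng.normal(0, 1, (100, 8)),
               rng.normal(3, 1, (15, 8)),
               rng.normal(-3, 1, (5, 8))])
y = np.array([0] * 100 + [1] * 15 + [2] * 5)

def oversample_minorities(X, y, generate, rng):
    """Bring every class up to the majority count using `generate`,
    a stand-in for a trained class-conditional generator (e.g. InfoGAN)."""
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    X_new, y_new = [X], [y]
    for c, n in zip(classes, counts):
        if n < target:
            X_new.append(generate(X[y == c], target - n, rng))
            y_new.append(np.full(target - n, c))
    return np.vstack(X_new), np.concatenate(y_new)

# Hypothetical generator: resample real minority points and add Gaussian jitter.
def jitter_generator(X_c, n, rng):
    idx = rng.integers(0, len(X_c), n)
    return X_c[idx] + rng.normal(0, 0.1, (n, X_c.shape[1]))

X_bal, y_bal = oversample_minorities(X, y, jitter_generator, rng)
print(np.unique(y_bal, return_counts=True)[1])  # all classes now at 100
```

In the paper's setting, `jitter_generator` would be replaced by sampling from the InfoGAN conditioned on the minority class label.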
Autoencoders and Generative Adversarial Networks for Imbalanced Sequence Classification
Generative Adversarial Networks (GANs) have been used in many different
applications to generate realistic synthetic data. We introduce a novel GAN
with Autoencoder (GAN-AE) architecture to generate synthetic samples for
variable length, multi-feature sequence datasets. In this model, we develop a
GAN architecture with an additional autoencoder component, where recurrent
neural networks (RNNs) are used for each component of the model in order to
generate synthetic data to improve classification accuracy for a highly
imbalanced medical device dataset. In addition to the medical device dataset,
we also evaluate the GAN-AE performance on two additional datasets and
demonstrate the application of GAN-AE to a sequence-to-sequence task where both
synthetic sequence inputs and sequence outputs must be generated. To evaluate
the quality of the synthetic data, we train encoder-decoder models both with
and without the synthetic data and compare the classification model
performance. We show that a model trained with GAN-AE generated synthetic data
outperforms models trained with synthetic data generated by standard
oversampling techniques such as SMOTE, by autoencoders, and by
state-of-the-art GAN-based models.
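The evaluation protocol described above — train a classifier with and without the synthetic data and compare performance — can be sketched as follows. This is a simplified, dependency-free stand-in: a k-NN vote replaces the encoder-decoder classifier, and jittered minority copies replace GAN-AE synthetic sequences; all data and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def knn_predict(X_train, y_train, X_test, k=5):
    """Plain k-nearest-neighbour majority vote (a stand-in classifier)."""
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return (y_train[idx].mean(axis=1) > 0.5).astype(int)

def minority_recall(y_true, y_pred):
    mask = y_true == 1
    return y_pred[mask].mean() if mask.any() else 0.0

# Imbalanced 2-D toy data: 200 majority vs 20 minority, overlapping.
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)), rng.normal(1.5, 1.0, (20, 2))])
y = np.array([0] * 200 + [1] * 20)

# Hold out a test split.
perm = rng.permutation(len(y))
test, train = perm[:60], perm[60:]

# Baseline: train on the raw imbalanced data.
rec_base = minority_recall(y[test], knn_predict(X[train], y[train], X[test]))

# Augmented: add jittered copies of minority points (a crude stand-in for
# synthetic samples) until the training classes are balanced.
min_tr = train[y[train] == 1]
need = (y[train] == 0).sum() - len(min_tr)
extra = X[min_tr][rng.integers(0, len(min_tr), need)] + rng.normal(0, 0.1, (need, 2))
X_aug = np.vstack([X[train], extra])
y_aug = np.concatenate([y[train], np.ones(need, dtype=int)])

rec_aug = minority_recall(y[test], knn_predict(X_aug, y_aug, X[test]))
print(rec_base, rec_aug)
```

The key point is that both models are scored on the same held-out real data, so any difference is attributable to the synthetic training samples.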
DOPING: Generative Data Augmentation for Unsupervised Anomaly Detection with GAN
Recently, the introduction of the generative adversarial network (GAN) and
its variants has enabled the generation of realistic synthetic samples, which
has been used for enlarging training sets. Previous work primarily focused on
data augmentation for semi-supervised and supervised tasks. In this paper, we
instead focus on unsupervised anomaly detection and propose a novel generative
data augmentation framework optimized for this task. In particular, we propose
to oversample infrequent normal samples - normal samples that occur with small
probability, e.g., rare normal events. We show that these samples are
responsible for false positives in anomaly detection. However, oversampling of
infrequent normal samples is challenging for real-world high-dimensional data
with multimodal distributions. To address this challenge, we propose to use a
GAN variant known as the adversarial autoencoder (AAE) to transform the
high-dimensional multimodal data distributions into low-dimensional unimodal
latent distributions with well-defined tail probability. Then, we
systematically oversample at the `edge' of the latent distributions to increase
the density of infrequent normal samples. We show that our oversampling
pipeline is a unified one: it is generally applicable to datasets with
different complex data distributions. To the best of our knowledge, our method
is the first data augmentation technique focused on improving performance in
unsupervised anomaly detection. We validate our method by demonstrating
consistent improvements across several real-world datasets.

Comment: Published as a conference paper at ICDM 2018 (IEEE International
Conference on Data Mining).
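The encode-oversample-at-the-edge-decode pipeline can be illustrated with a linear stand-in: the sketch below uses a PCA projection (via SVD) in place of the adversarial autoencoder, takes latent points beyond the 90th-percentile radius as the "edge" of the latent distribution, jitters them, and decodes back. The percentile, jitter scale, and toy data are all illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "high-dimensional multimodal" data: two clusters in 10-D.
X = np.vstack([rng.normal(0, 1, (300, 10)), rng.normal(5, 1, (300, 10))])

# Stand-in for the AAE encoder/decoder: a 2-D linear projection via SVD
# (the paper uses an adversarial autoencoder; PCA keeps the sketch simple).
mu = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
W = Vt[:2]                      # 2-D latent basis
Z = (X - mu) @ W.T              # encode

# Oversample at the 'edge' of the latent distribution: keep latent points
# whose radius exceeds the 90th percentile, i.e. infrequent samples.
r = np.linalg.norm(Z - Z.mean(axis=0), axis=1)
edge = Z[r > np.quantile(r, 0.9)]

# Jitter the edge points and decode them back to input space.
Z_new = edge + rng.normal(0, 0.05, edge.shape)
X_new = Z_new @ W + mu          # decode

print(edge.shape[0], X_new.shape)
```

With a real AAE, the latent prior has a well-defined tail by construction, so the percentile threshold directly targets infrequent normal samples.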
A novel generative adversarial networks modelling for the class imbalance problem in high dimensional omics data
Class imbalance remains a major problem in high-throughput omics analyses, biasing machine learning-based classifiers towards the over-represented class during training. Oversampling is a common method used to balance classes, allowing for better generalization of the training data. More naive approaches can introduce additional biases and are especially sensitive to inaccuracies in the training data, a concern given the characteristically noisy, high-dimensional data obtained in healthcare. A generative adversarial network-based method is proposed for creating synthetic samples from small, high-dimensional datasets, improving upon more naive generative approaches. The method was compared with ‘synthetic minority over-sampling technique’ (SMOTE) and ‘random oversampling’ (RO). Generative methods were validated by training classifiers on the balanced data.
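For reference, the SMOTE baseline the paper compares against interpolates new samples between a minority point and one of its nearest minority neighbours. A minimal numpy sketch (not the canonical implementation; `k` and the toy data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def smote(X_min, n_new, k=5, rng=rng):
    """Minimal SMOTE: each new sample is interpolated between a random
    minority point and one of its k nearest minority neighbours."""
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=2)
    np.fill_diagonal(d, np.inf)                 # exclude self-distance
    nn = np.argsort(d, axis=1)[:, :k]           # k nearest neighbours per point
    base = rng.integers(0, len(X_min), n_new)   # anchor points
    nbr = nn[base, rng.integers(0, k, n_new)]   # one random neighbour each
    lam = rng.random((n_new, 1))                # interpolation weight in [0, 1)
    return X_min[base] + lam * (X_min[nbr] - X_min[base])

# 30 minority samples in a 50-D "omics-like" feature space.
X_min = rng.normal(0, 1, (30, 50))
X_syn = smote(X_min, 70)
print(X_syn.shape)  # (70, 50)
```

Because every synthetic point lies on a segment between two real minority points, SMOTE cannot generate samples outside the minority data's bounding region — one motivation for the generative approach proposed in the paper.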
Generative Adversarial Networks Selection Approach for Extremely Imbalanced Fault Diagnosis of Reciprocating Machinery
At present, countless approaches to fault diagnosis in reciprocating machines have been proposed, almost all assuming that the available machinery dataset contains all conditions in equal proportions. However, as applications move closer to reality, the problem of data imbalance becomes increasingly evident. In this paper, we propose a fault-diagnosis method that accounts for extreme imbalance in the available data. Our approach first processes the vibration signals of the machine using a wavelet packet transform-based feature-extraction stage. Then, improved generative models are obtained with a dissimilarity-based model selection to artificially balance the dataset. Finally, a Random Forest classifier is created to address the diagnostic task. This methodology provides a considerable improvement under 99% data imbalance over other approaches reported in the literature, showing performance similar to that obtained with a balanced dataset.

Funding: National Natural Science Foundation of China, Grants 51605406 and 7180104.
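The dissimilarity-based model-selection step can be sketched as picking, among several candidate generators, the one whose synthetic samples are closest to the real minority data. The dissimilarity measure below (mean pairwise Euclidean distance) and all generator names are illustrative stand-ins for whatever criterion and models the paper actually uses:

```python
import numpy as np

rng = np.random.default_rng(4)

# Real minority-class feature vectors (e.g. wavelet-packet energies).
X_real = rng.normal(2.0, 0.5, (40, 6))

def mean_pairwise_dissimilarity(A, B):
    """Average Euclidean distance between two sample sets; a simple
    stand-in for the paper's dissimilarity measure."""
    return np.linalg.norm(A[:, None] - B[None, :], axis=2).mean()

# Three hypothetical trained generators producing candidate synthetic sets.
candidates = {
    "gen_wide":  rng.normal(2.0, 2.0, (40, 6)),   # right mean, too spread out
    "gen_shift": rng.normal(4.0, 0.5, (40, 6)),   # shifted mean
    "gen_good":  rng.normal(2.0, 0.5, (40, 6)),   # matches the real data
}

scores = {name: mean_pairwise_dissimilarity(X_real, S)
          for name, S in candidates.items()}
best = min(scores, key=scores.get)
print(best)  # the generator whose samples best match the real data
```

The selected generator's samples are then used to balance the dataset before training the Random Forest.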
Minority Class Oversampling for Tabular Data with Deep Generative Models
In practice, machine learning experts are often confronted with imbalanced
data. Without accounting for the imbalance, common classifiers perform poorly
and standard evaluation metrics mislead the practitioners on the model's
performance. A common method to treat imbalanced datasets is under- and
oversampling. In this process, samples are either removed from the majority
class or synthetic samples are added to the minority class. In this paper, we
follow up on recent developments in deep learning. We take proposals of deep
generative models, including our own, and study the ability of these approaches
to provide realistic samples that improve performance on imbalanced
classification tasks via oversampling.
Across 160K+ experiments, we show that all of the new methods tend to perform
better than simple baseline methods such as SMOTE, but require different under-
and oversampling ratios to do so. Our experiments show that the choice of
sampling method does not affect sample quality, but runtime varies widely. We also observe
that the improvements in terms of performance metric, while shown to be
significant when ranking the methods, often are minor in absolute terms,
especially compared to the required effort. Furthermore, we notice that a large
part of the improvement is due to undersampling, not oversampling. We make our
code and testing framework available.
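The combined under- and oversampling the study tunes can be expressed as two ratios: how far to shrink the majority class, and how far to grow the minority class relative to the new majority count. A minimal numpy sketch (the ratios and binary-label setup are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

def resample(X, y, under_ratio, over_ratio, rng):
    """Undersample the majority (label 0) to `under_ratio` of its size,
    then oversample the minority (label 1, with replacement) to
    `over_ratio` of the new majority count."""
    maj, mnr = X[y == 0], X[y == 1]
    n_maj = max(1, int(len(maj) * under_ratio))
    keep = rng.choice(len(maj), n_maj, replace=False)
    n_min = max(1, int(n_maj * over_ratio))
    take = rng.choice(len(mnr), n_min, replace=True)
    X_new = np.vstack([maj[keep], mnr[take]])
    y_new = np.array([0] * n_maj + [1] * n_min)
    return X_new, y_new

X = rng.normal(0, 1, (1000, 4))
y = np.array([0] * 950 + [1] * 50)

# e.g. halve the majority and bring the minority up to parity.
X_new, y_new = resample(X, y, under_ratio=0.5, over_ratio=1.0, rng=rng)
print((y_new == 0).sum(), (y_new == 1).sum())  # 475 475
```

Sweeping `under_ratio` and `over_ratio` independently is what lets such a study attribute improvements to undersampling versus oversampling, as the paper's findings suggest.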