GAN Augmented Text Anomaly Detection with Sequences of Deep Statistics
Anomaly detection is the process of finding data points that deviate from a
baseline. In a real-life setting, anomalies are usually unknown or extremely
rare. Moreover, the detection must be accomplished in a timely manner or the
risk of corrupting the system might grow exponentially. In this work, we
propose a two-level framework for detecting anomalies in sequences of discrete
elements. First, we assess whether we can obtain enough information from the
statistics collected from the discriminator's layers to discriminate between
out of distribution and in distribution samples. We then build an unsupervised
anomaly detection module based on these statistics. To augment the data and
keep track of classes of known data, we lean toward semi-supervised
adversarial learning applied to discrete elements.
Comment: 5 pages, 53rd Annual Conference on Information Sciences and Systems, CISS 201
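The two-level idea described above — summary statistics drawn from a discriminator's layers feeding an unsupervised detector — can be sketched roughly as follows. This is a minimal illustration, not the paper's method: the "discriminator" here is a toy stand-in made of fixed random projections, and the detector is a simple z-score distance; in the paper both would come from the trained GAN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained discriminator: two "layers" of fixed
# random projections with a tanh nonlinearity. In practice these
# would be the hidden activations of the trained GAN discriminator.
W1 = rng.normal(size=(10, 16))
W2 = rng.normal(size=(16, 8))

def discriminator_layers(x):
    h1 = np.tanh(x @ W1)
    h2 = np.tanh(h1 @ W2)
    return [h1, h2]

def layer_statistics(x):
    # Summarize each layer by per-sample mean and std of activations.
    feats = []
    for h in discriminator_layers(x):
        feats.append(h.mean(axis=1))
        feats.append(h.std(axis=1))
    return np.stack(feats, axis=1)          # shape (n_samples, 4)

# Unsupervised detector: deviation of a sample's layer statistics
# from the statistics observed on in-distribution data.
train = rng.normal(0.0, 1.0, size=(500, 10))   # in-distribution
stats = layer_statistics(train)
mu, sigma = stats.mean(axis=0), stats.std(axis=0) + 1e-8

def anomaly_score(x):
    s = layer_statistics(x)
    return np.abs((s - mu) / sigma).mean(axis=1)

normal_scores = anomaly_score(rng.normal(0.0, 1.0, size=(100, 10)))
odd_scores = anomaly_score(rng.normal(4.0, 1.0, size=(100, 10)))
# Out-of-distribution samples should receive higher scores on average.
```

Thresholding `anomaly_score` then yields a detector that never saw a labeled anomaly, which mirrors the unsupervised second level of the framework.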
DOPING: Generative Data Augmentation for Unsupervised Anomaly Detection with GAN
Recently, the introduction of the generative adversarial network (GAN) and
its variants has enabled the generation of realistic synthetic samples, which
has been used for enlarging training sets. Previous work primarily focused on
data augmentation for semi-supervised and supervised tasks. In this paper, we
instead focus on unsupervised anomaly detection and propose a novel generative
data augmentation framework optimized for this task. In particular, we propose
to oversample infrequent normal samples - normal samples that occur with small
probability, e.g., rare normal events. We show that these samples are
responsible for false positives in anomaly detection. However, oversampling of
infrequent normal samples is challenging for real-world high-dimensional data
with multimodal distributions. To address this challenge, we propose to use a
GAN variant known as the adversarial autoencoder (AAE) to transform the
high-dimensional multimodal data distributions into low-dimensional unimodal
latent distributions with well-defined tail probability. Then, we
systematically oversample at the 'edge' of the latent distributions to increase
the density of infrequent normal samples. We show that our oversampling
pipeline is a unified one: it is generally applicable to datasets with
different complex data distributions. To the best of our knowledge, our method
is the first data augmentation technique focused on improving performance in
unsupervised anomaly detection. We validate our method by demonstrating
consistent improvements across several real-world datasets.
Comment: Published as a conference paper at ICDM 2018 (IEEE International Conference on Data Mining)
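The latent-edge oversampling step described above can be sketched as follows. This is an illustrative toy, not the paper's pipeline: `encode`/`decode` stand in for a trained adversarial autoencoder (here they are identities on a 2-D Gaussian latent), and the "edge" is taken as a band between two high percentiles of the latent norm, where infrequent normal samples would live.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed stand-ins for a trained AAE encoder/decoder. In the paper
# these map high-dimensional multimodal data to a low-dimensional
# unimodal latent and back; here the toy data already is the latent.
def encode(x):
    return x

def decode(z):
    return z

# Normal training data: a unimodal 2-D Gaussian latent with a
# well-defined tail probability.
latent = encode(rng.normal(size=(2000, 2)))

# Define the 'edge' of the latent distribution as a band between two
# high percentiles of the latent norm (tail of the unimodal density).
radii = np.linalg.norm(latent, axis=1)
edge_lo, edge_hi = np.percentile(radii, [95, 99.5])

def oversample_edge(n):
    # Draw random directions and radii inside the edge band, then
    # decode back to data space to obtain synthetic infrequent
    # normal samples.
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
    r = rng.uniform(edge_lo, edge_hi, size=n)
    z = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)
    return decode(z)

extra = oversample_edge(500)
# Every synthetic sample lies in the low-density tail of the latent,
# i.e. it resembles an infrequent normal event rather than an anomaly.
```

Augmenting the training set with `extra` increases the density of infrequent normal samples, which is the mechanism the paper credits for reducing false positives.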