DOPING: Generative Data Augmentation for Unsupervised Anomaly Detection with GAN
Recently, the introduction of the generative adversarial network (GAN) and
its variants has enabled the generation of realistic synthetic samples, which
has been used for enlarging training sets. Previous work primarily focused on
data augmentation for semi-supervised and supervised tasks. In this paper, we
instead focus on unsupervised anomaly detection and propose a novel generative
data augmentation framework optimized for this task. In particular, we propose
to oversample infrequent normal samples, i.e., normal samples that occur with
small probability, such as rare normal events. We show that these samples are
responsible for false positives in anomaly detection. However, oversampling of
infrequent normal samples is challenging for real-world high-dimensional data
with multimodal distributions. To address this challenge, we propose to use a
GAN variant known as the adversarial autoencoder (AAE) to transform the
high-dimensional multimodal data distributions into low-dimensional unimodal
latent distributions with well-defined tail probability. Then, we
systematically oversample at the 'edge' of the latent distributions to increase
the density of infrequent normal samples. We show that our oversampling
pipeline is a unified one: it is generally applicable to datasets with
different complex data distributions. To the best of our knowledge, our method
is the first data augmentation technique focused on improving performance in
unsupervised anomaly detection. We validate our method by demonstrating
consistent improvements across several real-world datasets.
Comment: Published as a conference paper at ICDM 2018 (IEEE International Conference on Data Mining).
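The latent-edge oversampling step described above can be sketched in miniature. This is a toy illustration under strong assumptions (a standard-normal latent prior and a hypothetical `decode` stand-in for the trained AAE decoder), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def decode(z):
    # Hypothetical stand-in: in practice this is the trained AAE decoder
    # mapping latent codes back to data space.
    return np.tanh(z @ rng.standard_normal((z.shape[1], 8)))

def sample_latent_edge(n, dim=2, quantile=0.95, pool=100_000):
    """Oversample at the 'edge' of a standard-normal latent prior by
    keeping only draws whose radius exceeds the given quantile."""
    z = rng.standard_normal((pool, dim))
    radii = np.linalg.norm(z, axis=1)
    threshold = np.quantile(radii, quantile)
    edge = z[radii > threshold]          # low-density tail of the prior
    return edge[:n]

# Decode tail-region latent codes into synthetic "infrequent normal" samples.
z_edge = sample_latent_edge(200, dim=2, quantile=0.95)
synthetic = decode(z_edge)
print(z_edge.shape, synthetic.shape)
```

The point of the unimodal latent prior is visible here: "infrequent" reduces to a simple radius threshold, which would be ill-defined on the original multimodal data distribution.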
Recent Advances in Transfer Learning for Cross-Dataset Visual Recognition: A Problem-Oriented Perspective
This paper takes a problem-oriented perspective and presents a comprehensive
review of transfer learning methods, both shallow and deep, for cross-dataset
visual recognition. Specifically, it categorises the cross-dataset recognition
into seventeen problems based on a set of carefully chosen data and label
attributes. Such a problem-oriented taxonomy has allowed us to examine how
different transfer learning approaches tackle each problem and how well each
problem has been researched to date. This comprehensive problem-oriented review
of the advances in transfer learning has not only revealed the challenges in
transfer learning for visual recognition, but also identified the problems
(eight of the seventeen) that have been scarcely studied. The survey thus
presents an up-to-date technical review for researchers, as well as a
systematic reference for machine learning practitioners to categorise a real
problem and look up a possible solution.
A Fully Convolutional Tri-branch Network (FCTN) for Domain Adaptation
A domain adaptation method for urban scene segmentation is proposed in this
work. We develop a fully convolutional tri-branch network, where two branches
assign pseudo labels to images in the unlabeled target domain while the third
branch is trained with supervision based on images in the pseudo-labeled target
domain. The re-labeling and re-training processes alternate. With this design,
the tri-branch network learns target-specific discriminative representations
progressively and, as a result, the cross-domain capability of the segmenter
improves. We evaluate the proposed network on large-scale domain adaptation
experiments using both synthetic (GTA) and real (Cityscapes) images. It is
shown that our solution achieves state-of-the-art performance and outperforms
previous methods by a significant margin.
Comment: Accepted by ICASSP 201
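The alternating re-labeling / re-training loop described above can be sketched in miniature. Hypothetical nearest-centroid "branches" on toy 2-D blobs stand in for the fully convolutional branches, and agreement between the two labeling branches plays the role of pseudo-label selection; none of this is the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_centroids(X, y, n_classes):
    # Toy "branch": one centroid per class.
    return np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])

def predict(centroids, X):
    d = ((X[:, None, :] - centroids[None]) ** 2).sum(-1)
    return d.argmin(axis=1)

# Labeled "source" data and unlabeled, slightly shifted "target" data.
Xs = np.concatenate([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
ys = np.array([0] * 50 + [1] * 50)
Xt = np.concatenate([rng.normal(0.3, 0.3, (50, 2)), rng.normal(2.3, 0.3, (50, 2))])

# Two labeling branches, initialised with different perturbations of the source.
b1 = fit_centroids(Xs + rng.normal(0, 0.05, Xs.shape), ys, 2)
b2 = fit_centroids(Xs + rng.normal(0, 0.05, Xs.shape), ys, 2)

for _ in range(3):  # alternate re-labeling and re-training
    p1, p2 = predict(b1, Xt), predict(b2, Xt)
    agree = p1 == p2                      # keep only agreed pseudo-labels
    # Third branch: trained purely on the pseudo-labeled target domain.
    target_branch = fit_centroids(Xt[agree], p1[agree], 2)
    # Re-train the labeling branches on source + pseudo-labeled target.
    X_all = np.concatenate([Xs, Xt[agree]])
    y_all = np.concatenate([ys, p1[agree]])
    b1 = fit_centroids(X_all + rng.normal(0, 0.05, X_all.shape), y_all, 2)
    b2 = fit_centroids(X_all + rng.normal(0, 0.05, X_all.shape), y_all, 2)

print(predict(target_branch, Xt)[:5])
```

Even in this toy form, the target branch is fit only on target-domain points, which is how it acquires target-specific representations as the loop progresses.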
TiDAL: Learning Training Dynamics for Active Learning
Active learning (AL) aims to select the most useful data samples from an
unlabeled data pool and annotate them to expand the labeled dataset under a
limited budget. Especially, uncertainty-based methods choose the most uncertain
samples, which are known to be effective in improving model performance.
However, AL literature often overlooks training dynamics (TD), defined as the
ever-changing model behavior during optimization via stochastic gradient
descent, even though other areas of literature have empirically shown that TD
provides important clues for measuring the sample uncertainty. In this paper,
we propose a novel AL method, Training Dynamics for Active Learning (TiDAL),
which leverages the TD to quantify uncertainties of unlabeled data. Since
tracking the TD of every sample in a large-scale unlabeled pool is impractical,
TiDAL utilizes an additional prediction module that learns the TD of labeled data. To
further justify the design of TiDAL, we provide theoretical and empirical
evidence for the usefulness of leveraging TD for AL. Experimental results
show that our TiDAL achieves better or comparable performance on both balanced
and imbalanced benchmark datasets compared to state-of-the-art AL methods,
which estimate data uncertainty using only static information after model
training.
Comment: ICCV 2023 Camera-Ready
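The core idea of learning a TD-derived uncertainty for unlabeled data can be illustrated with a toy sketch. The simulated per-epoch probabilities, the binary-entropy target, and the polynomial "prediction module" are all hypothetical stand-ins for the paper's learned module:

```python
import numpy as np

rng = np.random.default_rng(2)

def entropy(p):
    # Binary entropy of a predicted probability.
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

# Toy binary task: 1-D features in [0, 1]; samples near 0.5 are genuinely hard.
X_lab = rng.uniform(0, 1, (200, 1))
X_unl = rng.uniform(0, 1, (500, 1))

# Simulated training dynamics: per-epoch predicted probabilities that sharpen
# over epochs for easy samples but stay ambiguous near 0.5.
epochs = 10
td = np.stack([1 / (1 + np.exp(-(e + 1) * (X_lab[:, 0] - 0.5)))
               for e in range(epochs)], axis=1)   # shape (n_labeled, epochs)

# TD-derived uncertainty target: entropy of the epoch-averaged prediction.
u_lab = entropy(td.mean(axis=1))

# Tiny quadratic regression as the "TD prediction module", fit on labeled data.
A = np.stack([np.ones(len(X_lab)), X_lab[:, 0], X_lab[:, 0] ** 2], axis=1)
w, *_ = np.linalg.lstsq(A, u_lab, rcond=None)

# Score unlabeled samples (whose TD was never tracked) and pick the most
# uncertain ones for annotation.
A_unl = np.stack([np.ones(len(X_unl)), X_unl[:, 0], X_unl[:, 0] ** 2], axis=1)
scores = A_unl @ w
query = np.argsort(scores)[-10:]   # top-10 most uncertain
print(np.sort(X_unl[query].ravel()))
```

The sketch mirrors the paper's motivation: the module is supervised only by the TD of labeled samples, yet its scores transfer to the unlabeled pool, selecting the samples near the decision boundary.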