Adversarial Unsupervised Representation Learning for Activity Time-Series
Sufficient physical activity and restful sleep play a major role in the
prevention and cure of many chronic conditions. Being able to proactively
screen and monitor such chronic conditions would be a big step forward for
overall health. The rapid increase in the popularity of wearable devices
provides a significant new data source, making it possible to track the user's
lifestyle in real time. In this paper, we propose a novel unsupervised
representation learning technique called activity2vec that learns and
"summarizes" the discrete-valued activity time-series. It learns the
representations with three components: (i) the co-occurrence and magnitude of
the activity levels in a time-segment, (ii) neighboring context of the
time-segment, and (iii) promoting subject-invariance with adversarial training.
We evaluate our method on four disorder prediction tasks using linear
classifiers. Empirical evaluation demonstrates that our proposed method scales
and performs better than many strong baselines. The adversarial regime helps
improve the generalizability of our representations by promoting subject
invariant features. We also show that using the representations at the level of
a day works the best since human activity is structured in terms of daily
routines.
Comment: Accepted at AAAI'19. arXiv admin note: text overlap with arXiv:1712.0952
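As an illustration of component (i) above, the following sketch counts how often pairs of discrete activity levels co-occur within fixed-length time segments. The function name, segment length, and number of levels are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Hypothetical sketch of component (i): co-occurrence of discrete
# activity levels within a fixed-length time segment. Segment length
# and level count are illustrative, not taken from the paper.
def segment_cooccurrence(series, num_levels, seg_len):
    """Count how often pairs of activity levels co-occur in a segment."""
    counts = np.zeros((num_levels, num_levels), dtype=int)
    for start in range(0, len(series) - seg_len + 1, seg_len):
        seg = series[start:start + seg_len]
        for a in seg:
            for b in seg:
                counts[a, b] += 1
    return counts

levels = np.array([0, 1, 1, 2, 0, 0, 2, 2])  # discrete activity series
C = segment_cooccurrence(levels, num_levels=3, seg_len=4)
# C is symmetric; diagonal entries reflect the magnitude (frequency)
# of each activity level within the segments.
```

In the full method these segment statistics would feed the learned embedding alongside the neighboring-context and adversarial components.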
Disentangling Factors of Variation by Mixing Them
We propose an approach to learn image representations that consist of
disentangled factors of variation without exploiting any manual labeling or
data domain knowledge. A factor of variation corresponds to an image attribute
that can be discerned consistently across a set of images, such as the pose or
color of objects. Our disentangled representation consists of a concatenation
of feature chunks, each chunk representing a factor of variation. It supports
applications such as transferring attributes from one image to another, by
simply mixing and unmixing feature chunks, and classification or retrieval
based on one or several attributes, by considering a user-specified subset of
feature chunks. We learn our representation without any labeling or knowledge
of the data domain, using an autoencoder architecture with two novel training
objectives: first, we propose an invariance objective that encourages the
encoding of each attribute, and the decoding of each chunk, to be invariant to
changes in the other attributes and chunks, respectively; second, we include a
classification objective, which ensures that each chunk corresponds to a
consistently discernible attribute in the represented image, hence avoiding
degenerate feature mappings where some chunks are completely ignored. We
demonstrate the effectiveness of our approach on the MNIST, Sprites, and CelebA
datasets.
Comment: CVPR 201
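The attribute-transfer operation described above can be sketched as swapping one feature chunk between two codes; this toy numpy version assumes a fixed chunk size and names everything for illustration only.

```python
import numpy as np

# Hypothetical sketch of attribute transfer by mixing feature chunks.
# The representation is a concatenation of fixed-size chunks, one per
# factor of variation; exchanging one chunk between two codes transfers
# the corresponding attribute. Sizes and names are illustrative.
def swap_chunk(z_a, z_b, chunk, chunk_size):
    """Exchange one feature chunk between two representations."""
    a, b = z_a.copy(), z_b.copy()
    lo, hi = chunk * chunk_size, (chunk + 1) * chunk_size
    a[lo:hi], b[lo:hi] = z_b[lo:hi], z_a[lo:hi]
    return a, b

z_a = np.array([1.0, 2.0, 3.0, 4.0])  # e.g. [pose | color] chunks
z_b = np.array([5.0, 6.0, 7.0, 8.0])
mixed_a, mixed_b = swap_chunk(z_a, z_b, chunk=1, chunk_size=2)
# mixed_a keeps z_a's first chunk but takes z_b's second chunk.
```

In the paper's setting the mixed codes would be decoded back to images, with the invariance and classification objectives keeping each chunk tied to one attribute.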
Fader Networks: Manipulating Images by Sliding Attributes
This paper introduces a new encoder-decoder architecture that is trained to
reconstruct images by disentangling the salient information of the image and
the values of attributes directly in the latent space. As a result, after
training, our model can generate different realistic versions of an input image
by varying the attribute values. By using continuous attribute values, we can
choose how much a specific attribute is perceivable in the generated image.
This property could allow for applications where users can modify an image
using sliding knobs, like faders on a mixing console, to change the facial
expression of a portrait, or to update the color of some objects. Compared to
the state-of-the-art which mostly relies on training adversarial networks in
pixel space by altering attribute values at train time, our approach results in
much simpler training schemes and nicely scales to multiple attributes. We
present evidence that our model can significantly change the perceived value of
the attributes while preserving the naturalness of images.
Comment: NIPS 201
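The "sliding attribute" behavior can be sketched with a toy decoder that takes the latent code together with a continuous attribute value; the linear decoder and all dimensions below are illustrative stand-ins, not the paper's architecture.

```python
import numpy as np

# Toy sketch of the fader idea: the decoder receives the latent code
# plus a continuous attribute value, so sliding the attribute at
# generation time changes how strongly it appears in the output.
# Weights and dimensions are illustrative assumptions.
rng = np.random.default_rng(0)
W_z = rng.normal(size=(8, 4))   # decoder weights for the latent code
w_a = rng.normal(size=8)        # decoder direction for the attribute

def decode(z, alpha):
    """Generate an output from latent z and scalar attribute alpha."""
    return W_z @ z + alpha * w_a

z = rng.normal(size=4)          # latent code of one input
out_low = decode(z, 0.0)        # attribute "off"
out_high = decode(z, 1.0)       # attribute fully "on"
# Varying alpha continuously slides the output along a fixed direction.
```

The adversarial training in the paper is what forces the latent code to be free of attribute information, so that the explicit attribute input becomes the only "knob".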
MiniMax Entropy Network: Learning Category-Invariant Features for Domain Adaptation
How to effectively learn from unlabeled data from the target domain is
crucial for domain adaptation, as it helps reduce the large performance gap due
to domain shift or distribution change. In this paper, we propose an
easy-to-implement method dubbed MiniMax Entropy Networks (MMEN) based on
adversarial learning. Unlike most existing approaches which employ a generator
to deal with domain difference, MMEN focuses on learning the categorical
information from unlabeled target samples with the help of labeled source
samples. Specifically, we set up an unfair multi-class classifier, named the
categorical discriminator, which classifies source samples accurately but is
confused about the categories of target samples. The generator learns a common
subspace that aligns the unlabeled samples based on the target pseudo-labels.
For MMEN, we also provide theoretical explanations to show that the learning of
feature alignment reduces domain mismatch at the category level. Experimental
results on various benchmark datasets demonstrate the effectiveness of our
method over existing state-of-the-art baselines.
Comment: 8 pages, 6 figures
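The minimax-entropy intuition behind the categorical discriminator can be sketched as follows; this is an illustration of the objective's two roles, not the MMEN training code, and all names are assumptions.

```python
import numpy as np

# Illustrative sketch of the minimax-entropy intuition: the categorical
# discriminator should be confident (low entropy) on labeled source
# samples but confused (high entropy, near-uniform) on unlabeled target
# samples, while the generator is trained with the opposite objective.
def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def entropy(p):
    return -np.sum(p * np.log(p + 1e-12))

src_pred = softmax(np.array([5.0, 0.0, 0.0]))  # confident on source
tgt_pred = softmax(np.array([0.0, 0.0, 0.0]))  # maximally confused
# entropy(tgt_pred) is near log(3), the maximum for three classes.
```

The discriminator maximizes target-prediction entropy while the generator minimizes it, which is what aligns target features with source categories.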