Group invariance principles for causal generative models
The postulate of independence of cause and mechanism (ICM) has recently led
to several new causal discovery algorithms. The interpretation of independence
and the way it is utilized, however, varies across these methods. Our aim in
this paper is to propose a group theoretic framework for ICM to unify and
generalize these approaches. In our setting, the cause-mechanism relationship
is assessed by comparing it against a null hypothesis through the application
of random generic group transformations. We show that the group theoretic view
provides a very general tool to study the structure of data generating
mechanisms with direct applications to machine learning. Comment: 16 pages, 6 figures
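The null-hypothesis idea above can be caricatured in a few lines of numpy. This is a toy sketch under strong assumptions, not the paper's actual framework: a linear mechanism `A` applied to a cause with covariance `Sigma`, with the null ensemble sampled by random orthogonal (group) transformations of the mechanism, which generically destroys any cause–mechanism dependence.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

# Hypothetical toy setting: anisotropic cause covariance and a linear mechanism.
Sigma = np.diag(rng.uniform(0.1, 5.0, size=n))
A = rng.normal(size=(n, n))

def contrast(A, Sigma):
    # Normalized trace of the output covariance A @ Sigma @ A.T.
    return np.trace(A @ Sigma @ A.T) / A.shape[0]

observed = contrast(A, Sigma)

# Null ensemble: apply random orthogonal transformations to the mechanism
# and recompute the contrast each time.
null = np.array([
    contrast(A @ np.linalg.qr(rng.normal(size=(n, n)))[0], Sigma)
    for _ in range(200)
])

# Standardized score: how atypical is the observed cause-mechanism pairing
# relative to the group-randomized null?
z = (observed - null.mean()) / null.std()
```

A large |z| would indicate a cause–mechanism dependence that is unlikely under generic group transformations; here, with `A` drawn independently of `Sigma`, the score stays small.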
Crossing Generative Adversarial Networks for Cross-View Person Re-identification
Person re-identification (\textit{re-id}) refers to matching pedestrians
across disjoint, non-overlapping camera views. The most effective way to
match pedestrians undergoing significant visual variations is to seek
reliably invariant features that describe the person of interest
faithfully. Most existing methods are supervised, producing discriminative
features by relying on labeled image pairs in correspondence. However,
annotating pair-wise images is prohibitively labor-intensive, and thus
impractical for large-scale camera networks. Moreover, seeking comparable
representations across camera views demands a flexible model to address the
complex distributions of images. In this work, we study the co-occurrence
statistical patterns between pairs of images and propose a crossing
Generative Adversarial Network (Cross-GAN) for learning a joint
distribution over cross-image representations in an unsupervised manner. Given a
pair of person images, the proposed model consists of a variational
auto-encoder that encodes the pair into respective latent variables, a
cross-view alignment module that reduces the view disparity, and an
adversarial layer that seeks the joint distribution of the latent
representations. The learned latent representations are well aligned,
reflecting the co-occurrence patterns of
paired images. We empirically evaluate the proposed model on challenging
datasets; our results show the importance of joint invariant features in
improving the matching rates of person re-id compared to semi-supervised and
unsupervised state-of-the-art methods. Comment: 12 pages. arXiv admin note: text overlap with arXiv:1702.03431 by
another author
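The cross-view alignment step can be sketched independently of the full model: encode each camera view into a latent space and penalize the disparity between the two latent distributions. The numpy sketch below is a toy illustration with made-up dimensions, a one-layer stand-in for the VAE encoders, and a simple moment-matching penalty; it is not the paper's actual objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W, b):
    # Stand-in for a VAE encoder's mean head: a single affine layer + tanh.
    return np.tanh(x @ W + b)

# Hypothetical sizes: 64-d image features per view, 16-d latents, batch of 32.
d_in, d_z, batch = 64, 16, 32
Wa, Wb = rng.normal(0, 0.1, (d_in, d_z)), rng.normal(0, 0.1, (d_in, d_z))
ba, bb = np.zeros(d_z), np.zeros(d_z)

xa = rng.normal(size=(batch, d_in))           # view A features
xb = xa + rng.normal(0, 0.5, (batch, d_in))   # view B: same people + view noise

za, zb = encode(xa, Wa, ba), encode(xb, Wb, bb)

# Cross-view alignment term: penalize the disparity between the two latent
# distributions via their first and second moments.
align = (np.linalg.norm(za.mean(0) - zb.mean(0)) ** 2
         + np.linalg.norm(np.cov(za.T) - np.cov(zb.T)))
```

Minimizing such a term over the encoder weights would pull the two views' latent representations toward a shared distribution, which is the role the cross-view alignment plays in Cross-GAN.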
A Style-Based Generator Architecture for Generative Adversarial Networks
We propose an alternative generator architecture for generative adversarial
networks, borrowing from style transfer literature. The new architecture leads
to an automatically learned, unsupervised separation of high-level attributes
(e.g., pose and identity when trained on human faces) and stochastic variation
in the generated images (e.g., freckles, hair), and it enables intuitive,
scale-specific control of the synthesis. The new generator improves the
state-of-the-art in terms of traditional distribution quality metrics, leads to
demonstrably better interpolation properties, and also better disentangles the
latent factors of variation. To quantify interpolation quality and
disentanglement, we propose two new, automated methods that are applicable to
any generator architecture. Finally, we introduce a new, highly varied and
high-quality dataset of human faces. Comment: CVPR 2019 final version
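The scale-specific control described above rests on per-layer modulation of normalized feature maps. A minimal numpy sketch of adaptive instance normalization (AdaIN), the mechanism the style-based generator builds on; the dimensions and random "styles" here are illustrative stand-ins for the learned mapping network and affine transforms.

```python
import numpy as np

def adain(x, style_scale, style_bias, eps=1e-5):
    # x: feature maps of shape (channels, H, W).
    # Normalize each channel to zero mean / unit variance, then modulate
    # with per-channel style scale and bias (scale-specific control).
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True)
    normalized = (x - mean) / (std + eps)
    return style_scale[:, None, None] * normalized + style_bias[:, None, None]

rng = np.random.default_rng(1)
x = rng.normal(size=(8, 4, 4))   # 8 feature channels at 4x4 resolution
w = rng.normal(size=16)          # latent code after a mapping network
# In the real architecture a learned affine layer produces these; random here.
scale, bias = w[:8], w[8:]
y = adain(x, 1.0 + scale, bias)
```

Because the normalization erases each channel's statistics before the style is applied, the style injected at a given layer controls only that layer's scale of detail, which is what enables coarse-to-fine attribute separation.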
Biosignal Generation and Latent Variable Analysis with Recurrent Generative Adversarial Networks
The effectiveness of biosignal generation and data augmentation with
biosignal generative models based on generative adversarial networks (GANs),
which are a type of deep learning technique, was demonstrated in our previous
paper. GAN-based generative models learn only the mapping between an input
random distribution and the distribution of the training data. Therefore,
the relationship between the input and the generated data is unclear, and
the characteristics of the data generated by this model cannot be controlled.
This study proposes a method for generating time-series data based on GANs
and explores its ability to generate biosignals with specified classes and
characteristics. Moreover, in the proposed method, latent variables are
analyzed using canonical correlation analysis (CCA) to represent the
relationship between input and generated data as canonical loadings. Using
these loadings, we can control the characteristics of the data generated by the
proposed method. The influence of class labels on generated data is analyzed by
feeding the data interpolated between two class labels into the generator of
the proposed GANs. The CCA of the latent variables is shown to be an effective
method of controlling the generated data characteristics. We are able to model
the distribution of the time-series data without requiring domain-dependent
knowledge using the proposed method. Furthermore, it is possible to control the
characteristics of these data by analyzing the model trained using the proposed
method. To the best of our knowledge, this work is the first to generate
biosignals using GANs while controlling the characteristics of the generated
data.
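The latent-variable analysis step can be sketched independently of any GAN: given latent inputs X and features of the generated signals Y, CCA finds maximally correlated projections, and the loadings indicate which latent dimensions drive which output characteristics. A self-contained numpy sketch of whitening-plus-SVD CCA; the data are synthetic stand-ins, not biosignals.

```python
import numpy as np

def cca(X, Y, k=2):
    """Canonical correlation analysis via whitening + SVD.
    Returns the top-k canonical correlations and the loadings
    (correlations of each X variable with the canonical variates)."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = Xc.shape[0]
    Sxx = Xc.T @ Xc / n + 1e-8 * np.eye(Xc.shape[1])
    Syy = Yc.T @ Yc / n + 1e-8 * np.eye(Yc.shape[1])
    Sxy = Xc.T @ Yc / n
    Wx = np.linalg.inv(np.linalg.cholesky(Sxx))   # whitens X
    Wy = np.linalg.inv(np.linalg.cholesky(Syy))   # whitens Y
    U, s, _ = np.linalg.svd(Wx @ Sxy @ Wy.T)      # singular values = canon. corrs
    Zx = Xc @ Wx.T @ U[:, :k]                     # canonical variates of X
    loadings = np.array([
        [np.corrcoef(Xc[:, i], Zx[:, j])[0, 1] for j in range(k)]
        for i in range(Xc.shape[1])
    ])
    return s[:k], loadings

# Synthetic stand-in: latent inputs X; "generated-signal features" Y whose
# first dimension is driven almost entirely by X's first latent dimension.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
Y = rng.normal(size=(500, 3))
Y[:, 0] = X[:, 0] + 0.1 * rng.normal(size=500)

corrs, loadings = cca(X, Y)
```

Here the first canonical correlation is high and the loading of latent dimension 0 on the first canonical variate dominates, mirroring how the paper uses canonical loadings to identify which latent variables control which generated-data characteristics.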