Attribute-Guided Face Generation Using Conditional CycleGAN
We are interested in attribute-guided face generation: given a low-res face
input image and an attribute vector extracted from a high-res image (the
attribute image), our method generates a high-res face image for the low-res
input that satisfies the given attributes. To address this problem, we
condition the CycleGAN and propose the conditional CycleGAN, which is designed
to 1) handle unpaired training data, because the low-res/high-res training
images and the high-res attribute images do not necessarily align with each
other, and 2) allow easy control of the appearance of the generated face via
the input attributes.
We demonstrate impressive results with the attribute-guided conditional
CycleGAN, which can synthesize realistic face images whose appearance is easily
controlled by user-supplied attributes (e.g., gender, makeup, hair color,
eyeglasses). By using the attribute image as the identity to produce the
corresponding conditional vector, and by incorporating a face verification
network, the attribute-guided network becomes the identity-guided conditional
CycleGAN, which produces impressive and interesting results on identity
transfer. We demonstrate three applications of the identity-guided conditional
CycleGAN: identity-preserving face super-resolution, face swapping, and frontal
face generation, which consistently show the advantage of our new method.
Comment: ECCV 201
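The conditioning mechanism the abstract describes, feeding an attribute vector alongside the low-res input to an upsampling generator, can be made concrete with a short sketch. The PyTorch code below is not the authors' implementation; the layer sizes, the 18-dimensional attribute vector, and the tile-and-concatenate conditioning scheme are assumptions chosen only to illustrate the idea.

    # Minimal sketch (not the authors' code): a generator that upsamples a
    # low-res face while being conditioned on an attribute vector, in the
    # spirit of the attribute-guided conditional CycleGAN described above.
    # Layer widths and the conditioning scheme (tiling the attribute vector
    # and concatenating it with the image channels) are assumptions.
    import torch
    import torch.nn as nn

    class CondUpsampleGenerator(nn.Module):
        def __init__(self, attr_dim=18, base_ch=64):
            super().__init__()
            # input channels = image channels (3) + tiled attribute channels
            self.net = nn.Sequential(
                nn.Conv2d(3 + attr_dim, base_ch, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Upsample(scale_factor=2, mode="nearest"),
                nn.Conv2d(base_ch, base_ch, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Upsample(scale_factor=2, mode="nearest"),
                nn.Conv2d(base_ch, 3, 3, padding=1),
                nn.Tanh(),
            )

        def forward(self, lowres_img, attr_vec):
            # tile the attribute vector to the spatial size of the input
            b, _, h, w = lowres_img.shape
            attr_map = attr_vec.view(b, -1, 1, 1).expand(b, attr_vec.size(1), h, w)
            return self.net(torch.cat([lowres_img, attr_map], dim=1))

    # usage: 16x16 low-res faces + an 18-dim attribute vector -> 64x64 output
    g = CondUpsampleGenerator()
    x = torch.randn(4, 3, 16, 16)
    a = torch.randint(0, 2, (4, 18)).float()
    print(g(x, a).shape)  # torch.Size([4, 3, 64, 64])

Changing entries of the attribute vector at test time is what allows the generated face's appearance to be steered by the user in such a setup.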
VIGAN: Missing View Imputation with Generative Adversarial Networks
In an era when big data are becoming the norm, the concern is less with the
quantity of data and more with its quality and completeness. In many
disciplines, data are collected from heterogeneous sources, resulting in
multi-view or multi-modal datasets. The missing data problem has been
challenging to address in multi-view data analysis; in particular, when certain
samples miss an entire view of data, this creates the missing view problem.
Classic multiple imputation or matrix completion methods are hardly effective
here, because there is no information in the missing view on which to base the
imputation for such samples. The commonly used simple alternative of removing
samples with a missing view can dramatically reduce the sample size, thus
diminishing the statistical power of any subsequent analysis. In this paper, we
propose a novel
approach for view imputation via generative adversarial networks (GANs), which
we name VIGAN. This approach first treats each view as a separate domain and
identifies domain-to-domain mappings via a GAN using randomly sampled data from
each view, and then employs a multi-modal denoising autoencoder (DAE) to
reconstruct the missing view from the GAN outputs, based on paired data across
the views. By then optimizing the GAN and the DAE jointly, our model integrates
the learned domain mappings and view correspondences to effectively recover the
missing view. Empirical results on benchmark datasets validate the VIGAN
approach against the state of the art, and an evaluation of VIGAN in a genetic
study of substance use disorders further demonstrates the effectiveness and
usability of this approach in the life sciences.
Comment: 10 pages, 8 figures, conferenc
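As a rough illustration of the two components this abstract combines, a CycleGAN-style cross-view generator and a multi-modal denoising autoencoder, the PyTorch sketch below shows how a sample missing view B could be imputed: map view A to a synthetic view B with the generator, then refine the pair through the DAE. All dimensions, layer widths, and the MLP architecture are placeholder assumptions, not the authors' design.

    # Minimal sketch (assumptions throughout): a cross-view generator learned
    # CycleGAN-style on unpaired data, plus a multi-modal denoising
    # autoencoder (DAE) that refines the generator's output into the missing
    # view using paired data. Sizes and architectures are placeholders.
    import torch
    import torch.nn as nn

    def mlp(d_in, d_out, hidden=128):
        return nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU(),
                             nn.Linear(hidden, d_out))

    class VIGANSketch(nn.Module):
        def __init__(self, dim_a=50, dim_b=40, code=64):
            super().__init__()
            self.g_ab = mlp(dim_a, dim_b)        # view A -> view B generator
            self.g_ba = mlp(dim_b, dim_a)        # view B -> view A (cycle loss)
            self.enc = mlp(dim_a + dim_b, code)  # multi-modal DAE encoder
            self.dec_a = mlp(code, dim_a)        # DAE decoder for view A
            self.dec_b = mlp(code, dim_b)        # DAE decoder for view B

        def impute_b(self, x_a):
            # sample with view B missing: map A -> B with the generator,
            # then denoise the (A, generated B) pair through the DAE
            fake_b = self.g_ab(x_a)
            z = self.enc(torch.cat([x_a, fake_b], dim=1))
            return self.dec_b(z)

    m = VIGANSketch()
    x_a = torch.randn(8, 50)       # 8 samples that miss view B
    print(m.impute_b(x_a).shape)   # torch.Size([8, 40])

In the paper's joint training, the adversarial and cycle losses for the generators and the reconstruction loss for the DAE would be optimized together rather than in separate stages.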
