Age Progression and Regression with Spatial Attention Modules
Age progression and regression refer to aesthetically rendering a given
face image to present the effects of face aging and rejuvenation, respectively.
Although numerous studies have been conducted on this topic, there are two
major problems: 1) multiple models are usually trained to simulate different
age mappings, and 2) the photo-realism of generated face images is heavily
influenced by the variation of training images in terms of pose, illumination,
and background. To address these issues, in this paper, we propose a framework
based on conditional Generative Adversarial Networks (cGANs) to achieve age
progression and regression simultaneously. Particularly, since face aging and
rejuvenation are largely different in terms of image translation patterns, we
model these two processes using two separate generators, each dedicated to one
age changing process. In addition, we exploit spatial attention mechanisms to
limit image modifications to regions closely related to age changes, so that
images with high visual fidelity could be synthesized for in-the-wild cases.
Experiments on multiple datasets demonstrate the ability of our model in
synthesizing lifelike face images at desired ages with personalized features
well preserved, while keeping age-irrelevant regions unchanged.
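The spatial-attention mechanism described above can be sketched as a convex blend between the input face and the generator's output, gated by a per-pixel attention mask: where the mask is near zero, the original pixels pass through untouched. This is an illustrative NumPy sketch, not the paper's implementation; the function name and toy values are assumptions:

```python
import numpy as np

def attention_blend(input_img, generated_img, attention_mask):
    """Blend a generated image with the input using a spatial attention mask.

    Only regions where the mask is high are taken from the generator's
    output; elsewhere the original pixels pass through unchanged, which
    keeps age-irrelevant regions (background, clothing) intact.
    """
    # mask values are expected in [0, 1]
    return attention_mask * generated_img + (1.0 - attention_mask) * input_img

# Toy example: 2x2 grayscale "images"
x = np.array([[0.2, 0.8], [0.4, 0.6]])   # input face
g = np.array([[0.9, 0.1], [0.5, 0.5]])   # generator output
m = np.array([[1.0, 0.0], [0.5, 0.0]])   # attention: edit left column only
out = attention_blend(x, g, m)            # [[0.9, 0.8], [0.45, 0.6]]
```

Because the blend is the identity wherever the mask is zero, the attention map directly limits which regions the generator is allowed to modify.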
Learning Face Age Progression: A Pyramid Architecture of GANs
The two underlying requirements of face age progression, i.e. aging accuracy
and identity permanence, are not well studied in the literature. In this paper,
we present a novel generative adversarial network based approach. It separately
models the constraints for the intrinsic subject-specific characteristics and
the age-specific facial changes with respect to the elapsed time, ensuring that
the generated faces present desired aging effects while simultaneously keeping
personalized properties stable. Further, to generate more lifelike facial
details, high-level age-specific features conveyed by the synthesized face are
estimated by a pyramidal adversarial discriminator at multiple scales, which
simulates the aging effects in a finer manner. The proposed method is
applicable to diverse face samples in the presence of variations in pose,
expression, makeup, etc., and remarkably vivid aging effects are achieved. Both
visual fidelity and quantitative evaluations show that the approach advances
the state-of-the-art.
Comment: CVPR 2018. V4 and V2 are the same, i.e. the conference version; V3 is
a related but different work, which was mistakenly submitted and will be
submitted as a new arXiv paper.
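The pyramidal discriminator above scores the synthesized face at multiple scales, with coarse levels capturing global aging structure and fine levels capturing local texture. A minimal sketch of the multi-scale part, assuming a simple 2x2 average-pooling pyramid (names and pooling choice are illustrative, not the paper's code):

```python
import numpy as np

def downsample2x(img):
    """Halve spatial resolution by 2x2 average pooling (assumes even dims)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def pyramid_features(img, levels=3):
    """Collect the image at several scales; a pyramidal adversarial
    discriminator would score each level with its own critic head."""
    feats = [img]
    for _ in range(levels - 1):
        img = downsample2x(img)
        feats.append(img)
    return feats

img = np.arange(16, dtype=float).reshape(4, 4)
pyr = pyramid_features(img, levels=3)   # shapes: (4, 4), (2, 2), (1, 1)
```

In the full model each pyramid level would feed a separate discriminator branch, so adversarial gradients penalize implausible aging effects at every scale simultaneously.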
Neural Face Editing with Intrinsic Image Disentangling
Traditional face editing methods often require a number of sophisticated,
task-specific algorithms to be applied one after the other, a process that
is tedious, fragile, and computationally intensive. In this paper, we propose
an end-to-end generative adversarial network that infers a face-specific
disentangled representation of intrinsic face properties, including shape (i.e.
normals), albedo, and lighting, and an alpha matte. We show that this network
can be trained on "in-the-wild" images by incorporating an in-network
physically-based image formation module and appropriate loss functions. Our
disentangled latent representation allows for semantically relevant edits,
where one aspect of facial appearance can be manipulated while keeping
orthogonal properties fixed, and we demonstrate its use for a number of facial
editing applications.
Comment: CVPR 2017 oral.
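The in-network, physically-based image formation module described above can be approximated with Lambertian shading: the face is rendered as albedo times the dot product of unit surface normals with the light direction, then composited over the background using the alpha matte. A hedged NumPy sketch under that Lambertian assumption (function names and toy values are illustrative, not the paper's code):

```python
import numpy as np

def lambertian_shading(normals, light_dir):
    """Diffuse shading: dot product of unit surface normals with the
    (unit) light direction, clamped to non-negative values."""
    return np.clip(normals @ light_dir, 0.0, None)

def compose(albedo, normals, light_dir, alpha, background):
    """Physically-based image formation: shade the face from normals and
    lighting, modulate by albedo, then alpha-matte over the background."""
    shading = lambertian_shading(normals, light_dir)
    face = albedo * shading
    return alpha * face + (1.0 - alpha) * background

# Toy 1x2 image: first pixel faces the light, second faces sideways
normals = np.array([[[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]]])  # (1, 2, 3)
light = np.array([0.0, 0.0, 1.0])                          # frontal light
albedo = np.array([[0.5, 0.8]])
alpha = np.array([[1.0, 0.5]])                             # second pixel half-matted
background = np.array([[0.0, 0.2]])
out = compose(albedo, normals, light, alpha, background)   # [[0.5, 0.1]]
```

Because each factor (normals, albedo, lighting, matte) enters the render separately, editing one while holding the others fixed yields the semantically isolated edits the abstract describes.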