Synthesizing Normalized Faces from Facial Identity Features
We present a method for synthesizing a frontal, neutral-expression image of a
person's face given an input face photograph. This is achieved by learning to
generate facial landmarks and textures from features extracted from a
facial-recognition network. Unlike previous approaches, our encoding feature
vector is largely invariant to lighting, pose, and facial expression.
Exploiting this invariance, we train our decoder network using only frontal,
neutral-expression photographs. Since these photographs are well aligned, we
can decompose them into a sparse set of landmark points and aligned texture
maps. The decoder then predicts landmarks and textures independently and
combines them using a differentiable image warping operation. The resulting
images can be used for a number of applications, such as analyzing facial
attributes, adjusting exposure and white balance, or creating a 3-D avatar.
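The abstract's key mechanism is combining independently predicted landmarks and texture maps with a differentiable image warping operation. A minimal sketch of such a warp is bilinear sampling, which is smooth in the sampling coordinates and therefore lets gradients flow back to the landmark predictor. The NumPy implementation below is illustrative only (function and variable names are hypothetical), not the paper's actual operator:

```python
import numpy as np

def bilinear_warp(image, coords):
    """Sample `image` (H, W, C) at fractional pixel locations
    `coords` (H, W, 2), where coords[..., 0] is the row (y) and
    coords[..., 1] the column (x). Bilinear interpolation is
    piecewise-smooth in the coordinates, which is what makes the
    warp differentiable with respect to predicted landmarks."""
    H, W, _ = image.shape
    y = np.clip(coords[..., 0], 0, H - 1)
    x = np.clip(coords[..., 1], 0, W - 1)
    y0 = np.floor(y).astype(int); y1 = np.minimum(y0 + 1, H - 1)
    x0 = np.floor(x).astype(int); x1 = np.minimum(x0 + 1, W - 1)
    wy = (y - y0)[..., None]  # vertical interpolation weight
    wx = (x - x0)[..., None]  # horizontal interpolation weight
    top = image[y0, x0] * (1 - wx) + image[y0, x1] * wx
    bot = image[y1, x0] * (1 - wx) + image[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

In the paper's setting, `coords` would be derived from the predicted landmark points, so the texture is resampled into the geometry the landmarks describe.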
Learning Flow-based Feature Warping for Face Frontalization with Illumination Inconsistent Supervision
Despite recent advances in deep learning-based face frontalization methods,
photo-realistic and illumination preserving frontal face synthesis is still
challenging due to large pose and illumination discrepancy during training. We
propose a novel Flow-based Feature Warping Model (FFWM) which can learn to
synthesize photo-realistic and illumination preserving frontal images with
illumination inconsistent supervision. Specifically, an Illumination Preserving
Module (IPM) is proposed to learn illumination preserving image synthesis from
illumination inconsistent image pairs. IPM includes two pathways which
collaborate to ensure the synthesized frontal images are illumination
preserving and with fine details. Moreover, a Warp Attention Module (WAM) is
introduced to reduce the pose discrepancy in the feature level, and hence to
synthesize frontal images more effectively and preserve more details of profile
images. The attention mechanism in WAM helps reduce the artifacts caused by the
displacements between the profile and the frontal images. Quantitative and
qualitative experimental results show that our FFWM can synthesize
photo-realistic and illumination preserving frontal images and performs
favorably against state-of-the-art methods.

Comment: ECCV 2020. Code is available at: https://github.com/csyxwei/FFW