Learning Flow-based Feature Warping for Face Frontalization with Illumination Inconsistent Supervision
Despite recent advances in deep learning-based face frontalization methods, photo-realistic and illumination-preserving frontal face synthesis remains challenging due to the large pose and illumination discrepancies between training images. We propose a novel Flow-based Feature Warping Model (FFWM) that learns to synthesize photo-realistic, illumination-preserving frontal images even under illumination-inconsistent supervision. Specifically, an Illumination Preserving Module (IPM) is proposed to learn illumination-preserving image synthesis from illumination-inconsistent image pairs. The IPM comprises two pathways that collaborate to ensure the synthesized frontal images preserve illumination and retain fine details. Moreover, a Warp Attention Module (WAM) is introduced to reduce the pose discrepancy at the feature level, enabling more effective frontal image synthesis and better preservation of details from the profile images. The attention mechanism in WAM helps suppress artifacts caused by the displacements between the profile and frontal images. Quantitative and qualitative experimental results show that FFWM synthesizes photo-realistic, illumination-preserving frontal images and performs favorably against the state of the art.

Comment: ECCV 2020. Code is available at: https://github.com/csyxwei/FFW
Pixel Sampling for Style Preserving Face Pose Editing
Existing auto-encoder-based face pose editing methods focus primarily on preserving identity during pose synthesis, but are less able to preserve the image style, i.e., its color, brightness, saturation, etc. In this paper, we take advantage of the well-known frontal/profile optical illusion and present a novel two-stage approach that resolves this dilemma by casting face pose manipulation as a face inpainting task. By selectively sampling pixels from the input face and slightly adjusting their relative locations with the proposed "Pixel Attention Sampling" module, the editing result faithfully keeps both the identity information and the image style unchanged. By leveraging a high-dimensional embedding at the inpainting stage, finer details are generated. Further, with 3D facial landmarks as guidance, our method can manipulate face pose in three degrees of freedom, i.e., yaw, pitch, and roll, allowing more flexible pose editing than the yaw-only control typically offered by the current state of the art. Both qualitative and quantitative evaluations validate the superiority of the proposed approach.
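The two-stage structure the abstract describes (pixel sampling, then inpainting) can be sketched as below, assuming PyTorch. The `sampler` and `inpainter` modules are hypothetical placeholders for the paper's Pixel Attention Sampling module and its inpainting network; the interface shown is my assumption, not the authors' code.

```python
# Structural sketch of "pose editing as inpainting", assuming PyTorch.
# `sampler` and `inpainter` are hypothetical stand-ins, not the paper's code.
import torch
import torch.nn as nn

class PoseEditPipeline(nn.Module):
    """Two-stage pose editing: re-position input pixels, then inpaint."""

    def __init__(self, sampler: nn.Module, inpainter: nn.Module):
        super().__init__()
        self.sampler = sampler      # stage 1: pixel sampling
        self.inpainter = inpainter  # stage 2: hole filling

    def forward(self, face, landmarks_3d):
        # Stage 1: move pixels sampled from the input face toward the
        # target pose (yaw/pitch/roll encoded by the 3D landmarks).
        # Output colors are copied from the input, so image style
        # (color, brightness, saturation) is preserved by construction.
        coarse, hole_mask = self.sampler(face, landmarks_3d)
        # Stage 2: treat the disoccluded pixels as holes and inpaint them,
        # conditioning on an embedding of the coarse result for detail.
        return self.inpainter(coarse, hole_mask)
```

Casting the pose change as inpainting is what separates the style-preserving copy step (stage 1) from the generative step (stage 2), so only the newly exposed regions are synthesized.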