FaR-GAN for One-Shot Face Reenactment
Animating a static face image with target facial expressions and movements is
important in the area of image editing and movie production. This face
reenactment process is challenging due to the complex geometry and movement of
human faces. Previous work usually requires a large set of images from the same
person to model the appearance. In this paper, we present a one-shot face
reenactment model, FaR-GAN, that takes only one face image of any given source
identity and a target expression as input, and then produces a face image of
the same source identity but with the target expression. The proposed method
makes no assumptions about the source identity, facial expression, head pose,
or even image background. We evaluate our method on the VoxCeleb1 dataset and
show that our method generates higher-quality face images than the compared
methods.

Comment: This paper has been accepted to the AI for Content Creation workshop
at CVPR 202