Memories are One-to-Many Mapping Alleviators in Talking Face Generation
Talking face generation aims at generating photo-realistic video portraits of
a target person driven by input audio. Because the mapping from the input
audio to the output video is one-to-many (e.g., one speech content may
correspond to multiple feasible visual appearances), learning a deterministic
mapping as in previous works introduces ambiguity during training and thus
leads to inferior visual results. Although this one-to-many mapping can be
partially alleviated by a two-stage framework (i.e., an audio-to-expression
model followed by a neural-rendering model), it remains insufficient, since
the prediction is produced without enough information (e.g., emotions,
wrinkles, etc.). In this
paper, we propose MemFace to complement the missing information with an
implicit memory and an explicit memory aligned with the two stages,
respectively. More specifically, the implicit memory is employed in the
audio-to-expression model to capture high-level semantics in the
audio-expression shared space, while the explicit memory is employed in the
neural-rendering model to help synthesize pixel-level details. Our experimental
results show that the proposed MemFace consistently and significantly
surpasses state-of-the-art results across multiple scenarios.

Comment: Project page: see https://memoryface.github.io
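The abstract does not specify how the implicit memory is queried; a common
realization of such a memory is a learnable key-value bank retrieved via soft
attention. Below is a minimal sketch under that assumption (not the authors'
code); the names `ImplicitMemory` and `AudioToExpression`, and all dimensions,
are illustrative only.

```python
# Minimal sketch of a memory-augmented audio-to-expression stage, assuming
# attention-based retrieval from a learnable key-value bank. Illustrative
# only; names and dimensions are hypothetical, not from the paper.
import torch
import torch.nn as nn

class ImplicitMemory(nn.Module):
    """Learnable key-value memory queried by audio features (hypothetical)."""
    def __init__(self, num_slots: int = 1000, dim: int = 256):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_slots, dim))    # retrieval keys
        self.values = nn.Parameter(torch.randn(num_slots, dim))  # stored semantics

    def forward(self, query: torch.Tensor) -> torch.Tensor:
        # query: (batch, seq, dim); soft-attend over all memory slots
        attn = torch.softmax(query @ self.keys.t() / self.keys.shape[-1] ** 0.5, dim=-1)
        return attn @ self.values  # (batch, seq, dim) retrieved complement

class AudioToExpression(nn.Module):
    """Stage 1: map audio features to expression coefficients, aided by memory."""
    def __init__(self, audio_dim: int = 80, dim: int = 256, expr_dim: int = 64):
        super().__init__()
        self.encode = nn.Linear(audio_dim, dim)
        self.memory = ImplicitMemory(dim=dim)
        self.decode = nn.Linear(2 * dim, expr_dim)

    def forward(self, audio: torch.Tensor) -> torch.Tensor:
        h = self.encode(audio)                     # project audio into shared space
        m = self.memory(h)                         # retrieve high-level semantics
        return self.decode(torch.cat([h, m], -1))  # fuse and predict expressions

# Usage: 30 frames of 80-dim audio features -> 64-dim expression coefficients.
expr = AudioToExpression()(torch.randn(2, 30, 80))
print(expr.shape)  # torch.Size([2, 30, 64])
```

The explicit memory in the neural-rendering stage would follow the same
retrieval pattern, but with pixel-level appearance features as values.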