Hierarchical Cross-Modal Talking Face Generation with Dynamic Pixel-Wise Loss
We devise a cascade GAN approach to generate talking face video, which is
robust to different face shapes, view angles, facial characteristics, and noisy
audio conditions. Instead of learning a direct mapping from audio to video
frames, we propose first to transfer audio to high-level structure, i.e., the
facial landmarks, and then to generate video frames conditioned on the
landmarks. Compared to a direct audio-to-image approach, our cascade approach
avoids fitting spurious correlations between audiovisual signals that are
irrelevant to the speech content. Humans are sensitive to temporal
discontinuities and subtle artifacts in video. To avoid such pixel-jittering
problems and to force the network to focus on audiovisual-correlated regions,
we propose a novel dynamically adjustable pixel-wise loss with an attention
mechanism. Furthermore, to generate a sharper image with well-synchronized
facial movements, we propose a novel regression-based discriminator structure,
which considers sequence-level information along with frame-level information.
Thorough experiments on several datasets and real-world samples demonstrate
that our method obtains significantly better results than state-of-the-art
methods in both quantitative and qualitative comparisons.
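As a rough illustration of the dynamically adjustable pixel-wise loss, one way to realize it is to weight the per-pixel reconstruction error by a learned attention map, so that audiovisual-correlated regions such as the mouth dominate the gradient. The sketch below is an assumption-laden stand-in, not the paper's implementation; the function name, the softmax normalization, and the shape conventions are all illustrative.

```python
import torch

def attention_weighted_l1(pred, target, attn_logits):
    # pred, target: (B, C, H, W) generated and ground-truth frames
    # attn_logits:  (B, 1, H, W) unnormalized per-pixel scores, e.g. from a
    # small conv head trained jointly with the generator (an assumption here)
    attn = torch.softmax(attn_logits.flatten(2), dim=-1).view_as(attn_logits)
    per_pixel = (pred - target).abs().mean(dim=1, keepdim=True)  # (B, 1, H, W)
    # weight the reconstruction error so attended regions dominate the loss
    return (attn * per_pixel).sum(dim=(1, 2, 3)).mean()
```

Because the attention map is learned rather than fixed, the weighting can shift over training, which is one plausible reading of "dynamically adjustable".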
Emotional Talking Head Generation based on Memory-Sharing and Attention-Augmented Networks
Given an audio clip and a reference face image, the goal of talking head
generation is to synthesize a high-fidelity talking head video. Although some
audio-driven methods for generating talking head videos have achieved
promising results, most of them focus only on lip-audio synchronization and
lack the ability to reproduce the facial expressions of the target person. To
this end, we propose a talking head generation model
consisting of a Memory-Sharing Emotion Feature extractor (MSEF) and an
Attention-Augmented Translator based on U-net (AATU). Firstly, MSEF can extract
implicit emotional auxiliary features from audio to estimate more accurate
emotional face landmarks. Secondly, AATU acts as a translator between the
estimated landmarks and the photo-realistic video frames. Extensive qualitative
and quantitative experiments demonstrate the superiority of the proposed
method over previous works. Code will be made publicly available.
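To make the two-stage design concrete, a hypothetical skeleton of the first stage might map an audio feature sequence to emotion-aware landmarks by letting the audio representation attend to a shared, learnable memory bank. Everything below (module name, GRU encoder, memory size, landmark count) is an assumption for illustration, not the authors' code.

```python
import torch
import torch.nn as nn

class MSEFSketch(nn.Module):
    """Toy stand-in for a memory-sharing emotion feature extractor:
    audio features attend to a learnable memory bank, and the fused
    representation is regressed to per-frame facial landmarks."""
    def __init__(self, audio_dim=80, hidden=256, mem_slots=32, n_landmarks=68):
        super().__init__()
        self.rnn = nn.GRU(audio_dim, hidden, batch_first=True)
        self.memory = nn.Parameter(torch.randn(mem_slots, hidden))
        self.head = nn.Linear(hidden, n_landmarks * 2)

    def forward(self, audio_feats):                       # (B, T, audio_dim)
        h, _ = self.rnn(audio_feats)                      # (B, T, hidden)
        attn = torch.softmax(h @ self.memory.T, dim=-1)   # (B, T, mem_slots)
        ctx = attn @ self.memory                          # (B, T, hidden)
        out = self.head(h + ctx)                          # (B, T, 2*n_landmarks)
        return out.view(h.shape[0], h.shape[1], -1, 2)    # (B, T, n_landmarks, 2)
```

The second stage (AATU) would then translate the predicted landmarks, together with the reference image, into photo-realistic frames via an attention-augmented U-net.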
Masked Lip-Sync Prediction by Audio-Visual Contextual Exploitation in Transformers
Previous studies have explored generating accurately lip-synced talking faces
for arbitrary targets given audio conditions. However, most of them deform or
generate the whole facial area, leading to unrealistic results. In this work,
we delve into the formulation of altering only the mouth shapes of the target
person. This requires masking a large percentage of the original image and
seamlessly inpainting it with the aid of audio and reference frames. To this
end, we propose the Audio-Visual Context-Aware Transformer (AV-CAT) framework,
which produces accurate lip-sync with photo-realistic quality by predicting the
masked mouth shapes. Our key insight is to thoroughly exploit the contextual
information available in the audio and visual modalities with carefully
designed Transformers. Specifically, we propose a convolution-Transformer
hybrid backbone and design an attention-based fusion strategy for filling the
masked parts. It uniformly attends to the textural information of the unmasked
regions and the reference frame. Semantic audio information is then
incorporated to enhance the self-attention computation. Additionally, a
refinement network with audio injection improves both image and lip-sync
quality. Extensive experiments validate that our model can generate
high-fidelity lip-synced results for arbitrary subjects.Comment: Accepted to SIGGRAPH Asia 2022 (Conference Proceedings). Project
page: https://hangz-nju-cuhk.github.io/projects/AV-CA
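To illustrate the fusion idea, a minimal, assumed sketch is given below: tokens from the masked target frame first cross-attend to reference-frame tokens to recover texture and identity, then to audio tokens to obtain mouth-shape cues. Token shapes, module names, and the two-step attention order are guesses for exposition; AV-CAT's actual convolution-Transformer hybrid backbone and refinement network are more involved.

```python
import torch
import torch.nn as nn

class MaskedMouthFusion(nn.Module):
    """Toy audio-visual fusion block for masked mouth inpainting."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.visual_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.audio_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, masked_tok, ref_tok, audio_tok):
        # masked_tok: (B, Nm, D) tokens from the masked target frame
        # ref_tok:    (B, Nr, D) tokens from an unmasked reference frame
        # audio_tok:  (B, Na, D) tokens from the driving audio window
        x, _ = self.visual_attn(masked_tok, ref_tok, ref_tok)  # texture/identity
        x = self.norm1(masked_tok + x)
        y, _ = self.audio_attn(x, audio_tok, audio_tok)        # mouth-shape cues
        return self.norm2(x + y)
```

A decoder would then map the fused tokens back to pixels for the masked region, with a refinement pass improving image and lip-sync quality.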