92 research outputs found

    Audio-Driven Talking Face Generation with Diverse yet Realistic Facial Animations

    Full text link
    Audio-driven talking face generation, which aims to synthesize talking faces with realistic facial animations (including accurate lip movements, vivid facial expression details and natural head poses) corresponding to the audio, has achieved rapid progress in recent years. However, most existing work focuses on generating lip movements only, without handling the closely correlated facial expressions, which greatly degrades the realism of the generated faces. This paper presents DIRFA, a novel method that can generate talking faces with diverse yet realistic facial animations from the same driving audio. To accommodate fair variation of plausible facial animations for the same audio, we design a transformer-based probabilistic mapping network that models the variational facial animation distribution conditioned on the input audio and autoregressively converts the audio signals into a facial animation sequence. In addition, we introduce a temporally-biased mask into the mapping network, which allows modeling the temporal dependency of facial animations and producing temporally smooth facial animation sequences. With the generated facial animation sequence and a source image, photo-realistic talking faces can be synthesized with a generic generation network. Extensive experiments show that DIRFA can generate talking faces with realistic facial animations effectively.
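    The temporally-biased mask described above can be sketched as an additive attention mask that is causal (no attention to future frames) and penalizes attention to distant past frames. The linear ALiBi-style penalty and the `slope` parameter here are illustrative assumptions, not DIRFA's actual formulation.

```python
import math

def temporally_biased_mask(seq_len, slope=0.5):
    """Return a seq_len x seq_len additive attention mask.

    mask[t][s] = -slope * (t - s) for s <= t (distant past frames penalized),
    and -inf for s > t (causal: frame t cannot attend to the future).
    """
    neg_inf = float("-inf")
    mask = []
    for t in range(seq_len):
        row = [(-slope * (t - s)) if s <= t else neg_inf for s in range(seq_len)]
        mask.append(row)
    return mask

mask = temporally_biased_mask(4)
```

    Added to attention logits before the softmax, such a mask biases each frame toward its recent predecessors, which is one way to encourage temporally smooth output sequences.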

    POCE: Pose-Controllable Expression Editing

    Full text link
    Facial expression editing has attracted increasing attention with the advance of deep neural networks in recent years. However, most existing methods suffer from compromised editing fidelity and limited usability, as they either ignore pose variations (unrealistic editing) or require paired training data (not easy to collect) for pose control. This paper presents POCE, an innovative pose-controllable expression editing network that can generate realistic facial expressions and head poses simultaneously from just unpaired training images. POCE achieves accessible and realistic pose-controllable expression editing by mapping face images into UV space, where facial expressions and head poses can be disentangled and edited separately. POCE has two novel designs. The first is self-supervised UV completion, which completes UV maps sampled under different head poses that often suffer from self-occlusions and missing facial texture. The second is weakly-supervised UV editing, which generates new facial expressions with minimal modification of facial identity, where the synthesized expression can be controlled by either an expression label or directly transplanted from a reference UV map via feature transfer. Extensive experiments show that POCE can learn from unpaired face images effectively, and the learned model can generate realistic and high-fidelity facial expressions under various new poses.
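    The UV-space expression transplant described above can be sketched as a masked blend: texels inside an expression region are taken from a reference UV map while identity is preserved elsewhere. The flat-list representation, the precomputed `region_mask`, and the simple alpha blend are illustrative assumptions, not POCE's actual feature-transfer network.

```python
def transfer_expression(source_uv, reference_uv, region_mask, alpha=1.0):
    """Blend expression-region texels from reference into source.

    All inputs are flat lists of equal length; region_mask holds 0/1 weights.
    Identity is preserved wherever the mask is 0.
    """
    assert len(source_uv) == len(reference_uv) == len(region_mask)
    return [
        (1 - alpha * m) * s + alpha * m * r
        for s, r, m in zip(source_uv, reference_uv, region_mask)
    ]

out = transfer_expression([0.2, 0.8, 0.5], [0.9, 0.1, 0.4], [0, 1, 1], alpha=1.0)
```

    Because both faces live in a shared, pose-free UV parameterization, the same mask aligns across identities, which is what makes this kind of direct transplant plausible.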

    Auto-regressive Image Synthesis with Integrated Quantization

    Full text link
    Deep generative models have achieved conspicuous progress in realistic image synthesis with multifarious conditional inputs, while generating diverse yet high-fidelity images remains a grand challenge in conditional image generation. This paper presents a versatile framework for conditional image generation that incorporates the inductive bias of CNNs and the powerful sequence modeling of auto-regression, naturally leading to diverse image generation. Instead of independently quantizing the features of multiple domains as in prior research, we design an integrated quantization scheme with a variational regularizer that mingles the feature discretization across domains and markedly boosts auto-regressive modeling performance. Notably, the variational regularizer regularizes feature distributions in incomparable latent spaces by penalizing the intra-domain variations of distributions. In addition, we design a Gumbel sampling strategy that incorporates distribution uncertainty into the auto-regressive training procedure. The Gumbel sampling substantially mitigates exposure bias, which often causes misalignment between the training and inference stages and severely impairs inference performance. Extensive experiments over multiple conditional image generation tasks show that our method achieves superior diverse image generation performance, qualitatively and quantitatively, as compared with the state-of-the-art. Comment: Accepted to ECCV 2022 as an oral presentation.
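    The Gumbel sampling idea above can be sketched with the classic Gumbel-max trick: adding Gumbel(0,1) noise to the logits and taking the argmax draws a code index from the softmax distribution, so training sees sampled rather than only greedy codes. This is a minimal sketch of the trick itself; the paper's exact integration into training may differ.

```python
import math
import random

def gumbel_sample(logits, rng=random):
    """Draw an index from softmax(logits) via the Gumbel-max trick."""
    noisy = [l - math.log(-math.log(rng.random())) for l in logits]
    return max(range(len(noisy)), key=noisy.__getitem__)

random.seed(0)
counts = [0, 0, 0]
for _ in range(2000):
    counts[gumbel_sample([2.0, 0.0, 0.0])] += 1
```

    Over many draws, the empirical frequencies approach softmax([2, 0, 0]), so the dominant code is chosen most often but the alternatives still appear, exposing the model to its own sampling uncertainty.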

    Pose-Free Neural Radiance Fields via Implicit Pose Regularization

    Full text link
    Pose-free neural radiance field (NeRF) methods aim to train NeRF with unposed multi-view images, and have achieved impressive success in recent years. Most existing works share the pipeline of first training a coarse pose estimator with rendered images, followed by a joint optimization of the estimated poses and the neural radiance field. However, as the pose estimator is trained with only rendered images, its pose estimates are usually biased or inaccurate for real images due to the domain gap between real and rendered images, leading to poor robustness of pose estimation for real images and, further, to local minima in the joint optimization. We design IR-NeRF, an innovative pose-free NeRF that introduces implicit pose regularization to refine the pose estimator with unposed real images and improve the robustness of pose estimation for real images. Given a collection of 2D images of a specific scene, IR-NeRF constructs a scene codebook that stores scene features and implicitly captures the scene-specific pose distribution as a prior. The robustness of pose estimation can thus be promoted with the scene prior, according to the rationale that a 2D real image can be well reconstructed from the scene codebook only when its estimated pose lies within the pose distribution. Extensive experiments show that IR-NeRF achieves superior novel view synthesis and consistently outperforms the state-of-the-art across multiple synthetic and real datasets. Comment: Accepted by ICCV 2023.
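    The codebook rationale above can be sketched with a toy nearest-entry reconstruction: a view whose estimated pose lies within the scene's pose distribution should reconstruct well (low error) from the codebook, while an implausible pose should not. Nearest-entry distance here is an illustrative stand-in for the paper's actual feature-level reconstruction.

```python
def reconstruction_error(feature, codebook):
    """Distance from `feature` to its nearest codebook entry (lists of floats)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(dist(feature, entry) for entry in codebook)

codebook = [[0.0, 0.0], [1.0, 1.0]]
in_distribution = reconstruction_error([0.9, 1.1], codebook)      # near an entry
out_of_distribution = reconstruction_error([5.0, -3.0], codebook) # far from all
```

    Penalizing this error during pose refinement would push estimated poses toward the region the codebook can actually explain, which is the regularization effect the abstract describes.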

    GMLight: Lighting Estimation via Geometric Distribution Approximation

    Full text link
    Lighting estimation from a single image is an essential yet challenging task in computer vision and computer graphics. Existing works estimate lighting by regressing representative illumination parameters or by generating illumination maps directly. However, these methods often suffer from poor accuracy and generalization. This paper presents Geometric Mover's Light (GMLight), a lighting estimation framework that employs a regression network and a generative projector for effective illumination estimation. We parameterize illumination scenes in terms of the geometric light distribution, light intensity, ambient term, and auxiliary depth, and estimate them as a pure regression task. Inspired by the earth mover's distance, we design a novel geometric mover's loss to guide the accurate regression of light distribution parameters. With the estimated lighting parameters, the generative projector synthesizes panoramic illumination maps with realistic appearance and frequency. Extensive experiments show that GMLight achieves accurate illumination estimation and superior fidelity in relighting for 3D object insertion. Comment: 12 pages, 11 figures. arXiv admin note: text overlap with arXiv:2012.1111
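    The earth mover's distance that motivates the geometric mover's loss can be sketched in its simplest form: for two histograms over the same 1D equal-width bins, the EMD is the accumulated mass that must flow past each bin boundary. The paper's loss operates on spherical light distributions; this shows only the underlying idea.

```python
def emd_1d(p, q):
    """Earth mover's distance between two histograms on the same 1D bins
    (unit bin spacing, equal total mass)."""
    assert abs(sum(p) - sum(q)) < 1e-9, "distributions must have equal mass"
    total, carry = 0.0, 0.0
    for pi, qi in zip(p, q):
        carry += pi - qi       # mass that must flow to the next bin
        total += abs(carry)
    return total

# Moving one unit of mass across two bins costs 2.
d = emd_1d([1.0, 0.0, 0.0], [0.0, 0.0, 1.0])
```

    Unlike a per-bin L2 loss, this distance grows with how far light mass is misplaced, which is why it suits regressing a geometric light distribution.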

    Ferret badger rabies origin and its revisited importance as potential source of rabies transmission in Southeast China

    Get PDF
    Background: The frequent occurrence of ferret badger-associated human rabies cases in southeast China highlights the lack of laboratory-based surveillance and urges revisiting the potential importance of this animal in rabies transmission. To determine whether ferret badgers actually contribute to human and dog rabies cases, and the possible origin of ferret badger-associated rabies in the region, an active rabies survey was conducted to determine the frequency of rabies infection and seroprevalence in dogs and ferret badgers.
    Methods: A retrospective survey of rabies epidemics was performed in Zhejiang, Jiangxi and Anhui provinces in southeast China. Brain tissues from ferret badgers and dogs were assayed by the fluorescent antibody test. Rabies virus was isolated and sequenced for phylogenetic analysis. Sera from ferret badgers and dogs were titrated using the rabies virus neutralizing antibody (VNA) test.
    Results: Ferret badgers presented a higher percentage of rabies seroconversion than dogs in the endemic region, reaching a maximum of 95% in the collected samples. Nine ferret badger-associated rabies viruses were isolated, sequenced, and phylogenetically clustered as a separate group. Nucleotide sequences revealed 99.4-99.8% homology within the ferret badger isolates, and 83-89% homology to the dog isolates in the nucleoprotein and glycoprotein genes from the same rabies endemic regions.
    Conclusions: Our data suggest that ferret badger-associated rabies has likely formed an independent enzootic that originated from dogs during the long-term rabies endemic in southeast China. The eventual role of ferret badger rabies in public health remains unclear. However, management of ferret badger bites, and rabies awareness and control in the affected regions, are immediate needs.
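    The percent-homology figures quoted above boil down to pairwise nucleotide identity over aligned sequences. A minimal sketch for two pre-aligned sequences of equal length follows; real analyses use proper alignment and phylogenetics tools, so this is illustrative only, and the sequences shown are invented examples.

```python
def percent_identity(seq_a, seq_b):
    """Percent identity between two aligned nucleotide sequences."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned to equal length"
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# One mismatch in ten aligned positions -> 90% identity.
pid = percent_identity("ATGCATGCAT", "ATGCTTGCAT")
```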

    Deep learning for facial expression editing

    No full text
    In this day and age of digital media, facial expression editing, which aims to transform the facial expression of a source facial image into a desired one without changing the face identity, has attracted increasing interest from both academic and industrial communities due to its wide applicability. Automatic facial expression editing has been explored extensively with the prevalence of generative adversarial networks in recent years. Although existing research works have achieved very promising progress, the task of facial expression editing still faces four major challenges: unsatisfactory editing quality, constrained data annotation, limited controllability, and multi-modality. This thesis focuses on these challenges and introduces several novel deep-learning-based techniques to alleviate them. Extensive experiments show that the proposed approaches achieve superior performance in facial expression editing. Doctor of Philosophy thesis.