Neural Actor: Neural Free-view Synthesis of Human Actors with Pose Control
We propose Neural Actor (NA), a new method for high-quality synthesis of humans from arbitrary viewpoints and under arbitrary controllable poses. Our method is built upon recent neural scene representation and rendering works that learn representations of geometry and appearance from only 2D images. While existing works have demonstrated compelling rendering of static scenes and playback of dynamic scenes, photo-realistic reconstruction and rendering of humans with neural implicit methods, in particular under user-controlled novel poses, remains difficult. To address this problem, we utilize a coarse body model as a proxy to unwarp the surrounding 3D space into a canonical pose. A neural radiance field learns pose-dependent geometric deformations and pose- and view-dependent appearance effects in the canonical space from multi-view video input. To synthesize novel views of high-fidelity dynamic geometry and appearance, we leverage 2D texture maps defined on the body model as latent variables for predicting residual deformations and the dynamic appearance. Experiments demonstrate that our method achieves better quality than state-of-the-art methods on playback as well as novel pose synthesis, and can even generalize well to new poses that starkly differ from the training poses. Furthermore, our method also supports body shape control of the synthesized results.
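A minimal sketch of the unwarping step described above, assuming a nearest-vertex inverse-LBS mapping against the coarse body model (the function name, the nearest-vertex heuristic, and all variable names are illustrative assumptions, not the authors' code):

import numpy as np

def unwarp_to_canonical(points, verts_posed, skin_weights, bone_transforms):
    """points: (N, 3) samples in posed space; verts_posed: (V, 3) posed body-model
    vertices; skin_weights: (V, J) LBS weights; bone_transforms: (J, 4, 4)."""
    # 1. Find the nearest body-model vertex for every sample point.
    d = np.linalg.norm(points[:, None, :] - verts_posed[None, :, :], axis=-1)
    nn = d.argmin(axis=1)                                            # (N,)
    # 2. Blend the bone transforms with that vertex's skinning weights.
    T = np.einsum('nj,jab->nab', skin_weights[nn], bone_transforms)  # (N, 4, 4)
    # 3. Invert the blended transform to carry each point into canonical space,
    #    where the pose-conditioned neural radiance field is queried.
    p_h = np.concatenate([points, np.ones((len(points), 1))], axis=-1)
    return np.einsum('nab,nb->na', np.linalg.inv(T), p_h)[:, :3]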
AniPortraitGAN: Animatable 3D Portrait Generation from 2D Image Collections
Previous animatable 3D-aware GANs for human generation have primarily focused on either the human head or the full body. However, head-only videos are relatively uncommon in real life, and full-body generation typically does not deal with facial expression control and still has challenges in generating high-quality results. Towards applicable video avatars, we present an animatable 3D-aware GAN that generates portrait images with controllable facial expression, head pose, and shoulder movements. It is a generative model trained on unstructured 2D image collections without using 3D or video data. For this new task, we base our method on the generative radiance manifold representation and equip it with learnable facial and head-shoulder deformations. A dual-camera rendering and adversarial learning scheme is proposed to improve the quality of the generated faces, which is critical for portrait images. A pose deformation processing network is developed to generate plausible deformations for challenging regions such as long hair. Experiments show that our method, trained on unstructured 2D images, can generate diverse and high-quality 3D portraits with the desired control over different properties.
Comment: SIGGRAPH Asia 2023. Project Page: https://yuewuhkust.github.io/AniPortraitGAN
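A rough sketch of how the dual-camera adversarial scheme might look, assuming the generator is rendered once with a portrait camera and once with a tighter face camera, each feeding its own discriminator (the module names, signatures, and the non-saturating loss are assumptions, not the paper's API):

import torch.nn.functional as F

def generator_step(G, D_portrait, D_face, z, portrait_cam, face_cam):
    img_portrait = G(z, camera=portrait_cam)  # head-and-shoulders framing
    img_face = G(z, camera=face_cam)          # close-up framing of the face region
    # Non-saturating GAN loss on both renderings; the extra face-camera term
    # concentrates adversarial supervision on the face, which the abstract
    # notes is critical for portrait quality.
    return (F.softplus(-D_portrait(img_portrait)).mean()
            + F.softplus(-D_face(img_face)).mean())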
GANHead: Towards Generative Animatable Neural Head Avatars
To bring digital avatars into people's lives, it is highly demanded to
efficiently generate complete, realistic, and animatable head avatars. This
task is challenging, and it is difficult for existing methods to satisfy all
the requirements at once. To achieve these goals, we propose GANHead
(Generative Animatable Neural Head Avatar), a novel generative head model that
takes advantages of both the fine-grained control over the explicit expression
parameters and the realistic rendering results of implicit representations.
Specifically, GANHead represents coarse geometry, fine-gained details and
texture via three networks in canonical space to obtain the ability to generate
complete and realistic head avatars. To achieve flexible animation, we define
the deformation filed by standard linear blend skinning (LBS), with the learned
continuous pose and expression bases and LBS weights. This allows the avatars
to be directly animated by FLAME parameters and generalize well to unseen poses
and expressions. Compared to state-of-the-art (SOTA) methods, GANHead achieves
superior performance on head avatar generation and raw scan fitting.Comment: Camera-ready for CVPR 2023. Project page:
https://wsj-sjtu.github.io/GANHead
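One consistent way to write the deformation field the abstract describes, mirroring FLAME's standard LBS formulation with the bases and skinning weights learned as continuous functions of the canonical point (the notation here is ours, not the paper's):

x_d = \sum_{j=1}^{J} w_j(x_c) \, G_j(\theta) \big( x_c + B_P(x_c; \theta) + B_E(x_c; \psi) \big)

where x_c is a canonical point, G_j(\theta) are the joint transforms induced by the FLAME pose \theta, B_P and B_E are the learned continuous pose and expression offset bases, \psi are the FLAME expression parameters, and w_j(x_c) are the learned LBS weights.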
VeRi3D: Generative Vertex-based Radiance Fields for 3D Controllable Human Image Synthesis
Unsupervised learning of 3D-aware generative adversarial networks has lately
made much progress. Some recent work demonstrates promising results of learning
human generative models using neural articulated radiance fields, yet their
generalization ability and controllability lag behind parametric human models,
i.e., they do not perform well when generalizing to novel pose/shape and are
not part controllable. To solve these problems, we propose VeRi3D, a generative
human vertex-based radiance field parameterized by vertices of the parametric
human template, SMPL. We map each 3D point to the local coordinate system
defined on its neighboring vertices, and use the corresponding vertex feature
and local coordinates for mapping it to color and density values. We
demonstrate that our simple approach allows for generating photorealistic human
images with free control over camera pose, human pose, shape, as well as
enabling part-level editing
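A minimal sketch of the vertex-based conditioning described above, assuming a query point is expressed in the local frames of its K nearest SMPL vertices and the per-vertex predictions are blended by inverse distance (the names, the K-nearest choice, and the blending rule are illustrative assumptions, not the authors' method):

import numpy as np

def query_vertex_field(x, verts, vert_feats, vert_frames, decoder, k=4):
    """x: (3,) query point; verts: (V, 3) SMPL vertices; vert_feats: (V, C)
    learned per-vertex codes; vert_frames: (V, 3, 3) local rotation frames;
    decoder: callable mapping (k, C) features and (k, 3) coords to (k, 4)."""
    d = np.linalg.norm(verts - x, axis=-1)
    idx = np.argsort(d)[:k]                                   # K nearest vertices
    # Express the point in each neighboring vertex's local coordinate system.
    local = np.einsum('kab,kb->ka', vert_frames[idx], x - verts[idx])
    # Decode (feature, local coordinate) pairs, then blend by inverse distance.
    w = 1.0 / (d[idx] + 1e-8)
    rgb_sigma = decoder(vert_feats[idx], local)               # (k, 4): r, g, b, sigma
    return (w[:, None] * rgb_sigma).sum(axis=0) / w.sum()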
- …