StyleGAN Salon: Multi-View Latent Optimization for Pose-Invariant Hairstyle Transfer
Our paper seeks to transfer the hairstyle of a reference image to an input
photo for virtual hair try-on. We target a variety of challenging scenarios,
such as transforming a long hairstyle with bangs to a pixie cut, which requires
removing the existing hair and inferring how the forehead would look, or
transferring partially visible hair from a hat-wearing person in a different
pose. Past solutions leverage StyleGAN for hallucinating any missing parts and
producing a seamless face-hair composite through so-called GAN inversion or
projection. However, there remains a challenge in controlling the
hallucinations to accurately transfer hairstyle and preserve the face shape and
identity of the input. To overcome this, we propose a multi-view optimization
framework that uses "two different views" of reference composites to
semantically guide occluded or ambiguous regions. Our optimization shares
information between two poses, which allows us to produce high fidelity and
realistic results from incomplete references. Our framework produces
high-quality results and outperforms prior work in a user study that consists
of significantly more challenging hair transfer scenarios than previously
studied. Project page: https://stylegan-salon.github.io/. Comment: Accepted to CVPR202
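The core idea described above, sharing one latent across two posed views so that each view constrains regions occluded in the other, can be illustrated with a toy shared-latent optimization. This is a minimal sketch, not the paper's method: the linear per-view "generators", the latent size, and the plain gradient descent are all stand-ins for StyleGAN synthesis at two poses and its W+ latent optimization.

```python
import numpy as np

# Hypothetical stand-in generators: one fixed linear map per view
# (the real method renders two poses of a StyleGAN composite).
rng = np.random.default_rng(0)
G = {view: rng.normal(size=(16, 8)) for view in ("frontal", "side")}

def render(view, w):
    # Stand-in for StyleGAN synthesis of latent w at a given pose.
    return G[view] @ w

# Two reference composites ("two different views") of one ground-truth latent.
w_true = rng.normal(size=8)
targets = {view: render(view, w_true) for view in G}

# Shared-latent optimization: a single latent w must explain both views,
# so information visible in one pose guides ambiguous regions in the other.
w = np.zeros(8)
lr = 0.01
for _ in range(2000):
    grad = np.zeros_like(w)
    for view in G:
        residual = render(view, w) - targets[view]
        grad += G[view].T @ residual  # gradient of 0.5 * ||G w - target||^2
    w -= lr * grad

loss = sum(np.sum((render(v, w) - targets[v]) ** 2) for v in G)
```

Because both views are tied to the same latent, the optimum recovers `w_true` even though neither view alone would pin down every region; in the paper this coupling is what keeps hallucinated hair consistent across poses.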
Face Restoration via Plug-and-Play 3D Facial Priors
State-of-the-art face restoration methods employ deep convolutional neural networks (CNNs) to learn a mapping between degraded and sharp facial patterns by exploring local appearance knowledge. However, most of these methods do not fully exploit facial structure and identity information, and only handle task-specific face restoration (e.g., face super-resolution or deblurring). In this paper, we propose cross-task and cross-model plug-and-play 3D facial priors that explicitly embed sharp facial structures into the network for general face restoration tasks. Our 3D priors are the first to explore 3D morphable knowledge based on the fusion of parametric descriptions of face attributes (e.g., identity, facial expression, texture, illumination, and face pose). Furthermore, the priors can easily be incorporated into any network, effectively improving performance and accelerating convergence. First, a 3D face rendering branch is set up to obtain 3D priors of salient facial structures and identity knowledge. Second, to better exploit this hierarchical information (i.e., intensity similarity, 3D facial structure, and identity content), a spatial attention module is designed for image restoration problems. Extensive face restoration experiments, including face super-resolution and deblurring, demonstrate that the proposed 3D priors achieve superior face restoration results over state-of-the-art algorithms
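The spatial attention step described above can be sketched as follows. This is a hedged, hypothetical illustration, not the paper's architecture: the channel counts, the single 1x1-projection weight vector `w`, and the sigmoid gating are assumptions standing in for the learned attention module that fuses image features with rendered 3D-prior features.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(features, prior, w):
    """Hypothetical spatial-attention sketch: concatenate image features with
    rendered 3D-prior features along channels, project them to a single-channel
    attention map (a 1x1 convolution), and reweight the image features."""
    # features, prior: (C, H, W); w: (2C,) projection weights.
    stacked = np.concatenate([features, prior], axis=0)        # (2C, H, W)
    attn = sigmoid(np.tensordot(w, stacked, axes=([0], [0])))  # (H, W), in (0, 1)
    return features * attn[None, :, :]                         # gated features

# Toy usage with random features and a random rendered prior.
rng = np.random.default_rng(1)
C, H, W = 4, 8, 8
feats = rng.normal(size=(C, H, W))
prior = rng.normal(size=(C, H, W))
w = rng.normal(size=2 * C)
out = spatial_attention(feats, prior, w)
```

The design intuition is that the 3D prior marks where salient facial structure lies, so the attention map suppresses or emphasizes spatial locations of the degraded-image features accordingly; in a trained network `w` would be learned end to end.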