5 research outputs found
SD-GAN: Semantic Decomposition for Face Image Synthesis with Discrete Attribute
Manipulating latent codes in generative adversarial networks (GANs) for facial
image synthesis has mainly focused on continuous attribute synthesis (e.g., age,
pose, and emotion), while discrete attribute synthesis (such as face masks and
eyeglasses) has received less attention. Directly applying existing methods to
facial discrete attributes can produce inaccurate results. In this work, we
propose an innovative framework, dubbed SD-GAN, that tackles challenging facial
discrete attribute synthesis via semantic decomposition. Concretely, we
explicitly decompose the discrete attribute representation into two components,
i.e., a semantic prior basis and an offset latent representation. The semantic
prior basis provides an initial direction for manipulating the face
representation in the latent space. The offset latent representation, produced
by a 3D-aware semantic fusion network, adjusts the prior basis; the fusion
network also integrates 3D embeddings for better identity preservation and
discrete attribute synthesis. The combination of the prior basis and the offset
latent representation enables our method to synthesize photo-realistic face
images with discrete attributes. Notably, we construct a large and valuable
dataset, MEGN (Face Mask and Eyeglasses images crawled from Google and Naver),
to remedy the lack of discrete attributes in existing datasets. Extensive
qualitative and quantitative experiments demonstrate the state-of-the-art
performance of our method. Our code is available at:
https://github.com/MontaEllis/SD-GAN
Comment: 16 pages, 12 figures, Accepted by ACM MM2022
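A minimal sketch of the two-component latent edit this abstract describes, assuming a StyleGAN-like latent space; the class name OffsetNet, the strength parameter, and all dimensions are illustrative assumptions, since the paper's actual 3D-aware semantic fusion network is not reproduced here:

import torch
import torch.nn as nn

class OffsetNet(nn.Module):
    # Hypothetical stand-in for the paper's 3D-aware semantic fusion
    # network: predicts a per-sample offset from the latent code and a
    # 3D face embedding.
    def __init__(self, latent_dim=512, embed_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + embed_dim, 512),
            nn.ReLU(),
            nn.Linear(512, latent_dim),
        )

    def forward(self, w, face_3d_embed):
        return self.mlp(torch.cat([w, face_3d_embed], dim=-1))

def edit_latent(w, prior_basis, offset_net, face_3d_embed, strength=1.0):
    # Two-component edit from the abstract: a shared semantic prior
    # direction plus a sample-specific offset that adjusts it.
    offset = offset_net(w, face_3d_embed)
    return w + strength * (prior_basis + offset)

# Illustrative usage; shapes and values are placeholders.
w = torch.randn(4, 512)                   # latent codes of four faces
prior = torch.randn(512)                  # learned basis for, e.g., eyeglasses
emb = torch.randn(4, 128)                 # 3D face embeddings
w_edit = edit_latent(w, prior, OffsetNet(), emb)  # feed w_edit to the generator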
Cloth2Tex: A Customized Cloth Texture Generation Pipeline for 3D Virtual Try-On
Fabricating and designing 3D garments has become extremely demanding with the
increasing need to synthesize realistically dressed persons for a variety of
applications, e.g., 3D virtual try-on, digitalization of 2D clothes into 3D
apparel, and cloth animation. This calls for a simple and straightforward
pipeline to obtain high-quality textures from simple inputs, such as 2D
reference images. Traditional warping-based texture generation methods require
a significant number of control points to be manually selected for each type
of garment, which is time-consuming and tedious. We propose a novel method,
called Cloth2Tex, which eliminates this human burden. Cloth2Tex is a
self-supervised method that generates texture maps with reasonable layout and
structural consistency. Another key feature of Cloth2Tex is that it supports
high-fidelity texture inpainting, achieved by combining Cloth2Tex with a
prevailing latent diffusion model. We evaluate our approach both qualitatively
and quantitatively and demonstrate that Cloth2Tex generates high-quality
texture maps and achieves the best visual results in comparison to other
methods. Project page: tomguluson92.github.io/projects/cloth2tex/
Comment: 15 pages, 15 figures
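The abstract pairs Cloth2Tex with a prevailing latent diffusion model for texture inpainting; below is a minimal sketch of what such a second stage could look like using an off-the-shelf Stable Diffusion inpainting pipeline from the diffusers library. The checkpoint, file names, prompt, and mask convention are assumptions for illustration, not the paper's actual setup:

import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Off-the-shelf latent-diffusion inpainting; the checkpoint choice is
# an assumption, not the model used in the paper.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# Stage-1 coarse texture map and a mask marking the regions to fill
# (white = inpaint); file names are placeholders.
coarse_tex = Image.open("coarse_texture.png").convert("RGB").resize((512, 512))
hole_mask = Image.open("texture_holes.png").convert("L").resize((512, 512))

refined = pipe(
    prompt="seamless garment fabric texture, high quality",
    image=coarse_tex,
    mask_image=hole_mask,
).images[0]
refined.save("refined_texture.png")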
Multi-View Consistent Generative Adversarial Networks for 3D-aware Image Synthesis
DOI: 10.1109/CVPR52688.2022.01790. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2022, pp. 18429-1843
Multi-view Consistent Generative Adversarial Networks for Compositional 3D-Aware Image Synthesis
DOI: 10.1007/s11263-023-01805-x. International Journal of Computer Vision, vol. 131, no. 8, pp. 2219-224