GP-GAN: Gender Preserving GAN for Synthesizing Faces from Landmarks
Facial landmarks are a highly compressed representation of a face, yet they
are known to preserve information such as pose, gender, and facial structure.
Several works exist that attempt to perform high-level
face-related analysis tasks based on landmarks. In contrast, in this work, an
attempt is made to tackle the inverse problem of synthesizing faces from their
respective landmarks. The primary aim of this work is to demonstrate that
information preserved by landmarks (gender in particular) can be further
accentuated by leveraging generative models to synthesize corresponding faces.
Though the problem is particularly challenging due to its ill-posed nature, we
believe that successful synthesis will enable several applications, such as
boosting the performance of high-level face-related tasks that use landmark
points, and performing dataset augmentation. To this end, a novel face-synthesis method
known as Gender Preserving Generative Adversarial Network (GP-GAN) that is
guided by adversarial loss, perceptual loss and a gender preserving loss is
presented. Further, we propose a novel generator sub-network UDeNet for GP-GAN
that leverages advantages of U-Net and DenseNet architectures. Extensive
experiments and comparison with recent methods are performed to verify the
effectiveness of the proposed method.
Comment: 6 pages, 5 figures. Accepted at the 2018 24th International
Conference on Pattern Recognition (ICPR 2018).
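The abstract describes a generator trained under three combined objectives: an adversarial loss, a perceptual loss, and a gender-preserving loss. A minimal sketch of how such a combined generator objective could be assembled is shown below; the loss weights and the standing-in functions (a non-saturating GAN term, an L2 feature distance, and a binary cross-entropy gender term) are illustrative assumptions, not the paper's actual formulation or values.

```python
import numpy as np

def adversarial_loss(d_fake):
    """Non-saturating generator-side GAN term: -log D(G(landmarks))."""
    return -np.log(d_fake + 1e-8)

def perceptual_loss(feat_real, feat_fake):
    """L2 distance between deep features of the real and synthesized face
    (a common choice for perceptual losses; assumed here, not specified)."""
    return np.mean((np.asarray(feat_real) - np.asarray(feat_fake)) ** 2)

def gender_preserving_loss(p_gender_fake, true_gender):
    """Binary cross-entropy of a gender classifier applied to the
    synthesized face, encouraging the generator to keep the gender."""
    p = np.clip(p_gender_fake, 1e-8, 1 - 1e-8)
    return -(true_gender * np.log(p) + (1 - true_gender) * np.log(1 - p))

def gp_gan_generator_loss(d_fake, feat_real, feat_fake,
                          p_gender_fake, true_gender,
                          lam_perc=1.0, lam_gender=0.5):
    # lam_perc and lam_gender are hypothetical weights; the abstract
    # does not state how the three losses are balanced.
    return (adversarial_loss(d_fake)
            + lam_perc * perceptual_loss(feat_real, feat_fake)
            + lam_gender * gender_preserving_loss(p_gender_fake, true_gender))
```

In practice each term would be computed from network outputs (discriminator score, feature extractor activations, gender-classifier probability) rather than scalars, but the weighted-sum structure is the same.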
Deformable Shape Completion with Graph Convolutional Autoencoders
The availability of affordable and portable depth sensors has made scanning
objects and people simpler than ever. However, dealing with occlusions and
missing parts is still a significant challenge. The problem of reconstructing a
(possibly non-rigidly moving) 3D object from a single or multiple partial scans
has received increasing attention in recent years. In this work, we propose a
novel learning-based method for the completion of partial shapes. Unlike the
majority of existing approaches, our method focuses on objects that can undergo
non-rigid deformations. The core of our method is a variational autoencoder
with graph convolutional operations that learns a latent space for complete
realistic shapes. At inference, we optimize to find the representation in this
latent space that best fits the generated shape to the known partial input. The
completed shape exhibits a realistic appearance on the unknown part. We show
promising results towards the completion of synthetic and real scans of human
body and face meshes exhibiting different styles of articulation and
partiality.
Comment: CVPR 201
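The inference procedure described above, optimizing over a learned latent space so that the decoded shape fits the known partial input, can be sketched in miniature. The linear "decoder" below is a toy stand-in for the paper's trained graph-convolutional VAE decoder, and all dimensions, the mask, and the learning rate are illustrative assumptions; only the masked-reconstruction objective mirrors the described method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained decoder: maps a latent code to per-vertex values.
latent_dim, n_verts = 8, 50
W = rng.normal(size=(n_verts, latent_dim))

def decode(z):
    """Decode a latent code into a full shape (here, a linear map)."""
    return W @ z

# Simulate a partial scan: only the first 30 of 50 vertices are observed.
mask = np.zeros(n_verts, dtype=bool)
mask[:30] = True
z_true = rng.normal(size=latent_dim)
partial = decode(z_true)[mask]

# Inference-time optimization: gradient descent on the latent code so that
# the decoded shape agrees with the partial input on the observed vertices.
z = np.zeros(latent_dim)
lr = 0.01
for _ in range(500):
    residual = decode(z)[mask] - partial
    # Gradient of 0.5 * ||decode(z)[mask] - partial||^2 w.r.t. z.
    grad = W[mask].T @ residual
    z -= lr * grad

# The decoder then fills in the unobserved part from the fitted code.
completed = decode(z)
```

Because the optimization only constrains the observed vertices, the unknown region is completed by whatever the decoder considers a plausible shape for that latent code, which is the key idea behind fitting in a latent space of complete, realistic shapes.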