15 research outputs found
PP-GAN: Style Transfer from Korean Portraits to ID Photos Using Landmark Extractor with GAN
The objective of style transfer is to maintain the content of an image
while transferring the style of another image. However, conventional research
on style transfer has a significant limitation in preserving facial landmarks,
such as the eyes, nose, and mouth, which are crucial for maintaining the
identity of the image. In Korean portraits, the majority of individuals wear
"Gat", a type of headdress exclusively worn by men. Owing to its distinct
characteristics from the hair in ID photos, transferring the "Gat" is
challenging. To address this issue, this study proposes a deep learning network
that can perform style transfer, including the "Gat", while preserving the
identity of the face. Unlike existing style transfer approaches, the proposed
method aims to preserve the texture, costume, and "Gat" of the style image. A
Generative Adversarial Network forms the backbone of the proposed network.
Color, texture, and intensity were extracted differently according to the
characteristics of each block and layer of a pre-trained VGG-16, and only the
elements necessary during training were preserved using a facial landmark mask.
The head area was represented using the eyebrow area to transfer the "Gat".
Furthermore, the identity of the face was retained, and style correlation was
modeled with the Gram matrix. The proposed approach demonstrated superior
transfer and preservation performance compared with previous studies.
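The Gram-matrix style correlation mentioned in this abstract follows the standard neural style transfer formulation; a minimal NumPy sketch, with toy arrays standing in for VGG-16 feature maps (PP-GAN's actual layer weighting is not given here):

```python
import numpy as np

def gram_matrix(features):
    """Channel-wise correlations of a feature map with shape (C, H, W)."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(feat_generated, feat_style):
    """Mean squared difference between the two Gram matrices."""
    g_gen = gram_matrix(feat_generated)
    g_sty = gram_matrix(feat_style)
    return float(np.mean((g_gen - g_sty) ** 2))

# toy activations standing in for VGG-16 features (hypothetical shapes)
rng = np.random.default_rng(0)
a = rng.standard_normal((8, 4, 4))
b = rng.standard_normal((8, 4, 4))
print(style_loss(a, a))        # → 0.0 (identical features, identical correlations)
print(style_loss(a, b) > 0.0)  # → True
```

Because the Gram matrix discards spatial layout and keeps only channel correlations, minimizing this loss matches texture statistics rather than pixel positions.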
Realtime Fewshot Portrait Stylization Based On Geometric Alignment
This paper presents a portrait stylization method designed for real-time
mobile applications with limited style examples available. Previous
learning-based stylization methods suffer from the geometric and semantic gaps
between the portrait domain and the style domain, which prevent style
information from being correctly transferred to the portrait images, leading to
poor stylization quality. Based on the geometric prior of human facial
attributes, we propose to utilize geometric alignment to tackle this issue.
Firstly, we apply
Thin-Plate-Spline (TPS) on feature maps in the generator network and also
directly to style images in pixel space, generating aligned portrait-style
image pairs with identical landmarks, which closes the geometric gaps between
two domains. Secondly, adversarial learning maps the textures and colors of
portrait images to the style domain. Finally, geometry-aware cycle consistency
preserves the content and identity information, and a deformation-invariant
constraint suppresses artifacts and distortions. Qualitative and quantitative
comparisons validate that our method outperforms existing methods, and
experiments prove that it can be trained with limited style examples (100 or
fewer) while running in real time (more than 40 FPS) on mobile devices. An
ablation study demonstrates the effectiveness of each component in the
framework. Comment: 10 pages, 10 figures
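The Thin-Plate-Spline alignment at the heart of this method can be sketched with the classical TPS interpolation system, which maps one landmark set exactly onto another; the landmark coordinates below are toy values, not the paper's data:

```python
import numpy as np

def tps_coefficients(src_pts, dst_pts):
    """Solve the thin-plate-spline system mapping src landmarks onto dst."""
    n = len(src_pts)
    d2 = np.sum((src_pts[:, None] - src_pts[None, :]) ** 2, axis=-1)
    K = np.where(d2 > 0, d2 * np.log(d2 + 1e-12), 0.0)  # kernel U(r) = r^2 log r^2
    P = np.hstack([np.ones((n, 1)), src_pts])            # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.vstack([dst_pts, np.zeros((3, 2))])
    return np.linalg.solve(A, b)                         # (n+3, 2) coefficients

def tps_apply(coeffs, src_pts, query):
    """Warp arbitrary query points with the fitted spline."""
    d2 = np.sum((query[:, None] - src_pts[None, :]) ** 2, axis=-1)
    U = np.where(d2 > 0, d2 * np.log(d2 + 1e-12), 0.0)
    P = np.hstack([np.ones((len(query), 1)), query])
    return U @ coeffs[:len(src_pts)] + P @ coeffs[len(src_pts):]

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = src + np.array([[0.1, 0.0], [0.0, 0.1], [0.0, 0.0], [0.0, 0.0]])
c = tps_coefficients(src, dst)
print(np.allclose(tps_apply(c, src, src), dst, atol=1e-8))  # → True: landmarks map exactly
```

The paper applies such warps both to generator feature maps and to style images in pixel space, so that portrait and style pairs share identical landmarks before adversarial training.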
MangaGAN: Unpaired Photo-to-Manga Translation Based on The Methodology of Manga Drawing
Manga is a comic form, popular worldwide and originating in Japan, that
typically employs black-and-white stroke lines and geometric exaggeration to
depict humans' appearances, poses, and actions. In this paper, we propose MangaGAN,
the first method based on Generative Adversarial Network (GAN) for unpaired
photo-to-manga translation. Inspired by how experienced manga artists draw
manga, MangaGAN generates the geometric features of a manga face with a designed
GAN model and delicately translates each facial region into the manga domain
with a tailored multi-GAN architecture. For training MangaGAN, we construct a new
dataset collected from a popular manga work, containing manga facial features,
landmarks, bodies, and so on. Moreover, to produce high-quality manga faces, we
further propose a structural smoothing loss to smooth stroke-lines and avoid
noisy pixels, and a similarity preserving module to improve the similarity
between domains of photo and manga. Extensive experiments show that MangaGAN
can produce high-quality manga faces which preserve both the facial similarity
and a popular manga style, and outperforms other related state-of-the-art
methods. Comment: 17 pages
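MangaGAN's structural smoothing loss is not specified in detail here; a total-variation-style stand-in that penalizes abrupt intensity changes between neighboring pixels conveys the idea of smoothing stroke lines and suppressing noisy pixels:

```python
import numpy as np

def structural_smoothing_loss(img):
    """Sum of mean absolute differences between vertically and horizontally
    adjacent pixels; lower values mean smoother stroke regions."""
    dh = np.abs(img[1:, :] - img[:-1, :])   # vertical neighbor differences
    dw = np.abs(img[:, 1:] - img[:, :-1])   # horizontal neighbor differences
    return float(dh.mean() + dw.mean())

flat = np.zeros((8, 8))                      # perfectly smooth patch
noisy = flat.copy()
noisy[::2, ::2] = 1.0                        # salt-like noisy pixels
print(structural_smoothing_loss(flat))                                   # → 0.0
print(structural_smoothing_loss(noisy) > structural_smoothing_loss(flat))  # → True
```

In a GAN training loop, such a term would be added to the generator objective with a weighting coefficient, trading stroke smoothness against fidelity.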
Facial Expression Retargeting from Human to Avatar Made Easy
Facial expression retargeting from humans to virtual characters is a useful
technique in computer graphics and animation. Traditional methods use markers
or blendshapes to construct a mapping between the human and avatar faces.
However, these approaches require a tedious 3D modeling process, and the
performance relies on the modelers' experience. In this paper, we propose a
brand-new solution to this cross-domain expression transfer problem via
nonlinear expression embedding and expression domain translation. We first
build low-dimensional latent spaces for the human and avatar facial expressions
with a variational autoencoder. Then we construct correspondences between the two
latent spaces guided by geometric and perceptual constraints. Specifically, we
design geometric correspondences to reflect geometric matching and utilize a
triplet data structure to express users' perceptual preference of avatar
expressions. A user-friendly method is proposed to automatically generate
triplets, allowing users to annotate the correspondences easily and
efficiently. Using both geometric and perceptual correspondences, we
trained a network for expression domain translation from human to avatar.
Extensive experimental results and user studies demonstrate that even
nonprofessional users can apply our method to generate high-quality facial
expression retargeting results with less time and effort. Comment: IEEE
Transactions on Visualization and Computer Graphics (TVCG), to appear
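The triplet data structure for perceptual preference can be sketched with a standard triplet margin loss over latent codes: the avatar expression a user prefers (positive) is pulled closer to the human anchor than the rejected one (negative). The margin value and 2-D latent vectors below are illustrative assumptions:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge loss: the positive must be closer to the anchor than the
    negative by at least `margin` (squared Euclidean distances)."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return float(max(0.0, d_pos - d_neg + margin))

anchor = np.array([0.0, 0.0])               # human expression latent code
good = np.array([0.1, 0.0])                 # preferred avatar expression
bad = np.array([1.0, 0.0])                  # rejected avatar expression
print(triplet_loss(anchor, good, bad))      # → 0.0 (margin already satisfied)
print(triplet_loss(anchor, bad, good) > 0)  # → True (ordering violated)
```

Summed over many annotated triplets, this perceptual term complements the geometric correspondences when training the domain-translation network.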
From rule-based to learning-based image-conditional image generation
Visual contents, such as movies, animations, computer games, videos and photos, are
massively produced and consumed nowadays. Most of these contents are the combination
of materials captured from real-world and contents synthesized by computers. Particularly,
computer-generated visual contents are increasingly indispensable in modern entertainment
and production. The generation of visual contents by computers is typically conditioned on
real-world materials, driven by the imagination of designers and artists, or a combination
of both. However, creating visual content manually is both challenging and labor-intensive.
Therefore, enabling computers to automatically or semi-automatically synthesize
needed visual contents becomes essential. Among all these efforts, a stream of research
is to generate novel images based on given image priors, e.g., photos and sketches. This
research direction is known as image-conditional image generation, which covers a wide
range of topics such as image stylization, image completion, image fusion, sketch-to-image
generation, and extracting image label maps. In this thesis, a set of novel approaches for
image-conditional image generation are presented.
The thesis starts with an exemplar-based method for facial image stylization in Chapter
2. This method involves a unified framework for facial image stylization based on a single
style exemplar. A two-phase procedure is employed, where the first phase searches a dense
and semantic-aware correspondence between the input and the exemplar images, and the
second phase conducts edge-preserving texture transfer. While this algorithm has the merit
of requiring only a single exemplar, it is constrained to face photos. To perform generalized
image-to-image translation, Chapter 3 presents a data-driven and learning-based method. Inspired by the dual learning paradigm designed for natural language translation [115], a
novel dual Generative Adversarial Network (DualGAN) mechanism is developed, which
enables image translators to be trained from two sets of unlabeled images from two domains.
This is followed by another data-driven method in Chapter 4, which learns multiscale
manifolds from a set of images and then enables synthesizing novel images that mimic
the appearance of the target image dataset. The method is named Branched Generative
Adversarial Network (BranchGAN) and employs a novel training method that enables
unconditional generative adversarial networks (GANs) to learn image manifolds at multiple
scales. As a result, we can directly manipulate and even combine latent manifold codes
that are associated with specific feature scales. Finally, to provide users with more
control over image generation results, Chapter 5 discusses an upgraded version of
iGAN [126] (iGANHD) that significantly improves the art of manipulating
high-resolution images by utilizing the multi-scale manifold learned with BranchGAN.
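The DualGAN mechanism described above trains two translators on unpaired image sets so that a round trip through both reconstructs the input. A minimal sketch of that cycle-reconstruction objective, with invertible toy functions standing in for the two generators:

```python
import numpy as np

def cycle_reconstruction_loss(x, g_ab, g_ba):
    """L1 distance between x and its round-trip translation A -> B -> A.
    Driving this to zero lets both translators train without paired labels."""
    return float(np.mean(np.abs(g_ba(g_ab(x)) - x)))

# toy "translators": an exact inverse pair standing in for the learned generators
g_ab = lambda x: 2.0 * x + 1.0            # domain A -> domain B
g_ba = lambda y: (y - 1.0) / 2.0          # domain B -> domain A
x = np.linspace(-1.0, 1.0, 5)             # toy samples from domain A
print(cycle_reconstruction_loss(x, g_ab, g_ba))  # → 0.0 (perfect inverse pair)
```

In the full method, this reconstruction term is combined with adversarial losses in both domains, so the translators produce outputs that both fool the discriminators and remain mutually invertible.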
Deep Video Portraits
We present a novel approach that enables photo-realistic re-animation of
portrait videos using only an input video. In contrast to existing approaches
that are restricted to manipulations of facial expressions only, we are the
first to transfer the full 3D head position, head rotation, face expression,
eye gaze, and eye blinking from a source actor to a portrait video of a target
actor. The core of our approach is a generative neural network with a novel
space-time architecture. The network takes as input synthetic renderings of a
parametric face model, based on which it predicts photo-realistic video frames
for a given target actor. The realism in this rendering-to-video transfer is
achieved by careful adversarial training, and as a result, we can create
modified target videos that mimic the behavior of the synthetically-created
input. In order to enable source-to-target video re-animation, we render a
synthetic target video with the reconstructed head animation parameters from a
source video, and feed it into the trained network -- thus taking full control
of the target. With the ability to freely recombine source and target
parameters, we are able to demonstrate a large variety of video rewrite
applications without explicitly modeling hair, body or background. For
instance, we can reenact the full head using interactive user-controlled
editing, and realize high-fidelity visual dubbing. To demonstrate the high
quality of our output, we conduct an extensive series of experiments and
evaluations, where for instance a user study shows that our video edits are
hard to detect. Comment: SIGGRAPH 2018, Video: https://www.youtube.com/watch?v=qc5P2bvfl4
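The source-to-target recombination described above keeps the target's identity while taking the source's dynamics before rendering the synthetic conditioning video. A sketch with hypothetical parameter names (the paper's parametric face model is not reproduced here):

```python
import numpy as np

def recombine_parameters(source, target):
    """Drive the target actor with the source's dynamics (pose, expression,
    gaze) while keeping the target's identity parameters fixed."""
    driven = dict(target)  # start from the target's full parameter set
    for key in ("rotation", "translation", "expression", "gaze"):
        driven[key] = source[key]
    return driven

# illustrative parameter dictionaries, not the paper's actual model
src = {"identity": np.zeros(3), "rotation": np.array([0.1, 0.0, 0.0]),
       "translation": np.zeros(3), "expression": np.ones(4), "gaze": np.zeros(2)}
tgt = {"identity": np.ones(3), "rotation": np.zeros(3),
       "translation": np.zeros(3), "expression": np.zeros(4), "gaze": np.zeros(2)}
out = recombine_parameters(src, tgt)
print(np.array_equal(out["identity"], tgt["identity"]))      # → True: identity kept
print(np.array_equal(out["expression"], src["expression"]))  # → True: dynamics swapped
```

The recombined parameters are rendered as synthetic conditioning frames, which the trained rendering-to-video network then converts into photo-realistic frames of the target actor.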
Image Data Augmentation from Small Training Datasets Using Generative Adversarial Networks (GANs)
The scarcity of labelled data is a serious problem since deep models generally require a large amount of training data to achieve desired performance. Data augmentation is widely adopted to enhance the diversity of original datasets and further improve the performance of deep learning models. Learning-based methods, compared to traditional techniques, are specialized in feature extraction, which enhances the effectiveness of data augmentation.
Generative adversarial networks (GANs), one of the learning-based generative models, have made remarkable advances in data synthesis. However, GANs still face many challenges in generating high-quality augmented images from small datasets because learning-based generative methods are difficult to create reliable outcomes without sufficient training data. This difficulty deteriorates the data augmentation applications using learning-based methods. In this thesis, to tackle the problem of labelled data scarcity and the training difficulty of augmenting image data from small datasets, three novel GAN models suitable for training with a small number of training samples have been proposed based on three different mapping relationships between the input and output images, including one-to-many mapping, one-to-one mapping, and many-to-many mapping. The proposed GANs employ limited training data, such as a small number of images and limited conditional features, and the synthetic images generated by the proposed GANs are expected to generate images of not only high generative quality but also desirable data diversity.
To evaluate the effectiveness of the augmented images generated by the proposed models, inception distances and human perception methods are adopted. Additionally, different image classification tasks were carried out, and the accuracies obtained with the original and augmented datasets were compared. Experimental results illustrate that the image classification performance of convolutional neural networks, i.e., AlexNet, GoogLeNet, ResNet and VGGNet, is comprehensively enhanced, and the scale of improvement is significant when a small number of training samples is involved.
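The inception-distance evaluation mentioned above compares the feature statistics of real and generated image sets. A simplified Fréchet distance with diagonal covariances sketches the idea (the full FID uses a matrix square root of the covariance product, and features from a pretrained Inception network rather than random vectors):

```python
import numpy as np

def frechet_distance_diag(feats_real, feats_fake):
    """Frechet distance between Gaussians fitted to two feature sets,
    simplified by assuming diagonal covariances."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    v1, v2 = feats_real.var(axis=0), feats_fake.var(axis=0)
    return float(np.sum((mu1 - mu2) ** 2) +
                 np.sum(v1 + v2 - 2.0 * np.sqrt(v1 * v2)))

# toy feature vectors standing in for Inception activations
rng = np.random.default_rng(0)
real = rng.standard_normal((500, 16))
fake_close = real + 0.01 * rng.standard_normal((500, 16))  # near-identical set
fake_far = real + 2.0                                      # shifted distribution
print(frechet_distance_diag(real, fake_close) <
      frechet_distance_diag(real, fake_far))               # → True: lower is better
```

A lower score indicates that the augmented set's feature distribution sits closer to the real data, which is the criterion the thesis uses alongside human perception studies and downstream classification accuracy.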