1,516 research outputs found
High-Quality Facial Photo-Sketch Synthesis Using Multi-Adversarial Networks
Synthesizing face sketches from real photos, and the inverse, has many
applications. However, photo/sketch synthesis remains a challenging problem
because photos and sketches have different characteristics. In this work, we
consider the task as an image-to-image translation problem and explore
recently popular generative adversarial networks (GANs) to generate
high-quality realistic photos from sketches and sketches from photos. Recent
GAN-based methods have shown promising results on image-to-image translation
problems, and on photo-to-sketch synthesis in particular; however, they are
known to have limited ability to generate high-resolution realistic images.
To this end, we propose a novel synthesis framework called Photo-Sketch
Synthesis using Multi-Adversarial Networks (PS2-MAN) that iteratively
generates images from low to high resolution in an adversarial way. The
hidden layers of the
generator are supervised to first generate lower resolution images followed by
implicit refinement in the network to generate higher resolution images.
Furthermore, since photo-sketch synthesis is a coupled/paired translation
problem, we leverage the pair information using the CycleGAN framework. Both
Image
Quality Assessment (IQA) and Photo-Sketch Matching experiments are conducted to
demonstrate the superior performance of our framework in comparison to existing
state-of-the-art solutions. Code available at:
https://github.com/lidan1/PhotoSketchMAN
Comment: Accepted by the 2018 13th IEEE International Conference on Automatic
Face & Gesture Recognition (FG 2018) (Oral)
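A minimal PyTorch sketch of the multi-adversarial idea described above:
intermediate generator layers are given their own image heads so that
lower-resolution outputs can be supervised directly, one discriminator per
scale. The channel counts, 1x1 output heads, and LSGAN/L1 loss weights are
illustrative assumptions, not the authors' exact architecture.

```python
# Sketch of multi-scale adversarial supervision (PS2-MAN idea).
# Details (channels, heads, weights) are assumptions, not the exact network.
import torch
import torch.nn as nn

class MultiScaleGenerator(nn.Module):
    """Encoder-decoder whose hidden layers emit images at 64/128/256 px."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(            # 256 -> 64 spatial
            nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),
        )
        self.up1 = nn.Sequential(nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU())  # -> 128
        self.up2 = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU())   # -> 256
        # One 1x1 "to_rgb" head per scale so hidden layers can be supervised.
        self.out64 = nn.Conv2d(128, 3, 1)
        self.out128 = nn.Conv2d(64, 3, 1)
        self.out256 = nn.Conv2d(32, 3, 1)

    def forward(self, x):
        h64 = self.encode(x)
        h128 = self.up1(h64)
        h256 = self.up2(h128)
        return [torch.tanh(self.out64(h64)),
                torch.tanh(self.out128(h128)),
                torch.tanh(self.out256(h256))]

def generator_loss(outputs, target, discriminators, l1_weight=10.0):
    """Each scale gets its own discriminator plus an L1 term against a
    downsampled target (loss form and weight are illustrative)."""
    loss = 0.0
    for out, disc in zip(outputs, discriminators):
        tgt = nn.functional.interpolate(target, size=out.shape[-2:])
        loss = loss + (disc(out) - 1).pow(2).mean()       # LSGAN term
        loss = loss + l1_weight * (out - tgt).abs().mean()
    return loss
```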
FaceShop: Deep Sketch-based Face Image Editing
We present a novel system for sketch-based face image editing, enabling users
to edit images intuitively by sketching a few strokes on a region of interest.
Our interface features tools to express a desired image manipulation by
providing both geometry and color constraints as user-drawn strokes. As an
alternative to the direct user input, our proposed system naturally supports a
copy-paste mode, which allows users to edit a given image region using parts
of another exemplar image without the need for hand-drawn sketching at all. The
proposed interface runs in real-time and facilitates an interactive and
iterative workflow to quickly express the intended edits. Our system is based
on a novel sketch domain and a convolutional neural network trained end-to-end
to automatically learn to render image regions corresponding to the input
strokes. To achieve high-quality and semantically consistent results, we train
our neural network on two simultaneous tasks, namely image completion and image
translation. To the best of our knowledge, we are the first to combine these
two tasks in a unified framework for interactive image editing. Our results
show that the proposed sketch domain, network architecture, and training
procedure generalize well to real user input and enable high quality synthesis
results without additional post-processing.
Comment: 13 pages, 20 figures
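As a rough illustration of how such a sketch-conditioned completion input
could be assembled, the snippet below erases the edit region and stacks the
geometry and color strokes as extra conditioning channels; the channel layout
is an assumption for illustration, not the paper's exact format.

```python
# Hypothetical input assembly for joint completion + translation.
import torch

def build_input(image, mask, stroke_sketch, stroke_color):
    """image: (B,3,H,W) in [-1,1]; mask: (B,1,H,W), 1 inside the edit region;
    stroke_sketch: (B,1,H,W) binary geometry strokes;
    stroke_color: (B,3,H,W) sparse color strokes."""
    masked = image * (1 - mask)  # erase the region to be edited
    # (B,8,H,W): the network must both complete the hole and translate strokes
    return torch.cat([masked, mask, stroke_sketch, stroke_color], dim=1)
```

With an input like this, the same network is pushed toward both tasks at
once: reproducing erased content (completion) and rendering user strokes into
plausible image content (translation).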
r-BTN: Cross-domain Face Composite and Synthesis from Limited Facial Patches
We start by asking an interesting yet challenging question, "If an eyewitness
can only recall the eye features of the suspect, such that the forensic artist
can only produce a sketch of the eyes (e.g., the top-left sketch shown in Fig.
1), can advanced computer vision techniques help generate the whole face
image?" A more generalized question is that if a large proportion (e.g., more
than 50%) of the face/sketch is missing, can a realistic whole face
sketch/image still be estimated. Existing face completion and generation
methods either do not conduct domain transfer learning or can not handle large
missing area. For example, the inpainting approach tends to blur the generated
region when the missing area is large (i.e., more than 50%). In this paper, we
exploit the potential of deep learning networks in filling large missing region
(e.g., as high as 95% missing) and generating realistic faces with
high-fidelity in cross domains. We propose the recursive generation by
bidirectional transformation networks (r-BTN) that recursively generates a
whole face/sketch from a small sketch/face patch. The large missing area and
the cross domain challenge make it difficult to generate satisfactory results
using a unidirectional cross-domain learning structure. On the other hand, a
forward and backward bidirectional learning between the face and sketch domains
would enable recursive estimation of the missing region in an incremental
manner (Fig. 1) and yield appealing results. r-BTN also adopts an adversarial
constraint to encourage the generation of realistic faces/sketches. Extensive
experiments have been conducted to demonstrate the superior performance of
r-BTN compared to existing potential solutions.
Comment: Accepted by AAAI 2018
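A hedged sketch of the recursive bidirectional loop: two cross-domain
generators are alternated, and the trusted region is enlarged a little on
each pass so the missing area is estimated incrementally. The generators
`G_sf`/`G_fs` and the mask-growing schedule are hypothetical placeholders,
not the authors' exact procedure.

```python
import torch
import torch.nn.functional as F

def grow_mask(mask, k=15):
    # Dilate the trusted region with a max-pool (one simple choice).
    return F.max_pool2d(mask, k, stride=1, padding=k // 2)

def recursive_generate(patch, mask, G_sf, G_fs, steps=4):
    """patch: partial input in the sketch domain, zero outside `mask`;
    mask: 1 where pixels are known; G_sf: sketch->face; G_fs: face->sketch."""
    sketch, trusted = patch, mask
    for _ in range(steps):
        face = G_sf(sketch)           # cross to the face domain
        estimate = G_fs(face)         # back-project a full-sketch estimate
        trusted = grow_mask(trusted)  # enlarge the region we accept
        # keep original pixels; adopt estimates only inside the grown region
        sketch = patch * mask + estimate * (1 - mask) * trusted
    return G_sf(sketch)               # final whole-face estimate
```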
Detach and Adapt: Learning Cross-Domain Disentangled Deep Representation
While representation learning aims to derive interpretable features for
describing visual data, representation disentanglement goes further, deriving
features in which particular image attributes can be identified and
manipulated.
However, one cannot easily address this task without observing ground truth
annotation for the training data. To address this problem, we propose a novel
deep learning model of Cross-Domain Representation Disentangler (CDRD). By
observing fully annotated source-domain data and unlabeled target-domain data
of interest, our model bridges the information across data domains and
transfers the attribute information accordingly. Thus, cross-domain feature
disentanglement and adaptation can be performed jointly. In the
experiments, we provide qualitative results to verify our disentanglement
capability. Moreover, we further confirm that our model can be applied for
solving classification tasks of unsupervised domain adaptation, and performs
favorably against state-of-the-art image disentanglement and translation
methods.
Comment: CVPR 2018 Spotlight
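One way to picture the training signals is sketched below: a conditional
generator shares an attribute code across domains, adversarial terms use real
images from both domains, and an auxiliary attribute classifier is supervised
on labeled source images and on generated images (whose attribute is known by
construction). The interfaces `D.real`/`D.attr` and the loss forms are
assumptions for illustration, not the paper's exact objective.

```python
import torch.nn.functional as F

def cdrd_style_losses(G, D_src, D_tgt, z, attr, x_src, y_src, x_tgt):
    """z: noise; attr: integer attribute codes fed to the generator;
    (x_src, y_src): labeled source images; x_tgt: unlabeled target images."""
    fake_src = G(z, attr, domain="source")
    fake_tgt = G(z, attr, domain="target")

    # Adversarial terms: each domain's discriminator separates real from fake.
    adv = (F.softplus(-D_src.real(x_src)).mean() +
           F.softplus(D_src.real(fake_src)).mean() +
           F.softplus(-D_tgt.real(x_tgt)).mean() +
           F.softplus(D_tgt.real(fake_tgt)).mean())

    # Attribute heads: supervised on labeled source images, and on generated
    # images in both domains -- this ties the code to the same semantic
    # factor across domains even though the target set is unlabeled.
    cls = (F.cross_entropy(D_src.attr(x_src), y_src) +
           F.cross_entropy(D_src.attr(fake_src), attr) +
           F.cross_entropy(D_tgt.attr(fake_tgt), attr))
    return adv, cls
```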
SketchyGAN: Towards Diverse and Realistic Sketch to Image Synthesis
Synthesizing realistic images from human drawn sketches is a challenging
problem in computer graphics and vision. Existing approaches either require
exact edge maps or rely on retrieval of existing photographs. In this work, we
propose a novel Generative Adversarial Network (GAN) approach that synthesizes
plausible images from 50 categories including motorcycles, horses and couches.
We demonstrate a data augmentation technique for sketches which is fully
automatic, and we show that the augmented data is helpful to our task. We
introduce a new network building block suitable for both the generator and
discriminator which improves the information flow by injecting the input image
at multiple scales. Compared to state-of-the-art image translation methods, our
approach generates more realistic images and achieves significantly higher
Inception Scores.
Comment: Accepted to CVPR 2018
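The "inject the input at multiple scales" building block might look roughly
like the sketch below: the raw input sketch is resized to each block's
resolution and merged through a learned gate. This only approximates the
spirit of the paper's block; the exact convolution arrangement and gating are
assumptions.

```python
import torch
import torch.nn as nn

class InjectionBlock(nn.Module):
    """Re-injects the (resized) input image into a feature map via a
    learned sigmoid gate, then merges residually."""
    def __init__(self, feat_ch, img_ch=1):
        super().__init__()
        self.mask = nn.Conv2d(feat_ch + img_ch, feat_ch, 3, padding=1)
        self.update = nn.Conv2d(feat_ch + img_ch, feat_ch, 3, padding=1)

    def forward(self, feat, image):
        # resize the raw input to this block's spatial resolution
        img = nn.functional.interpolate(image, size=feat.shape[-2:])
        both = torch.cat([feat, img], dim=1)
        m = torch.sigmoid(self.mask(both))                       # where to inject
        z = torch.relu(self.update(torch.cat([feat * m, img], dim=1)))
        return z + feat                                          # residual merge
```

Stacking such blocks in both the generator and discriminator gives every
scale direct access to the input, which is one plausible reading of the
improved information flow the abstract describes.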