DeepFacePencil: Creating Face Images from Freehand Sketches
In this paper, we explore the task of generating photo-realistic face images
from hand-drawn sketches. Existing image-to-image translation methods require a
large-scale dataset of paired sketches and images for supervision. They
typically utilize synthesized edge maps of face images as training data.
However, these synthesized edge maps strictly align with the edges of the
corresponding face images, which limits their generalization to real
hand-drawn sketches with vast stroke diversity. To address this problem, we
propose DeepFacePencil, an effective tool that generates photo-realistic
face images from hand-drawn sketches, based on a novel dual-generator image
translation network used during training. A novel spatial attention pooling
(SAP) module is designed to adaptively handle spatially varying stroke
distortions, supporting diverse stroke styles and levels of detail. We
conduct extensive experiments, and the results demonstrate the
superiority of our model over existing methods on both image quality and model
generalization to hand-drawn sketches.

Comment: ACM MM 2020 (oral)
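The abstract does not give the exact formulation of spatial attention pooling, but the general idea of attention-weighted spatial pooling can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the function name, the shapes, and the use of a single softmax-normalized attention map are all assumptions.

```python
import numpy as np

def spatial_attention_pool(features, attn_logits):
    """Attention-weighted spatial pooling (illustrative sketch, not the
    paper's SAP). Positions with higher attention logits contribute more
    to the pooled feature vector.

    features:    (H, W, C) feature map
    attn_logits: (H, W) unnormalized attention scores
    returns:     (C,) pooled feature vector
    """
    # Softmax over all spatial positions (shifted for numerical stability).
    w = np.exp(attn_logits - attn_logits.max())
    w /= w.sum()
    # Broadcast the attention weights over channels and sum spatially.
    return (features * w[..., None]).sum(axis=(0, 1))
```

With uniform attention logits this reduces to plain average pooling; a learned, spatially varying attention map would instead let the pooling emphasize reliable strokes and down-weight distorted ones.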