GuidedStyle: Attribute Knowledge Guided Style Manipulation for Semantic Face Editing
Although significant progress has been made in synthesizing high-quality and
visually realistic face images by unconditional Generative Adversarial Networks
(GANs), there is still a lack of control over the generation process for
semantic face editing. In addition, it remains challenging to keep other
facial information untouched while editing the target attributes.
In this paper, we propose a novel learning framework, called GuidedStyle, to
achieve semantic face editing on StyleGAN by guiding the image generation
process with a knowledge network. Furthermore, we introduce an attention
mechanism in the StyleGAN generator that adaptively selects a single layer for style
manipulation. As a result, our method is able to perform disentangled and
controllable edits along various attributes, including smiling, eyeglasses,
gender, mustache and hair color. Both qualitative and quantitative results
demonstrate the superiority of our method over other competing methods for
semantic face editing. Moreover, we show that our model can also be applied to
different types of real and artistic face editing, demonstrating strong
generalization ability.
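
The following is a minimal, hypothetical sketch (not the authors' code) of the idea of using attention to adaptively select a single generator layer for style manipulation. All names, shapes, and values (num_layers, style_dim, the edit direction, the attention logits) are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of attention-driven single-layer style manipulation.
# Assumptions (not from the paper): a StyleGAN-like generator exposes one
# style vector per layer, and an attribute-specific edit direction exists
# in style space; attention weights decide which layer receives the edit.

rng = np.random.default_rng(0)
num_layers, style_dim = 14, 512                      # typical StyleGAN sizes (assumed)
styles = rng.normal(size=(num_layers, style_dim))    # per-layer style codes
direction = rng.normal(size=style_dim)               # placeholder edit direction

# Learned attention logits over layers (placeholder values here).
logits = rng.normal(size=num_layers)
weights = np.exp(logits) / np.exp(logits).sum()      # softmax over layers

# "Adaptively select a single layer": take the argmax (hard selection) so the
# edit touches only one layer, leaving all other layers untouched.
layer = int(np.argmax(weights))
strength = 3.0                                       # user-chosen edit strength
edited = styles.copy()
edited[layer] += strength * direction

print(f"edited layer {layer} with attention weight {weights[layer]:.3f}")
```

In this reading, hard selection of one layer is what keeps the edit disentangled from attributes encoded at other layers; the actual mechanism in the paper may differ.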
Multi-Density Sketch-to-Image Translation Network
Sketch-to-image (S2I) translation plays an important role in image synthesis
and manipulation tasks, such as photo editing and colorization. Some specific
S2I translation including sketch-to-photo and sketch-to-painting can be used as
powerful tools in the art design industry. However, previous methods only
support S2I translation with a single level of density, which limits users'
flexibility in controlling the input sketches. In this work, we
propose the first multi-level density sketch-to-image translation framework,
which allows the input sketch to cover a wide range from rough object outlines
to micro structures. Moreover, to tackle the discontinuous representation of
multi-level density input sketches, we project the density level into a
continuous latent space that can be linearly controlled by a single parameter.
This allows users to conveniently control the density of the input sketches
and the corresponding image generation. Our method has also been successfully
verified on various datasets for different applications including face editing,
multi-modal sketch-to-photo translation, and anime colorization, providing
coarse-to-fine levels of control for these applications.
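
Below is a hedged sketch of the "project the density level into a continuous latent space" idea: discrete density levels get embeddings, and a scalar parameter linearly interpolates between neighbouring embeddings to give a smooth density code. All names, shapes, and the interpolation scheme are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

# Hypothetical sketch: discrete sketch-density levels (e.g. 0 = rough outline,
# 2 = micro-structure) are mapped to embeddings, and a continuous scalar
# t in [0, num_levels - 1] linearly interpolates between neighbouring
# embeddings, giving a smooth, user-controllable density code.

rng = np.random.default_rng(0)
num_levels, latent_dim = 3, 128                  # assumed sizes
level_embeddings = rng.normal(size=(num_levels, latent_dim))

def density_code(t: float) -> np.ndarray:
    """Linearly interpolate the density embedding for a continuous level t."""
    t = float(np.clip(t, 0.0, num_levels - 1))
    lo = int(np.floor(t))
    hi = min(lo + 1, num_levels - 1)
    alpha = t - lo
    return (1.0 - alpha) * level_embeddings[lo] + alpha * level_embeddings[hi]

# The resulting code would be fed to the generator alongside the sketch.
code = density_code(1.3)                         # between "medium" and "fine"
print(code.shape)
```

A single scalar like t above is one plausible way to realise the linearly controllable density parameter described in the abstract, giving users a coarse-to-fine slider rather than a fixed set of discrete density choices.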