Photorealistic Style Transfer with Screened Poisson Equation
Recent work has shown impressive success in transferring painterly style to
images. These approaches, however, fall short of photorealistic style transfer.
Even when both the input and reference images are photographs, the output still
exhibits distortions reminiscent of a painting. In this paper we propose an
approach that takes as input a stylized image and makes it more photorealistic.
It relies on the Screened Poisson Equation, maintaining the fidelity of the
stylized image while constraining the gradients to those of the original input
image. Our method is fast, simple, fully automatic and shows positive progress
in making a stylized image photorealistic. Our results exhibit finer details
and are less prone to artifacts than the state of the art.
Comment: presented in BMVC 201
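The idea in the abstract — keep the stylized image's colors while constraining its gradients to those of the original input — can be sketched as a screened Poisson solve in the Fourier domain. This is an illustrative sketch, not the authors' code: the fidelity weight `lam`, the periodic-boundary FFT solver, and the function name are our assumptions.

```python
import numpy as np

def screened_poisson(stylized, guide, lam=0.04):
    """Keep the stylized image's appearance, match the guide's gradients.

    Solves the screened Poisson equation per channel,
        lam * u - Laplacian(u) = lam * stylized - Laplacian(guide),
    in the Fourier domain (periodic boundaries assumed).
    """
    h, w = stylized.shape[:2]
    ky = np.fft.fftfreq(h)[:, None]
    kx = np.fft.fftfreq(w)[None, :]
    # DFT eigenvalues of the negated 5-point discrete Laplacian (>= 0)
    neg_lap = 4.0 - 2.0 * np.cos(2 * np.pi * ky) - 2.0 * np.cos(2 * np.pi * kx)
    denom = lam + neg_lap
    out = np.empty_like(stylized, dtype=np.float64)
    for c in range(stylized.shape[2]):
        s_hat = np.fft.fft2(stylized[..., c])
        g_hat = np.fft.fft2(guide[..., c])
        out[..., c] = np.real(np.fft.ifft2((lam * s_hat + neg_lap * g_hat) / denom))
    return out
```

As `lam` grows the output approaches the stylized image; as it shrinks the output follows the guide's gradients while keeping the stylized image's mean color.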
Recovering Faces from Portraits with Auxiliary Facial Attributes
Recovering a photorealistic face from an artistic portrait is a challenging
task since crucial facial details are often distorted or completely lost in
artistic compositions. To handle this loss, we propose an Attribute-guided Face
Recovery from Portraits (AFRP) that utilizes a Face Recovery Network (FRN) and
a Discriminative Network (DN). FRN consists of an autoencoder with residual
block-embedded skip-connections and incorporates facial attribute vectors into
the feature maps of input portraits at the bottleneck of the autoencoder. DN
has multiple convolutional and fully-connected layers, and its role is to
push FRN to generate authentic face images with the facial attributes
dictated by the input attribute vectors. Leveraging spatial transformer
networks, FRN automatically compensates for misalignments of portraits and
generates aligned face images. To preserve identity, we encourage the
recovered and ground-truth faces to share similar visual features.
Specifically, DN determines whether the recovered image looks like a real
face and checks whether the facial attributes extracted from the recovered
image are consistent with the given attributes. Our method can recover photorealistic
identity-preserving faces with desired attributes from unseen stylized
portraits, artistic paintings, and hand-drawn sketches. On large-scale
synthesized and sketch datasets, we demonstrate that our face recovery method
achieves state-of-the-art results.
Comment: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV)
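The abstract states that facial attribute vectors are incorporated into the feature maps at the autoencoder's bottleneck. One common way to do this — our assumption for illustration; the paper's exact mechanism may differ — is to broadcast each attribute to a constant spatial map and concatenate it along the channel axis:

```python
import numpy as np

def inject_attributes(features, attrs):
    """Concatenate an attribute vector onto bottleneck feature maps.

    features: (C, H, W) bottleneck activations
    attrs:    (K,) facial-attribute vector (binary or continuous)
    Returns a (C + K, H, W) tensor: each attribute becomes a constant
    spatial map stacked under the original channels.
    """
    c, h, w = features.shape
    attr_maps = np.broadcast_to(attrs[:, None, None], (attrs.shape[0], h, w))
    return np.concatenate([features, attr_maps.astype(features.dtype)], axis=0)
```

Downstream decoder layers can then condition on the attributes at every spatial location.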
Image Sampling with Quasicrystals
We investigate the use of quasicrystals in image sampling. Quasicrystals
produce space-filling, non-periodic point sets that are uniformly discrete and
relatively dense, thereby ensuring the sample sites are evenly spread out
throughout the sampled image. Their self-similar structure can be attractive
for creating sampling patterns endowed with a decorative symmetry. We present a
brief general overview of the algebraic theory of cut-and-project quasicrystals
based on the geometry of the golden ratio. To assess the practical utility of
quasicrystal sampling, we evaluate the visual effects of a variety of
non-adaptive image sampling strategies on photorealistic image reconstruction
and non-photorealistic image rendering used in multiresolution image
representations. For computer visualization of point sets used in image
sampling, we introduce a mosaic rendering technique.
Comment: For a full-resolution version of this paper, along with supplementary
materials, please visit http://www.Eyemaginary.com/Portfolio/Publications.htm
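As an illustration of the cut-and-project construction based on the golden ratio, here is a minimal sketch that generates the 1D Fibonacci chain from the integer lattice. The acceptance window (the projection of the unit cell) and the normalization are the standard textbook choices, not details taken from the paper.

```python
import numpy as np

PHI = (1 + 5 ** 0.5) / 2  # golden ratio

def fibonacci_chain(n_max=30):
    """1D cut-and-project quasicrystal (the Fibonacci chain).

    Take lattice points (a, b) in Z^2 and project them onto a line of
    slope 1/PHI (physical space). A point is accepted when its orthogonal
    (internal) coordinate falls inside the window obtained by projecting
    the unit cell; the half-open window avoids double-counting.
    """
    norm = np.hypot(PHI, 1.0)
    pts = []
    for a in range(-n_max, n_max + 1):
        for b in range(-n_max, n_max + 1):
            internal = (-a + PHI * b) / norm
            if -1.0 / norm <= internal < PHI / norm:  # acceptance window
                pts.append((PHI * a + b) / norm)       # physical coordinate
    return np.sort(np.array(pts))
```

The resulting point set is non-periodic yet uniformly discrete and relatively dense: consecutive gaps take exactly two values (long and short tiles) whose ratio is the golden ratio.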
Deep Photo Style Transfer
This paper introduces a deep-learning approach to photographic style transfer
that handles a large variety of image content while faithfully transferring the
reference style. Our approach builds upon the recent work on painterly transfer
that separates style from the content of an image by considering different
layers of a neural network. However, as is, this approach is not suitable for
photorealistic style transfer. Even when both the input and reference images
are photographs, the output still exhibits distortions reminiscent of a
painting. Our contribution is to constrain the transformation from the input to
the output to be locally affine in colorspace, and to express this constraint
as a custom fully differentiable energy term. We show that this approach
successfully suppresses distortion and yields satisfying photorealistic style
transfers in a broad variety of scenarios, including transfer of the time of
day, weather, season, and artistic edits.
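The locally affine colorspace constraint described above is commonly expressed through the Matting Laplacian from closed-form matting; the notation below is ours, introduced for illustration rather than quoted from the abstract:

```latex
\mathcal{L}_m \;=\; \sum_{c=1}^{3} V_c[O]^{\top}\, \mathcal{M}_I\, V_c[O],
\qquad
\frac{\partial \mathcal{L}_m}{\partial V_c[O]} \;=\; 2\,\mathcal{M}_I\, V_c[O],
```

where $\mathcal{M}_I$ is the Matting Laplacian of the input image $I$, and $V_c[O]$ is the vectorized channel $c$ of the output $O$. Because $\mathcal{M}_I$ penalizes outputs that are not locally affine functions of the input's colors, adding $\mathcal{L}_m$ to the style and content losses suppresses painting-like distortions; its gradient is fully differentiable, so the term plugs directly into gradient-based optimization.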