A geometrically aware auto-encoder for multi-texture synthesis
We propose an auto-encoder architecture for multi-texture synthesis. The
approach relies on both a compact encoder accounting for second-order neural
statistics and a generator incorporating adaptive periodic content. Images are
embedded in a compact and geometrically consistent latent space, where the
texture representation and its spatial organisation are disentangled. Texture
synthesis and interpolation tasks can be performed directly from these latent
codes. Our experiments demonstrate that our model outperforms state-of-the-art
feed-forward methods in terms of visual quality and various texture-related
metrics.
Comment: Error in table 1 corrected.
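In the texture-synthesis literature, the second-order neural statistics mentioned above are typically Gram matrices of convolutional feature maps. The following is a minimal sketch of such a descriptor, assuming a VGG-19 feature extractor; the backbone, layer cut, and image size are illustrative stand-ins, not the authors' architecture.

    import torch
    from torchvision.models import vgg19

    def gram_matrix(features: torch.Tensor) -> torch.Tensor:
        """Second-order statistics of a feature map: one (C x C) Gram matrix per image."""
        b, c, h, w = features.shape
        f = features.reshape(b, c, h * w)
        return f @ f.transpose(1, 2) / (c * h * w)

    # Illustrative texture descriptor: frozen VGG-19 features followed by Gram pooling.
    # weights=None keeps the sketch self-contained; pretrained weights would be used in practice.
    backbone = vgg19(weights=None).features[:21].eval()   # cut at relu4_1 (assumption)
    with torch.no_grad():
        img = torch.rand(1, 3, 256, 256)       # stand-in texture image
        code = gram_matrix(backbone(img))      # compact, spatially invariant statistic
    print(code.shape)                          # torch.Size([1, 512, 512])

Because the Gram matrix discards spatial position, it is a natural candidate for the part of a disentangled latent code that carries texture statistics rather than spatial organisation.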
Texture Mixer: A Network for Controllable Synthesis and Interpolation of Texture
This paper addresses the problem of interpolating visual textures. We
formulate this problem by requiring (1) by-example controllability and (2)
realistic and smooth interpolation among an arbitrary number of texture
samples. To solve it, we propose a neural network trained simultaneously on a
reconstruction task and a generation task, which can project texture examples
onto a latent space where they can be linearly interpolated and projected back
onto the image domain, thus ensuring both intuitive control and realistic
results. We show that our method outperforms a number of baselines according to a
comprehensive suite of metrics as well as a user study. We further show several
applications based on our technique, which include texture brush, texture
dissolve, and animal hybridization.
Comment: Accepted to CVPR'19.
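At inference time, the interpolation described above reduces to a convex combination of latent codes followed by decoding. Below is a toy sketch of that flow; the encoder and generator are hypothetical stand-ins, not the paper's networks.

    import torch
    import torch.nn as nn

    # Hypothetical stand-ins for the paper's encoder and generator.
    encoder = nn.Sequential(nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),
                            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                            nn.Linear(64, 128))                    # image -> latent code
    generator = nn.Sequential(nn.Linear(128, 3 * 64 * 64), nn.Tanh(),
                              nn.Unflatten(1, (3, 64, 64)))        # latent code -> image

    tex_a, tex_b = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
    z_a, z_b = encoder(tex_a), encoder(tex_b)

    # Smooth interpolation is a linear blend of codes, projected back to image space.
    for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
        z = (1 - alpha) * z_a + alpha * z_b
        blended = generator(z)
        print(alpha, blended.shape)            # (1, 3, 64, 64) at every blend weight

Training jointly on reconstruction and generation, as the abstract notes, is what makes intermediate codes decode to plausible textures rather than ghosted averages.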
TextureGAN: Controlling Deep Image Synthesis with Texture Patches
In this paper, we investigate deep image synthesis guided by sketch, color,
and texture. Previous image synthesis methods can be controlled by sketch and
color strokes, but we are the first to examine texture control. We allow a user
to place a texture patch on a sketch at arbitrary locations and scales to
control the desired output texture. Our generative network learns to synthesize
objects consistent with these texture suggestions. To achieve this, we develop
a local texture loss in addition to adversarial and content losses to train the
generative network. We conduct experiments using sketches generated from real
images and textures sampled from a separate texture database, and the results show
that our proposed algorithm is able to generate plausible images that are
faithful to user controls. Ablation studies show that our proposed pipeline can
generate more realistic images than adapting existing methods directly.
Comment: CVPR 2018 spotlight.
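A hedged sketch of how the three loss terms named above (adversarial, content, and local texture) might be combined follows; the patch-based Gram matching and the loss weights are illustrative assumptions, not the authors' exact formulation.

    import torch
    import torch.nn.functional as F

    def gram(f: torch.Tensor) -> torch.Tensor:
        b, c, h, w = f.shape
        f = f.reshape(b, c, h * w)
        return f @ f.transpose(1, 2) / (c * h * w)

    def local_texture_loss(fake_feats, ref_feats, top, left, size=32):
        """Match second-order statistics only inside the user-placed patch region."""
        fake_patch = fake_feats[..., top:top + size, left:left + size]
        ref_patch = ref_feats[..., top:top + size, left:left + size]
        return F.mse_loss(gram(fake_patch), gram(ref_patch))

    # Stand-in tensors for generator feature maps and a discriminator score.
    fake_feats = torch.rand(1, 64, 128, 128, requires_grad=True)
    ref_feats = torch.rand(1, 64, 128, 128)
    d_score_fake = torch.rand(1, 1)

    adv_loss = F.binary_cross_entropy(torch.sigmoid(d_score_fake),
                                      torch.ones_like(d_score_fake))
    content_loss = F.l1_loss(fake_feats, ref_feats)
    tex_loss = local_texture_loss(fake_feats, ref_feats, top=16, left=16)

    total = adv_loss + 10.0 * content_loss + 5.0 * tex_loss   # weights are placeholders
    print(float(total))

The key idea is that the texture term is evaluated only over the region where the user placed the texture patch, so the rest of the image remains governed by the content and adversarial terms.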