A survey of exemplar-based texture synthesis
Exemplar-based texture synthesis is the process of generating, from an input
sample, new texture images of arbitrary size and which are perceptually
equivalent to the sample. The two main approaches are statistics-based methods
and patch re-arrangement methods. In the first class, a texture is
characterized by a statistical signature; then, a random sampling conditioned
to this signature produces genuinely different texture images. The second class
boils down to a clever "copy-paste" procedure, which stitches together large
regions of the sample. Hybrid methods try to combine ideas from both approaches
to avoid their respective drawbacks. The recent approaches using convolutional
neural networks fit into this classification, some being statistical and others
performing patch re-arrangement in the feature space. They produce impressive
syntheses on various kinds of textures. Nevertheless, we found that most real
textures are organized at multiple scales, with global structures revealed at
coarse scales and highly varying details at finer ones. Thus, when confronted
with large natural texture images, the results of state-of-the-art methods
degrade rapidly, and the problem of modeling them remains wide open.
Comment: v2: Added comments and typo fixes. New section added to describe
FRAME. New method presented: CNNMR.
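The "copy-paste" family described above can be sketched in a few lines. The snippet below is a simplified image-quilting-style patch re-arrangement: tiles are placed left-to-right, top-to-bottom, and each new patch is the randomly sampled candidate whose overlap best matches what is already synthesized. All names, patch sizes, and the candidate-sampling scheme are illustrative assumptions, not any specific surveyed method.

```python
import numpy as np

def synthesize_patch(sample, out_size, patch=16, overlap=4, rng=None):
    """Minimal patch re-arrangement ("copy-paste") texture synthesis.

    A simplified quilting scheme: no min-cut blending, candidates are
    scored only by squared error on the left/top overlap regions.
    """
    rng = np.random.default_rng(rng)
    h, w = sample.shape
    step = patch - overlap
    out = np.zeros((out_size, out_size))
    ys = np.arange(0, h - patch + 1)  # candidate top-left corners
    xs = np.arange(0, w - patch + 1)
    n_cand = 50                       # random candidates scored per tile
    for i in range(0, out_size - patch + 1, step):
        for j in range(0, out_size - patch + 1, step):
            best, best_err = None, np.inf
            for _ in range(n_cand):
                y, x = rng.choice(ys), rng.choice(xs)
                cand = sample[y:y + patch, x:x + patch]
                err = 0.0
                if j > 0:  # left overlap is already synthesized
                    err += np.sum((out[i:i + patch, j:j + overlap]
                                   - cand[:, :overlap]) ** 2)
                if i > 0:  # top overlap is already synthesized
                    err += np.sum((out[i:i + overlap, j:j + patch]
                                   - cand[:overlap, :]) ** 2)
                if err < best_err:
                    best, best_err = cand, err
            out[i:i + patch, j:j + patch] = best
    return out
```

Because every output pixel is copied verbatim from the sample, local structures are reproduced exactly; the failure mode the survey points out (loss of coherent large-scale organization) comes from the purely local overlap scoring.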
Biologically Inspired Dynamic Textures for Probing Motion Perception
Perception is often described as a predictive process based on an optimal
inference with respect to a generative model. We study here the principled
construction of a generative model specifically crafted to probe motion
perception. In that context, we first provide an axiomatic, biologically-driven
derivation of the model. This model synthesizes random dynamic textures which
are defined by stationary Gaussian distributions obtained by the random
aggregation of warped patterns. Importantly, we show that this model can
equivalently be described as a stochastic partial differential equation. This
characterization of motion in images allows us to recast motion-energy models
into a principled Bayesian inference framework. Finally, we apply these
textures in order to psychophysically probe speed perception in humans. In this
framework, while the likelihood is derived from the generative model, the prior
is estimated from the observed results and accounts for the perceptual bias in
a principled fashion.Comment: Twenty-ninth Annual Conference on Neural Information Processing
Systems (NIPS), Dec 2015, Montreal, Canad
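The "random aggregation of warped patterns" idea can be illustrated with a toy sketch: sum many randomly placed, randomly sign-flipped blobs that all drift at a common speed. By the central limit theorem the aggregate approaches a stationary Gaussian random field. The Gaussian blob shape and every parameter below are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def dynamic_texture(n_frames=16, size=64, n_patterns=200,
                    speed=(1.0, 0.0), sigma=4.0, rng=None):
    """Toy dynamic texture from aggregated, translated (warped) patterns.

    Each pattern is a Gaussian blob with a random position and sign; the
    warp is a constant-speed translation on a torus (wrap-around domain).
    """
    rng = np.random.default_rng(rng)
    yy, xx = np.mgrid[:size, :size]
    frames = np.zeros((n_frames, size, size))
    pos = rng.uniform(0, size, size=(n_patterns, 2))
    sign = rng.choice([-1.0, 1.0], size=n_patterns)
    for t in range(n_frames):
        for (y0, x0), s in zip(pos, sign):
            # Torus distance to the blob's drifted center at time t.
            dy = (yy - (y0 + t * speed[0])) % size
            dx = (xx - (x0 + t * speed[1])) % size
            dy = np.minimum(dy, size - dy)
            dx = np.minimum(dx, size - dx)
            frames[t] += s * np.exp(-(dy ** 2 + dx ** 2) / (2 * sigma ** 2))
    # Normalize so the variance stays bounded as n_patterns grows.
    return frames / np.sqrt(n_patterns)
```

Increasing `n_patterns` makes the marginals more Gaussian while the translation speed fixes the spatiotemporal correlation structure, which is what makes such stimuli convenient probes for speed perception.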
Texture Synthesis Through Convolutional Neural Networks and Spectrum Constraints
This paper presents a significant improvement for the synthesis of texture
images using convolutional neural networks (CNNs), making use of constraints on
the Fourier spectrum of the results. More precisely, the texture synthesis is
regarded as a constrained optimization problem, with constraints conditioning
both the Fourier spectrum and statistical features learned by CNNs. In contrast
with existing methods, the presented method inherits from previous CNN
approaches the ability to depict local structures and fine scale details, and
at the same time yields coherent large scale structures, even in the case of
quasi-periodic images. This is done at no extra computational cost. Synthesis
experiments on various images show a clear improvement compared to a recent
state-of-the-art method relying on CNN constraints only.
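The spectrum constraint can be illustrated as a projection step: keep the Fourier phases of the current iterate and impose the exemplar's Fourier modulus. This is a sketch of the constraint alone, under the assumption of a single-channel image; the paper combines such a constraint with CNN feature statistics inside an optimization loop.

```python
import numpy as np

def project_spectrum(img, target):
    """Project `img` onto the set of images whose Fourier modulus equals
    that of `target`: keep the phases of `img`, impose `target`'s modulus.
    """
    F = np.fft.fft2(img)
    phase = F / np.maximum(np.abs(F), 1e-12)  # unit-modulus phase field
    G = np.abs(np.fft.fft2(target)) * phase   # swap in the target modulus
    # Both inputs are real, so G keeps conjugate symmetry and the inverse
    # transform is real up to floating-point error.
    return np.real(np.fft.ifft2(G))
```

Since the Fourier modulus encodes (quasi-)periodic structure, alternating such a projection with CNN-statistic matching is what lets the synthesis keep coherent large-scale organization at no extra cost per iteration.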