Perception Driven Texture Generation
This paper investigates a novel task of generating texture images from
perceptual descriptions. Previous work on texture generation focused on either
synthesis from examples or generation from procedural models. Generating
textures from perceptual attributes has not been well studied yet, even though
perceptual attributes such as directionality, regularity, and roughness are
important factors for human observers when describing a texture. In this paper, we
propose a joint deep network model that combines adversarial training and
perceptual feature regression for texture generation, requiring only random
noise and user-defined perceptual attributes as input. In this model, a
pre-trained convolutional neural network is integrated with the adversarial
framework to drive the generated textures to possess the given perceptual
attributes. An important property of the proposed model is that changing one
of the input perceptual attributes alters the corresponding appearance of the
generated textures. We design several experiments
to validate the effectiveness of the proposed method. The results show that the
proposed method can produce high quality texture images with desired perceptual
properties.
Comment: 7 pages, 4 figures, icme201
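To make the joint objective described above concrete, here is a minimal PyTorch sketch (not the authors' code) of a generator loss that combines an adversarial term with a perceptual-attribute regression term. The toy networks G, D, and P and all dimensions are illustrative assumptions.

```python
# Hedged sketch of the joint objective: adversarial loss + attribute regression.
# G, D, P and all sizes are toy stand-ins, not the paper's architecture.
import torch
import torch.nn as nn

NOISE, ATTRS = 100, 3                                   # e.g. directionality, regularity, roughness
G = nn.Sequential(nn.Linear(NOISE + ATTRS, 64 * 64), nn.Tanh())  # toy generator
D = nn.Sequential(nn.Linear(64 * 64, 1))                # toy discriminator (logit output)
P = nn.Sequential(nn.Linear(64 * 64, ATTRS))            # toy pre-trained attribute regressor
for p in P.parameters():                                # the regressor stays frozen
    p.requires_grad_(False)

adv_loss, reg_loss = nn.BCEWithLogitsLoss(), nn.MSELoss()

z = torch.randn(8, NOISE)                               # random noise input
a = torch.rand(8, ATTRS)                                # user-defined perceptual attributes
fake = G(torch.cat([z, a], dim=1))

# Generator update: fool the discriminator AND make the frozen regressor
# recover the requested attributes from the generated texture.
g_loss = adv_loss(D(fake), torch.ones(8, 1)) + reg_loss(P(fake), a)
g_loss.backward()
```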
A survey of exemplar-based texture synthesis
Exemplar-based texture synthesis is the process of generating, from an input
sample, new texture images of arbitrary size that are perceptually equivalent
to the sample. The two main approaches are statistics-based methods
and patch re-arrangement methods. In the first class, a texture is
characterized by a statistical signature; then, a random sampling conditioned
to this signature produces genuinely different texture images. The second class
boils down to a clever "copy-paste" procedure, which stitches together large
regions of the sample. Hybrid methods try to combine ideas from both approaches
to avoid their respective drawbacks. Recent approaches using convolutional
neural networks fit into this classification, some being statistical and
others performing patch re-arrangement in feature space. They produce
impressive synthesis results on various kinds of textures. Nevertheless, we found that most real
textures are organized at multiple scales, with global structures revealed at
coarse scales and highly varying details at finer ones. Thus, when confronted
with large natural images of textures, the results of state-of-the-art methods
degrade rapidly, and the problem of modeling them remains wide open.
Comment: v2: Added comments and typo fixes. New section added to describe
FRAME. New method presented: CNNMR
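As a rough illustration of the "copy-paste" idea behind patch re-arrangement methods, the following NumPy sketch tiles an output image with random patches from the exemplar. This is a toy version under stated assumptions; real methods such as image quilting instead select overlapping patches by matching error and stitch them along minimum-error seams.

```python
# Toy patch re-arrangement: tile an output image with random patches from the
# exemplar. Illustrative only; real methods match and stitch patches carefully.
import numpy as np

def random_patch_tiling(sample, out_size=256, patch=32, seed=0):
    rng = np.random.default_rng(seed)
    h, w = sample.shape[:2]
    out = np.zeros((out_size, out_size) + sample.shape[2:], dtype=sample.dtype)
    for y in range(0, out_size, patch):
        for x in range(0, out_size, patch):
            ph, pw = min(patch, out_size - y), min(patch, out_size - x)
            sy = rng.integers(0, h - ph + 1)            # random source location
            sx = rng.integers(0, w - pw + 1)
            out[y:y + ph, x:x + pw] = sample[sy:sy + ph, sx:sx + pw]
    return out

exemplar = np.random.rand(128, 128)                     # stand-in for a real texture sample
synthesized = random_patch_tiling(exemplar)
```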
Diversified Texture Synthesis with Feed-forward Networks
Recent progress on deep discriminative and generative modeling has shown
promising results on texture synthesis. However, existing feed-forward
methods trade generality for efficiency and suffer from several issues:
limited generality (i.e., one network must be built per texture), lack of
diversity (i.e., they always produce visually identical outputs), and
suboptimal quality (i.e., less satisfying visual effects). In this work, we focus on
solving these issues for improved texture synthesis. We propose a deep
generative feed-forward network which enables efficient synthesis of multiple
textures within one single network and meaningful interpolation between them.
In addition, a suite of important techniques is introduced to achieve better
convergence and diversity. With extensive experiments, we demonstrate the
effectiveness of the proposed model and techniques for synthesizing a large
number of textures, and we show its application to stylization.
Comment: accepted by CVPR201
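As an illustration of the single-network, multi-texture idea, the sketch below conditions a toy generator on a texture selection code; interpolating between two one-hot codes then blends the corresponding textures. The architecture and all names are assumptions, not the paper's design.

```python
# Toy multi-texture feed-forward generator. The selection code picks (or
# interpolates between) learned textures; the noise input provides diversity.
import torch
import torch.nn as nn

N_TEX, NOISE = 4, 64
gen = nn.Sequential(nn.Linear(NOISE + N_TEX, 256), nn.ReLU(),
                    nn.Linear(256, 3 * 32 * 32), nn.Tanh())

def synthesize(code, batch=1):
    z = torch.randn(batch, NOISE)                       # different z -> diverse outputs
    out = gen(torch.cat([z, code.expand(batch, -1)], dim=1))
    return out.view(batch, 3, 32, 32)

codes = torch.eye(N_TEX)
tex0 = synthesize(codes[0:1])                           # texture 0 only
blend = synthesize(0.5 * codes[0:1] + 0.5 * codes[1:2]) # interpolation between 0 and 1
```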
Automatic Model Based Dataset Generation for Fast and Accurate Crop and Weeds Detection
Selective weeding is one of the key challenges in the field of agriculture
robotics. To accomplish this task, a farm robot should be able to accurately
detect plants and to distinguish between crops and weeds. Most of the
promising state-of-the-art approaches make use of appearance-based models
trained on large annotated datasets. Unfortunately, creating large agricultural
datasets with pixel-level annotations is an extremely time-consuming task,
which penalizes the use of data-driven techniques. In this paper, we address
this problem by proposing a novel and effective approach that aims to
dramatically minimize the human intervention needed to train the detection and
classification algorithms. The idea is to procedurally generate large synthetic
training datasets by randomizing the key features of the target environment
(i.e., crop and weed species, soil type, lighting conditions). More
specifically, by tuning these model parameters and exploiting a few real-world
textures, it is possible to render a large number of realistic views of an
artificial agricultural scenario with minimal effort. The generated data can be directly used
to train the model or to supplement real-world images. We validate the
proposed methodology using a modern deep-learning-based image segmentation
architecture as a testbed. We compare the classification results obtained
using both real and synthetic images as training data. The reported results
confirm the effectiveness and the potential of our approach.
Comment: To appear in IEEE/RSJ IROS 201
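The domain-randomization idea can be sketched as follows: every scene parameter (soil appearance, plant placement, species, lighting) is drawn at random, and the generator emits pixel-level labels for free. This toy NumPy compositor is purely illustrative; the paper renders realistic views of an agricultural scenario rather than flat color blobs.

```python
# Toy procedural generator: randomized soil, plants, and lighting, with the
# pixel-level mask produced alongside the image at no extra annotation cost.
import numpy as np

def synth_sample(size=128, n_plants=6, seed=42):
    rng = np.random.default_rng(seed)
    img = np.clip(rng.uniform(0.2, 0.5)                 # randomized soil tone
                  + 0.05 * rng.standard_normal((size, size, 3)), 0, 1)
    mask = np.zeros((size, size), dtype=np.uint8)       # 0 = soil, 1 = crop, 2 = weed
    yy, xx = np.ogrid[:size, :size]
    for _ in range(n_plants):
        cy, cx = rng.integers(10, size - 10, size=2)    # random plant position
        r = rng.integers(4, 10)                         # random plant size
        cls = rng.integers(1, 3)                        # random species: crop or weed
        blob = (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
        img[blob] = (0.1, 0.6, 0.2) if cls == 1 else (0.4, 0.7, 0.1)
        mask[blob] = cls
    img = np.clip(img * rng.uniform(0.7, 1.1), 0, 1)    # randomized lighting
    return img, mask

image, labels = synth_sample()                          # an (image, pixel-mask) training pair
```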