Perception Driven Texture Generation
This paper investigates a novel task: generating texture images from
perceptual descriptions. Previous work on texture generation focused on either
synthesis from examples or generation from procedural models; generating
textures from perceptual attributes has not been well studied. Meanwhile,
perceptual attributes such as directionality, regularity, and roughness are
important factors for human observers when describing a texture. In this paper, we
propose a joint deep network model that combines adversarial training and
perceptual feature regression for texture generation, requiring only random noise
and user-defined perceptual attributes as input. In this model, a
pre-trained convolutional neural network is integrated with
the adversarial framework and drives the generated textures to possess the
given perceptual attributes. An important property of the proposed model is that
changing one of the input perceptual attributes changes the corresponding
appearance of the generated textures. We design several experiments
to validate the effectiveness of the proposed method. The results show that the
proposed method can produce high-quality texture images with the desired perceptual
properties.
Comment: 7 pages, 4 figures, icme201
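The abstract describes a generator objective with two parts: an adversarial term and a regression term that pushes a pretrained CNN's predicted perceptual attributes toward user-specified targets. A minimal sketch of such a combined loss, assuming NumPy inputs and hypothetical names (`d_fake`, `pred_attrs`, `target_attrs`, `lam` are illustrative, not from the paper):

```python
import numpy as np

def generator_loss(d_fake, pred_attrs, target_attrs, lam=1.0):
    """Combined generator objective sketched from the abstract.

    d_fake: discriminator scores on generated textures (probabilities).
    pred_attrs: perceptual attributes (e.g. directionality, regularity,
        roughness) predicted by a pretrained CNN on the generated textures.
    target_attrs: user-defined target attributes.
    lam: weight balancing the two terms (an assumption, not from the paper).
    """
    # Non-saturating adversarial term: reward fooling the discriminator.
    adv = -np.mean(np.log(d_fake + 1e-8))
    # Perceptual feature regression term: L2 distance to the target attributes.
    reg = np.mean((pred_attrs - target_attrs) ** 2)
    return adv + lam * reg
```

With a perfect fool (`d_fake = 1`) and matching attributes, the loss is near zero; mismatched attributes add a penalty scaled by `lam`.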
On Pixel-Based Texture Synthesis by Non-parametric Sampling
In this paper, we propose a pixel-based method for texture synthesis with non-parametric sampling. On top of the general framework of pixel-based approaches, our method has three distinguishing features: window size estimation, seed point planting, and iterative refinement. The size of a window is estimated to capture the structural components of the dominant scale embedded in the texture sample. To guide the pixel sampling process at the initial iteration, a grid of seed points is sampled from the example texture. Finally, an iterative refinement scheme is adopted to diffuse the non-stationarity artifact over the entire texture. Our objective is to enhance texture quality as much as possible with a minor sacrifice in efficiency, in order to support our conjecture that the pixel-based approach can yield high-quality images.
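The core of any pixel-based non-parametric approach like the one above is filling the output one pixel at a time by matching each pixel's already-synthesized neighborhood against the example texture. A minimal sketch of that baseline (this is not the authors' code; their window-size estimation, seed planting, and iterative refinement are omitted, and the fixed window radius `r` and greedy best-match rule are assumptions):

```python
import numpy as np

def synthesize(sample, out_shape, r=2, seed=0):
    """Fill the output in scan order; each new pixel copies the sample pixel
    whose causal (L-shaped) neighborhood of radius r best matches the
    already-filled output neighborhood, by sum of squared differences."""
    rng = np.random.default_rng(seed)
    sh, sw = sample.shape
    oh, ow = out_shape
    out = np.zeros(out_shape, dtype=float)
    filled = np.zeros(out_shape, dtype=bool)
    # Seed the first pixel from a random location in the example texture.
    out[0, 0] = sample[rng.integers(sh), rng.integers(sw)]
    filled[0, 0] = True
    for y in range(oh):
        for x in range(ow):
            if filled[y, x]:
                continue
            best, best_err = None, np.inf
            # Candidate source pixels whose causal window fits in the sample.
            for sy in range(r, sh):
                for sx in range(r, sw - r):
                    err = 0.0
                    for dy in range(-r, 1):
                        for dx in range(-r, r + 1):
                            if dy == 0 and dx >= 0:
                                break  # pixels at/after (y, x) are unfilled
                            yy, xx = y + dy, x + dx
                            if 0 <= yy < oh and 0 <= xx < ow and filled[yy, xx]:
                                err += (out[yy, xx] - sample[sy + dy, sx + dx]) ** 2
                    if err < best_err:
                        best_err, best = err, (sy, sx)
            out[y, x] = sample[best]
            filled[y, x] = True
    return out
```

The brute-force search is O(output pixels × sample pixels × window area), so this is only practical for tiny arrays; real implementations accelerate the neighborhood search and, as the paper argues, benefit from estimating the window size from the texture's dominant scale rather than fixing it.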