
    A Generative Model of Natural Texture Surrogates

    Natural images can be viewed as patchworks of different textures, where the local image statistics are roughly stationary within a small neighborhood but otherwise vary from region to region. To model this variability, we first applied the parametric texture algorithm of Portilla and Simoncelli to 64x64-pixel image patches from a large database of natural images, so that each patch is described by 655 texture parameters specifying certain statistics, such as variances and covariances of wavelet coefficients or coefficient magnitudes within that patch. To model the statistics of these texture parameters, we then developed suitable nonlinear transformations of the parameters that allowed us to fit their joint statistics with a multivariate Gaussian distribution. We find that the first 200 principal components contain more than 99% of the variance and are sufficient to generate textures that are perceptually extremely close to those generated with all 655 components. We demonstrate the usefulness of the model in several ways: (1) we sample ensembles of texture patches that can be directly compared to patches from the natural image database and reproduce their perceptual appearance to a high degree; (2) we develop an image compression algorithm that produces surprisingly accurate images at bit rates as low as 0.14 bits/pixel; and (3) we show how our approach can be used for an efficient and objective evaluation of samples generated with probabilistic models of natural images.
    Comment: 34 pages, 9 figures
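    As a rough illustration of the modeling pipeline described in this abstract, the sketch below fits a multivariate Gaussian in a 200-dimensional PCA subspace of precomputed texture parameters and samples new parameter vectors from it. The array name `params`, the placeholder `gaussianize` transformation, and the use of NumPy/scikit-learn are assumptions for illustration only; the actual Portilla-Simoncelli analysis and synthesis steps are not shown.

```python
# Minimal sketch, assuming `params` is an (N, 655) NumPy array of
# Portilla-Simoncelli texture parameters measured on 64x64 patches.
import numpy as np
from sklearn.decomposition import PCA

def gaussianize(params: np.ndarray) -> np.ndarray:
    # Placeholder for the paper's nonlinear parameter transformations that
    # make the joint statistics approximately Gaussian; identity here.
    return params

def fit_texture_model(params: np.ndarray, n_components: int = 200):
    """Fit a multivariate Gaussian in a reduced PCA subspace of the parameters."""
    z = gaussianize(params)
    pca = PCA(n_components=n_components)   # ~200 PCs retain >99% of the variance
    z_low = pca.fit_transform(z)
    mean = z_low.mean(axis=0)
    cov = np.cov(z_low, rowvar=False)
    return pca, mean, cov

def sample_texture_parameters(pca, mean, cov, n_samples: int = 1) -> np.ndarray:
    """Draw new 655-D texture-parameter vectors from the fitted model."""
    z_low = np.random.multivariate_normal(mean, cov, size=n_samples)
    return pca.inverse_transform(z_low)     # would then feed a texture synthesizer
```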

    Perception Driven Texture Generation

    This paper investigates the novel task of generating texture images from perceptual descriptions. Previous work on texture generation has focused on either synthesis from examples or generation from procedural models; generating textures from perceptual attributes has not been well studied, even though perceptual attributes such as directionality, regularity, and roughness are important factors for human observers describing a texture. We propose a joint deep network model that combines adversarial training with perceptual feature regression for texture generation, requiring only random noise and user-defined perceptual attributes as input. In this model, a pretrained convolutional neural network is integrated with the adversarial framework, driving the generated textures to possess the given perceptual attributes. An important property of the proposed model is that changing one of the input perceptual attributes changes the corresponding appearance of the generated textures. We design several experiments to validate the effectiveness of the proposed method; the results show that it produces high-quality texture images with the desired perceptual properties.
    Comment: 7 pages, 4 figures, icme201
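    A minimal sketch of the conditioning idea described in this abstract is given below: a generator receives random noise concatenated with a vector of perceptual attributes, and its loss combines an adversarial term with a regression term that pushes a pretrained attribute-prediction CNN's output toward the requested attributes. The PyTorch layer sizes, the 3-dimensional attribute vector (directionality, regularity, roughness), and the loss weight `lam` are illustrative assumptions, not the paper's actual architecture.

```python
# Sketch only: a conditional texture generator plus a combined loss,
# under assumed layer sizes and a 3-D perceptual attribute vector.
import torch
import torch.nn as nn

class PerceptualTextureGenerator(nn.Module):
    def __init__(self, noise_dim: int = 100, attr_dim: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + attr_dim, 256 * 8 * 8),
            nn.ReLU(inplace=True),
            nn.Unflatten(1, (256, 8, 8)),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1),  # 16x16
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),   # 32x32
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),     # 64x64
            nn.Tanh(),
        )

    def forward(self, noise: torch.Tensor, attrs: torch.Tensor) -> torch.Tensor:
        # Condition generation on the user-defined perceptual attributes by
        # concatenating them with the noise vector.
        return self.net(torch.cat([noise, attrs], dim=1))

def generator_loss(d_fake, predicted_attrs, target_attrs, lam: float = 1.0):
    """Adversarial term plus regression of attributes predicted by a frozen,
    pretrained attribute CNN toward the requested attributes."""
    adv = nn.functional.binary_cross_entropy_with_logits(
        d_fake, torch.ones_like(d_fake))
    reg = nn.functional.mse_loss(predicted_attrs, target_attrs)
    return adv + lam * reg
```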

    Visual Aftereffect Of Texture Density Contingent On Color Of Frame

    An aftereffect of perceived texture density contingent on the color of a surrounding region is reported. In a series of experiments, participants were adapted, with fixation, to stimuli in which the relative density of two achromatic texture regions was perfectly correlated with the color presented in a surrounding region. Following adaptation, the perceived relative density of the two regions was contingent on the color of the surrounding region or of the texture elements themselves. For example, if high density on the left was correlated with a blue surround during adaptation (and high density on the right with a yellow surround), then in order for the left and right textures to appear equal in the assessment phase, denser texture was required on the left in the presence of a blue surround (and denser texture on the right in the context of a yellow surround). Contingent aftereffects were found (1) with black-and-white scatter-dot textures, (2) with luminance-balanced textures, and (3) when the texture elements, rather than the surrounds, were colored during assessment. Effect size was decreased when the elements themselves were colored, but also when spatial subportions of the surround were used for the presentation of color. The effect may be mediated by retinal color spreading (Pöppel, 1986) and appears consistent with a local associative account of contingent aftereffects, such as Barlow's (1990) model of modifiable inhibition.