
    A survey of exemplar-based texture synthesis

    Exemplar-based texture synthesis is the process of generating, from an input sample, new texture images of arbitrary size that are perceptually equivalent to the sample. The two main approaches are statistics-based methods and patch re-arrangement methods. In the first class, a texture is characterized by a statistical signature; a random sampling conditioned on this signature then produces genuinely different texture images. The second class boils down to a clever "copy-paste" procedure that stitches together large regions of the sample. Hybrid methods try to combine ideas from both approaches to avoid their respective shortcomings. The recent approaches using convolutional neural networks fit into this classification, some being statistical and others performing patch re-arrangement in feature space. They produce impressive syntheses on various kinds of textures. Nevertheless, we found that most real textures are organized at multiple scales, with global structures revealed at coarse scales and highly varying details at finer ones. Thus, when confronted with large natural images of textures, the results of state-of-the-art methods degrade rapidly, and the problem of modeling them remains wide open.
    Comment: v2: added comments and typo fixes; new section added to describe FRAME; new method presented: CNNMR
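    As a concrete illustration of the patch re-arrangement ("copy-paste") family, here is a minimal sketch in the spirit of Efros-Leung/Efros-Freeman quilting: random candidate patches from the sample are matched against the already-synthesized overlap region. All function and parameter names are illustrative assumptions; this is a simplification, not any surveyed method's exact algorithm.

```python
# Minimal patch re-arrangement texture synthesis sketch (grayscale numpy array).
import numpy as np

def synthesize(sample, out_size=128, patch=16, overlap=4, n_candidates=200, seed=0):
    rng = np.random.default_rng(seed)
    H, W = sample.shape
    step = patch - overlap
    out = np.zeros((out_size, out_size), dtype=sample.dtype)
    for y in range(0, out_size - patch + 1, step):
        for x in range(0, out_size - patch + 1, step):
            # Draw random candidate patches from the sample.
            ys = rng.integers(0, H - patch, n_candidates)
            xs = rng.integers(0, W - patch, n_candidates)
            best, best_err = None, np.inf
            for cy, cx in zip(ys, xs):
                cand = sample[cy:cy + patch, cx:cx + patch]
                # Score each candidate against the already-synthesized overlap.
                err = 0.0
                if x > 0:
                    err += np.sum((cand[:, :overlap].astype(float)
                                   - out[y:y + patch, x:x + overlap]) ** 2)
                if y > 0:
                    err += np.sum((cand[:overlap, :].astype(float)
                                   - out[y:y + overlap, x:x + patch]) ** 2)
                if err < best_err:
                    best, best_err = cand, err
            out[y:y + patch, x:x + patch] = best
    # Note: a thin right/bottom border may remain unfilled in this sketch.
    return out
```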

    Scene classification from degraded images: comparing human and computer vision performance

    People can recognize the context of a scene with just a brief glance. Visual information such as color, objects and their properties, and texture are all important in correctly determining the type of scene (e.g. indoors versus outdoors). Although these properties are all useful, it is unclear which features of an image play a more important role in the task of scene recognition. To this end, we compare and contrast a state-of-the-art neural network and GIST model with human performance on the task of classifying images as indoors or outdoors. We analyze the impact of image manipulations, such as blurring and scrambling, on computational models of scene recognition and human perception. We then create and analyze a measure of local-global information to represent how each perceptual system relies on local and global image features. Finally, we train a variety of neural networks on degraded images in an attempt to build a neural network that emulates human performance on both classification accuracy and this local-global measure.
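    The blurring and scrambling manipulations mentioned above are typically implemented as a Gaussian low-pass filter and a block shuffle; the following is a hedged sketch of those common implementations (the paper's exact parameters are not given here, and all names are illustrative).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur(img, sigma=4.0):
    # Low-pass the image; larger sigma removes more local detail.
    return gaussian_filter(img, sigma=sigma)

def scramble(img, block=32, seed=0):
    # Cut the image into blocks and shuffle them, destroying global
    # layout while preserving local statistics.
    rng = np.random.default_rng(seed)
    h, w = img.shape[0], img.shape[1]
    h, w = h - h % block, w - w % block
    blocks = [img[y:y + block, x:x + block]
              for y in range(0, h, block)
              for x in range(0, w, block)]
    order = rng.permutation(len(blocks))
    cols = w // block
    out = np.zeros_like(img[:h, :w])
    for i, j in enumerate(order):
        y, x = divmod(i, cols)
        out[y * block:(y + 1) * block, x * block:(x + 1) * block] = blocks[j]
    return out
```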

    What image features guide lightness perception?

    Lightness constancy is the ability to perceive black and white surface colors under a wide range of lighting conditions. This fundamental visual ability is not well understood, and current theories differ greatly on what image features are important for lightness perception. Here we measured classification images for human observers and four models of lightness perception to determine which image regions influenced lightness judgments. The models were a high-pass-filter model, an oriented difference-of-Gaussians model, an anchoring model, and an atmospheric-link-function model. Human and model observers viewed three variants of the argyle illusion (Adelson, 1993) and judged which of two test patches appeared lighter. Classification images showed that human lightness judgments were based on local, anisotropic stimulus regions that were bounded by regions of uniform lighting. The atmospheric-link-function and anchoring models predicted the lightness illusion perceived by human observers, but the high-pass-filter and oriented-difference-of-Gaussians models did not. Furthermore, all four models produced classification images that were qualitatively different from those of human observers, meaning that the model lightness judgments were guided by different image regions than human lightness judgments. These experiments provide a new test of models of lightness perception, and show that human observers' lightness computations can be highly local, as in low-level models, and nevertheless depend strongly on lighting boundaries, as suggested by midlevel models.
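    For readers unfamiliar with the classification-image technique used here, the core computation is reverse correlation: average the per-trial noise fields conditioned on the observer's response. Below is a minimal sketch of the simple two-response form; the array names are illustrative, and actual studies often use a weighted combination over stimulus-response categories.

```python
import numpy as np

def classification_image(noise_fields, responses):
    """noise_fields: (n_trials, H, W) noise added to the stimulus on each trial;
    responses: (n_trials,) boolean, True when the observer chose 'lighter'."""
    noise = np.asarray(noise_fields, dtype=float)
    resp = np.asarray(responses, dtype=bool)
    # Pixels that push judgments toward 'lighter' show up positive here.
    return noise[resp].mean(axis=0) - noise[~resp].mean(axis=0)
```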

    Motion processing deficits in migraine are related to contrast sensitivity

    Background: There are conflicting reports concerning the ability of people with migraine to detect and discriminate visual motion. Previous studies used different displays, and none adequately assessed other parameters that could affect performance, such as those that could indicate precortical dysfunction. Methods: Motion-direction detection, discrimination, and relative motion thresholds were compared between participants with and without migraine. Potentially relevant visual covariates were included (contrast sensitivity; acuity; stereopsis; visual discomfort, stress, and triggers; dyslexia). Results: For each task, migraine participants were less accurate than a control group and showed impaired contrast sensitivity, greater visual discomfort, more visual stress, and more visual triggers. Only contrast sensitivity correlated with performance on each motion task; it also mediated performance. Conclusions: Impaired performance on certain motion tasks can be attributed to impaired contrast sensitivity early in the visual system rather than a deficit in cortical motion processing per se. There were, however, additional differences for global and relative motion thresholds embedded in noise, suggesting changes in extrastriate cortex in migraine. Tasks that probe the effects of noise on performance at different levels of the visual system and across modalities are recommended. A battery of standard visual tests should be included in any future work on the visual system and migraine.
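    The mediation claim above is a statistical one; a minimal Baron-Kenny style check of it might look like the following sketch, assuming per-participant arrays for group (0 = control, 1 = migraine), contrast sensitivity, and task accuracy. All names are illustrative; the paper's actual mediation analysis is not reproduced here.

```python
import numpy as np

def ols_coefs(X, y):
    # Least-squares coefficients for y ~ X (X includes an intercept column).
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def mediation(group, cs, task):
    group = np.asarray(group, float)
    cs = np.asarray(cs, float)
    task = np.asarray(task, float)
    ones = np.ones_like(group)
    c = ols_coefs(np.column_stack([ones, group]), task)[1]   # total effect
    a = ols_coefs(np.column_stack([ones, group]), cs)[1]     # group -> CS
    coef = ols_coefs(np.column_stack([ones, group, cs]), task)
    c_prime, b = coef[1], coef[2]                            # direct, mediator
    # If c_prime shrinks toward 0 while a*b is substantial, contrast
    # sensitivity mediates the group effect on task performance.
    return {"total": c, "direct": c_prime, "indirect": a * b}
```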

    Recent Progress in Image Deblurring

    This paper comprehensively reviews recent developments in image deblurring, covering non-blind/blind and spatially invariant/variant techniques. These techniques share the objective of inferring a latent sharp image from one or several corresponding blurry images, while blind deblurring techniques must additionally estimate an accurate blur kernel. Given the critical role of image restoration in modern imaging systems, which must deliver high-quality images under complex conditions such as motion, undesirable lighting, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how the ill-posedness central to deblurring is handled, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. Despite this progress, image deblurring, especially in the blind case, remains limited by complex application conditions that make the blur kernel spatially variant and hard to estimate. We provide a holistic understanding of and deep insight into image deblurring in this review. An analysis of the empirical evidence for representative methods, practical issues, and a discussion of promising future directions are also presented.
    Comment: 53 pages, 17 figures
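    For the non-blind setting, where the kernel is known, a classic baseline is Wiener deconvolution in the Fourier domain; below is a minimal hedged sketch for a grayscale image (not any surveyed method in particular), where the `balance` parameter stands in for the noise-to-signal ratio and all names are illustrative.

```python
import numpy as np

def wiener_deblur(blurry, kernel, balance=1e-2):
    H, W = blurry.shape
    # Zero-pad the kernel to the image size and center it at the origin.
    pad = np.zeros((H, W))
    kh, kw = kernel.shape
    pad[:kh, :kw] = kernel
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    K = np.fft.fft2(pad)
    B = np.fft.fft2(blurry)
    # Wiener filter: K* / (|K|^2 + balance) approximates the inverse filter
    # while damping frequencies where the kernel has little energy.
    X = np.conj(K) * B / (np.abs(K) ** 2 + balance)
    return np.real(np.fft.ifft2(X))
```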

    "Visual Affluence" in social photography: applicability of image segmentation as a visually oriented approach to study Instagram hashtags

    The aim of the study is to examine the applicability of image segmentation (identification of objects/regions by partitioning images) to the study of online social photography. We argue that the need for a meaning-independent reading of online social photography within social markers, such as hashtags, arises from two characteristics of social photography: 1) internal incongruence resulting from user-driven construction, and 2) variability of content in visual attributes such as colour combinations, brightness, and background detail. We propose visual affluence (the plenitude of visual stimuli, such as objects and surfaces containing a variety of colour regions, present in visual imagery) as a basis for classifying visual content, and image segmentation as a technique for measuring affluence. We demonstrate that images containing objects with complex texture and background patterns are more affluent, while images with blurry backgrounds are less affluent than others. Moreover, images that contain letters and dark, single-colour backgrounds are less affluent than images with subtle shades. Mann-Whitney U tests for nine pairs of hashtags showed that seven of the nine pairs differed significantly in visual affluence. The proposed measure can be used to encourage a 'visually oriented' turn in online social photography research, which can benefit from hybrid methods able to extrapolate micro-level findings to macro-level effects.
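    As an illustration of how such a segmentation-based affluence score might be computed, here is a minimal sketch using Felzenszwalb segmentation from scikit-image; the parameters and the region-count scoring rule are assumptions for illustration, not the paper's exact measure.

```python
import numpy as np
from skimage.segmentation import felzenszwalb
from scipy.stats import mannwhitneyu

def visual_affluence(image, scale=100, sigma=0.8, min_size=50):
    # Partition the image into perceptually coherent regions and use the
    # region count as a crude affluence score: busier images yield more regions.
    segments = felzenszwalb(image, scale=scale, sigma=sigma, min_size=min_size)
    return len(np.unique(segments))

# Comparing two hashtags, as in the abstract's Mann-Whitney U tests, where
# scores_a and scores_b would hold per-image affluence scores per hashtag:
# stat, p = mannwhitneyu(scores_a, scores_b, alternative="two-sided")
```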