
    The visual mismatch negativity is sensitive to symmetry as perceptual category

    We investigated the sensitivity of visual mismatch negativity (vMMN) to an abstract, non-semantic category: vertical mirror symmetry. Event-related potentials (ERPs) were recorded to random and symmetric square patterns delivered in a passive oddball paradigm (participants played a video game). In one condition, symmetric patterns were the frequent (standard) stimuli and random patterns the infrequent (deviant) stimuli; in the other condition, the probabilities were reversed. We compared the ERPs to symmetric stimuli presented as deviants and as standards, and likewise the ERPs to random deviants and random standards. The difference between the ERPs to random deviant and random standard stimuli showed a posterior negativity in two latency ranges (112–120 ms and 284–292 ms). These negativities were considered visual mismatch negativity (vMMN) components, and we suggest they are organized as a cascade of error signals. In contrast, there was no significant difference between the ERPs to symmetric deviants and symmetric standards. The emergence of vMMN to deviant random stimuli is interpreted as a response to a deviation from a perceptual category (within the sequence of symmetric standards). The random stimuli, by contrast, acquired no perceptual category, and for this reason the symmetric deviants (within the sequence of random standards) elicited no vMMN. The results show that the memory system underlying visual mismatch negativity can encode perceptual categories such as bilateral symmetry, even when the stimulus patterns are unrelated to ongoing behavior.
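    For readers unfamiliar with the oddball logic, the sketch below (Python, with placeholder data and an assumed sampling rate; not the authors' pipeline) illustrates how a vMMN estimate is typically derived: the ERP to a stimulus presented as a standard is subtracted from the ERP to the physically identical stimulus presented as a deviant, and the difference wave is summarized within the reported latency windows.

    import numpy as np

    # Minimal sketch (placeholder data, assumed 1000 Hz sampling): the vMMN is
    # estimated as the deviant-minus-standard difference wave for physically
    # identical stimuli, averaged over posterior channels.
    fs = 1000
    t = np.arange(-0.1, 0.5, 1 / fs)            # epoch time axis in seconds

    # hypothetical trial-averaged ERPs (posterior channels x time points)
    erp_random_deviant = np.random.randn(4, t.size)
    erp_random_standard = np.random.randn(4, t.size)

    diff_wave = (erp_random_deviant - erp_random_standard).mean(axis=0)

    def mean_in_window(wave, t, lo, hi):
        """Mean amplitude of the difference wave in a latency window (seconds)."""
        mask = (t >= lo) & (t <= hi)
        return wave[mask].mean()

    early_vmmn = mean_in_window(diff_wave, t, 0.112, 0.120)   # early range
    late_vmmn = mean_in_window(diff_wave, t, 0.284, 0.292)    # late range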

    A neural signature of the unique hues

    Since at least the 17th century, the idea has existed that there are four simple and perceptually pure "unique" hues: red, yellow, green, and blue, and that all other hues are perceived as mixtures of these four. However, sustained scientific investigation has not yet provided solid evidence for a neural representation that separates the unique hues from other colors. We measured event-related potentials elicited by unique hues and the 'intermediate' hues in between them. We find a neural signature of the unique hues 230 ms after stimulus onset, at a post-perceptual stage of visual processing. Specifically, the posterior P2 component over the parieto-occipital lobe peaked significantly earlier for the unique than for the intermediate hues (Z = -2.9, p = .004). Having identified a neural marker for unique hues, fundamental questions about the contributions of neural hardwiring, language, and environment to the unique hues can now be addressed.
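    A minimal sketch of this kind of analysis is given below; the per-subject ERPs, channel choice, and P2 search window are placeholder assumptions, and only the Wilcoxon signed-rank comparison of peak latencies mirrors the statistic quoted above.

    import numpy as np
    from scipy.stats import wilcoxon

    fs = 500
    t = np.arange(0, 0.6, 1 / fs)

    def p2_peak_latency(erp, t, lo=0.15, hi=0.30):
        """Latency of the positive peak within an assumed P2 search window."""
        mask = (t >= lo) & (t <= hi)
        return t[mask][np.argmax(erp[mask])]

    # hypothetical per-subject ERPs at a parieto-occipital channel
    n_subjects = 20
    unique_latencies = np.array([p2_peak_latency(np.random.randn(t.size), t)
                                 for _ in range(n_subjects)])
    intermediate_latencies = np.array([p2_peak_latency(np.random.randn(t.size), t)
                                       for _ in range(n_subjects)])

    # paired nonparametric comparison of peak latencies across hue categories
    stat, p = wilcoxon(unique_latencies, intermediate_latencies)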

    The development of contour processing: evidence from physiology and psychophysics

    Object perception and pattern vision depend fundamentally upon the extraction of contours from the visual environment. In adulthood, contour or edge-level processing is supported by the Gestalt heuristics of proximity, collinearity, and closure. Less is known, however, about the developmental trajectory of contour detection and contour integration. Within the physiology of the visual system, long-range horizontal connections in V1 and V2 are the likely candidates for implementing these heuristics. While post-mortem anatomical studies of human infants suggest that horizontal interconnections reach maturity by the second year of life, psychophysical research with infants and children suggests a considerably more protracted development. In the present review, data from infancy to adulthood are discussed in order to track the development of contour detection and integration. The goal of this review is thus to integrate the development of contour detection and integration with research on the development of the underlying neural circuitry. We conclude that the ontogeny of this system is best characterized as a developmentally extended period of associative acquisition whereby horizontal connectivity becomes functional over longer and longer distances, thus becoming able to integrate effectively over greater spans of visual space.

    A Computational Model of Visual Anisotropy

    Visual anisotropy has been demonstrated in multiple tasks where performance differs between vertical, horizontal, and oblique stimulus orientations. We explain some principles of visual anisotropy by anisotropic smoothing, based on a variation of Koenderink's approach in [1]. We tested the theory by presenting Gaussian elongated luminance profiles and measuring the perceived orientations by means of an adjustment task. Our framework is based on smoothing the image with elliptical Gaussian kernels, and it correctly predicted an illusory orientation bias towards the vertical axis. We discuss the scope of the theory in the context of other anisotropies in perception.
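    The sketch below illustrates the general idea under my own simplifying assumptions (an elliptical Gaussian smoothing kernel with a larger sigma along the vertical image axis, and orientation read out from image second moments); it is not the authors' model, but it shows how anisotropic smoothing can shift the estimated orientation of an elongated luminance profile.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def elongated_gaussian_profile(size=128, sigma_long=20, sigma_short=8, theta=0.3):
        """Gaussian luminance blob elongated along an axis rotated by theta (radians)."""
        y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
        xr = x * np.cos(theta) + y * np.sin(theta)
        yr = -x * np.sin(theta) + y * np.cos(theta)
        return np.exp(-(xr**2 / (2 * sigma_long**2) + yr**2 / (2 * sigma_short**2)))

    def principal_orientation(image):
        """Orientation of the image's principal axis from second-order moments."""
        y, x = np.mgrid[:image.shape[0], :image.shape[1]].astype(float)
        w = image / image.sum()
        mx, my = (w * x).sum(), (w * y).sum()
        mxx = (w * (x - mx) ** 2).sum()
        myy = (w * (y - my) ** 2).sum()
        mxy = (w * (x - mx) * (y - my)).sum()
        return 0.5 * np.arctan2(2 * mxy, mxx - myy)

    stim = elongated_gaussian_profile()
    # anisotropic kernel: stronger smoothing along the vertical (row) axis
    smoothed = gaussian_filter(stim, sigma=(6, 2))
    orientation_bias = principal_orientation(smoothed) - principal_orientation(stim)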

    Developing The Leuven Embedded Figures Test (L-EFT): Testing the stimulus features that influence embedding

    Background: The Embedded Figures Test (EFT), developed by Witkin and colleagues (1971), has been used extensively in research on individual differences, particularly in the study of autism spectrum disorder. The EFT was originally conceptualized as a measure of field (in)dependence, but in recent years performance on the EFT has been interpreted as a measure of local versus global perceptual style. Although many have used the EFT to measure perceptual style, relatively few have focused on understanding the stimulus features that cause a shape to become embedded. The primary aim of this work was to investigate the relation between the strength of embedding and perceptual grouping at the group level.
    Method: New embedded figure stimuli (both targets and contexts) were developed in which stimulus features that may influence perceptual grouping were explicitly manipulated. The symmetry, closure, and complexity of the target shape were manipulated, as well as its good continuation, by varying the number of lines from the target that continued into the context. We evaluated the effect of these four stimulus features on target detection in a new embedded figures task (the Leuven Embedded Figures Test, L-EFT) in a group of undergraduate psychology students. The results were then replicated in a second experiment using a slightly different version of the task.
    Results: Stimulus features that influence perceptual grouping, especially good continuation and symmetry, clearly affected performance (lower accuracy, slower response times) on the L-EFT. Closure did not yield results in line with our predictions.
    Discussion: These results show that some stimulus features known to affect perceptual grouping also influence how effectively a stimulus becomes embedded in different contexts. Whether these results imply that the EFT measures individual differences in perceptual grouping ability must be investigated further.

    Visual Exploration and Object Recognition by Lattice Deformation

    Mechanisms of explicit object recognition are often difficult to investigate and require stimuli with controlled features whose expression can be manipulated in a precise quantitative fashion. Here, we developed a novel method (called "Dots") for generating visual stimuli, based on the progressive deformation of a regular lattice of dots driven by local contour information from images of objects. By applying progressively larger deformation to the lattice, the lattice conveys progressively more information about the target object. Stimuli generated with this method enable precise control of object-related information content while preserving low-level image statistics globally and affecting them only slightly locally. We show that such stimuli are useful for investigating object recognition in a naturalistic setting – free visual exploration – enabling a clear dissociation between object detection and explicit recognition. Using the introduced stimuli, we show that top-down modulation induced by previous exposure to target objects can greatly influence perceptual decisions, lowering perceptual thresholds not only for object recognition but also for object detection (visual hysteresis). Visual hysteresis is target-specific, with its expression and magnitude depending on the identity of individual objects. Relying on the particular features of dot stimuli and on eye-tracking measurements, we further demonstrate that top-down processes guide visual exploration, controlling how visual information is integrated across successive fixations. Prior knowledge about objects can guide saccades and fixations to sample locations that are expected to be highly informative, even when the actual information is missing from those locations in the stimulus. The duration of individual fixations is modulated by the novelty and difficulty of the stimulus, likely reflecting cognitive demand.
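    As a rough illustration of the lattice-deformation idea (a toy version, not the published "Dots" algorithm), the sketch below displaces dots on a regular grid toward the nearest point of an object contour, with a distance falloff so that only dots near the contour move appreciably; the displacement strength plays the role of the parameter that controls how much object information the stimulus conveys.

    import numpy as np

    def deform_lattice(contour_xy, grid_step=10, size=200, strength=0.5, falloff=15.0):
        """Shift each lattice dot toward its nearest contour point.

        The shift is a fraction `strength` of the dot-to-contour vector,
        attenuated with distance so only dots near the contour move appreciably.
        """
        gx, gy = np.meshgrid(np.arange(0, size, grid_step),
                             np.arange(0, size, grid_step))
        dots = np.column_stack([gx.ravel(), gy.ravel()]).astype(float)
        dists = np.linalg.norm(dots[:, None, :] - contour_xy[None, :, :], axis=2)
        nearest = contour_xy[np.argmin(dists, axis=1)]
        weight = strength * np.exp(-dists.min(axis=1) / falloff)
        return dots + weight[:, None] * (nearest - dots)

    # hypothetical circular contour standing in for an object outline
    a = np.linspace(0, 2 * np.pi, 200, endpoint=False)
    contour = np.column_stack([100 + 40 * np.cos(a), 100 + 40 * np.sin(a)])
    weak_cue = deform_lattice(contour, strength=0.2)    # little object information
    strong_cue = deform_lattice(contour, strength=0.8)  # contour clearly conveyed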

    Local and global determinants of 2-D shape perception

    The purpose of this doctoral research is a better understanding of the mechanisms underlying 2-D shape perception. A crucial step in establishing a stable shape percept is to determine which regions of the visual input belong to the same object. A number of local and global grouping principles that shape our visual perception were postulated by the Gestaltists in the early twentieth century. We are interested in the processes behind these grouping principles and their contribution to figure-ground segregation and surface-contour integration. The primary focus of our work is on vertical mirror-symmetry, a salient shape property thought to contribute to perceptual grouping. Some Gestalt principles act on a local level (e.g. proximity), whereas others might require more global processing (e.g. closure). Vertical mirror-symmetry has the interesting feature that it can act both locally (symmetry of the constituting elements) and globally (symmetry of the global shape outline). We constructed stimuli in which both local and global shape characteristics can be manipulated. The same stimulus set allows for the study of surface-contour integration.
    In a first experiment we used a simple psychophysical task to investigate whether vertical mirror-symmetry acts as a cue in figure-ground segregation. We asked participants to indicate which of two sequentially presented Gabor arrays contained a visual shape. The shape was defined by a subset of Gabor elements positioned along the outline of an unfamiliar shape. By adding orientation noise to these Gabor elements, the shape percept became less salient. Across the different noise levels, symmetric shapes were easier to detect than asymmetric shapes. No interaction between local contour properties (i.e. good continuation) and global shape properties (i.e. symmetry) was present. Our results indicate that observers spontaneously use vertical mirror-symmetry as a cue in perceptual grouping. A manuscript of this study has recently been submitted for publication in the Journal of Vision. A simplified sketch of this type of stimulus is given after this abstract.
    In the first experiment we used the same noise levels for all participants. Because of large inter-individual differences in task performance, we adjusted the stimulus presentation time to the individual performance. In a second experiment we used the same presentation time for all participants, but adjusted the noise levels to the individual performance. Data for this experiment are currently being collected. Performance level in the above psychophysical tasks is only one way to measure the effect of symmetry on figure-ground segregation. Although significant, the observed effect in the first experiment was rather small. We therefore argued that reaction times might prove to be a more sensitive measure of the symmetry effect (third experiment).
    A second line of research focuses on the integration of surface and contour information. By adding orientation noise to the Gabor elements on the contour or in the interior of the shape, we could assess the contribution of both cues to figure-ground segregation. We applied an ideal observer model to the data obtained in the single-cue conditions and compared this model to the observed performance in the combined-cue condition. This allowed us to check whether the two sources of information (surface and contour) were combined in a statistically optimal way. Preliminary data showed that the combined-cue condition could outperform the ideal observer prediction, which argues for mutual reinforcement of the two information sources. We are currently running two separate experiments in this context. The fourth experiment focuses on the integration of symmetrical surface and contour elements, whereas the fifth experiment focuses on the integration of asymmetrical surface and contour elements. In an attempt to show that integration is a dynamic process that evolves over time, we will run a sixth experiment in which we manipulate the stimulus presentation time. We are planning one ERP experiment to study the difference between local and global grouping by symmetry. This could help us to gain further insight into the time course of perceptual grouping and its neural underpinnings. A second ERP study will focus on the integration of contour and surface information. As a final study we plan an fMRI experiment in the same context.
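    The sketch below shows one simplified way to parameterize such a Gabor-array stimulus (my own construction, not the exact generator used in these experiments): element positions lie on a jittered grid, elements nearest the shape outline are oriented tangentially to it with added orientation noise, and all remaining elements are oriented at random.

    import numpy as np

    rng = np.random.default_rng(0)

    def gabor_array(contour_xy, contour_tangent, n_grid=20, spacing=20, noise_sd_deg=15.0):
        """Return element positions and orientations (radians) for one array."""
        gx, gy = np.meshgrid(np.arange(n_grid) * spacing, np.arange(n_grid) * spacing)
        pos = np.column_stack([gx.ravel(), gy.ravel()]).astype(float)
        pos += rng.uniform(-spacing / 4, spacing / 4, pos.shape)  # positional jitter
        ori = rng.uniform(0, np.pi, len(pos))                     # random background

        # snap the elements closest to the outline onto it, oriented tangentially
        for p, tangent in zip(contour_xy, contour_tangent):
            i = np.argmin(np.linalg.norm(pos - p, axis=1))
            pos[i] = p
            ori[i] = tangent + np.deg2rad(rng.normal(0, noise_sd_deg))  # orientation noise
        return pos, ori

    # hypothetical outline: a circle, with tangents perpendicular to the radius
    a = np.linspace(0, 2 * np.pi, 24, endpoint=False)
    outline = np.column_stack([200 + 80 * np.cos(a), 200 + 80 * np.sin(a)])
    positions, orientations = gabor_array(outline, a + np.pi / 2)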

    Improved intersubject colocalization of functional activations by nonlinear fitting of individual contrast maps to a functional atlas

    A comparison of activation maps obtained with fMRI reveals substantial inter-individual variability in the anatomical location of activated areas. In group studies, functional data are smoothed with a Gaussian kernel to increase the functional overlap between activated areas from different subjects. We present an alternative approach to increase this overlap, based on nonlinear deformations of individual contrast maps to a sample-specific minimal deformation target. Functional images were obtained from six adult subjects passively viewing a short stimulus sequence. The stimuli were organized in a blocked design with three conditions: static objects, faces, and moving natural scenes. All functional images were spatially normalized to the standard MNI template. Individual statistical maps were calculated, contrasting each of the three conditions with a fixation condition. Deformation fields resulting from viscous fluid registrations between these individual contrast maps were obtained to create sample-specific minimal deformation targets for the three contrasts. For each subject we then calculated the average deformation needed to register each contrast map with its associated target. This subject-specific deformation field was subsequently applied to all functional images for that subject. This procedure does increase the inter-subject overlap in activated areas, as evidenced by a comparison between the results of fixed-effects group analyses on the deformed and on the undeformed functional images. The analysis of the functionally deformed images yields about one third more activated clusters than the analysis of the functionally undeformed images (33 versus 24). With functionally deformed images, the contrast between moving scenes and fixation reveals a number of areas that are not significantly activated with functionally undeformed images: hMT/V5, the frontal eye fields (FEF), and both the anterior and lateral dorsal intraparietal sulcus regions (DIPSA and DIPSL). We believe the enhanced functional overlap generated by this functional atlas fitting paradigm improves the analysis of functional group data. Moreover, this paradigm provides a means for directly mapping a functional brain atlas to individual contrast maps. As such, it may enable automated labeling of functional areas and thus provide a valuable diagnostic tool for functional patient studies.
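    A minimal numerical sketch of the atlas-fitting step is given below, with assumed array shapes and random placeholder data (the viscous fluid registration itself is not shown): deformation fields obtained per contrast are averaged within a subject, and the mean field is used to resample that subject's functional images.

    import numpy as np
    from scipy.ndimage import map_coordinates

    shape = (64, 64, 48)                          # hypothetical volume dimensions

    def apply_deformation(volume, field):
        """Resample a volume at voxel coordinates displaced by `field` (3, x, y, z)."""
        coords = np.indices(volume.shape) + field
        return map_coordinates(volume, coords, order=1, mode='nearest')

    # hypothetical deformation fields, one per contrast (objects, faces, moving scenes)
    fields = [np.random.randn(3, *shape) * 0.5 for _ in range(3)]
    mean_field = np.mean(fields, axis=0)          # subject-specific average deformation

    functional_volume = np.random.randn(*shape)   # placeholder functional image
    deformed = apply_deformation(functional_volume, mean_field)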

    Integration of contour and surface information in shape detection

    In studies of shape perception, the detection of contours and the segregation of regions enclosed by these contours have mostly been treated in isolation. However, contours and surfaces somehow need to be combined to create a stable perception of shape. In this study, we used a 2AFC task with arrays of oriented Gabor elements to determine whether and to what extent human observers integrate information from the contour and from the interior surface of a shape embedded in this array. The saliency of the shapes depended on the alignment of Gabors along the shape outline and on the isolinearity of Gabors inside the shape. In two experiments we measured the detectability of shapes defined by the contour cue, by the surface cue, and by the combination of both cues. As a first step, we matched performance in the two single-cue conditions. We then compared shape detectability in the double-cue condition with the two equally detectable single-cue conditions. Our results show a clear double-cue benefit: participants used both cues to detect the shapes. Next, we compared performance in the double-cue condition with the performance predicted by two models of sensory cue combination: a minimum rule (probability summation) and an integration rule (information summation). Results from Experiment 2 indicate that participants applied a combination rule that was better than mere probability summation. We found no evidence against the integration rule.
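    For reference, one common formulation of the two benchmark rules is sketched below; these are textbook-style definitions of my own and may differ in detail from the derivation used in the paper. Here p1 and p2 denote single-cue detection probabilities and d1, d2 the corresponding sensitivities.

    import numpy as np

    def probability_summation(p1, p2):
        """Minimum rule: the shape is detected if either cue alone is detected."""
        return 1 - (1 - p1) * (1 - p2)

    def information_summation(d1, d2):
        """Integration rule: sensitivities combine as d'_comb = sqrt(d1**2 + d2**2)."""
        return np.hypot(d1, d2)

    # example with matched single-cue performance
    print(probability_summation(0.5, 0.5))   # 0.75
    print(information_summation(1.0, 1.0))   # ~1.41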