
    Disambiguating Multi-Modal Scene Representations Using Perceptual Grouping Constraints

    In its early stages, the visual system suffers from considerable ambiguity and noise that severely limit the performance of early vision algorithms. This article presents feedback mechanisms between early visual processes, such as perceptual grouping, stereopsis and depth reconstruction, that allow the system to reduce this ambiguity and improve the early representation of visual information. In the first part, the article proposes a local perceptual grouping algorithm that, in addition to commonly used geometric information, makes use of a novel multi-modal measure between local edge/line features. The grouping information is then used to: 1) disambiguate stereopsis by enforcing that stereo matches preserve groups; and 2) correct the reconstruction error due to image pixel sampling using a linear interpolation over the groups. The integration of mutual feedback between early vision processes is shown to considerably reduce ambiguity and noise without the need for global constraints.
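    The two uses of grouping described in this abstract can be illustrated with a short, hypothetical sketch (the paper gives no code; names such as group_consistent_disparities, candidates and group_ids are invented for illustration): one function prefers stereo matches whose disparities are consistent within a perceptual group, and one replaces noisy per-feature depths by a straight-line fit along the group, a simplified stand-in for the linear interpolation over groups mentioned above.

```python
import numpy as np

def group_consistent_disparities(candidates, group_ids, iterations=2):
    """Pick, for each edge feature, the candidate disparity closest to the median
    disparity of the feature's perceptual group (a group-preserving matching).
    `candidates` is a list of candidate-disparity lists, one per feature."""
    disparity = np.array([c[0] for c in candidates], dtype=float)  # initial guesses
    for _ in range(iterations):
        for gid in np.unique(group_ids):
            members = np.where(group_ids == gid)[0]
            group_median = np.median(disparity[members])
            for i in members:
                # re-rank this feature's candidates by distance to the group median
                disparity[i] = min(candidates[i], key=lambda d: abs(d - group_median))
    return disparity

def interpolate_depth_along_group(positions, depths):
    """Fit a straight-line depth profile along the group's arc length, smoothing
    the quantisation error introduced by pixel sampling (a simplification of the
    article's interpolation scheme)."""
    steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    t = np.concatenate(([0.0], np.cumsum(steps)))
    slope, intercept = np.polyfit(t, depths, deg=1)
    return slope * t + intercept
```

    Calling interpolate_depth_along_group with the 2-D image positions of a group's features and their noisy reconstructed depths returns a smoothed depth per feature; the group-median re-ranking plays the role of the group-preservation constraint on stereo matches.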

    The Shift from Local to Global Visual Processing in 6-Year-Old Children Is Associated with Grey Matter Loss

    Background: A real-world visual scene consists of local elements (e.g. trees) that are arranged coherently into a global configuration (e.g. a forest). Children's visual preference evolves from local visual information to an adult-like preference for global visual information, with the transition occurring around 6 years of age. The brain regions involved in this shift in visual preference have not been described. Methods and Results: We used voxel-based morphometry (VBM) to study children during this developmental window and investigate the changes in grey matter that underlie the shift from a bias for local to global visual information. Six-year-old children were assigned to groups according to their judgment on a global/local task: the first group included children who still presented with local visual processing biases, and the second group included children who showed global visual processing biases. VBM results indicated that, compared to children with local visual processing biases, children with global visual processing biases had a loss of grey matter in right occipital and parietal visuospatial areas. Conclusions: These anatomical findings are in agreement with previous findings in children with neurodevelopmental disorders and represent the first structural identification of the brain regions that allow healthy children to develop a global perception of the visual world.

    Proprioceptive Movement Illusions Due to Prolonged Stimulation: Reversals and Aftereffects

    Background. Adaptation to constant stimulation has often been used to investigate the mechanisms of perceptual coding, but the adaptive processes within the proprioceptive channels that encode body movement have not been well described. We investigated them using vibration as a stimulus, because vibration of muscle tendons produces a powerful illusion of movement. Methodology/Principal Findings. We applied sustained 90 Hz vibratory stimulation to biceps brachii, an elbow flexor, in 12 participants and induced the expected illusion of elbow extension. There was clear evidence of adaptation to the movement signal both during the 6-minute vibration and on its cessation. During vibration, the strong initial illusion of extension waxed and waned, with diminishing duration of the periods of illusory movement and occasional reversals in the direction of the illusion. After vibration there was an aftereffect in which the stationary elbow seemed to move into flexion. Muscle activity showed no consistent relationship with the variations in perceived movement. Conclusion. We interpret the observed effects as adaptive changes in the central mechanisms that code movement in direction-selective opponent channels.

    Optimality of Human Contour Integration

    For processing and segmenting visual scenes, the brain is required to combine a multitude of features and sensory channels. It is not known whether these complex tasks involve optimal integration of information, nor according to which objectives such computations might be performed. Here, we investigate whether optimal inference can explain contour integration in human subjects. We performed experiments in which observers detected contours of curvilinearly aligned edge configurations embedded in randomly oriented distractors. The key feature of our framework is the use of a generative process for creating the contours, from which a class of ideal detection models can be derived. This allowed us to compare human detection of contours with different statistical properties to the corresponding ideal detection models for the same stimuli. We then subjected the detection models to realistic constraints and required them to reproduce human decisions for every stimulus as closely as possible. By independently varying the four model parameters, we identify a single detection model that quantitatively captures all correlations of human decision behaviour for more than 2,000 stimuli from 42 contour ensembles with greatly varying statistical properties. This model reveals specific interactions between edges that closely match independent findings from physiology and psychophysics. These interactions imply a statistics of contours for which edge stimuli are indeed optimally integrated by the visual system, with the objective of inferring the presence of contours in cluttered scenes. The recurrent algorithm of our model makes testable predictions about the temporal dynamics of neuronal populations engaged in contour integration, and it suggests a strong directionality of the underlying functional anatomy.
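    As an illustration only, the sketch below mimics the kind of generative contour process the abstract describes: edge elements are chained with small random changes of direction and embedded among randomly oriented distractors. It is not the authors' generative model, and the function name, spacing and jitter parameters are arbitrary illustrative choices.

```python
import numpy as np

def make_contour_stimulus(n_contour=8, n_distractors=200, spacing=1.0,
                          curvature_jitter=np.pi / 8, field_size=20.0, seed=None):
    """Return (contour, distractors), each a list of (position, orientation) pairs."""
    rng = np.random.default_rng(seed)
    direction = rng.uniform(0.0, 2.0 * np.pi)
    pos = rng.uniform(0.3 * field_size, 0.7 * field_size, size=2)
    contour = []
    for _ in range(n_contour):
        contour.append((pos.copy(), direction % np.pi))   # edge aligned with the path
        direction += rng.uniform(-curvature_jitter, curvature_jitter)
        pos = pos + spacing * np.array([np.cos(direction), np.sin(direction)])
    # distractors: random positions and orientations across the whole field
    distractors = [(rng.uniform(0.0, field_size, size=2), rng.uniform(0.0, np.pi))
                   for _ in range(n_distractors)]
    return contour, distractors
```

    Under a generative process of this kind, an ideal detector compares the likelihood of a candidate chain of edges under the contour model against the distractor model; larger curvature_jitter yields ensembles in which contours are statistically harder to detect.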

    Predicting Eye Fixations on Complex Visual Stimuli Using Local Symmetry

    Most bottom-up models that predict human eye fixations are based on contrast features; the saliency model of Itti, Koch and Niebur is an example of such contrast-saliency models. Although the model has been successfully compared to human eye fixations, we show that it lacks precision in predicting fixations on mirror-symmetrical forms: the contrast model gives high responses at the borders of these forms, whereas human observers consistently look at their symmetrical center. We propose a saliency model that predicts eye fixations using local mirror symmetry. To test the model, we performed an eye-tracking experiment with participants viewing complex photographic images and compared the data with our symmetry model and the contrast model. The results show that our symmetry model predicts human eye fixations significantly better on a wide variety of images, including many that were not selected for their symmetrical content. Moreover, our results show that especially the early fixations fall on highly symmetrical areas of the images. We conclude that symmetry is a strong predictor of human eye fixations and that it can be used as a predictor of the order of fixations.
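    As a rough illustration (not the published model, whose operator and parameters are defined in the paper), a local mirror-symmetry map can be computed from image gradients: pairs of pixels on opposite sides of a point contribute strongly when their gradient orientations mirror each other about the axis joining them. The neighbourhood radius and weighting below are illustrative assumptions.

```python
import numpy as np

def symmetry_saliency(image, radius=5):
    """Crude gradient-based local mirror-symmetry map (values scaled to [0, 1])."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    theta = np.arctan2(gy, gx)
    saliency = np.zeros_like(mag)
    # take one offset from each mirrored pair of neighbours
    offsets = [(dy, dx) for dy in range(-radius, radius + 1)
                        for dx in range(-radius, radius + 1)
               if dy > 0 or (dy == 0 and dx > 0)]
    for dy, dx in offsets:
        axis = np.arctan2(dy, dx)                       # direction joining the pair
        th_a = np.roll(theta, (-dy, -dx), axis=(0, 1))  # gradient angle at p + offset
        th_b = np.roll(theta, (dy, dx), axis=(0, 1))    # gradient angle at p - offset
        m_a = np.roll(mag, (-dy, -dx), axis=(0, 1))
        m_b = np.roll(mag, (dy, dx), axis=(0, 1))
        # weight is high when the two gradients are mirror-symmetric about the axis
        weight = (1 - np.cos(th_a + th_b - 2 * axis)) * (1 - np.cos(th_a - th_b))
        saliency += weight * m_a * m_b
    return saliency / (saliency.max() + 1e-12)  # note: np.roll wraps at the borders
```

    Applied to an intensity image, the peaks of such a map tend to fall at the centres of mirror-symmetric structures rather than at their contrast edges, which is the behaviour the abstract contrasts with contrast-based saliency.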

    Gestalt structures in multi-person intersubjectivity

    In this paper I argue that there are gestalt principles underlying intersubjective interactions and that this means that intersubjective ‘units’ can be recognised as unified gestalt wholes. The nub of the claim is that interactions within a ‘plural subject’ can be perceived by others outside this plural subject. Framed from the first-person perspective: I am able to recognise intersubjective interactions between multiple others who are not me. I argue that the terminology of gestalt structures is helpful in framing and understanding the non-reducible make-up of these relational units. I consequently defend the legitimacy of the claim that we can attend to more than one other person at once, holding multiple others as a single focus of attention insofar as we can attend to multiple others as a gestalt whole. I argue that it is therefore legitimate to talk about attending to, perceiving and addressing multiple others at the same time, in the second-person plural. I argue that this can be identified in the phenomenology of such interactions and in an analysis of the core underlying structures of these interactions.