
    Bayesian models of eye movement selection with retinotopic maps

    Among the various possible criteria guiding eye movement selection, we investigate the role of position uncertainty in the peripheral visual field. In particular, we suggest that, in everyday-life situations of object tracking, eye movement selection probably includes a principle of uncertainty reduction. To evaluate this hypothesis, we compare the movement predictions of computational models with human results from a psychophysical task. This task is a freely moving eye version of the Multiple Object Tracking task, in which eye movements may be used to compensate for low peripheral resolution. We design several Bayesian models of eye movement selection of increasing complexity, whose layered structures are inspired by the neurobiology of the brain areas implicated in this process. Finally, we compare the relative performances of these models with regard to the prediction of the recorded human movements, and show th…
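The uncertainty-reduction principle described above can be sketched in a few lines. This is an illustrative toy, not the authors' model: it assumes position noise grows linearly with retinal eccentricity, and the gaze target is chosen to minimise the summed uncertainty over tracked objects (the `base_sigma` and `slope` parameters are invented for the example).

```python
import numpy as np

def positional_uncertainty(fixation, targets, base_sigma=0.5, slope=0.1):
    """Summed position-noise std dev over targets.

    Noise is assumed to grow linearly with eccentricity (distance from
    the fixation point), standing in for low peripheral resolution.
    """
    ecc = np.linalg.norm(targets - fixation, axis=1)
    return np.sum(base_sigma + slope * ecc)

def select_fixation(targets, candidates):
    """Choose the candidate fixation minimising total uncertainty."""
    costs = [positional_uncertainty(c, targets) for c in candidates]
    return candidates[int(np.argmin(costs))]

# Three tracked objects and three candidate gaze targets (degrees).
targets = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])
candidates = np.array([[0.0, 0.0], [2.0, 1.0], [4.0, 0.0]])
best = select_fixation(targets, candidates)  # the central candidate wins
```

Under this cost, the preferred fixation sits near the centroid of the tracked objects, which is the qualitative behaviour an uncertainty-reduction policy predicts.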

    The reference frame for encoding and retention of motion depends on stimulus set size

    The goal of this study was to investigate the reference frames used in perceptual encoding and storage of visual motion information. In our experiments, observers viewed multiple moving objects and reported the direction of motion of a randomly selected item. Using a vector-decomposition technique, we computed performance during smooth pursuit with respect to a spatiotopic (nonretinotopic) and to a retinotopic component and compared them with performance during fixation, which served as the baseline. For the stimulus encoding stage, which precedes memory, we found that the reference frame depends on the stimulus set size. For a single moving target, the spatiotopic reference frame had the most significant contribution with some additional contribution from the retinotopic reference frame. When the number of items increased (set sizes 3 to 7), the spatiotopic reference frame alone was able to account for the performance. Finally, when the number of items became larger than 7, the distinction between reference frames vanished. We interpret this finding as a switch to a more abstract nonmetric encoding of motion direction. We found that the retinotopic reference frame was not used in memory. Taken together with other studies, our results suggest that, whereas a retinotopic reference frame may be employed for controlling eye movements, perception and memory use primarily nonretinotopic reference frames. Furthermore, the use of nonretinotopic reference frames appears to be capacity limited. In the case of complex stimuli, the visual system may use perceptual grouping in order to simplify the complexity of stimuli, or resort to a nonmetric abstract coding of motion information.
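The vector-decomposition idea can be illustrated with a least-squares sketch. The setup and weights below are synthetic, not the study's data: during pursuit at eye velocity e, a world motion v projects onto the retina as v - e, so the spatiotopic and retinotopic predictions differ and their relative contributions to the reports can be estimated.

```python
import numpy as np

def decompose(reported, spatiotopic, retinotopic):
    """Least-squares weights of the two reference-frame predictions."""
    X = np.column_stack([spatiotopic, retinotopic])
    w, *_ = np.linalg.lstsq(X, reported, rcond=None)
    return w  # [spatiotopic weight, retinotopic weight]

eye = np.array([1.0, 0.0])                               # pursuit velocity
world = np.array([[0.0, 1.0], [1.0, 1.0], [-1.0, 0.5]])  # spatiotopic motions
retinal = world - eye                                    # retinotopic projections
# Synthetic observer whose reports mix the two frames 80/20.
reported = 0.8 * world + 0.2 * retinal
w = decompose(reported.ravel(), world.ravel(), retinal.ravel())
```

Because the two predictions are linearly independent vectors, the fitted weights recover the mixing proportions exactly in this noiseless example.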

    Spatial Updating in Human Cortex

    Single neurons in several cortical areas in monkeys update visual information in conjunction with eye movements. This remapping of stimulus representations is thought to contribute to spatial constancy. The central hypothesis here is that spatial updating also occurs in humans and that it can be visualized with functional MRI. In Chapter 2, we describe experiments in which we tested the role of human parietal cortex in spatial updating. We scanned subjects during a task that involved remapping of visual signals across hemifields. This task is directly analogous to the single-step saccade task used to test spatial updating in monkeys. We observed an initial response in the hemisphere contralateral to the visual stimulus, followed by a remapped response in the hemisphere ipsilateral to the stimulus. Our results demonstrate that updating of visual information occurs in human parietal cortex and can be visualized with fMRI. The experiments in Chapter 2 show that updated visual responses have a characteristic latency and response shape. Chapter 3 describes a statistical model for estimating these parameters. The method is based on a nonlinear, fully Bayesian, hierarchical model that decomposes the fMRI time series data into baseline, smooth drift, activation signal, and noise. This chapter shows that this model performs well relative to commonly used general linear models. In Chapter 4, we use the statistical method described in Chapter 3 to test for the presence of spatial updating activity in human extrastriate visual cortex. We identified the borders of several retinotopically defined visual areas in the occipital lobe. We then tested for spatial updating using the single-step saccade task. We found a roughly monotonic relationship between the strength of updating activity and position in the visual area hierarchy. We observed the strongest responses in area V4 and the weakest in V1. We conclude that updating is not restricted to brain regions involved primarily in attention and the generation of eye movements, but rather is present in occipital lobe visual areas as well.
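As a rough, non-Bayesian stand-in for the decomposition described in Chapter 3, an ordinary least-squares fit can separate a synthetic fMRI time series into baseline, linear drift, and a boxcar activation term. All signals and parameters here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120
t = np.arange(n, dtype=float)

baseline = 100.0
drift = 0.05 * t                       # slow scanner drift
activation = np.zeros(n)
activation[40:60] = 2.0                # boxcar "activation" epoch
y = baseline + drift + activation + rng.normal(0.0, 0.1, n)

# Design matrix: intercept, linear drift, boxcar regressor.
X = np.column_stack([np.ones(n), t, (activation > 0).astype(float)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta ~ [baseline, drift slope, activation amplitude]
```

The hierarchical Bayesian model in the thesis additionally yields posterior uncertainty and a smooth (rather than linear) drift term, which is where it outperforms a plain general linear model of this kind.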

    Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements

    How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations.

    Action and behavior: a free-energy formulation

    We have previously tried to explain perceptual inference and learning under a free-energy principle that pursues Helmholtz’s agenda to understand the brain in terms of energy minimization. It is fairly easy to show that making inferences about the causes of sensory data can be cast as the minimization of a free-energy bound on the likelihood of sensory inputs, given an internal model of how they were caused. In this article, we consider what would happen if the data themselves were sampled to minimize this bound. It transpires that the ensuing active sampling or inference is mandated by ergodic arguments based on the very existence of adaptive agents. Furthermore, it accounts for many aspects of motor behavior, from retinal stabilization to goal-seeking. In particular, it suggests that motor control can be understood as fulfilling prior expectations about proprioceptive sensations. This formulation can explain why adaptive behavior emerges in biological agents and suggests a simple alternative to optimal control theory. We illustrate these points using simulations of oculomotor control and then apply the same principles to cued and goal-directed movements. In short, the free-energy formulation may provide an alternative perspective on motor control that places it in an intimate relationship with perception.
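The claim that motor control fulfils prior expectations about proprioception can be caricatured in a few lines: action performs gradient descent on a prediction-error term until the sensed state matches the desired (predicted) one. This is a schematic toy, not the paper's generative model:

```python
def simulate(desired=1.0, steps=200, lr=0.1):
    """Drive a hidden state so its sensed value matches a prior expectation."""
    x = 0.0                        # hidden state, e.g. eye position
    for _ in range(steps):
        sensed = x                 # proprioceptive signal (identity mapping)
        error = sensed - desired   # prediction error (unit precision)
        x -= lr * error            # action descends the error gradient
    return x

final = simulate()  # converges towards the desired value 1.0
```

The point of the caricature is that no explicit cost-to-go or inverse model appears anywhere; the "controller" is just the suppression of proprioceptive prediction error, which is the contrast the paper draws with optimal control theory.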

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation [Landman et al., 2003, Vision Research 43: 149–164]. Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in, and retrieved from, a pre-attentional store during this task.

    Explicit uncertainty for eye movement selection

    ISBN: 978-2-9532965-0-1
    In this paper, we consider the issue of the selection of eye movements in an eye-free Multiple Object Tracking task. We propose a Bayesian model of retinotopic maps with a complex logarithmic mapping. This model is structured in two parts: a representation of the visual scene, and a decision model based on that representation. We compare different decision models based on different features of the representation and show that taking uncertainty into account helps predict the eye movements of subjects recorded in a psychophysics experiment.
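The complex logarithmic mapping referred to here is conventionally written w = log(z + a), where z is a visual-field location expressed as a complex number and the parameter a sets the foveal magnification; the value a = 0.6 used below is illustrative, not taken from the paper:

```python
import numpy as np

def retino_cortical(z, a=0.6):
    """Complex log map from visual field (complex z) to a 'cortical' plane."""
    return np.log(z + a)

def magnification(z, a=0.6):
    """Local magnification |dw/dz| = 1 / |z + a|: largest at the fovea."""
    return 1.0 / abs(z + a)

fovea = retino_cortical(0 + 0j)       # near the foveal singularity
periphery = retino_cortical(10 + 0j)  # an eccentric location
```

The 1/|z + a| magnification factor is what gives peripheral locations coarse cortical coverage, which is exactly the position uncertainty the decision model exploits.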

    Active inference and the anatomy of oculomotion

    Given that eye movement control can be framed as an inferential process, how are the requisite forces generated to produce anticipated or desired fixation? Starting from a generative model based on simple Newtonian equations of motion, we derive a variational solution to this problem and illustrate the plausibility of its implementation in the oculomotor brainstem. We show, through simulation, that the Bayesian filtering equations that implement ‘planning as inference’ can generate both saccadic and smooth pursuit eye movements. Crucially, the associated message passing maps well onto the known connectivity and neuroanatomy of the brainstem – and the changes in these messages over time are strikingly similar to single unit recordings of neurons in the corresponding nuclei. Furthermore, we show that simulated lesions to axonal pathways reproduce eye movement patterns of neurological patients with damage to these tracts
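A minimal stand-in for the Bayesian filtering scheme is a 1-D Kalman filter over target position and velocity; its velocity estimate could drive smooth pursuit, while a large position error would trigger a catch-up saccade. All dynamics and noise parameters below are illustrative assumptions, not the paper's generative model:

```python
import numpy as np

dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity dynamics
H = np.array([[1.0, 0.0]])               # only position is observed
Q = 1e-4 * np.eye(2)                     # process noise
R = np.array([[1e-2]])                   # measurement noise

def kf_step(x, P, z):
    """One predict/update cycle of the Kalman filter."""
    x = F @ x                            # predict state
    P = F @ P @ F.T + Q                  # predict covariance
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (np.array([z]) - H @ x)  # correct with the measurement
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
for k in range(500):                     # target moving at 5 deg/s
    x, P = kf_step(x, P, 5.0 * k * dt)
# x[1] is the velocity estimate a pursuit command could follow
```

The paper's scheme goes further, embedding such filtering equations in a message-passing architecture mapped onto brainstem nuclei, but the predict/correct structure is the shared core.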

    The interaction between human vision and eye movements in health and disease

    Human motor behaviour depends on the successful integration of vision and eye movements. Many studies have investigated neural correlates of visual processing in humans, but typically with the eyes stationary and fixated centrally. Similarly, many studies have sought to characterise which brain areas are responsible for oculomotor control, but generally in the absence of visual stimulation. The few studies to explicitly examine the interaction between visual perception and eye movements suggest strong influences of both static and dynamic eye position on visual processing, and modulation of oculomotor structures by properties of visual stimuli. However, the neural mechanisms underlying these interactions are poorly understood. This thesis uses a range of fMRI methodologies, such as retinotopic mapping, multivariate analysis techniques, dynamic causal modelling and ultra-high-resolution imaging, to examine the interactions between the oculomotor and visual systems in the normal human brain. The results of the experiments presented in this thesis demonstrate that oculomotor behaviour has complex effects on activity in visual areas, while spatial properties of visual stimuli modify activity in oculomotor areas. Specifically, responses in the lateral geniculate nucleus and early cortical visual areas are modulated by saccadic eye movements (a process potentially mediated by the frontal eye fields) and by changes in static eye position. Additionally, responses in oculomotor structures such as the superior colliculus are biased for visual stimuli presented in the temporal rather than nasal hemifield. These findings reveal that although the visual and oculomotor systems are spatially segregated in the brain, they show a high degree of integration at the neural level. This is consistent with our everyday experience of the visual world, in which frequent eye movements do not disrupt visual continuity and visual information is seamlessly transformed into motor behaviour.