
    Neural Dynamics of Saccadic and Smooth Pursuit Eye Movement Coordination during Visual Tracking of Unpredictably Moving Targets

    How does the brain use eye movements to track objects that move in unpredictable directions and at unpredictable speeds? Saccadic eye movements rapidly foveate peripheral visual or auditory targets, and smooth pursuit eye movements keep the fovea pointed toward an attended moving target. Analyses of tracking data in monkeys and humans reveal systematic deviations from predictions of the simplest model of saccade-pursuit interactions, which would use no interactions other than common target selection and recruitment of shared motoneurons. Instead, saccadic and smooth pursuit movements cooperate to cancel errors of gaze position and velocity, and thus to maximize target visibility through time. How are these two systems coordinated to promote visual localization and identification of moving targets? How are saccades calibrated to correctly foveate a target despite its continued motion during the saccade? A neural model proposes answers to such questions. The modeled interactions encompass motion processing areas MT, MST, FPA, DLPN and NRTP; saccade planning and execution areas FEF and SC; the saccadic generator in the brain stem; and the cerebellum. Simulations illustrate the model’s ability to functionally explain and quantitatively simulate anatomical, neurophysiological and behavioral data about SAC-SPEM tracking. National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
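The coordination principle in this abstract, pursuit nulling velocity error while saccades null position error, can be sketched as a toy simulation. All function names, gains, and thresholds below are illustrative assumptions, not the model's fitted parameters:

```python
import numpy as np

def track(target_pos, dt=0.001, pursuit_gain=0.9, saccade_threshold=2.0):
    """Toy gaze controller: smooth pursuit continuously reduces velocity
    error (retinal slip), while a saccade cancels position error once it
    exceeds a threshold. Units are degrees and seconds."""
    target_vel = np.gradient(target_pos, dt)
    eye_pos, eye_vel = 0.0, 0.0
    trace = []
    for tp, tv in zip(target_pos, target_vel):
        # pursuit: first-order tracking of target velocity
        eye_vel += pursuit_gain * (tv - eye_vel) * dt * 50.0
        # saccade: idealized instantaneous foveation of the target
        if abs(tp - eye_pos) > saccade_threshold:
            eye_pos = tp
        eye_pos += eye_vel * dt
        trace.append(eye_pos)
    return np.array(trace)
```

For a target ramping smoothly at 10 deg/s, the pursuit loop alone keeps position error well below the saccade threshold, whereas a sudden target step is cleared by a single catch-up saccade; the two systems jointly keep both gaze-position and gaze-velocity error small, as the abstract describes.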

    Target Selection by Frontal Cortex During Coordinated Saccadic and Smooth Pursuit Eye Movement

    Oculomotor tracking of moving objects is an important component of visually based cognition and planning. Such tracking is achieved by a combination of saccades and smooth pursuit eye movements. In particular, the saccadic and smooth pursuit systems often interact to choose the same target, and to maximize its visibility through time. How do multiple brain regions interact, including frontal cortical areas, to decide the choice of a target among several competing moving stimuli? How is target selection information that is created by a bias (e.g., electrical stimulation) transferred from one movement system to another? These saccade-pursuit interactions are clarified by a new computational neural model, which describes interactions among motion processing areas MT, MST, FPA, DLPN; saccade specification, selection, and planning areas LIP, FEF, SNr, SC; the saccadic generator in the brain stem; and the cerebellum. Model simulations explain a broad range of neuroanatomical and neurophysiological data. These results are in contrast with the simplest parallel model, with no interactions between saccades and pursuit other than common-target selection and recruitment of shared motoneurons. Actual tracking episodes in primates reveal multiple systematic deviations from predictions of the simplest parallel model, which are explained by the current model. National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
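Target choice among competing moving stimuli is commonly modeled with a shunting recurrent competitive field. The sketch below is a generic Grossberg-style on-center off-surround network, not this model's actual circuit; all parameters are illustrative assumptions:

```python
import numpy as np

def select_target(saliencies, steps=200, dt=0.05, A=1.0, B=1.0):
    """Shunting recurrent competitive field:
    dx_i/dt = -A*x_i + (B - x_i)*(I_i + f(x_i)) - x_i * sum_{j!=i} f(x_j).
    With a faster-than-linear signal f, the field contrast-enhances its
    inputs so the most salient stimulus wins the competition."""
    saliencies = np.asarray(saliencies, float)
    x = np.zeros_like(saliencies)
    f = lambda v: np.maximum(v, 0.0) ** 2   # faster-than-linear signal
    for _ in range(steps):
        fx = f(x)
        inhib = fx.sum() - fx               # off-surround: all other cells
        x = x + dt * (-A * x + (B - x) * (saliencies + fx) - x * inhib)
    return x
```

The shunting (multiplicative) terms bound each activity between 0 and B, so the field self-normalizes: a bias added to one stimulus, like the electrical stimulation mentioned above, tilts the competition toward that target.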

    Temporal Dynamics of Decision-Making during Motion Perception in the Visual Cortex

    How does the brain make decisions? Speed and accuracy of perceptual decisions covary with certainty in the input, and correlate with the rate of evidence accumulation in parietal and frontal cortical "decision neurons." A biophysically realistic model of interactions within and between Retina/LGN and cortical areas V1, MT, MST, and LIP, gated by basal ganglia, simulates dynamic properties of decision-making in response to ambiguous visual motion stimuli used by Newsome, Shadlen, and colleagues in their neurophysiological experiments. The model clarifies how brain circuits that solve the aperture problem interact with a recurrent competitive network with self-normalizing choice properties to carry out probabilistic decisions in real time. Some scientists claim that perception and decision-making can be described using Bayesian inference or related general statistical ideas that estimate the optimal interpretation of the stimulus given priors and likelihoods. However, such concepts do not propose the neocortical mechanisms that enable perception and decision-making. The present model explains behavioral and neurophysiological decision-making data without an appeal to Bayesian concepts and, unlike other existing models of these data, generates perceptual representations and choice dynamics in response to the experimental visual stimuli. Quantitative model simulations include the time course of LIP neuronal dynamics, as well as behavioral accuracy and reaction time properties, during both correct and error trials at different levels of input ambiguity in both fixed duration and reaction time tasks. Model MT/MST interactions compute the global direction of random dot motion stimuli, while model LIP computes the stochastic perceptual decision that leads to a saccadic eye movement. National Science Foundation (SBE-0354378, IIS-02-05271); Office of Naval Research (N00014-01-1-0624); National Institutes of Health (R01-DC-02852)
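The choice dynamics can be caricatured with two leaky, mutually inhibiting accumulators driven by noisy motion evidence until one crosses a decision threshold. This is a drastic simplification of the model's circuit, and every name and parameter below is an assumption, but it reproduces the qualitative signature in the data: higher motion coherence yields faster, more accurate choices.

```python
import numpy as np

def motion_decision(coherence, rng, threshold=1.0, dt=0.01,
                    noise=0.3, max_t=5.0):
    """Two competing accumulators for rightward/leftward motion evidence.
    Leak plus mutual inhibition approximate self-normalizing recurrent
    competition; threshold crossing triggers the (saccadic) choice."""
    x = np.zeros(2)                                  # [right, left] pools
    drift = np.array([1 + coherence, 1 - coherence]) * 0.5
    leak, inhibit = 0.1, 0.4
    t = 0.0
    while t < max_t:
        dx = (drift - leak * x - inhibit * x[::-1]) * dt \
             + noise * np.sqrt(dt) * rng.standard_normal(2)
        x = np.maximum(x + dx, 0.0)                  # activities stay >= 0
        t += dt
        if x.max() >= threshold:
            return ("right" if x[0] > x[1] else "left"), t
    return ("right" if x[0] > x[1] else "left"), max_t
```

At high coherence the favored accumulator races to threshold almost deterministically; at low coherence the two pools hover near balance and noise decides the winner, lengthening reaction times and producing error trials, in rough analogy to the fixed-duration and reaction-time tasks simulated by the model.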

    Displacement of a tracked object during eyeblinks: behavioural and neuromagnetic observations

    The visual world is perceived as continuous despite frequent interruptions of sensory data due to eyeblinks and rapid eye movements. To create this perception of constancy, the brain makes use of fill-in mechanisms. This study presents an experiment in which the location of an object being tracked with smooth pursuit is altered during eyeblinks, and investigates how blink suppression and fill-in mechanisms cloud the discrimination of these changes. We employed a motion-tracking task, which promotes accurate evaluation of the object’s trajectory and can thus counteract the fill-in mechanisms. Six subjects took part in the experiment, during which they were asked to report any perceived anomalies in the trajectory. Eye movements were monitored with video-based tracking, and brain responses with simultaneous MEG recordings. Discrimination success was found to depend on the direction of the displacement, and was significantly modulated by prior knowledge of the triggered effect. Eye-movement data were congruent with previous findings and revealed a smooth transition from blink recovery to object locating. MEG recordings were analysed for condition-dependent evoked and induced responses; however, intersubject variability was too large to draw clear conclusions regarding the brain basis of the fill-in mechanisms.

    Neural dynamics of invariant object recognition: relative disparity, binocular fusion, and predictive eye movements

    How does the visual cortex learn invariant object categories as an observer scans a depthful scene? Two neural processes that contribute to this ability are modeled in this thesis. The first model clarifies how an object is represented in depth. Cortical area V1 computes absolute disparity, which is the horizontal difference in retinal location of an image in the left and right foveas. Many cells in cortical area V2 compute relative disparity, which is the difference in absolute disparity of two visible features. Relative, but not absolute, disparity is unaffected by the distance of visual stimuli from an observer, and by vergence eye movements. A laminar cortical model of V2 that includes shunting lateral inhibition of disparity-sensitive layer 4 cells causes a peak shift in cell responses that transforms absolute disparity from V1 into relative disparity in V2. The second model simulates how the brain maintains stable percepts of a 3D scene during binocular movements. The visual cortex initiates the formation of a 3D boundary and surface representation by binocularly fusing corresponding features from the left and right retinotopic images. However, after each saccadic eye movement, every scenic feature projects to a different combination of retinal positions than before the saccade. Yet the 3D representation, resulting from the prior fusion, is stable through the post-saccadic re-fusion. One key to stability is predictive remapping: the system anticipates the new retinal positions of features entailed by eye movements by using gain fields that are updated by eye movement commands. 
The 3D ARTSCAN model developed here simulates how perceptual, attentional, and cognitive interactions across different brain regions within the What and Where visual processing streams coordinate predictive remapping, stable 3D boundary and surface perception, spatial attention, and the learning of object categories that are invariant to changes in an object's retinal projections. Such invariant learning helps the system avoid treating each new view of the same object as a distinct object to be learned. The thesis thereby shows how a process that enables invariant object category learning can be extended to also enable stable 3D scene perception.
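The two invariances at the heart of these models can be stated in a few lines of code: relative disparity is unaffected by a common, vergence-induced shift of both absolute disparities, and predictive remapping uses the saccade command itself to compute each feature's post-saccadic retinal position. A minimal sketch with illustrative names and toy coordinates, not the laminar circuit itself:

```python
import numpy as np

def relative_disparity(abs_d1, abs_d2):
    """Vergence adds the same offset to both absolute disparities,
    so their difference (relative disparity) is unchanged."""
    return abs_d1 - abs_d2

def remap(retinal_pos, saccade_vector):
    """Predictive remapping: a copy of the eye-movement command
    (corollary discharge) shifts a feature's retinotopic coordinate
    to the location it will occupy after the saccade, so binocular
    fusion can be re-established without a fresh search."""
    return np.asarray(retinal_pos, float) - np.asarray(saccade_vector, float)
```

For example, a feature 3 deg right of fixation remaps to 2 deg left of the new fixation after a 5 deg rightward saccade, while a uniform vergence shift leaves relative disparity untouched; in the model this subtraction is implemented by gain fields updated by eye-movement commands.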

    Impact of Extremely Low-Frequency Magnetic and Electric Stimuli on Vestibular-Driven Outcomes

    The vestibular system is extremely sensitive to electric fields (E-fields). Vestibular hair cells are graded-potential cells, a property that makes them very susceptible to small membrane-potential modulations. Studies show that E-fields induced by extremely low-frequency magnetic fields (ELF-MF) affect postural control, in which the vestibular system plays an important role. However, whether this is a vestibular-specific effect remains unknown. Considering its crucial role and the specific neurophysiological characteristics of its hair cells, the vestibular system emerges as a likely ELF-MF target. The three studies presented in this thesis aimed to further address whether ELF-MF modulate vestibular-driven outcomes. Studies 1 and 2 investigated postural responses while more specifically targeting the vestibular system; however, we did not find any modulation in either study. Building on both studies, study 3 aimed to determine whether the orientation and frequency of our stimulations were more likely to target the otoliths, and therefore examined the subjective visual vertical. Here, we found a potential ELF-MF utricular modulation. This thesis is the first stepping stone in a new field of research. Further investigations of the interaction between ELF-MF and the vestibular system will have to examine more reflexive vestibular outcomes. Nonetheless, this thesis provides valuable information that will need to be taken into consideration when writing future international guidelines and standards related to ELF-MF.

    Vision and Driving after Stroke

    Driving a car is often an essential part of maintaining mobility and quality of life, but after a stroke many people are forced to cease driving. Homonymous visual field defects (HVFDs) and unilateral spatial neglect (USN) are common sequelae of stroke. For people with HVFDs a legal threshold for the extent of field loss exists beyond which a person is not allowed to drive, and most people with clinically detectable USN are also barred from driving. However, some people with HVFDs have been deemed safe to drive, and some with USN have shown normal performance on other skilled visuo-motor tasks. It seems that there is great variation in abilities across individuals with HVFDs and USN, and driving performance cannot be predicted from simple measures such as the extent of visual field loss. Several studies have suggested that compensatory eye-movement strategies (particularly saccades into the affected visual field) may be linked with functional improvements post-stroke. This thesis investigates whether eye-movement behaviours are important for stroke patients performing skilled actions such as driving. To test this, 18 people with HVFDs and/or USN following a stroke and 18 older adult controls were recruited. A series of behavioural measures was taken using a battery of tests: cognitive and visuospatial measures from classic pen-and-paper tasks and visual field mapping, saccadic and smooth pursuit accuracy, visual search, simulated steering, and simulated hazard perception. Across these measures there was a consistent theme: impairments to perception-action functions varied considerably across participants with stroke, but some individuals were able to function remarkably well. Compensatory eye-movement patterns were observed in many, and driving performance was predicted to some extent by saccadic accuracy and visual search performance. The implications are discussed with respect to using eye movements as a potential target for rehabilitation treatment.