998 research outputs found

    Neural Dynamics of Saccadic and Smooth Pursuit Eye Movement Coordination during Visual Tracking of Unpredictably Moving Targets

    Full text link
    How does the brain use eye movements to track objects that move in unpredictable directions and at unpredictable speeds? Saccadic eye movements rapidly foveate peripheral visual or auditory targets, and smooth pursuit eye movements keep the fovea pointed toward an attended moving target. Analyses of tracking data in monkeys and humans reveal systematic deviations from predictions of the simplest model of saccade-pursuit interactions, which would use no interactions other than common target selection and recruitment of shared motoneurons. Instead, saccadic and smooth pursuit movements cooperate to cancel errors of gaze position and velocity, and thus to maximize target visibility through time. How are these two systems coordinated to promote visual localization and identification of moving targets? How are saccades calibrated to correctly foveate a target despite its continued motion during the saccade? A neural model proposes answers to such questions. The modeled interactions encompass motion processing areas MT, MST, FPA, DLPN and NRTP; saccade planning and execution areas FEF and SC; the saccadic generator in the brain stem; and the cerebellum. Simulations illustrate the model’s ability to functionally explain and quantitatively simulate anatomical, neurophysiological and behavioral data about SAC-SPEM tracking. National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
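    The cooperative error-cancellation scheme the abstract describes can be made concrete with a toy simulation. The sketch below is not the published model (which spans MT/MST, FEF, SC, the brain stem, and the cerebellum); it only illustrates the division of labour: pursuit continuously cancels gaze velocity error, a saccade is triggered when position error accumulates, and the saccade endpoint is extrapolated for the target's motion during the saccade. All gains, thresholds, and durations are illustrative assumptions.

```python
# Minimal 1-D sketch of cooperative saccade-pursuit tracking (illustrative
# parameters only, not fitted values from the model).
DT = 0.001                # simulation step, seconds
PURSUIT_GAIN = 0.9        # fraction of retinal slip cancelled per step (assumed)
SACCADE_THRESHOLD = 2.0   # deg of position error that triggers a saccade (assumed)
SACCADE_DURATION = 0.05   # s, typical duration of a small saccade

def track(target_pos, target_vel, t_end=2.0):
    """Simulate gaze tracking of a moving target; returns final gaze position."""
    eye_pos, eye_vel, t = 0.0, 0.0, 0.0
    while t < t_end:
        pos_error = target_pos(t) - eye_pos          # gaze position error
        if abs(pos_error) > SACCADE_THRESHOLD:
            # Saccade branch: jump to where the target WILL be at saccade end,
            # compensating for its continued motion during the movement.
            eye_pos = target_pos(t) + target_vel(t) * SACCADE_DURATION
            t += SACCADE_DURATION
        else:
            # Pursuit branch: reduce retinal slip (gaze velocity error).
            eye_vel += PURSUIT_GAIN * (target_vel(t) - eye_vel)
            eye_pos += eye_vel * DT
            t += DT
    return eye_pos

if __name__ == "__main__":
    # Ramp target: starts 5 deg off-fovea and moves at 10 deg/s.
    final = track(lambda t: 5.0 + 10.0 * t, lambda t: 10.0)
    print(f"final gaze: {final:.2f} deg (target: {5.0 + 10.0 * 2.0:.2f} deg)")
```

    With a constant-velocity ramp target this produces the classic pattern the abstract alludes to: an initial catch-up saccade that lands on the moving target, followed by steady pursuit with small residual lag.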

    Cognitive Mechanisms of Transsaccadic Perception

    Get PDF
    Transsaccadic perception is characterised as the ability to perceive our visual world as stable and unmoving, despite the retinal image changing each time we make rapid eye movements (called saccades). Currently the underlying mechanisms of transsaccadic perception, specifically the mechanisms that maintain an updated internal spatial map of objects in our environment during saccades, remain unclear. Although considerable progress has been made toward a better understanding of the basic mechanisms of transsaccadic perception with stationary objects, little is known about how our brain keeps track of moving objects during a saccade, a real-world task we perform every day (e.g. when driving or playing sports). In this thesis I describe two studies in which I investigated transsaccadic perception of moving objects. The first examines how well we can track moving objects across saccades when the saccade amplitude and the eccentricity of the target vary in a purely egocentric task. The second assesses the extent to which we rely on visual cues in our environment (i.e. allocentric information) during transsaccadic motion tracking. My research is among the first to explore how our brain processes and integrates moving stimuli during saccades. Additionally, it sheds further light on the cognitive mechanisms of transsaccadic perception and offers insights into our everyday visual conscious experience.
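    The egocentric and allocentric components the two studies contrast can be illustrated with a toy update rule. The sketch below is an assumption-laden schematic, not the thesis's model: it remaps a remembered moving target across a saccade by subtracting the saccade vector and extrapolating target motion, then blends that egocentric estimate with a landmark-based (allocentric) one. The weighting and all numbers are hypothetical.

```python
# Toy transsaccadic update of a moving target (all values illustrative).
def egocentric_update(retinal_pos, retinal_vel, saccade_vec, saccade_dur):
    """Remap a moving target across a saccade in retinal coordinates:
    extrapolate its motion over the saccade, then subtract the eye's jump."""
    return (retinal_pos[0] + retinal_vel[0] * saccade_dur - saccade_vec[0],
            retinal_pos[1] + retinal_vel[1] * saccade_dur - saccade_vec[1])

def combine(ego, allo, w_allo=0.3):
    """Weighted cue combination; w_allo is an assumed allocentric weight."""
    return tuple((1 - w_allo) * e + w_allo * a for e, a in zip(ego, allo))

if __name__ == "__main__":
    ego = egocentric_update(retinal_pos=(10.0, 0.0),   # deg, pre-saccadic
                            retinal_vel=(5.0, 0.0),    # deg/s, target motion
                            saccade_vec=(8.0, 0.0),    # deg, eye displacement
                            saccade_dur=0.04)          # s
    allo = (2.5, 0.0)  # landmark-relative estimate (hypothetical)
    print("egocentric:", ego, "combined:", combine(ego, allo))
```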

    Displacement of a tracked object during eyeblinks: behavioural and neuromagnetic observations

    Get PDF
    The visual world is perceived as continuous despite frequent interruptions of sensory data due to eyeblinks and rapid eye movements. To create the perception of constancy, the brain makes use of fill-in mechanisms. This study presents an experiment in which the location of an object being tracked with smooth pursuit is altered during eyeblinks. The experiment investigates how blink suppression and fill-in mechanisms cloud the discrimination of these changes. We employed a motion-tracking task, which promotes accurate evaluation of the object’s trajectory and thus can counteract the fill-in mechanisms. Six subjects took part in the experiment, during which they were asked to report any perceived anomalies in the trajectory. Eye movements were monitored with video-based tracking and brain responses with simultaneous MEG recordings. Discrimination success was found to depend on the direction of the displacement and was significantly modulated by prior knowledge of how the displacements were applied. Eye-movement data were congruent with previous findings and revealed a smooth transition from blink recovery to locating the object. MEG recordings were analysed for condition-dependent evoked and induced responses; however, intersubject variability was too large to draw clear conclusions regarding the brain basis of the fill-in mechanisms.

    Direction of Apparent Motion During Smooth Pursuit Is Determined Using a Mixture of Retinal and Objective Proximities

    Get PDF
    Many studies have investigated various effects of smooth pursuit on visual motion processing, especially the effects related to the additional retinal shifts produced by eye movement. In this article, we show that the perception of apparent motion during smooth pursuit is determined by the interelement proximity in retinal coordinates and also by the proximity in objective world coordinates. In Experiment 1, we investigated the perceived direction of the two-frame apparent motion of a square-wave grating with various displacement sizes under fixation and pursuit viewing conditions. The retinal and objective displacements between the two frames agreed with each other under the fixation condition, but differed by 180 degrees of phase shift under the pursuit condition. The proportions of the reported motion direction between the two viewing conditions did not coincide when plotted as a function of either the retinal displacement or the objective displacement alone; however, they did coincide when plotted as a function of a mixture of the two. The result from Experiment 2 showed that the perceived jump size of the apparent motion was also dependent on both retinal and objective displacements. Our findings suggest that the detection of apparent motion during smooth pursuit takes into account both the retinal and the objective proximity. This mechanism may assist with the selection of the motion path that is more likely to occur in the real world and may therefore help ensure perceptual stability during smooth pursuit.
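    The mixture rule the abstract proposes can be written down compactly. The sketch below assumes a simple weighted combination of the retinal and objective phase shifts, with the sign of the shorter (wrapped) path determining perceived direction. The weight w_retinal is a free parameter chosen for illustration, not a value fitted to the experiments, and the deterministic output stands in for the reported response proportions.

```python
# Toy proximity-mixture rule for two-frame apparent motion of a periodic
# grating (illustrative weight, not the fitted value).
def wrap_phase(deg):
    """Wrap a phase shift into (-180, 180] so the shorter path is signed."""
    return (deg + 180.0) % 360.0 - 180.0

def perceived_direction(retinal_shift_deg, objective_shift_deg, w_retinal=0.6):
    """Return +1 (e.g. rightward) or -1 (leftward) from the mixed shift."""
    mixed = (w_retinal * wrap_phase(retinal_shift_deg)
             + (1 - w_retinal) * wrap_phase(objective_shift_deg))
    return 1 if mixed > 0 else -1

if __name__ == "__main__":
    # Under pursuit the retinal and objective shifts differ by 180 deg of
    # phase, as in Experiment 1.
    for obj in (30, 90, 150):
        ret = wrap_phase(obj - 180)
        print(f"objective {obj:+4d}, retinal {ret:+4.0f} ->",
              perceived_direction(ret, obj))
```

    As the objective shift grows, the mixed displacement changes sign, so the reported direction flips at a point determined by the weight, which is how a mixture model distinguishes itself from either pure coordinate frame.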

    A neural network-based exploratory learning and motor planning system for co-robots

    Get PDF
    Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or "learning by doing," an unsupervised method in which co-robots build an internal model for motor planning and coordination from real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system we used the 11-degree-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object.
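    The "learning by doing" loop described here can be sketched with a toy plant. The code below is only a schematic of motor babbling followed by nearest-neighbour inversion, not the paper's neural network, and the two-wheel displacement model is a stand-in, not the Calliope's actual kinematics.

```python
# Schematic exploratory learning: babble random commands, record outcomes,
# then invert the stored experience to serve a goal (all values illustrative).
import random

def plant(left_vel, right_vel, dt=0.5, wheel_base=0.3):
    """Toy differential-drive outcome: (forward distance, heading change)."""
    forward = 0.5 * (left_vel + right_vel) * dt
    turn = (right_vel - left_vel) / wheel_base * dt
    return forward, turn

def babble(n=2000):
    """Unsupervised phase: random commands paired with observed outcomes."""
    memory = []
    for _ in range(n):
        cmd = (random.uniform(-1, 1), random.uniform(-1, 1))
        memory.append((plant(*cmd), cmd))
    return memory

def inverse(memory, goal):
    """Pick the babbled command whose outcome lies nearest the goal."""
    return min(memory,
               key=lambda m: (m[0][0] - goal[0]) ** 2
                             + (m[0][1] - goal[1]) ** 2)[1]

if __name__ == "__main__":
    random.seed(0)
    memory = babble()
    goal = (0.3, 0.5)  # move 0.3 m forward while turning 0.5 rad
    cmd = inverse(memory, goal)
    print("command:", cmd, "predicted outcome:", plant(*cmd))
```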

    A Real-Time Unsupervised Neural Network for the Low-Level Control of a Mobile Robot in a Nonstationary Environment

    Full text link
    This article introduces a real-time, unsupervised neural network that learns to control a two-degree-of-freedom mobile robot in a nonstationary environment. The neural controller, termed the neural NETwork MObile Robot Controller (NETMORC), combines associative learning and Vector Associative Map (VAM) learning to generate transformations between spatial and velocity coordinates. As a result, the controller learns the wheel velocities required to reach a target at an arbitrary distance and angle. The transformations are learned during an unsupervised training phase, during which the robot moves as a result of randomly selected wheel velocities. The robot learns the relationship between these velocities and the resulting incremental movements. Aside from being able to reach stationary or moving targets, the NETMORC structure also enables the robot to perform successfully in spite of disturbances in the environment, such as wheel slippage, or changes in the robot's plant, including changes in wheel radius, changes in inter-wheel distance, or changes in the internal time step of the system. Finally, the controller is extended to include a module that learns an internal odometric transformation, allowing the robot to reach targets when visual input is sporadic or unreliable. Sloan Fellowship (BR-3122), Air Force Office of Scientific Research (F49620-92-J-0499)
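    The closing sentence about an internal odometric transformation can be illustrated with standard differential-drive dead reckoning. The sketch below is not NETMORC's learned mapping; it only shows what such an odometric update has to compute: when vision drops out, wheel velocities are integrated to update the remembered egocentric position of the target. The wheel radius and base are assumed constants.

```python
# Dead-reckoning update of a remembered target (standard differential-drive
# kinematics; constants are assumptions, not NETMORC parameters).
import math

WHEEL_RADIUS = 0.05   # m (assumed)
WHEEL_BASE = 0.30     # m (assumed)

def dead_reckon(target_dist, target_angle, w_left, w_right, dt):
    """Update the target's egocentric (distance, angle) from wheel speeds."""
    v = WHEEL_RADIUS * (w_left + w_right) / 2.0             # forward speed
    omega = WHEEL_RADIUS * (w_right - w_left) / WHEEL_BASE  # turn rate
    # Target position in the robot frame before the move:
    x = target_dist * math.cos(target_angle)
    y = target_dist * math.sin(target_angle)
    # Move the frame: translate by v*dt along x, then rotate by -omega*dt
    # (first-order approximation over one small time step).
    x -= v * dt
    c, s = math.cos(-omega * dt), math.sin(-omega * dt)
    x, y = c * x - s * y, s * x + c * y
    return math.hypot(x, y), math.atan2(y, x)

if __name__ == "__main__":
    # Target initially 2 m ahead, 30 deg to the left; drive blind one step.
    d, a = dead_reckon(2.0, math.radians(30), w_left=4.0, w_right=6.0, dt=0.1)
    print(f"distance: {d:.3f} m, angle: {math.degrees(a):.1f} deg")
```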

    Chasing control in male blowflies: behavioural performance and neuronal responses

    Get PDF
    Trischler C. Chasing control in male blowflies: behavioural performance and neuronal responses. Bielefeld (Germany): Bielefeld University; 2008.

    Researcher's guide to the NASA Ames Flight Simulator for Advanced Aircraft (FSAA)

    Get PDF
    Performance, limitations, supporting software, and current checkout and operating procedures are presented for the flight simulator, in terms useful to the researcher who intends to use it. Suggestions to help the researcher prepare the experimental plan are also given. The FSAA's central computer, cockpit, and visual and motion systems are addressed individually, but their interaction is considered as well. Data required, available options, user responsibilities, and occupancy procedures are given in a form that facilitates the initial communication required with the NASA operations group.

    Gaze control modelling and robotic implementation

    Get PDF
    Although we have the impression that we can process the entire visual field in a single fixation, in reality we would be unable to fully process the information outside of foveal vision if we were unable to move our eyes. Because of acuity limitations in the retina, eye movements are necessary for processing the details of the array. Our ability to discriminate fine detail drops off markedly outside of the fovea, in the parafovea (extending out to about 5 degrees on either side of fixation) and in the periphery (everything beyond the parafovea). While we are reading, searching a visual array for a target, or simply looking at a new scene, our eyes move every 200-350 ms. These eye movements serve to move the fovea (the high-resolution part of the retina encompassing 2 degrees at the centre of the visual field) to an area of interest in order to process it in greater detail. During the actual eye movement (or saccade), vision is suppressed and new information is acquired only during the fixation (the period of time when the eyes remain relatively still). While it is true that we can move our attention independently of where the eyes are fixated, this does not seem to be the case in everyday viewing. The separation between attention and fixation is often attained in very simple tasks; however, in tasks like reading, visual search, and scene perception, covert attention and overt attention (the exact eye location) are tightly linked. Because eye movements are essentially motor movements, it takes time to plan and execute a saccade, and the end-point is pre-selected before the beginning of the movement. There is considerable evidence that the nature of the task influences eye movements: depending on the task, there is considerable variability both in fixation durations and in saccade lengths.
It is possible to outline five separate movement systems that put the fovea on a target and keep it there. Each of these movement systems shares the same effector pathway: the three bilateral groups of oculomotor neurons in the brain stem. These five systems include three that keep the fovea on a visual target in the environment and two that stabilize the eye during head movement. Saccadic eye movements shift the fovea rapidly to a visual target in the periphery. Smooth pursuit movements keep the image of a moving target on the fovea. Vergence movements move the eyes in opposite directions so that the image is positioned on both foveae. Vestibulo-ocular movements hold images still on the retina during brief head movements and are driven by signals from the vestibular system. Optokinetic movements hold images still during sustained head rotation and are driven by visual stimuli. All eye movements but vergence movements are conjugate: each eye moves the same amount in the same direction. Vergence movements are disconjugate: the eyes move in different directions and sometimes by different amounts. Finally, there are times when the eye must stay still in the orbit so that it can examine a stationary object. Thus, a sixth system, the fixation system, holds the eye still during intent gaze. This requires active suppression of eye movement. Vision is most accurate when the eyes are still: when we look at an object of interest, a neural system of fixation actively prevents the eyes from moving. The fixation system is not as active when we are doing something that does not require vision, for example, mental arithmetic. Our eyes explore the world in a series of active fixations connected by saccades.
The purpose of the saccade is to move the eyes as quickly as possible. Saccades are highly stereotyped; they have a standard waveform with a single smooth increase and decrease of eye velocity. Saccades are extremely fast, occurring within a fraction of a second, at speeds up to 900°/s. Only the distance of the target from the fovea determines the velocity of a saccadic eye movement. We can change the amplitude and direction of our saccades voluntarily, but we cannot change their velocities. Ordinarily there is no time for visual feedback to modify the course of the saccade; corrections to the direction of movement are made in successive saccades. Only fatigue, drugs, or pathological states can slow saccades. Accurate saccades can be made not only to visual targets but also to sounds, tactile stimuli, memories of locations in space, and even verbal commands (“look left”). The smooth pursuit system keeps the image of a moving target on the fovea by calculating how fast the target is moving and moving the eyes accordingly. The system requires a moving stimulus in order to calculate the proper eye velocity; thus, a verbal command or an imagined stimulus cannot produce smooth pursuit. Smooth pursuit movements have a maximum velocity of about 100°/s, much slower than saccades. The saccadic and smooth pursuit systems have very different central control systems.
A coherent integration of these different eye movements essentially corresponds to a gating-like effect on the brain areas involved: gaze control can be seen as one system that decides which action should be enabled and which inhibited, and another that improves the performance of the selected action while it is executed. The guiding principle of gaze control is therefore the kind of stimuli presented to the system, which links gaze to the task being executed. This thesis aims at validating the strong relation between actions and gaze. In the first part, a gaze controller is studied and implemented on a robotic platform in order to understand the specific features of prediction and learning shown by the biological system. Integrating the different eye movements raises the problem of which action should be selected when a new stimulus is presented; this action selection problem is solved by the basal ganglia, brain structures that react to the different salience values in the environment. In the second part of this work, gaze behaviour is studied during a locomotion task. The final objective is to show how different tasks, such as locomotion, determine the salience values that drive gaze.
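    The gating idea in the final paragraph (one system enabled, the others inhibited, depending on the stimulus) can be sketched as a simple selector over the systems listed above, using the speed figures quoted in the text (up to ~900°/s for saccades, ~100°/s for pursuit). The thresholds below are illustrative assumptions, not values from the thesis.

```python
# Toy gating of oculomotor systems for a 1-D stimulus (thresholds assumed).
def select_system(position_error, retinal_slip):
    """Choose which eye-movement system to enable.

    position_error: target eccentricity from the fovea, deg
    retinal_slip:   target velocity relative to the eye, deg/s
    """
    if abs(position_error) > 1.0:
        return "saccade"          # rapid jump to the target, up to ~900 deg/s
    if abs(retinal_slip) > 100.0:
        return "saccade"          # beyond pursuit's ~100 deg/s limit: catch-up saccades
    if abs(retinal_slip) > 0.0:
        return "smooth pursuit"   # match eye velocity to target velocity
    return "fixation"             # target foveated and still: suppress movement

if __name__ == "__main__":
    print(select_system(5.0, 0.0))     # eccentric static target  -> saccade
    print(select_system(0.3, 20.0))    # foveated moving target   -> smooth pursuit
    print(select_system(0.5, 250.0))   # too fast to pursue       -> saccade
    print(select_system(0.2, 0.0))     # intent gaze              -> fixation
```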