    Gaze control modelling and robotic implementation

    Although we have the impression that we can process the entire visual field in a single fixation, in reality we would be unable to fully process the information outside of foveal vision if we were unable to move our eyes. Because of acuity limitations in the retina, eye movements are necessary for processing the details of the visual array. Our ability to discriminate fine detail drops off markedly outside of the fovea, in the parafovea (extending out to about 5 degrees on either side of fixation) and in the periphery (everything beyond the parafovea). While we are reading, searching a visual array for a target, or simply looking at a new scene, our eyes move every 200-350 ms. These eye movements serve to move the fovea (the high-resolution part of the retina, encompassing 2 degrees at the centre of the visual field) to an area of interest in order to process it in greater detail. During the actual eye movement (or saccade), vision is suppressed, and new information is acquired only during the fixation (the period of time when the eyes remain relatively still). While it is true that we can move our attention independently of where the eyes are fixated, this does not seem to happen in everyday viewing. The separation between attention and fixation is attained only in very simple tasks; in tasks like reading, visual search, and scene perception, covert attention and overt attention (the exact eye location) are tightly linked. Because eye movements are essentially motor movements, it takes time to plan and execute a saccade, and the end-point is pre-selected before the beginning of the movement. There is considerable evidence that the nature of the task influences eye movements: depending on the task, both fixation durations and saccade lengths vary considerably.

    It is possible to outline five separate movement systems that put the fovea on a target and keep it there. Each of these movement systems shares the same effector pathway: the three bilateral groups of oculomotor neurons in the brain stem. The five systems include three that keep the fovea on a visual target in the environment and two that stabilize the eye during head movement. Saccadic eye movements shift the fovea rapidly to a visual target in the periphery. Smooth pursuit movements keep the image of a moving target on the fovea. Vergence movements move the eyes in opposite directions so that the image is positioned on both foveae. Vestibulo-ocular movements hold images still on the retina during brief head movements and are driven by signals from the vestibular system. Optokinetic movements hold images still during sustained head rotation and are driven by visual stimuli. All eye movements but vergence movements are conjugate: each eye moves the same amount in the same direction. Vergence movements are disconjugate: the eyes move in different directions and sometimes by different amounts.

    Finally, there are times when the eye must stay still in the orbit so that it can examine a stationary object. A sixth system, the fixation system, holds the eye still during intent gaze; this requires active suppression of eye movement. Vision is most accurate when the eyes are still: when we look at an object of interest, a neural system of fixation actively prevents the eyes from moving. The fixation system is less active when we are doing something that does not require vision, for example, mental arithmetic. Our eyes explore the world in a series of active fixations connected by saccades.
    The purpose of the saccade is to move the eyes to a new target as quickly as possible. Saccades are highly stereotyped: they have a standard waveform with a single smooth increase and decrease of eye velocity. Saccades are extremely fast, occurring within a fraction of a second at speeds of up to 900°/s. Only the distance of the target from the fovea determines the velocity of a saccadic eye movement: we can change the amplitude and direction of our saccades voluntarily, but we cannot change their velocities. Ordinarily there is no time for visual feedback to modify the course of the saccade; corrections to the direction of movement are made in successive saccades. Only fatigue, drugs, or pathological states can slow saccades. Accurate saccades can be made not only to visual targets but also to sounds, tactile stimuli, memories of locations in space, and even verbal commands (“look left”). The smooth pursuit system keeps the image of a moving target on the fovea by calculating how fast the target is moving and moving the eyes accordingly. The system requires a moving stimulus in order to calculate the proper eye velocity; thus, a verbal command or an imagined stimulus cannot produce smooth pursuit. Smooth pursuit movements have a maximum velocity of about 100°/s, much slower than saccades. The saccadic and smooth pursuit systems have very different central control systems.

    A coherent integration of these different eye movements, together with the body's other movements, essentially corresponds to a gating-like effect on the brain areas involved. Gaze control can be seen as comprising one system that decides which action should be enabled and which inhibited, and another that improves the performance of the action while it is executed. It follows that the underlying guiding principle of gaze control is the kind of stimulus presented to the system, which in turn is linked to the task to be executed.

    This thesis aims to validate the strong relation between actions and gaze. In the first part, a gaze controller is studied and implemented on a robotic platform in order to understand the specific features of prediction and learning shown by the biological system. Integrating the different eye movements raises the problem of which action should be selected when a new stimulus is presented. This action selection problem is solved by the basal ganglia, brain structures that react to the different salience values in the environment. In the second part of this work, gaze behaviour is studied during a locomotion task. The final objective is to show how different tasks, such as locomotion, determine the salience values that drive the gaze.
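
    As a toy illustration of the two mechanisms just described, the Python sketch below combines a winner-take-all choice over salience values (standing in for the gating role attributed to the basal ganglia) with a common exponential main-sequence fit relating saccade amplitude to peak velocity. Everything in it (the constants V_MAX and A_63, the salience and inhibition vectors, and the function names) is an illustrative assumption, not the controller developed in the thesis.

        import numpy as np

        # Illustrative main-sequence constants (assumed values, not from the thesis):
        # peak velocity saturates near V_MAX; A_63 sets how quickly it saturates.
        V_MAX = 900.0   # deg/s, upper bound on peak saccadic velocity
        A_63 = 12.0     # deg, amplitude at which velocity reaches ~63% of V_MAX

        def peak_saccade_velocity(amplitude_deg):
            """Exponential main-sequence fit: velocity depends only on amplitude."""
            return V_MAX * (1.0 - np.exp(-amplitude_deg / A_63))

        def select_gaze_target(salience, locations, inhibition):
            """Winner-take-all over gated salience values; inhibition stands in
            for the selection/suppression role attributed to the basal ganglia."""
            gated = salience - inhibition      # suppressed actions lose the competition
            winner = int(np.argmax(gated))
            return locations[winner]

        # Toy example: three candidate targets at given eccentricities (deg from fovea).
        locations = np.array([2.0, 8.0, 15.0])
        salience = np.array([0.4, 0.9, 0.6])   # task-dependent salience values
        inhibition = np.array([0.0, 0.0, 0.5]) # e.g. a recently fixated location

        target = select_gaze_target(salience, locations, inhibition)
        print(f"saccade to {target:.1f} deg at ~{peak_saccade_velocity(target):.0f} deg/s peak")

    Changing the salience vector to reflect a different task changes which target wins the competition, which is the sense in which the task determines the salience values that drive the gaze.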

    27th Annual Computational Neuroscience Meeting (CNS*2018): Part One


    Quiet Eye research – Joan Vickers on target

    In this target article (TA; CISS2016_100), Joan Vickers gives an overview of 20 years of research on her discovery that a relatively long-lasting fixation before movement initiation enhances complex motor performance, the so-called Quiet Eye (QE) phenomenon. Vickers’ main article (CISS2016_101) is the focus of sixteen peer commentaries (CISS2016_102 – CISS2016_117), authored by sport scientists with a special focus on the QE (Causer; Farrow & Panchuk; Klostermann, Vater & Kredel; Mann, Wright & Janelle; Schorer, Tirp & Rienhoff; Williams; Wilson, Wood & Vine), by sport scientists with different research foci (Baker & Wattie; Davids & Araujo; Frank & Schack; Helsen, Levin, Ziv & Davare; Rodrigues & Navarro), and by experts in human perception from disciplines beyond sport science (Foulsham; Gegenfurtner & Szulewski; Spering & Schütz; Watson & Enns). Finally, critiques, suggestions, and extensions brought forward by the commentators are acknowledged by Vickers in her closing response (CISS2016_118).

    Dynamics of spatial attention during motion tracking: Characterization and modeling as a function of motion predictability

    Efficient information processing in ecological environments relies on spatial attention to selectively process relevant areas in the visual field. Attention has been shown to be biased ahead of simple, uniform target motion during smooth pursuit. However, real-world motion varies in predictability, and as such this study aimed to: a) determine how motion predictability affects attentional bias, b) characterize how visual attention adapts to changes in motion predictability, and c) implement a computational model of visual attention during motion tracking. Ten high-performance team-sport athletes (5 male, 5 female) and ten healthy young adults (5 male, 5 female) visually tracked a target moving at varying predictability levels. A probe was flashed ahead of or behind target motion (2° or 6°), and manual response times (MRT) to probes were collected to index attention level. To investigate the temporal dynamics of attentional bias, a second tracking task was performed in which the target changed predictability levels mid-trial. The effects of group, motion predictability, and probe distance, time, and location on MRT bias were examined. Finally, a state-space model (input: target motion; output: attentional bias) was trained and tested on the motion tracking and MRT data using 5-fold cross-validation. MRT were significantly biased in athletes (distance = 2°) and adults (distance = 2°, 6°) during predictable motion (p<0.01). There was no MRT bias for semi- and un-predictable motion. Furthermore, MRT bias took longer to accumulate than to de-accumulate (p<0.01). Eye movements showed that catch-up saccades were larger (p<0.01) and more frequent (p<0.01) during unpredictable motion phases, and gradually reduced in size and frequency during sustained predictable motion. Cross-validation results demonstrated that the state-space model predicted attentional bias with a mean absolute error of 18.6% (SD=0.04%). In conclusion, the distribution of spatial attention during motion tracking is dependent on motion predictability, and the accumulation of bias ahead of target motion takes longer than its de-accumulation. These results indicate a conservative attentional allocation scheme that introduces bias based on predicted future errors in motion extrapolation. The state-space model developed from these experimental results may extend existing dynamic saliency frameworks to factor in the effects of motion tracking on spatial attention.
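
    The abstract specifies only the model's input (target motion), its output (attentional bias), and the use of 5-fold cross-validation, so the sketch below is a guess at a minimal workflow rather than the authors' implementation: it fits a lagged-input linear regression (an ARX stand-in for a state-space model) on synthetic tracking data and reports the cross-validated mean absolute error. The lag order, the synthetic signals, and make_lagged_features are assumptions.

        import numpy as np
        from sklearn.model_selection import KFold

        def make_lagged_features(u, n_lags):
            """Stack current and past target-motion samples as regressors
            (a simple ARX stand-in for a state-space model's internal state)."""
            X = np.column_stack([np.roll(u, k) for k in range(n_lags)])
            X[:n_lags] = 0.0  # zero out rows with incomplete history
            return X

        rng = np.random.default_rng(0)
        T = 1000
        u = np.sin(np.linspace(0, 20 * np.pi, T))  # toy target motion (position)
        # Toy "attentional bias" that leads the target by 5 samples, plus noise.
        bias = 0.3 * np.roll(u, -5) + 0.05 * rng.standard_normal(T)

        X = make_lagged_features(u, n_lags=10)
        errors = []
        for train, test in KFold(n_splits=5, shuffle=False).split(X):
            w, *_ = np.linalg.lstsq(X[train], bias[train], rcond=None)
            pred = X[test] @ w
            errors.append(np.mean(np.abs(pred - bias[test])))
        print(f"mean absolute error across folds: {np.mean(errors):.3f}")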

    Scalable Machine Learning Methods for Massive Biomedical Data Analysis.

    Full text link
    Modern data acquisition techniques have enabled biomedical researchers to collect and analyze datasets of substantial size and complexity. The massive size of these datasets allows us to comprehensively study the biological system of interest at an unprecedented level of detail, which may lead to the discovery of clinically relevant biomarkers. Nonetheless, the dimensionality of these datasets presents critical computational and statistical challenges, as traditional statistical methods break down when the number of predictors dominates the number of observations, a setting frequently encountered in biomedical data analysis. This difficulty is compounded by the fact that biological data tend to be noisy and often possess complex correlation patterns among the predictors. The central goal of this dissertation is to develop a computationally tractable machine learning framework that allows us to extract scientifically meaningful information from these massive and highly complex biomedical datasets. We motivate the scope of our study by considering two important problems with clinical relevance: (1) uncertainty analysis for biomedical image registration, and (2) psychiatric disease prediction based on functional connectomes, which are high-dimensional correlation maps generated from resting-state functional MRI.

    Ph.D. Electrical Engineering: Systems. University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/111354/1/takanori_1.pd
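
    The breakdown of traditional estimators when predictors outnumber observations, noted above, is commonly handled with sparsity-inducing regularization. The generic sketch below applies L1-regularized logistic regression to simulated connectome-like features (many more edges than subjects); it illustrates the p >> n setting only, and the dimensions, scikit-learn pipeline, and simulated signal are assumptions, not the dissertation's methods.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        # Simulated p >> n setting: 100 subjects, 4950 connectome edges
        # (upper triangle of a 100-region correlation matrix).
        rng = np.random.default_rng(0)
        n_subjects, n_regions = 100, 100
        n_edges = n_regions * (n_regions - 1) // 2  # 4950 features
        X = rng.standard_normal((n_subjects, n_edges))
        true_w = np.zeros(n_edges)
        true_w[:20] = 1.0  # only 20 edges carry signal in this toy setup
        y = (X @ true_w + 0.5 * rng.standard_normal(n_subjects) > 0).astype(int)

        # The L1 penalty drives most edge weights to exactly zero, keeping the
        # estimator well-posed despite far more predictors than subjects.
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"5-fold accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")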

    A Survey of Robotics Control Based on Learning-Inspired Spiking Neural Networks

    Biological intelligence processes information using impulses or spikes, which enables living creatures to perceive and act in the real world exceptionally well and to outperform state-of-the-art robots in almost every aspect of life. To make up this deficit, emerging hardware technologies and software knowledge in the fields of neuroscience, electronics, and computer science have made it possible to design biologically realistic robots controlled by spiking neural networks (SNNs), inspired by the mechanisms of the brain. However, a comprehensive review of robot control based on SNNs has been missing. In this paper, we survey the developments of the past decade in the field of spiking neural networks for control tasks, with a particular focus on fast-emerging robotics-related applications. We first highlight the primary impetuses of SNN-based robotics tasks in terms of speed, energy efficiency, and computational capabilities. We then classify SNN-based robotic applications according to different learning rules and explicate those learning rules with their corresponding robotic applications. We also briefly present some existing platforms that offer an interaction between SNNs and robotics simulations for exploration and exploitation. Finally, we conclude our survey with a forecast of future challenges and associated potential research topics in controlling robots based on SNNs.
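
    As a concrete reference point for the spiking dynamics that the surveyed controllers build on, here is a minimal leaky integrate-and-fire (LIF) neuron in plain Python; the membrane parameters are generic textbook values, and nothing below is specific to any surveyed system or platform.

        # Generic leaky integrate-and-fire neuron (textbook parameter values).
        TAU_M = 20e-3      # membrane time constant (s)
        V_REST = -70e-3    # resting potential (V)
        V_THRESH = -54e-3  # spike threshold (V)
        V_RESET = -80e-3   # post-spike reset potential (V)
        R_M = 10e6         # membrane resistance (ohm)
        DT = 1e-4          # integration step (s)

        def simulate_lif(i_input, duration=0.5):
            """Euler integration of dV/dt = (V_rest - V + R_m * I) / tau_m."""
            steps = int(duration / DT)
            v = V_REST
            spike_times = []
            for step in range(steps):
                v += DT * (V_REST - v + R_M * i_input) / TAU_M
                if v >= V_THRESH:            # threshold crossing emits a spike
                    spike_times.append(step * DT)
                    v = V_RESET              # hard reset after the spike
            return spike_times

        # A constant 2 nA input drives regular spiking; the rate grows with current.
        spikes = simulate_lif(i_input=2e-9)
        print(f"{len(spikes)} spikes in 0.5 s -> ~{len(spikes) / 0.5:.0f} Hz")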

    Hierarchical neural control of human postural balance and bipedal walking in sagittal plane

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 177-192).

    The cerebrocerebellar system has been known to be a central part of human motion control and execution. However, engineering descriptions of the system, especially in relation to lower-body motion, have been very limited. This thesis proposes an integrated hierarchical neural model of sagittal-plane human postural balance and biped walking to 1) investigate an explicit mechanism of the cerebrocerebellar and other related neural systems, 2) explain the principles of human postural balancing and biped walking control in terms of the central nervous system, and 3) provide a biologically inspired framework for the design of humanoid or other biomorphic robot locomotion. The modeling was designed to confirm neurophysiological plausibility and to achieve practical simplicity as well. The combination of scheduled long-loop proprioceptive and force feedback represents the cerebrocerebellar system, implementing postural balance strategies despite the presence of signal transmission delays and phase lags. The model demonstrates that postural control can be substantially linear within regions of the kinematic state-space, with switching driven by sensed variables. An improved and simplified version of the cerebrocerebellar system is combined with spinal pattern generation to account for human nominal walking and various robustness tasks. The synergy organization of the spinal pattern generation simplifies control of joint actuation, and the substantial decoupling of the various neural circuits facilitates generation of modulated behaviors. This thesis suggests that kinematic control with no explicit internal model of body dynamics may be sufficient for these lower-body motion tasks and may play a common role in postural balance and walking. All simulated performances are evaluated with respect to actual observations of kinematics, electromyograms, etc.

    by Sungho Jo, Ph.D.
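
    The thesis's suggestion that postural control can be substantially linear within state-space regions, with switching driven by sensed variables, can be illustrated by a toy gain-scheduled controller stabilizing a linearized inverted-pendulum model of quiet standing under a long-loop feedback delay. All parameters, gains, and the switching threshold below are assumed for illustration and are not taken from the thesis.

        # Toy inverted-pendulum model of quiet standing (illustrative parameters).
        G_OVER_L = 9.81 / 1.0   # gravity / pendulum length (1/s^2)
        DT = 0.001              # integration step (s)
        DELAY = 0.1             # long-loop feedback delay (s)
        DELAY_STEPS = int(DELAY / DT)

        def gains(theta):
            """Gain scheduling: switch to stiffer feedback when the sensed
            lean angle leaves a small region around upright."""
            if abs(theta) < 0.02:        # ~1 deg: near-upright region
                return 15.0, 4.0         # (kp, kd), assumed values
            return 40.0, 10.0            # stiffer gains farther from upright

        theta, omega = 0.05, 0.0         # initial lean (rad) and angular velocity
        history = [theta] * (DELAY_STEPS + 1)  # buffer of past sensed angles

        for step in range(5000):         # simulate 5 s
            sensed = history[0]          # controller sees a delayed lean angle
            kp, kd = gains(sensed)
            # Only the angle is delayed here, for simplicity of the sketch.
            torque = -kp * sensed - kd * omega
            # Linearized pendulum dynamics: theta'' = (g/l) * theta + torque
            omega += DT * (G_OVER_L * theta + torque)
            theta += DT * omega
            history = history[1:] + [theta]

        print(f"final lean angle after 5 s: {theta:.4f} rad")

    Despite the 100 ms delay, each of the two linear gain sets keeps the pendulum stable within its own region, so the piecewise-linear scheme converges to upright, which is the flavor of the switching behavior the thesis describes.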