A Neural Model of Visually Guided Steering, Obstacle Avoidance, and Route Selection
A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3D virtual reality environment to determine the position of objects based on motion discontinuities, and computes heading direction, or the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles such that the goal acts as an attractor of heading, while obstacles act as repellers. In addition, the model maintains fixation on the goal object by generating smooth pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion processing mechanisms in cortical areas MT, MST, and posterior parietal cortex can be used to guide steering. The model quantitatively simulates human psychophysical data about visually guided steering, obstacle avoidance, and route selection.

Air Force Office of Scientific Research (F4960-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
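The attractor/repeller interaction described above can be sketched as a one-dimensional differential equation for heading. This is a minimal illustration, not the paper's neural implementation: the gains and the exponential falloff of obstacle influence are assumed values.

```python
import numpy as np

def heading_rate(phi, goal_dir, obstacle_dirs, k_goal=2.0, k_obs=1.5, decay=4.0):
    """Rate of change of heading phi (radians): the goal attracts heading,
    while each obstacle repels it with an influence that decays with angular
    distance. Gains are illustrative, not the paper's fitted values."""
    d_goal = np.angle(np.exp(1j * (goal_dir - phi)))   # wrapped angular error
    dphi = k_goal * d_goal                             # goal as attractor
    for od in obstacle_dirs:
        d_obs = np.angle(np.exp(1j * (od - phi)))
        dphi -= k_obs * d_obs * np.exp(-decay * abs(d_obs))  # obstacle as repeller
    return dphi

def steer(phi0, goal_dir, obstacle_dirs, dt=0.01, steps=2000):
    """Integrate the heading dynamics with forward Euler."""
    phi = phi0
    for _ in range(steps):
        phi += dt * heading_rate(phi, goal_dir, obstacle_dirs)
    return phi
```

With no obstacles, heading converges to the goal direction; an obstacle near the goal direction pushes the settled heading to its far side, producing a detour.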
Learning to Personalize in Appearance-Based Gaze Tracking
Personal variations severely limit the performance of appearance-based gaze
tracking. Adapting to these variations using standard neural network model
adaptation methods is difficult. The problems range from overfitting, due to
small amounts of training data, to underfitting, due to restrictive model
architectures. We tackle these problems by introducing the SPatial Adaptive
GaZe Estimator (SPAZE). By modeling personal variations as a low-dimensional
latent parameter space, SPAZE provides just enough adaptability to capture the
range of personal variations without being prone to overfitting. Calibrating
SPAZE for a new person reduces to solving a small optimization problem. SPAZE
achieves an error of 2.70 degrees with 9 calibration samples on MPIIGaze,
improving on the state of the art by 14%. We contribute to gaze tracking
research by empirically showing that personal variations are well-modeled as a
3-dimensional latent parameter space for each eye. We show that this
low-dimensionality is expected by examining model-based approaches to gaze
tracking. We also show that accurate head pose-free gaze tracking is possible.
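The personalization step can be sketched as follows, under a toy linear setup: the person-independent estimator and the per-sample basis through which the 3-dimensional latent acts are hypothetical stand-ins for SPAZE's learned networks, but calibrating a new person really does reduce to a small least-squares problem.

```python
import numpy as np

def base_gaze(x):
    """Stand-in for the person-independent part of the gaze estimator: maps an
    eye-feature vector to a (yaw, pitch) estimate. Hypothetical linear map."""
    A = np.array([[1.0, 0.2, 0.0],
                  [0.0, 0.9, 0.3]])
    return A @ x

def personal_basis(x):
    """Hypothetical per-sample basis for the 3-dimensional personal latent z:
    two dimensions act as a constant angular offset, the third scales with the
    feature vector (e.g. a head-pose-dependent term)."""
    return np.array([[1.0, 0.0, x[0]],
                     [0.0, 1.0, x[1]]])

def calibrate(features, targets):
    """Personalization as a small optimization: least-squares fit of z to a
    handful of calibration samples (features x_i, ground-truth gaze t_i)."""
    residuals = np.array([t - base_gaze(x) for x, t in zip(features, targets)])
    design = np.vstack([personal_basis(x) for x in features])
    z, *_ = np.linalg.lstsq(design, residuals.ravel(), rcond=None)
    return z
```

With 9 calibration samples and 3 unknowns per eye, the system is heavily overdetermined, which is why so few samples suffice and overfitting is avoided.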
Quantum Brain: A Recurrent Quantum Neural Network Model to Describe Eye Tracking of Moving Targets
A theoretical quantum brain model is proposed using a nonlinear Schroedinger
wave equation. The model proposes that there exists a quantum process that
mediates the collective response of a neural lattice (classical brain). The
model is used to explain eye movements when tracking moving targets. Using a
Recurrent Quantum Neural Network (RQNN) while simulating the quantum brain
model, two very interesting phenomena are observed. First, as eye sensor data
is processed in a classical brain, a wave packet is triggered in the quantum
brain. This wave packet moves like a particle. Second, when the eye tracks a
fixed target, this wave packet moves not in a continuous but rather in a
discrete mode. This result reminds one of the saccadic movements of the eye
consisting of 'jumps' and 'rests'. However, such a saccadic movement is
intertwined with smooth pursuit movements when the eye has to track a dynamic
trajectory. In a sense, this is the first theoretical model to explain the
experimental observations reported concerning eye movements in a static scene.
The resulting prediction is found to be very precise and efficient
in comparison to classical objective modeling schemes such as the Kalman
filter.

Comment: 7 pages, 7 figures; submitted to Physical Review Letters
Visual attention deficits in schizophrenia can arise from inhibitory dysfunction in thalamus or cortex
Schizophrenia is associated with diverse cognitive deficits, including disorders of attention-related oculomotor behavior. At the structural level, schizophrenia is associated with abnormal inhibitory control in the circuit linking cortex and thalamus. We developed a spiking neural network model that demonstrates how dysfunctional inhibition can degrade attentive gaze control. Our model revealed that perturbations of two functionally distinct classes of cortical inhibitory neurons, or of the inhibitory thalamic reticular nucleus, disrupted processing vital for sustained attention to a stimulus, leading to distractibility. Because perturbation at each circuit node led to comparable but qualitatively distinct disruptions in attentive tracking or fixation, our findings support the search for new eye movement metrics that may index distinct underlying neural defects. Moreover, because the cortico-thalamic circuit is a common motif across sensory, association, and motor systems, the model and extensions can be broadly applied to study normal function and the neural bases of other cognitive deficits in schizophrenia.

R01 MH057414 - NIMH NIH HHS; R01 MH101209 - NIMH NIH HHS; R01 NS024760 - NINDS NIH HHS

Published version
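The effect of weakened inhibition can be illustrated with a toy firing-rate model, far simpler than the paper's spiking network: target- and distractor-driven populations share an inhibitory pool whose gain stands in for cortical interneuron or TRN strength. All parameters here are assumed for illustration.

```python
import numpy as np

def selectivity(g_inh, steps=3000, dt=0.02, seed=0):
    """Toy rate model: two populations, one driven by an attended target and
    one by a distractor, compete through shared feedback inhibition of
    strength g_inh. Returns the fraction of total activity devoted to the
    target, averaged over the second half of the simulation."""
    rng = np.random.default_rng(seed)
    r = np.zeros(2)                       # [target pop, distractor pop]
    drive = np.array([1.0, 0.8])          # target slightly stronger
    frac = []
    for t in range(steps):
        inh = g_inh * r.sum()             # shared inhibitory feedback
        noise = 0.2 * rng.normal(size=2)
        r = np.maximum(r + dt * (-r + np.maximum(drive + 0.5 * r - inh + noise, 0.0)), 0.0)
        if t >= steps // 2:
            frac.append(r[0] / max(r.sum(), 1e-9))
    return float(np.mean(frac))
```

Lowering the inhibitory gain flattens the competition, so the distractor-driven population retains more activity: a caricature of the distractibility the spiking model exhibits under inhibitory dysfunction.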
Temporal Dynamics of Decision-Making during Motion Perception in the Visual Cortex
How does the brain make decisions? Speed and accuracy of perceptual decisions covary with certainty in the input, and correlate with the rate of evidence accumulation in parietal and frontal cortical "decision neurons." A biophysically realistic model of interactions within and between Retina/LGN and cortical areas V1, MT, MST, and LIP, gated by basal ganglia, simulates dynamic properties of decision-making in response to ambiguous visual motion stimuli used by Newsome, Shadlen, and colleagues in their neurophysiological experiments. The model clarifies how brain circuits that solve the aperture problem interact with a recurrent competitive network with self-normalizing choice properties to carry out probabilistic decisions in real time. Some scientists claim that perception and decision-making can be described using Bayesian inference or related general statistical ideas, which estimate the optimal interpretation of the stimulus given priors and likelihoods. However, such concepts do not propose the neocortical mechanisms that enable perception and make decisions. The present model explains behavioral and neurophysiological decision-making data without an appeal to Bayesian concepts and, unlike other existing models of these data, generates perceptual representations and choice dynamics in response to the experimental visual stimuli. Quantitative model simulations include the time course of LIP neuronal dynamics, as well as behavioral accuracy and reaction time properties, during both correct and error trials at different levels of input ambiguity in both fixed duration and reaction time tasks. Model MT/MST interactions compute the global direction of random dot motion stimuli, while model LIP computes the stochastic perceptual decision that leads to a saccadic eye movement.

National Science Foundation (SBE-0354378, IIS-02-05271); Office of Naval Research (N00014-01-1-0624); National Institutes of Health (R01-DC-02852)
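The decision stage can be caricatured as a leaky competing-accumulator network, a drastic simplification of the paper's Retina/LGN-V1-MT-MST-LIP circuit that nonetheless shows the key behavioral signature: higher-coherence motion yields faster, more accurate choices. Parameters are assumed, not fitted.

```python
import numpy as np

def decide(coherence, seed=0, dt=0.01, thresh=2.0, max_t=200.0):
    """Two units accumulate noisy evidence for opposite motion directions and
    inhibit each other until one crosses threshold. Returns (choice, reaction
    time). Leak, inhibition, and noise levels are illustrative."""
    rng = np.random.default_rng(seed)
    x = np.zeros(2)
    drive = np.array([0.5 * (1 + coherence), 0.5 * (1 - coherence)])
    t = 0.0
    while t < max_t:
        noise = 0.5 * rng.normal(size=2)
        x += dt * (drive - 0.2 * x - 0.4 * x[::-1] + noise)  # leak + competition
        x = np.maximum(x, 0.0)           # firing rates cannot go negative
        t += dt
        if x.max() >= thresh:
            break
    return int(np.argmax(x)), t
```

Because mutual inhibition exceeds the leak, the difference between the accumulators is amplified over time, a simple analogue of the self-normalizing winner-take-all choice dynamics in the model's LIP stage.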
State Dependence of Stimulus-Induced Variability Tuning in Macaque MT
Behavioral states marked by varying levels of arousal and attention modulate
some properties of cortical responses (e.g. average firing rates or pairwise
correlations), yet it is not fully understood what drives these response
changes and how they might affect downstream stimulus decoding. Here we show
that changes in state modulate the tuning of response variance-to-mean ratios
(Fano factors) in a fashion that is neither predicted by a Poisson spiking
model nor changes in the mean firing rate, with a substantial effect on
stimulus discriminability. We recorded motion-sensitive neurons in middle
temporal cortex (MT) in two states: alert fixation and light, opioid
anesthesia. Anesthesia tended to lower average spike counts, without decreasing
trial-to-trial variability compared to the alert state. Under anesthesia,
within-trial fluctuations in excitability were correlated over longer time
scales compared to the alert state, creating supra-Poisson Fano factors. In
contrast, alert-state MT neurons have higher mean firing rates and largely
sub-Poisson variability that is stimulus-dependent and cannot be explained by
firing rate differences alone. The absence of such stimulus-induced variability
tuning in the anesthetized state suggests different sources of variability
between states. A simple model explains state-dependent shifts in the
distribution of observed Fano factors via a suppression in the variance of gain
fluctuations in the alert state. A population model with stimulus-induced
variability tuning and behaviorally constrained information-limiting
correlations explores the potential enhancement in stimulus discriminability by
the cortical population in the alert state.

Comment: 36 pages, 18 figures
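The gain-fluctuation account can be illustrated with a doubly stochastic toy model: Poisson spike counts modulated by a per-trial multiplicative gain have variance mu + mu^2 * sigma_g^2, so suppressing gain variance (as proposed for the alert state) pulls Fano factors back toward 1. This sketch does not reproduce the sub-Poisson, stimulus-dependent variability reported in the alert state, which requires additional mechanisms.

```python
import numpy as np

def fano(mean_count, gain_std, trials=20000, seed=0):
    """Fano factor of spike counts that are Poisson conditional on a per-trial
    gain g ~ max(1 + gain_std * N(0,1), 0). With gain fluctuations the law of
    total variance gives Var ~= mu + mu^2 * gain_std^2, i.e. supra-Poisson."""
    rng = np.random.default_rng(seed)
    gain = np.maximum(1.0 + gain_std * rng.normal(size=trials), 0.0)
    counts = rng.poisson(mean_count * gain)
    return counts.var() / counts.mean()
```

For a mean count of 10, a 30% gain fluctuation nearly doubles the Fano factor, while quenching the gain (gain_std = 0) recovers Poisson variability, mirroring the state-dependent shift in the observed Fano-factor distribution.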