Multisensory information facilitates reaction speed by enlarging activity difference between superior colliculus hemispheres in rats
Animals can make faster behavioral responses to multisensory stimuli than to unisensory stimuli. The superior colliculus (SC), which receives multiple inputs from different sensory modalities, is considered to be involved in the initiation of motor responses. However, the mechanism by which multisensory information facilitates motor responses is not yet understood. Here, we demonstrate that multisensory information modulates competition among SC neurons to elicit faster responses. We conducted multiunit recordings from the SC of rats performing a two-alternative spatial discrimination task using auditory and/or visual stimuli. We found that a large population of SC neurons showed direction-selective activity before the onset of movement in response to the stimuli irrespective of stimulation modality. Trial-by-trial correlation analysis showed that the premovement activity of many SC neurons increased with faster reaction speed for the contraversive movement, whereas the premovement activity of another population of neurons decreased with faster reaction speed for the ipsiversive movement. When visual and auditory stimuli were presented simultaneously, the premovement activity of a population of neurons for the contraversive movement was enhanced, whereas the premovement activity of another population of neurons for the ipsiversive movement was depressed. Unilateral inactivation of SC using muscimol prolonged reaction times of contraversive movements, but it shortened those of ipsiversive movements. These findings suggest that the difference in activity between the SC hemispheres regulates the reaction speed of motor responses, and that multisensory information enlarges this activity difference, resulting in faster responses.
Dopaminergic and Non-Dopaminergic Value Systems in Conditioning and Outcome-Specific Revaluation
Animals are motivated to choose environmental options that can best satisfy current needs. To explain such choices, this paper introduces the MOTIVATOR (Matching Objects To Internal Values Triggers Option Revaluations) neural model. MOTIVATOR describes cognitive-emotional interactions between higher-order sensory cortices and an evaluative neuraxis composed of the hypothalamus, amygdala, and orbitofrontal cortex. Given a conditioned stimulus (CS), the model amygdala and lateral hypothalamus interact to calculate the expected current value of the subjective outcome that the CS predicts, constrained by the current state of deprivation or satiation. The amygdala relays the expected value information to orbitofrontal cells that receive inputs from anterior inferotemporal cells, and to medial orbitofrontal cells that receive inputs from rhinal cortex. The activations of these orbitofrontal cells code the subjective values of objects. These values guide behavioral choices. The model basal ganglia detect errors in CS-specific predictions of the value and timing of rewards. Excitatory inputs from the pedunculopontine nucleus interact with timed inhibitory inputs from model striosomes in the ventral striatum to regulate dopamine burst and dip responses from cells in the substantia nigra pars compacta and ventral tegmental area. Learning in cortical and striatal regions is strongly modulated by dopamine. The model is used to address tasks that examine food-specific satiety, Pavlovian conditioning, reinforcer devaluation, and simultaneous visual discrimination.
Model simulations successfully reproduce discharge dynamics of known cell types, including signals that predict saccadic reaction times and CS-dependent changes in systolic blood pressure.
Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); National Institutes of Health (R29-DC02952, R01-DC007683); National Science Foundation (IIS-97-20333, SBE-0354378); Office of Naval Research (N00014-01-1-0624)
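The dopamine burst/dip mechanism described above can be illustrated with a minimal reward-prediction-error sketch. This is a generic illustration, not the MOTIVATOR equations: a single learned value `v` stands in for the timed striosomal inhibition, and the dopamine signal is the excitatory reward input minus that prediction.

```python
# Minimal reward-prediction-error sketch (an illustrative stand-in for
# the model's striosomal-inhibition vs. pedunculopontine-excitation
# balance, not its actual equations).

def dopamine_response(reward, predicted):
    """Positive -> burst, negative -> dip, ~0 -> no phasic change."""
    return reward - predicted

def train(trials, lr=0.1):
    v = 0.0  # predicted value of the CS, initially naive
    history = []
    for reward in trials:
        da = dopamine_response(reward, v)
        v += lr * da  # dopamine-modulated learning of the prediction
        history.append(da)
    return v, history

# Early trials: reward is unexpected, so dopamine bursts (da > 0).
# After learning: a fully predicted reward evokes little phasic response,
# and omitting the reward produces a dip (da < 0).
v, hist = train([1.0] * 50)
omission_dip = dopamine_response(0.0, v)
```

In this toy version the burst on early trials drives learning, the response to a predicted reward fades, and reward omission after training yields a dip, qualitatively matching the burst/dip pattern the abstract describes.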
Algorithm and Hardware Design of Discrete-Time Spiking Neural Networks Based on Back Propagation with Binary Activations
We present a new backpropagation-based training algorithm for discrete-time
spiking neural networks (SNNs). Inspired by recent deep learning algorithms on
binarized neural networks, binary activation with a straight-through gradient
estimator is used to model the leaky integrate-and-fire spiking neuron,
overcoming the difficulty of training SNNs with backpropagation. Two SNN
training algorithms are proposed: (1) SNN with discontinuous integration, which
is suitable for rate-coded input spikes, and (2) SNN with continuous
integration, which is more general and can handle input spikes carrying
temporal information. Neuromorphic hardware designed in 40 nm CMOS exploits
spike sparsity and demonstrates high classification accuracy (>98% on MNIST)
and low energy consumption (48.4-773 nJ/image).
Comment: 2017 IEEE Biomedical Circuits and Systems Conference (BioCAS)
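The core trick here, a hard 0/1 spike in the forward pass paired with a surrogate gradient in the backward pass, can be sketched as follows. The leak factor, threshold, and gradient window below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Sketch of the binary-activation idea: a discrete-time leaky
# integrate-and-fire neuron with a hard threshold forward and a
# straight-through gradient backward. Parameters are assumptions.

def lif_step(v, x, leak=0.9, v_th=1.0):
    """One discrete-time LIF step.
    Returns the new membrane potential and a binary spike output."""
    v = leak * v + x                        # leaky integration of input
    spike = (v >= v_th).astype(np.float32)  # hard 0/1 activation
    v = v - spike * v_th                    # soft reset after a spike
    return v, spike

def ste_grad(v, v_th=1.0, window=0.5):
    """Straight-through estimator: the non-differentiable threshold is
    given gradient 1 inside a window around v_th, and 0 elsewhere."""
    return (np.abs(v - v_th) < window).astype(np.float32)

# Three neurons driven by constant input currents for four time steps.
v = np.zeros(3)
spikes = []
for _ in range(4):
    v, s = lif_step(v, np.array([0.4, 0.6, 1.2]))
    spikes.append(s)
```

During training, `ste_grad` replaces the zero-almost-everywhere derivative of the spike function, letting error gradients flow through the binary activation exactly as in binarized neural networks.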
High frequency oscillations as a correlate of visual perception
NOTICE: this is the author's version of a work that was accepted for publication in International Journal of Psychophysiology. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in International Journal of Psychophysiology, 79(1), 2011, DOI 10.1016/j.ijpsycho.2010.07.004. Peer-reviewed postprint.
Neural Dynamics Underlying Impaired Autonomic and Conditioned Responses Following Amygdala and Orbitofrontal Lesions
A neural model is presented that explains how outcome-specific learning modulates affect, decision-making and Pavlovian conditioned approach responses. The model addresses how brain regions responsible for affective learning and habit learning interact, and answers a central question: What are the relative contributions of the amygdala and orbitofrontal cortex to emotion and behavior? In the model, the amygdala calculates outcome value while the orbitofrontal cortex influences attention and conditioned responding by assigning value information to stimuli. Model simulations replicate autonomic, electrophysiological, and behavioral data associated with three tasks commonly used to assay these phenomena: food consumption, Pavlovian conditioning, and visual discrimination. Interactions of the basal ganglia and amygdala with sensory and orbitofrontal cortices enable the model to replicate the complex pattern of spared and impaired behavioral and emotional capacities seen following lesions of the amygdala and orbitofrontal cortex.
National Science Foundation (SBE-0354378; IIS-97-20333); Office of Naval Research (N00014-01-1-0624); Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); National Institutes of Health (R29-DC02952)
Active Perception with Dynamic Vision Sensors: Minimum Saccades with Optimum Recognition
Vision processing with Dynamic Vision Sensors
(DVS) is becoming increasingly popular. This type of bio-inspired
vision sensor does not record static scenes. DVS pixel activity
relies on changes in light intensity. In this paper, we introduce
a platform for object recognition with a DVS in which the
sensor is installed on a moving pan-tilt unit in closed-loop with
a recognition neural network. This neural network is trained
to recognize objects observed by a DVS while the pan-tilt unit
is moved to emulate micro-saccades. We show that performing
more saccades in different directions yields more information
about the object and therefore enables more accurate object
recognition. However, in high-performance and low-latency
platforms, performing additional saccades adds additional
latency and power consumption. Here we show that the number
of saccades can be reduced while keeping the same recognition
accuracy by performing intelligent saccadic movements, in a
closed action-perception smart loop. We propose an algorithm
for smart saccadic movement decisions that can reduce the
number of necessary saccades to half, on average, for a predefined
accuracy on the N-MNIST dataset. Additionally, we show that
by replacing this control algorithm with an Artificial Neural
Network that learns to control the saccades, we can also reduce
to half the average number of saccades needed for N-MNIST
recognition.
EU H2020 grant 644096 (ECOMODE); EU H2020 grant 687299 (NEURAM3); Ministry of Economy and Competitivity (Spain) / European Regional Development Fund TEC2015-63884-C2-1-P (COGNET)
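The closed action-perception loop above can be sketched as an early-stopping policy: keep saccading only until the recognition network is confident enough. The interfaces below (`observe`, `classify`) are hypothetical stand-ins for the DVS pan-tilt unit and the recognition network; the confidence threshold and saccade budget are illustrative assumptions.

```python
# Illustrative closed-loop saccade policy, not the paper's algorithm:
# saccade, accumulate event evidence, re-classify, and stop early once
# classifier confidence clears a threshold.

def recognize(observe, classify, directions, conf_threshold=0.9, max_saccades=8):
    """Perform saccades until the classifier is confident enough,
    instead of always spending the full saccade budget."""
    evidence = []
    for n, direction in enumerate(directions[:max_saccades], start=1):
        evidence.append(observe(direction))      # events from this saccade
        label, confidence = classify(evidence)   # re-classify on all evidence
        if confidence >= conf_threshold:         # early stop: enough evidence
            return label, n
    return label, n

# Toy stand-ins: each saccade adds one unit of evidence, and classifier
# confidence grows with the amount of accumulated evidence.
def observe(direction):
    return direction

def classify(evidence):
    return "digit-3", min(1.0, 0.4 * len(evidence))

label, n_saccades = recognize(observe, classify,
                              ["up", "down", "left", "right", "up-left"])
```

With these toy stubs the loop halts after three of the five available saccades, which is the kind of saccade-count reduction at fixed accuracy that the abstract reports.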
The power of the feed-forward sweep
Vision is fast and efficient. A novel natural scene can be categorized (e.g. does
it contain an animal, a vehicle?) by human observers in less than 150 ms, and
with minimal attentional resources. This ability still holds under strong
backward masking conditions. In fact, with a stimulus onset asynchrony of about
30 ms (the time between the scene and mask onset), the first 30 ms of selective
behavioral responses are essentially unaffected by the presence of the mask,
suggesting that this type of “ultra-rapid” processing can rely on a sequence of
swift feed-forward stages, in which the mask information never “catches up” with
the scene information. Simulations show that the feed-forward propagation of the
first wave of spikes generated at stimulus onset may indeed suffice for crude
recognition or categorization. Scene awareness, however, may take significantly
more time to develop, and probably requires feed-back processes. The main
implication of these results for theories of masking is that pattern or
metacontrast (backward) masking do not appear to bar the progression of visual
information at a low level. These ideas bear interesting similarities to
existing conceptualizations of priming and masking, such as Direct Parameter
Specification or the Rapid Chase theory.
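The "first wave of spikes" idea can be made concrete with a toy latency code: stronger inputs fire earlier, and a downstream unit reads out only the earliest spikes, ignoring everything that arrives later. This is an assumption-laden illustration, not the simulations described above; the intensity-to-latency mapping and the size of the first wave are arbitrary choices.

```python
import numpy as np

# Toy first-spike latency code: input strength determines spike timing,
# and a one-pass feed-forward readout uses only the earliest spikes.

def first_spike_latencies(intensities, t_max=50.0):
    """Intensity-to-latency code: latency falls as drive grows."""
    intensities = np.asarray(intensities, dtype=float)
    return t_max / (1.0 + intensities)

def first_wave_decision(latencies, weights, n_first=3):
    """Read out only the earliest n_first spikes -- a crude stand-in
    for a purely feed-forward decision that later (mask) spikes
    never catch up with."""
    order = np.argsort(latencies)[:n_first]
    return float(np.sum(weights[order]))

# Four input units with different drive strengths.
lat = first_spike_latencies([5.0, 1.0, 0.1, 4.0])
decision = first_wave_decision(lat, np.array([1.0, -1.0, 0.5, 1.0]))
```

Because the readout is fixed after the first wave, any spikes injected afterwards (e.g. by a backward mask) cannot alter the decision, mirroring the abstract's account of why early selective responses are unaffected by the mask.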