19 research outputs found
Does Time Really Slow Down during a Frightening Event?
Observers commonly report that time seems to have moved in slow motion during a life-threatening event. It is unknown whether this reflects increased temporal resolution during the event or an illusion produced by remembering an emotionally salient event. Using a hand-held device that measures the speed of visual perception, participants experienced 31 m of free fall before landing safely in a net. We found no evidence of increased temporal resolution, in apparent conflict with the fact that participants retrospectively estimated their own fall to last 36% longer than others' falls. The duration dilation during a frightening event, and the lack of a concomitant increase in temporal resolution, indicate that subjective time is not a single entity that speeds or slows, but is instead composed of separable subcomponents. Our findings suggest that time-slowing is a function of recollection, not perception: a richer encoding of memory may cause a salient event to appear, retrospectively, as though it lasted longer.
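The 36% figure can be made concrete with the kinematics of the fall (a sketch assuming ideal free fall from rest; the actual drop into a net involved drag and deceleration, so true durations differ):

```python
import math

def free_fall_time(height_m: float, g: float = 9.81) -> float:
    """Ideal free-fall duration from rest: t = sqrt(2h / g)."""
    return math.sqrt(2.0 * height_m / g)

actual = free_fall_time(31.0)    # ideal duration of the 31 m fall, ~2.5 s
recalled = 1.36 * actual         # participants judged their own fall ~36% longer
print(round(actual, 2), round(recalled, 2))
```

So a fall of roughly 2.5 s would be remembered as lasting around 3.4 s, even though temporal resolution during the fall did not improve.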
Computational Models of Timing Mechanisms in the Cerebellar Granular Layer
A long-standing question in neuroscience is how the brain controls movement that requires precisely timed muscle activations. Studies using Pavlovian delay eyeblink conditioning provide good insight into this question. In delay eyeblink conditioning, which is believed to involve the cerebellum, a subject learns an interstimulus interval (ISI) between the onsets of a conditioned stimulus (CS), such as a tone, and an unconditioned stimulus (US), such as an airpuff to the eye. After a conditioning phase, the subject's eyes automatically close or blink when the ISI has elapsed after CS onset. This timing information is thought to be represented in some way in the cerebellum. Several computational models of the cerebellum have been proposed to explain the mechanisms of time representation, and they commonly point to the granular layer network. This article reviews these computational models and discusses the possible computational power of the cerebellum.
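The core computational idea common to these models, a temporal basis spanning the CS-US interval plus error-driven learning at a readout, can be caricatured in a few lines (a toy delay-line abstraction, not any specific published model):

```python
import numpy as np

T, isi = 50, 30                        # time steps per trial; US arrives at t = isi
basis = np.eye(T)                      # toy 'delay line': one unit active per time step
w = np.zeros(T)                        # readout weights driving the eyelid response
lr = 0.5

for trial in range(200):               # conditioning: pair CS onset (t=0) with US at t=isi
    for t in range(T):
        cr = basis[t] @ w              # conditioned response at time t
        us = 1.0 if t == isi else 0.0  # unconditioned stimulus (airpuff) at the ISI
        w += lr * (us - cr) * basis[t] # delta rule driven by the US teaching signal

peak = int(np.argmax(basis @ w))       # the response now peaks at the learned ISI
print(peak)
```

The open question the reviewed models address is precisely what replaces the idealised delay line here: how granule and Golgi cell dynamics generate a usable temporal basis.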
Stimulus-Dependent State Transition between Synchronized Oscillation and Randomly Repetitive Burst in a Model Cerebellar Granular Layer
Information processing in the cerebellar granular layer, composed of granule and Golgi cells, is regarded as an important first step of cerebellar computation. Our previous theoretical studies have shown that granule cells can exhibit random alternation between burst and silent modes, which provides a basis for population representation of the passage of time (POT) from the onset of an external input stimulus. On the other hand, another computational study has reported that granule cells can exhibit synchronized oscillation of activity, consistent with oscillations observed in local field potentials recorded from the granular layer while animals remain still. Here we ask whether a single network model can explain both of these distinct dynamics. In the present study, we carried out computer simulations based on a spiking network model of the granular layer while varying two parameters: the strength of a current injected into granule cells and the concentration of Mg2+, which controls the conductance of NMDA channels assumed on the Golgi cell dendrites. The simulations showed that cells in the granular layer can switch between synchronized oscillation and random burst-silent alternation depending on these two parameters. For higher Mg2+ concentration and a weaker injected current, granule and Golgi cells spiked synchronously (synchronized oscillation state). In contrast, for lower Mg2+ concentration and a stronger injected current, these cells showed random burst-silent alternation (POT-representing state). This suggests that NMDA channels on the Golgi cell dendrites play an important role in determining how the granular layer responds to external input.
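The Mg2+ dependence of NMDA conductance that the model varies is commonly described by the Jahr-Stevens voltage-dependent block (a standard textbook form; the paper's exact channel parameters may differ):

```python
import math

def nmda_mg_block(v_mv: float, mg_mm: float) -> float:
    """Fraction of NMDA conductance left unblocked by Mg2+ (Jahr & Stevens 1990)."""
    return 1.0 / (1.0 + (mg_mm / 3.57) * math.exp(-0.062 * v_mv))

# Higher Mg2+ concentration -> stronger block -> weaker NMDA drive to Golgi dendrites
low = nmda_mg_block(-65.0, 0.5)    # low-Mg2+ condition
high = nmda_mg_block(-65.0, 2.0)   # high-Mg2+ condition
print(round(low, 3), round(high, 3))
```

Sweeping `mg_mm` in such an expression (together with the injected current) is what moves the simulated network between the oscillatory and POT-representing regimes.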
A Computational Mechanism for Unified Gain and Timing Control in the Cerebellum
Precise gain and timing control is the goal of cerebellar motor learning. Because the basic neural circuitry of the cerebellum is homogeneous throughout the cerebellar cortex, a single computational mechanism may be used for simultaneous gain and timing control. Although many computational models of the cerebellum have been proposed for either gain or timing control, few models have aimed to unify them. In this paper, we hypothesize that gain and timing control can be unified by learning the complete waveform of the desired movement profile, as instructed by climbing fiber signals. To test our hypothesis, we applied a large-scale spiking network model of the cerebellum, originally developed to explain experimental data on Pavlovian delay eyeblink conditioning, to gain adaptation of optokinetic response (OKR) eye movements. By conducting large-scale computer simulations, we reproduced several features of OKR adaptation, such as the learning-related change in simple spike firing of model Purkinje cells and vestibular nuclear neurons, simulated gain increase, and frequency-dependent gain increase. These results suggest that the cerebellum may use a single computational mechanism to control gain and timing simultaneously.
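The hypothesis, that weights are adjusted by a climbing-fiber error signal until the output matches a complete desired waveform carrying both a gain and a time course, can be sketched with a generic rate-based temporal-basis learner (a toy illustration, not the paper's spiking model):

```python
import numpy as np

T = 100
t = np.arange(T)
centers = np.arange(0, T, 5)
basis = np.exp(-((t[None, :] - centers[:, None]) ** 2) / 50.0)  # granule-like temporal basis
desired = 0.8 * np.sin(2 * np.pi * t / T)  # target waveform: gain 0.8 AND a time course
w = np.zeros(len(centers))
lr = 0.05

for epoch in range(500):
    out = w @ basis                        # Purkinje-like output (toy, rate-based)
    cf_error = desired - out               # climbing-fibre teaching signal
    w += lr * (basis @ cf_error) / T       # LTD/LTP at parallel-fibre weights

mse = float(np.mean((w @ basis - desired) ** 2))
print(round(mse, 4))
```

Because the learned object is the whole waveform, its amplitude (gain) and its timing are acquired by the same update rule, which is the unification the paper argues for.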
Fast convergence of learning requires plasticity between inferior olive and deep cerebellar nuclei in a manipulation task: a closed-loop robotic simulation
The cerebellum is known to play a critical role in learning relevant patterns of activity for adaptive motor control, but the underlying network mechanisms are only partly understood. The classical long-term synaptic plasticity between parallel fibers (PFs) and Purkinje cells (PCs), which is driven by the inferior olive (IO), can only account for limited aspects of learning. Recently, the role of additional forms of plasticity in the granular layer, molecular layer and deep cerebellar nuclei (DCN) has been considered. In particular, learning at DCN synapses allows for generalization, but convergence to a stable state requires hundreds of repetitions. In this paper we have explored the putative role of the IO-DCN connection by endowing it with adaptable weights and exploring its implications in a closed-loop robotic manipulation task. Our results show that IO-DCN plasticity accelerates convergence of learning by up to two orders of magnitude without conflicting with the generalization properties conferred by DCN plasticity. Thus, this model suggests that multiple distributed learning mechanisms provide a key for explaining the complex properties of procedural learning and open up new experimental questions for synaptic plasticity in the cerebellar network.

This work was supported by grants from the European Union to Egidio D'Angelo and Eduardo Ros (CEREBNET FP7-ITN238686, REALNET FP7-ICT270434), by a grant from the Italian Ministry of Health to Egidio D'Angelo (RF-2009-1475845) and by the Spanish Regional Government to Niceto R. Luque (PYR-2014-16). We thank G. Ferrari and M. Rossin for their technical support.
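The claimed speed-up can be illustrated with a deliberately minimal two-site learner (a caricature of a slow PF-PC-like site plus a fast IO-DCN-like site, not the paper's spiking robot model):

```python
def trials_to_converge(lr_slow, lr_fast=0.0, target=1.0, tol=0.05):
    """Toy error-driven learning with a slow site and an optional fast second site."""
    w_slow = w_fast = 0.0
    for trial in range(1, 100001):
        err = target - (w_slow + w_fast)   # mismatch signalled by the IO
        if abs(err) < tol:
            return trial
        w_slow += lr_slow * err            # slow plasticity (PF-PC-like)
        w_fast += lr_fast * err            # fast plasticity (IO-DCN-like)
    return None

slow_only = trials_to_converge(lr_slow=0.001)
with_fast = trials_to_converge(lr_slow=0.001, lr_fast=0.05)
print(slow_only, with_fast)
```

Even in this stripped-down setting, adding the fast second site cuts the number of trials to convergence by well over an order of magnitude while the slow site keeps learning in the background.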
Spike burst-pause dynamics of Purkinje cells regulate sensorimotor adaptation
Cerebellar Purkinje cells mediate accurate eye movement coordination. However, it remains unclear how oculomotor adaptation depends on the interplay between the characteristic Purkinje cell response patterns, namely tonic firing, bursting, and spike pauses. Here, a spiking cerebellar model assesses the role of Purkinje cell firing patterns in vestibular ocular reflex (VOR) adaptation. The model captures the cerebellar microcircuit properties and incorporates spike-based synaptic plasticity at multiple cerebellar sites. A detailed Purkinje cell model reproduces the three spike-firing patterns that are shown to regulate the cerebellar output. Our results suggest that pauses following Purkinje complex spikes (bursts) encode transient disinhibition of target medial vestibular nuclei, critically gating the vestibular signals conveyed by mossy fibres. This gating mechanism accounts for early and coarse VOR acquisition, prior to the late reflex consolidation. In addition, properly timed and sized Purkinje cell bursts allow the ratio between long-term depression and potentiation (LTD/LTP) to be finely shaped at mossy fibre-medial vestibular nuclei synapses, which optimises VOR consolidation. Tonic Purkinje cell firing maintains the consolidated VOR through time. Importantly, pauses are crucial to facilitate VOR phase-reversal learning, by reshaping previously learnt synaptic weight distributions. Altogether, these results predict that Purkinje spike burst-pause dynamics are instrumental to VOR learning and reversal adaptation.

This work was supported by the European Union (www.europa.eu), Project SpikeControl 658479 (recipient NL), the Spanish Agencia Estatal de Investigación and European Regional Development Fund (www.ciencia.gob.es/portal/site/MICINN/aei), Project CEREBROT TIN2016-81041-R (recipient ER), and the French National Research Agency (www.agence-nationale-recherche.fr) - Essilor International (www.essilor.com), Chair SilverSight ANR-14-CHIN-0001 (recipient AA).
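The proposed gating can be caricatured at the rate level (a toy sketch with made-up numbers, not the paper's spiking model): mossy-fibre spikes arriving during a post-complex-spike Purkinje pause find the vestibular nuclei disinhibited and are potentiated, while spikes arriving under tonic Purkinje inhibition are depressed.

```python
import numpy as np

T = 1000                                  # trial duration (ms)
pc_rate = np.full(T, 60.0)                # tonic Purkinje firing (Hz)
pc_rate[400:460] = 0.0                    # pause following a complex spike (burst)
mf_spikes = np.zeros(T)
mf_spikes[[100, 420, 430, 700]] = 1.0     # mossy-fibre spike times (ms)

w_mf_mvn = 0.5                            # mossy fibre -> vestibular nuclei weight
for t in range(T):
    if mf_spikes[t]:
        if pc_rate[t] == 0.0:             # pause -> target nuclei disinhibited -> LTP
            w_mf_mvn += 0.05
        else:                             # tonic inhibition present -> LTD dominates
            w_mf_mvn -= 0.02
print(round(w_mf_mvn, 2))
```

Shifting when the pause occurs relative to the mossy-fibre input shifts the LTD/LTP balance, which is the sense in which properly timed bursts and pauses shape consolidation in the model.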
Towards a Bio-Inspired Real-Time Neuromorphic Cerebellum
From Frontiers via Jisc Publications Router. History: received 2020-10-29, collection 2021, accepted 2021-03-24, epub 2021-05-31. Publication status: Published.

This work presents the first simulation of a large-scale, bio-physically constrained cerebellum model performed on neuromorphic hardware. A model containing 97,000 neurons and 4.2 million synapses is simulated on the SpiNNaker neuromorphic system. Results are validated against a baseline simulation of the same model executed with NEST, a popular spiking neural network simulator using generic computational resources and double precision floating point arithmetic. Individual cell and network-level spiking activity is validated in terms of average spike rates, relative lead or lag of spike times, and membrane potential dynamics of individual neurons, and SpiNNaker is shown to produce results in agreement with NEST. Once validated, the model is used to investigate how to accelerate the simulation speed of the network on the SpiNNaker system, with the future goal of creating a real-time neuromorphic cerebellum. Through detailed communication profiling, peak network activity is identified as one of the main challenges for simulation speed-up. Propagation of spiking activity through the network is measured, and will inform the future development of accelerated execution strategies for cerebellum models on neuromorphic hardware. The large ratio of granule cells to other cell types in the model results in high levels of activity converging onto few cells, with those cells having relatively larger time costs associated with the processing of communication. Organizing cells on SpiNNaker in accordance with their spatial position is shown to reduce the peak communication load by 41%. It is hoped that these insights, together with alternative parallelization strategies, will pave the way for real-time execution of large-scale, bio-physically constrained cerebellum models on SpiNNaker.
This in turn will enable exploration of cerebellum-inspired controllers for neurorobotic applications, and execution of extended duration simulations over timescales that would currently be prohibitive using conventional computational platforms
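Why spatial placement reduces cross-core traffic can be shown with a toy 1-D partitioning experiment (the model's 3-D geometry and SpiNNaker's actual mapping machinery are far more involved; connectivity parameters here are invented):

```python
import random

random.seed(1)
n_neurons, n_cores = 1024, 16
per_core = n_neurons // n_cores

# Mostly local connectivity, using each neuron's index as its spatial position
edges = []
for i in range(n_neurons):
    for _ in range(8):                    # 8 outgoing synapses per neuron
        j = min(n_neurons - 1, max(0, i + random.randint(-40, 40)))
        edges.append((i, j))

def cross_core(assign):
    """Count synapses whose pre- and post-neuron sit on different cores."""
    return sum(assign[i] != assign[j] for i, j in edges)

random_assign = [random.randrange(n_cores) for _ in range(n_neurons)]
spatial_assign = [i // per_core for i in range(n_neurons)]  # contiguous spatial blocks
print(cross_core(random_assign), cross_core(spatial_assign))
```

With locally biased connectivity, contiguous spatial blocks keep most synapses on-core, so the packets that must traverse the interconnect (the peak communication load) drop sharply, which is the effect the 41% figure quantifies for the real model.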
At the Edge of Chaos: How Cerebellar Granular Layer Network Dynamics Can Provide the Basis for Temporal Filters
Models of the cerebellar microcircuit often assume that input signals from the mossy fibers are expanded and recoded to provide a foundation from which the Purkinje cells can synthesize output filters that implement specific input-signal transformations. Details of this process are, however, unclear. While previous work has shown that recurrent granule cell inhibition could in principle generate a wide variety of random outputs suitable for coding signal onsets, the more general application to temporally varying signals has yet to be demonstrated. Here we show for the first time that a mechanism very similar to reservoir computing enables random neuronal networks in the granule cell layer to provide the necessary signal separation and extension from which Purkinje cells could construct basis filters of various time constants. The main requirement for this is that the network operates in a state of criticality close to the edge of random chaotic behavior. We further show that the lack of recurrent excitation in the granular layer, as commonly required in traditional reservoir networks, can be circumvented by considering other inherent granular layer features such as inverted input signals or mGluR2 inhibition of Golgi cells. Other properties that facilitate filter construction are direct mossy fiber excitation of Golgi cells, variability of synaptic weights or input signals, and output feedback via the nucleocortical pathway. Our findings are well supported by previous experimental and theoretical work and will help to bridge the gap between system-level models and detailed models of the granular layer network.
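In standard reservoir-computing terms, the "edge of chaos" requirement corresponds to keeping the spectral radius of the recurrent weight matrix near 1 (a generic echo-state-style sketch, not the granular-layer model itself):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
w = rng.standard_normal((n, n))
w /= np.max(np.abs(np.linalg.eigvals(w)))   # normalise the spectral radius to 1
pulse = rng.standard_normal(n)

def trace(rho: float, steps: int = 30) -> float:
    """Norm of the autonomous reservoir state `steps` after a single input pulse."""
    x = np.tanh(pulse)
    for _ in range(steps):
        x = np.tanh(rho * w @ x)            # recurrent dynamics at spectral radius rho
    return float(np.linalg.norm(x))

# Near the edge (rho close to 1) the pulse persists long enough for a readout to
# build filters with long time constants; deep in the stable regime it fades fast.
print(trace(0.95), trace(0.5))
```

The long-but-fading memory near criticality is exactly what lets downstream units (Purkinje cells in the paper's framing) tap the reservoir state to construct basis filters over a range of time constants.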
Digital neural circuits: from ions to networks
PhD Thesis. Biological neural computation has long fascinated human beings, since it exhibits several state-of-the-art characteristics: strong fault tolerance, high power efficiency and self-learning capability. These properties set the direction for the next-generation digital computation platform, so investigating and understanding how neurons talk to each other is the key to replicating these computational features. In this work I emphasize using tailor-designed digital circuits to exactly implement bio-realistic neural network behaviours, which can be considered a novel approach to cognitive neural computation. The first advance is that biologically real-time computing performance allows the presented circuits to be readily adapted for real-time closed-loop in vitro or in vivo experiments; the second is a transistor-based circuit that can be directly translated into an implantable chip for the rehabilitation of high-level neurological disorders. In terms of methodology, I first focus on designing a heterogeneous, multiple-layer architecture for reproducing the finest neuron activities in both voltage- and calcium-dependent ion channels. In particular, a digital optoelectronic neuron is developed as a case study. Second, I focus on designing a network-on-chip architecture for implementing a very large-scale neural network (e.g. more than 100,000 neurons) with human cognitive functions (e.g. a timing control mechanism). Finally, I present a reliable hybrid bio-silicon closed-loop system for central pattern generator prosthetics, which can be considered a framework for digital neural circuit-based neuro-prosthesis applications. At the end, I present general digital neural circuit design principles and discuss the long-term social impacts of the presented work.