Simulation of networks of spiking neurons: A review of tools and strategies
We review different aspects of the simulation of spiking neural networks. We
start by reviewing the different types of simulation strategies and algorithms
that are currently implemented. We next review the precision of those
simulation strategies, in particular in cases where plasticity depends on the
exact timing of the spikes. We overview different simulators and simulation
environments presently available (restricted to those freely available, open
source and documented). For each simulation tool, its advantages and pitfalls
are reviewed, with an aim to allow the reader to identify which simulator is
appropriate for a given task. Finally, we provide a series of benchmark
simulations of different types of networks of spiking neurons, including
Hodgkin-Huxley type, integrate-and-fire models, interacting with current-based
or conductance-based synapses, using clock-driven or event-driven integration
strategies. The same set of models is implemented on the different simulators,
and the code is made available. The ultimate goal of this review is to
provide a resource to facilitate identifying the appropriate integration
strategy and simulation tool for a given modeling problem related to
spiking neural networks.
Comment: 49 pages, 24 figures, 1 table; review article, Journal of Computational Neuroscience, in press (2007)
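The clock-driven versus event-driven distinction discussed above can be illustrated with a minimal sketch (not taken from the review's benchmark code; all parameter values are illustrative): a leaky integrate-and-fire neuron receiving constant suprathreshold input, integrated either on a fixed time grid or by jumping analytically from spike to spike.

```python
import math

# Illustrative parameters, not the review's benchmark values.
TAU = 20.0      # membrane time constant (ms)
V_TH = 1.0      # spike threshold (dimensionless units)
V_RESET = 0.0   # reset potential
I_EXT = 1.1     # constant suprathreshold drive

def clock_driven(t_end, dt=0.1):
    """Integrate a leaky integrate-and-fire neuron on a fixed time grid."""
    v, t, spikes = 0.0, 0.0, []
    while t < t_end:
        # Forward-Euler step of dv/dt = (-v + I_EXT) / TAU
        v += dt * (-v + I_EXT) / TAU
        t += dt
        if v >= V_TH:              # threshold crossing detected on the grid
            spikes.append(t)
            v = V_RESET
    return spikes

def event_driven(t_end):
    """Jump directly between spike times using the closed-form solution."""
    t, spikes = 0.0, []
    # From reset, v(s) = I_EXT * (1 - exp(-s / TAU)), so the exact
    # interspike interval solves v(s) = V_TH:
    isi = -TAU * math.log(1.0 - V_TH / I_EXT)
    while t + isi <= t_end:
        t += isi
        spikes.append(t)
    return spikes
```

For constant input the event-driven scheme is exact, whereas the clock-driven spike times are quantized to the step `dt` and inherit the Euler integration error — precisely the precision issue the review examines for plasticity rules that depend on exact spike timing.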
A synaptic learning rule for exploiting nonlinear dendritic computation
Information processing in the brain depends on the integration of synaptic input distributed throughout neuronal dendrites. Dendritic integration is a hierarchical process, proposed to be equivalent to integration by a multilayer network, potentially endowing single neurons with substantial computational power. However, whether neurons can learn to harness dendritic properties to realize this potential is unknown. Here, we develop a learning rule from dendritic cable theory and use it to investigate the processing capacity of a detailed pyramidal neuron model. We show that computations using spatial or temporal features of synaptic input patterns can be learned, and even synergistically combined, to solve a canonical nonlinear feature-binding problem. The voltage dependence of the learning rule drives coactive synapses to engage dendritic nonlinearities, whereas spike-timing dependence shapes the time course of subthreshold potentials. Dendritic input-output relationships can therefore be flexibly tuned through synaptic plasticity, allowing optimal implementation of nonlinear functions by single neurons.
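As a purely schematic illustration of how voltage dependence and spike-timing dependence might combine in a single synaptic update — this is not the paper's cable-theory-derived rule; the threshold, time constant, and functional form below are invented for illustration only:

```python
import math

# Illustrative constants; not fitted to any model from the paper.
ETA = 0.01        # learning rate
V_NL = -40.0      # hypothetical dendritic nonlinearity threshold (mV)
TAU_STDP = 20.0   # hypothetical timing-dependence time constant (ms)

def weight_update(w, v_dend, dt_post_pre):
    """Schematic update combining a voltage term and a timing term.

    v_dend: local dendritic voltage at the synapse (mV)
    dt_post_pre: t_post - t_pre (ms); positive means pre before post
    """
    # Voltage dependence: potentiation only once the local dendritic
    # voltage engages the nonlinearity, mimicking the cooperativity of
    # coactive synapses described in the abstract.
    voltage_term = 1.0 if v_dend >= V_NL else -0.5
    # Timing dependence: exponentially decaying influence of the
    # pre/post spike-time difference, signed by its order.
    timing_term = math.exp(-abs(dt_post_pre) / TAU_STDP)
    timing_term *= 1.0 if dt_post_pre >= 0 else -1.0
    return w + ETA * voltage_term * timing_term
```

The point of the sketch is only the interaction structure: the voltage factor gates whether coincident input is potentiated, while the timing factor shapes the temporal window, mirroring the two dependencies the abstract attributes to the learned rule.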
Dynamics and Synchronization in Neuronal Models
This thesis is mainly devoted to the modeling and simulation of neuronal systems. Among other aspects, it investigates the role of noise acting on neurons. The phenomenon of stochastic resonance is characterized both theoretically and experimentally in a set of neurons of the motor system. The thesis also studies the role of heterogeneity in a set of coupled neurons, showing that heterogeneity in some neuronal parameters can improve the response of the system to an external periodic modulation. We also study the effect of connection topology and conduction delays in a neuronal network, exploring how the topological properties and conduction delays of different classes of networks affect the ability of neurons to establish a well-defined temporal relationship through their action potentials. In particular, the concept of consistency is introduced and studied in a neuronal network when plasticity of the connections of the network is taken into account.
Neural Field Models: A mathematical overview and unifying framework
Rhythmic electrical activity in the brain emerges from regular non-trivial
interactions between millions of neurons. Neurons are intricate cellular
structures that transmit excitatory (or inhibitory) signals to other neurons,
often non-locally, depending on the graded input from other neurons. Often this
requires extensive detail to model mathematically, which poses several issues
in modelling large systems beyond clusters of neurons, such as the whole brain.
Approaching large populations of neurons with interconnected constituent
single-neuron models leads to an accumulation of exponentially many
complexities, rendering realistic simulations mathematically intractable and
obscuring the primary interactions required for emergent electrodynamical
patterns in brain rhythms. A statistical mechanics approach with non-local
interactions may circumvent these issues while maintaining mathematical
tractability. Neural field theory is a
population-level approach to modelling large sections of neural tissue based on
these principles. Herein we provide a review of key stages of the history and
development of neural field theory and contemporary uses of this branch of
mathematical neuroscience. We elucidate a mathematical framework in which
neural field models can be derived, highlighting the many significant inherited
assumptions that exist in the current literature, so that their validity may be
considered in light of further developments in both mathematical and
experimental neuroscience.
Comment: 55 pages, 10 figures, 2 tables
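A minimal concrete instance of the population-level approach described above is the classic Amari-type neural field equation, du/dt = (-u + w * f(u) + I)/tau, where * is spatial convolution. The sketch below (all parameters illustrative, not drawn from the review) integrates it with forward Euler on a one-dimensional ring:

```python
import math

# Amari-style neural field on a 1-D ring; parameters are illustrative.
N = 128                      # spatial grid points
L = 2 * math.pi              # domain length (ring)
DX = L / N                   # grid spacing
TAU = 1.0                    # population time constant

def kernel(d):
    """Mexican-hat connectivity: local excitation, lateral inhibition."""
    return 1.5 * math.exp(-d ** 2 / 0.5) - 0.5 * math.exp(-d ** 2 / 2.0)

def firing_rate(u):
    """Sigmoidal population firing-rate function f(u)."""
    return 1.0 / (1.0 + math.exp(-5.0 * (u - 0.5)))

def step(u, stim, dt=0.05):
    """One Euler step of du/dt = (-u + kernel * f(u) + stim) / TAU."""
    conv = []
    for i in range(N):
        s = 0.0
        for j in range(N):
            d = min(abs(i - j), N - abs(i - j)) * DX   # ring distance
            s += kernel(d) * firing_rate(u[j]) * DX    # discretized convolution
        conv.append(s)
    return [u[i] + dt * (-u[i] + conv[i] + stim[i]) / TAU for i in range(N)]
```

Iterating `step` with a localized stimulus produces a localized bump of activity sustained by the excitatory centre of the kernel — the kind of emergent population-level pattern that neural field models are built to capture without simulating individual neurons.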
DEVELOPMENT OF A CEREBELLAR MEAN FIELD MODEL: THE THEORETICAL FRAMEWORK, THE IMPLEMENTATION AND THE FIRST APPLICATION
Brain modeling constantly evolves to improve the accuracy of simulated brain dynamics, with the ambitious aim of building a digital twin of the brain. Models tuned to region-specific features enrich brain simulations by introducing bottom-up physiological properties into data-driven simulators. Although the cerebellum contains about 80% of the brain's neurons and is deeply involved in a wide range of functions, from sensorimotor to cognitive ones, a specific cerebellar model is still missing. Furthermore, its quasi-crystalline multi-layer circuitry differs profoundly from that of the cerebral cortex, so it is hard to imagine a single general model suitable for realistic simulation of both the cerebellar and the cerebral cortex.
The present thesis tackles the challenge of developing a specific model for the cerebellum. Specifically, a multi-neuron, multi-layer mean field (MF) model of the cerebellar network, including Granule Cells, Golgi Cells, Molecular Layer Interneurons, and Purkinje Cells, was implemented and validated against experimental data and the corresponding spiking neural network microcircuit model. The cerebellar MF model was built as a system of interdependent equations, in which the single neuronal populations and topological parameters were captured by neuron-specific interdependent Transfer Functions. The model's time resolution was optimized using Local Field Potentials recorded experimentally with a high-density multielectrode array from acute mouse cerebellar slices. The MF model satisfactorily captured the average discharge of the different microcircuit neuronal populations in response to various input patterns and was able to predict the changes in Purkinje Cell firing patterns occurring in specific behavioral conditions: cortical plasticity mapping, which drives learning in associative tasks, and Molecular Layer Interneuron feed-forward inhibition, which controls Purkinje Cell activity patterns.
The cerebellar multi-layer MF model thus provides a computationally efficient tool for investigating the causal relationship between microscopic neuronal properties and ensemble brain activity in health and pathology. Furthermore, preliminary attempts to simulate a pathological cerebellum were made with a view to introducing our multi-layer cerebellar MF model into whole-brain simulators to realize patient-specific treatments, moving towards personalized medicine. Two preliminary works assessed the substantial impact of the cerebellum on whole-brain dynamics and its role in modulating complex responses in causally connected cerebral regions, confirming that a specific model is required to further investigate the cerebellum-on-cerebrum influence.
The framework presented in this thesis allows the development of multi-layer MF models capturing the features of a specific brain region (e.g., cerebellum, basal ganglia), defining a general strategy for building a pool of biologically grounded MF models for computationally feasible simulations. Interconnected bottom-up MF models integrated into large-scale simulators would capture the specific features of different brain regions, while applications of a virtual brain would have substantial real-world impact, ranging from the characterization of neurobiological processes and subject-specific preoperative planning to the development of neuro-prosthetic devices.
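The transfer-function formalism described above can be caricatured as follows: each population's steady output rate is a transfer function of its weighted input rates, and the coupled populations are relaxed to a joint fixed point. The two-population sketch below is in the spirit of the granule/Golgi loop, but the sigmoidal transfer function and connectivity weights are placeholders, not the thesis's experimentally fitted TFs:

```python
import math

def tf(total_input, gain=2.0, threshold=1.0, r_max=100.0):
    """Generic sigmoidal transfer function: weighted input -> output rate (Hz).
    Shape and parameters are illustrative placeholders."""
    return r_max / (1.0 + math.exp(-gain * (total_input - threshold)))

def relax(mossy_rate, steps=200, dt=0.001, tau=0.005):
    """Relax granule- and Golgi-cell population rates to a fixed point."""
    grc, goc = 0.0, 0.0   # granule and Golgi population rates (Hz)
    for _ in range(steps):
        # Granule cells: excited by mossy fibres, inhibited by Golgi cells.
        grc_in = 0.05 * mossy_rate - 0.02 * goc
        # Golgi cells: excited by mossy fibres and by granule cells.
        goc_in = 0.03 * mossy_rate + 0.01 * grc
        # First-order relaxation of each population toward its TF output.
        grc += dt / tau * (-grc + tf(grc_in))
        goc += dt / tau * (-goc + tf(goc_in))
    return grc, goc
```

The design point is that each population is summarized by one rate variable and one transfer function, so the whole loop costs a handful of scalar updates per time step — the computational economy that makes MF models attractive for whole-brain simulators, compared with simulating every microcircuit neuron.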
A Review of Findings from Neuroscience and Cognitive Psychology as Possible Inspiration for the Path to Artificial General Intelligence
This review aims to contribute to the quest for artificial general
intelligence by examining neuroscience and cognitive psychology methods for
potential inspiration. Despite the impressive advancements achieved by deep
learning models in various domains, they still have shortcomings in abstract
reasoning and causal understanding. Such capabilities should be ultimately
integrated into artificial intelligence systems in order to surpass data-driven
limitations and support decision making in a way more similar to human
intelligence. This work is a vertical review that attempts a wide-ranging
exploration of brain function, spanning from lower-level biological neurons,
spiking neural networks, and neuronal ensembles to higher-level concepts such
as brain anatomy, vector symbolic architectures, cognitive and categorization
models, and cognitive architectures. The hope is that these concepts may offer
insights for solutions in artificial general intelligence.
Comment: 143 pages, 49 figures, 244 references