
    Fitting Neuron Models to Spike Trains

    Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input–output properties. Recently, it was found that spiking models, when properly fitted, can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model.
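The fitting workflow this abstract describes can be illustrated with a toy example. The following is a minimal sketch of the idea only, not the Brian toolbox API: simulate a candidate integrate-and-fire model on the recorded input current, score it by spike coincidences, and search the parameter space (here a simple grid over the firing threshold; all names, parameters, and the synthetic "recording" are hypothetical):

```python
def simulate_lif(current, dt=0.1, tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Euler-integrate a leaky integrate-and-fire neuron; return spike times (ms)."""
    v, spikes = v_rest, []
    for i, drive in enumerate(current):
        v += dt * (-(v - v_rest) + drive) / tau
        if v >= v_thresh:
            spikes.append(i * dt)
            v = v_reset
    return spikes

def coincidence_score(model, data, window=2.0):
    """Fraction of recorded spikes matched by a model spike within +/- window ms."""
    if not data:
        return 0.0
    hits = sum(1 for t in data if any(abs(t - s) <= window for s in model))
    return hits / len(data)

# Hypothetical "recorded" spike train, generated from a reference threshold of 1.2.
current = [1.5 if (t // 200) % 2 == 0 else 0.5 for t in range(5000)]  # step current
data_spikes = simulate_lif(current, v_thresh=1.2)

# Grid search over the threshold: the simplest stand-in for a real fitting loop,
# which would use an optimizer over all model parameters at once.
scores = {th: coincidence_score(simulate_lif(current, v_thresh=th), data_spikes)
          for th in (0.8, 1.0, 1.2, 1.4)}
```

Since the "data" was generated at threshold 1.2, that grid point recovers a perfect score; a real fit would add a rate penalty (e.g. the Gamma coincidence factor) so that over-spiking models are not rewarded.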

    An efficient automated parameter tuning framework for spiking neural networks

    As the desire for biologically realistic spiking neural networks (SNNs) increases, tuning the enormous number of open parameters in these models becomes a difficult challenge. SNNs have been used to successfully model complex neural circuits that explore various neural phenomena such as neural plasticity, vision systems, auditory systems, neural oscillations, and many other important topics of neural function. Additionally, SNNs are particularly well-adapted to run on neuromorphic hardware that will support biological brain-scale architectures. Although the inclusion of realistic plasticity equations, neural dynamics, and recurrent topologies has increased the descriptive power of SNNs, it has also made the task of tuning these biologically realistic SNNs difficult. To meet this challenge, we present an automated parameter tuning framework capable of tuning SNNs quickly and efficiently using evolutionary algorithms (EAs) and inexpensive, readily accessible graphics processing units (GPUs). A sample SNN with 4104 neurons was tuned to give V1 simple cell-like tuning curve responses and produce self-organizing receptive fields (SORFs) when presented with a random sequence of counterphase sinusoidal grating stimuli. A performance analysis comparing the GPU-accelerated implementation to a single-threaded central processing unit (CPU) implementation showed a 65× speedup of the GPU implementation over the CPU implementation (0.35 h per generation on the GPU vs. 23.5 h per generation on the CPU). Additionally, the parameter value solutions found in the tuned SNN were studied and found to be stable and repeatable. The automated parameter tuning framework presented here will be of use to both the computational neuroscience and neuromorphic engineering communities, making the process of constructing and tuning large-scale SNNs much quicker and easier.
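The core loop of such a framework can be sketched in a few lines. This is a hedged toy version, not the paper's GPU framework: a (mu + lambda)-style evolutionary algorithm with truncation selection and Gaussian mutation, tuning two hypothetical parameters of a toy neuron so its firing rate matches a target (every name and constant below is an assumption for illustration):

```python
import random

def firing_rate(gain, bias, n_steps=1000, dt=0.1, tau=10.0, v_thresh=1.0):
    """Toy LIF driven by constant input; returns the rate in Hz (100 ms window)."""
    v, spikes = 0.0, 0
    for _ in range(n_steps):
        v += dt * (-v + gain * 1.0 + bias) / tau
        if v >= v_thresh:
            spikes += 1
            v = 0.0
    return spikes / (n_steps * dt / 1000.0)

def fitness(params, target=50.0):
    # Negative rate error: higher is better. A real framework would evaluate
    # the whole population in parallel on the GPU at this step.
    return -abs(firing_rate(*params) - target)

random.seed(0)
pop = [(random.uniform(0.5, 3.0), random.uniform(-0.5, 0.5)) for _ in range(20)]
for gen in range(30):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:5]                                  # truncation selection (elitist)
    pop = parents + [(g + random.gauss(0, 0.1), b + random.gauss(0, 0.05))
                     for g, b in random.choices(parents, k=15)]  # Gaussian mutation
best = max(pop, key=fitness)
```

Because the parents are carried over unchanged, the best fitness is monotone over generations; the expensive part, evaluating the population, is exactly what the paper offloads to the GPU.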

    Action selection in the rhythmic brain: The role of the basal ganglia and tremor.

    Low-frequency oscillatory activity has been the target of extensive research both in cortical structures and in the basal ganglia (BG), due to numerous reports of associations with brain disorders and the normal functioning of the brain. Additionally, a plethora of evidence and theoretical work indicates that the BG might be the locus where conflicts between prospective actions are being resolved. Whereas a number of computational models of the BG investigate these phenomena, these models tend to focus on intrinsic oscillatory mechanisms, neglecting evidence that points to the cortex as the origin of this oscillatory behaviour. In this thesis, we construct a detailed neural model of the complete BG circuit based on fine-tuned spiking neurons, with both electrical and chemical synapses as well as short-term plasticity between structures. To do so, we build a complete suite of computational tools for the design, optimization and simulation of spiking neural networks. Our model successfully reproduces firing and oscillatory behaviour found in both the healthy and Parkinsonian BG, and it was used to make a number of biologically-plausible predictions. First, we investigate the influence of various cortical frequency bands on the intrinsic effective connectivity of the BG, as well as the role of the latter in regulating cortical behaviour. We found that, indeed, effective connectivity changes dramatically for different cortical frequency bands and phase offsets, which are able to modulate (or even block) information flow in the three major BG pathways. Our results indicate the existence of a multimodal gating mechanism at the level of the BG that can be entirely controlled by cortical oscillations, and provide evidence for the hypothesis of cortically-entrained but locally-generated subthalamic beta activity. 
Next, we explore the relationship between the wave properties of entrained cortical inputs, dopamine, and the transient effectiveness of the BG when viewed as an action selection device. We found that cortical frequency, phase, dopamine and the examined time scale all have a very important impact on the ability of our model to select. Our simulations resulted in a canonical profile of selectivity, which we termed selectivity portraits. Taken together, our results suggest that the cortex is the structure that determines whether action selection will be performed and what strategy will be utilized, while the role of the BG is to perform this selection. Some frequency ranges promote the exploitation of actions whose outcomes are known, others promote the exploration of new actions with high uncertainty, while the remaining frequencies simply deactivate selection. Based on this behaviour, we propose a metaphor according to which the basal ganglia can be viewed as the "gearbox" of the cortex. Coalitions of rhythmic cortical areas are able to switch between a repertoire of available BG modes which, in turn, change the course of information flow back to and within the cortex. In the same context, dopamine can be likened to the "control pedals" of action selection that either stop or initiate a decision. The frequency of active cortical areas that project to the BG acts as a gear lever that, instead of controlling the type and direction of thrust as a throttle does in an automobile, dictates the extent to which dopamine can trigger a decision, as well as what type of decision this will be. Finally, we identify a selection cycle with a period of around 200 ms, which was used to assess the biological plausibility of the most popular architectures in cognitive science.
Using extensions of the BG model, we further propose novel mechanisms that explain (1) the two distinctive dynamical behaviours of neurons in the external segment of the globus pallidus, and (2) the generation of resting tremor in Parkinson's disease. Our findings agree well with experimental observations, suggest new insights into the pathophysiology of specific BG disorders, provide new justifications for oscillatory phenomena related to decision making, and reaffirm the role of the BG as the selection centre of the brain.
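The central claim, that the cortical frequency band can open or close a pathway, can be caricatured with a single neuron. The sketch below is drastically simplified relative to the thesis's full BG circuit, and every parameter is an assumption: a leaky integrate-and-fire unit driven by a rectified sinusoidal "cortical" input passes low-frequency drive but is silenced by high-frequency drive, because the membrane time constant low-pass filters the input:

```python
import math

def output_rate(freq_hz, dt=0.1, t_total=1000.0, tau=5.0, v_thresh=1.0, amp=1.5):
    """LIF driven by a rectified sinusoid at freq_hz; returns firing rate in Hz."""
    v, spikes, t = 0.0, 0, 0.0
    while t < t_total:
        drive = max(0.0, amp * math.sin(2 * math.pi * freq_hz * t / 1000.0))
        v += dt * (-v + drive) / tau          # membrane low-pass filter, tau = 5 ms
        if v >= v_thresh:
            spikes += 1
            v = 0.0
        t += dt
    return spikes / (t_total / 1000.0)

# Theta-, beta- and gamma-band drive (illustrative band centres only).
rates = {f: output_rate(f) for f in (4, 20, 80)}
```

At 4 Hz the long depolarized half-cycles let the membrane reach threshold; at 80 Hz the 5 ms membrane filter attenuates the oscillation so strongly that the unit never fires, a toy analogue of a frequency-gated pathway.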

    GPU-based implementation of real-time system for spiking neural networks

    Real-time simulations of biological neural networks (BNNs) provide a natural platform for applications in a variety of fields: data classification and pattern recognition, prediction and estimation, signal processing, control and robotics, prosthetics, and neurological and neuroscientific modeling. BNNs possess an inherently parallel architecture and operate in the continuous signal domain. Spiking neural networks (SNNs) are a type of BNN with a reduced signal dynamic range: communication between neurons occurs by means of time-stamped events (spikes). SNNs allow a reduction of algorithmic complexity and communication data size at the price of a small loss in accuracy. Simulation of SNNs using traditional sequential computer architectures incurs a significant time penalty, which prohibits the application of SNNs in real-time systems. Graphics processing units (GPUs) are cost-effective devices specifically designed to exploit parallel, shared-memory-based floating-point operations, applied not only to computer graphics but also to scientific computations. This makes them an attractive solution for SNN simulation compared to FPGA, ASIC and cluster message-passing computing systems. Successful implementations of GPU-based SNN simulations have already been reported. The contribution of this thesis is the development of a scalable GPU-based real-time system that provides an initial framework for the design and application of SNNs in various domains. The system delivers an interface that establishes communication with neurons in the network as well as visualizes the outcome produced by the network. Accuracy of the simulation is emphasized due to its importance in systems that exploit spike-timing-dependent plasticity, classical conditioning and learning. As a result, a small network of 3840 Izhikevich neurons, implemented as a hybrid system with the Parker-Sochacki numerical integration method, achieves real-time operation on a GTX260 device.
An application case study in which the system models the receptor layer of the retina is also reviewed.
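For reference, the Izhikevich model used in this thesis is a two-variable system. The sketch below integrates it with plain forward Euler rather than the thesis's higher-order Parker-Sochacki method, and uses the standard regular-spiking parameter set; the drive value and step count are arbitrary choices for illustration:

```python
def izhikevich(drive, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5, n_steps=2000):
    """Forward-Euler Izhikevich neuron (regular-spiking parameters).

    v is the membrane potential (mV), u the recovery variable; a spike is
    registered when v crosses 30 mV, after which v resets to c and u jumps by d.
    Returns the spike count over n_steps * dt milliseconds.
    """
    v, u, spikes = -65.0, b * -65.0, 0
    for _ in range(n_steps):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + drive)
        u += dt * a * (b * v - u)
        if v >= 30.0:
            v, u, spikes = c, u + d, spikes + 1
    return spikes
```

With zero drive the regular-spiking cell settles to rest near -70 mV and stays silent; with constant suprathreshold drive it fires tonically, which is the behaviour a real-time simulator must reproduce with tight timing accuracy.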

    Complex Dynamics in Dedicated / Multifunctional Neural Networks and Chaotic Nonlinear Systems

    We study complex behaviors arising in neuroscience and other nonlinear systems by combining dynamical systems analysis with modern computational approaches including GPU parallelization and unsupervised machine learning. To gain insights into the behaviors of brain networks and complex central pattern generators (CPGs), it is important to understand the dynamical principles regulating individual neurons as well as the basic structural and functional building blocks of neural networks. In the first section, we discuss how symbolic methods can help us analyze neural dynamics such as bursting, tonic spiking and chaotic mixed-mode oscillations in various models of individual neurons, the bifurcations that underlie transitions between activity types, as well as emergent network phenomena through synergistic interactions seen in realistic neural circuits, such as network bursting from non-intrinsic bursters. The second section is focused on the origin and coexistence of multistable rhythms in oscillatory neural networks of inhibitory coupled cells. We discuss how network connectivity and intrinsic properties of the cells affect the dynamics, and how even simple circuits can exhibit a variety of mono/multi-stable rhythms including pacemakers, half-center oscillators, multiple traveling-waves, fully synchronous states, as well as various chimeras. Our analyses can help generate verifiable hypotheses for neurophysiological experiments on central pattern generators. In the last section, we demonstrate the inter-disciplinary nature of this research through the applications of these techniques to identify the universal principles governing both simple and complex dynamics, and chaotic structure in diverse nonlinear systems. 
Using a classical example from nonlinear laser optics, we elaborate on the multiplicity and self-similarity of key organizing structures in 2D parameter space such as homoclinic and heteroclinic bifurcation curves, Bykov T-point spirals, and inclination flips. This is followed by detailed computational reconstructions of the spatial organization and 3D embedding of bifurcation surfaces, parametric saddles, and isolated closed curves (isolas). The generality of our modeling approaches could lead to novel methodologies and nonlinear science applications in biological, medical and engineering systems
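The symbolic methods mentioned above reduce a trajectory to a sequence of discrete symbols before analysis. A minimal, hypothetical example of the idea (not the dissertation's actual toolchain) is to symbolize a spike train from its interspike intervals, so that tonic spiking and bursting become different strings:

```python
def symbolize(spike_times, gap_factor=3.0):
    """Map a spike train to a symbol string from its interspike intervals (ISIs).

    '1' marks a short, within-burst interval; '0' marks a quiescent gap
    (an ISI much longer than the median), i.e. a pause between bursts.
    """
    isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
    if not isis:
        return ""
    median = sorted(isis)[len(isis) // 2]
    return "".join("0" if isi > gap_factor * median else "1" for isi in isis)

tonic = list(range(0, 100, 10))                  # evenly spaced spikes
bursty = [0, 2, 4, 50, 52, 54, 100, 102, 104]    # clusters separated by long gaps

print(symbolize(tonic))   # a run of '1's: tonic spiking
print(symbolize(bursty))  # '0's mark the gaps between bursts
```

Once activity is encoded this way, transitions between activity types (e.g. across a bifurcation) show up as changes in the symbol statistics, which is what makes the approach useful for scanning parameter spaces.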

    Large-Scale Simulation of Neural Networks with Biophysically Accurate Models on Graphics Processors

    Efficient simulation of large-scale mammalian brain models provides a crucial computational means for understanding complex brain functions and neuronal dynamics. However, such tasks are hindered by significant computational complexity. In this work, we address the significant computational challenge of simulating large-scale neural networks based on the most biophysically accurate Hodgkin-Huxley (HH) neuron models. Unlike simpler phenomenological spiking models, the use of HH models allows one to directly associate the observed network dynamics with the underlying biological and physiological causes, but at a significantly higher computational cost. We exploit recent commodity massively parallel graphics processors (GPUs) to alleviate the significant computational cost of HH-model-based neural network simulation. We develop look-up-table-based HH model evaluation and efficient parallel implementation strategies geared towards higher arithmetic intensity and minimal thread divergence. Furthermore, we adopt and develop advanced multi-level numerical integration techniques well suited to the intricate dynamical and stability characteristics of HH models. On a commodity GPU card with 240 streaming processors, for a neural network with one million neurons and 200 million synaptic connections, the presented GPU neural network simulator is about 600× faster than a basic serial CPU-based simulator, 28× faster than the CPU implementation of the proposed techniques, and only two to three times slower than GPU-based simulation using simpler spiking models.
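The look-up-table idea is simple to demonstrate: the voltage-dependent HH rate functions are expensive (they contain exponentials and a removable singularity), so they are tabulated once on a voltage grid and then evaluated by linear interpolation. The sketch below uses one common textbook form of the potassium activation rate; conventions for these functions vary between sources, and the grid bounds and resolution here are illustrative assumptions:

```python
import math

def alpha_n(v):
    """HH potassium activation rate, one standard convention (v in mV)."""
    if abs(v + 55.0) < 1e-9:             # removable singularity at v = -55 mV
        return 0.1                       # limit value: 0.01 * 10
    return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))

# Build the table once on a uniform voltage grid.
V_MIN, V_MAX, N = -100.0, 50.0, 1500
STEP = (V_MAX - V_MIN) / N
TABLE = [alpha_n(V_MIN + i * STEP) for i in range(N + 1)]

def alpha_n_lut(v):
    """Linear interpolation into the precomputed table (no exp at run time)."""
    x = (v - V_MIN) / STEP
    i = min(int(x), N - 1)
    frac = x - i
    return TABLE[i] * (1.0 - frac) + TABLE[i + 1] * frac

# Worst-case interpolation error over a fine sweep of the physiological range.
err = max(abs(alpha_n(v / 10.0) - alpha_n_lut(v / 10.0)) for v in range(-900, 400))
```

On a GPU the table lives in texture or shared memory, trading a tiny, controllable accuracy loss (here well below 1e-3) for much higher arithmetic intensity per neuron.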

    SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence

    Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency by introducing neural dynamics and spike properties. As the emerging spiking deep learning paradigm attracts increasing interest, traditional programming frameworks cannot meet the demands of automatic differentiation, parallel computation acceleration, and high integration of neuromorphic dataset processing and deployment. In this work, we present the SpikingJelly framework to address this dilemma. We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips. Compared to existing methods, the training of deep SNNs can be accelerated 11×, and the superior extensibility and flexibility of SpikingJelly enable users to accelerate custom models at low cost through multilevel inheritance and semiautomatic code generation. SpikingJelly paves the way for synthesizing truly energy-efficient SNN-based machine intelligence systems, which will enrich the ecology of neuromorphic computing.
Comment: Accepted in Science Advances (https://www.science.org/doi/10.1126/sciadv.adi1480)
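The key trick that makes spiking deep learning differentiable, and that frameworks in this space rely on, is the surrogate gradient: the spike is a hard step function in the forward pass, but its derivative is replaced by a smooth surrogate in the backward pass. The dependency-free sketch below shows the mechanism only; it is not SpikingJelly's API, and the time constants and inputs are arbitrary:

```python
import math

def lif_forward(inputs, tau=2.0, v_thresh=1.0):
    """Discrete-time LIF forward pass: returns spikes and the membrane trace."""
    v, spikes, trace = 0.0, [], []
    for x in inputs:
        v = v + (x - v) / tau                # leaky integration
        s = 1.0 if v >= v_thresh else 0.0    # non-differentiable spike step
        spikes.append(s)
        trace.append(v)                      # saved for the backward pass
        v = v * (1.0 - s)                    # hard reset after a spike
    return spikes, trace

def surrogate_grad(v, v_thresh=1.0, alpha=2.0):
    """Sigmoid surrogate for the derivative of the spike step at membrane v."""
    s = 1.0 / (1.0 + math.exp(-alpha * (v - v_thresh)))
    return alpha * s * (1.0 - s)

spikes, trace = lif_forward([0.5, 1.5, 0.2, 2.0])
grads = [surrogate_grad(v) for v in trace]   # stand-in for autograd's backward pass
```

In a real framework this substitution is registered once with the autograd engine, so deep SNNs train with ordinary backpropagation through time.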

    Digital twin brain: a bridge between biological intelligence and artificial intelligence

    In recent years, advances in neuroscience and artificial intelligence have paved the way for unprecedented opportunities for understanding the complexity of the brain and its emulation by computational systems. Cutting-edge advancements in neuroscience research have revealed the intricate relationship between brain structure and function, while the success of artificial neural networks highlights the importance of network architecture. Now is the time to bring them together to better unravel how intelligence emerges from the brain's multiscale repositories. In this review, we propose the Digital Twin Brain (DTB) as a transformative platform that bridges the gap between biological and artificial intelligence. It consists of three core elements: the brain structure that is fundamental to the twinning process, bottom-layer models to generate brain functions, and its wide spectrum of applications. Crucially, brain atlases provide a vital constraint, preserving the brain's network organization within the DTB. Furthermore, we highlight open questions that invite joint efforts from interdisciplinary fields and emphasize the far-reaching implications of the DTB. The DTB can offer unprecedented insights into the emergence of intelligence and neurological disorders, which holds tremendous promise for advancing our understanding of both biological and artificial intelligence, and ultimately propelling the development of artificial general intelligence and facilitating precision mental healthcare