    Event-driven simulation of spiking neurons with stochastic dynamics

    We present a new technique, based on the event-based strategy proposed by Mattia & Del Giudice (2000), for efficiently simulating large networks of simple model neurons. That strategy exploits the fact that interactions among neurons occur by means of events (the action potentials) that are well localized in time and relatively rare. In the interval between two such events, the state variables associated with a model neuron or a synapse evolve deterministically and in a predictable way. Here, we extend the event-driven simulation strategy to the case in which the dynamics of the state variables in the inter-event intervals are stochastic. This extension captures both the situation in which the simulated neurons are inherently noisy and the case in which they are embedded in a very large network and receive a huge number of random synaptic inputs. We show how to effectively include the impact of large background populations on neuronal dynamics by numerically evaluating the statistical properties of single model neurons under random current injection. The new simulation strategy allows the study of networks of interacting neurons with an arbitrary number of external afferents and inherent stochastic dynamics.
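
    The core of the approach can be illustrated with a short sketch: a neuron's state is touched only when a spike event is processed, and the inter-event evolution is sampled from the transition density of a diffusion process instead of being integrated step by step. The minimal Python sketch below assumes an Ornstein-Uhlenbeck membrane approximating many random background inputs; the parameter values, per-target weights, and fixed delay are illustrative, not the authors' implementation.

```python
import heapq
import math
import random

TAU = 20.0       # membrane time constant (ms)
MU = 0.5         # mean background drive
SIGMA = 0.3      # background noise amplitude
V_THRESH = 1.0   # firing threshold
DELAY = 1.5      # synaptic delay (ms)

class Neuron:
    def __init__(self):
        self.v = 0.0       # membrane potential
        self.t_last = 0.0  # time of the last state update

    def advance(self, t):
        """Advance the state to time t in one jump by sampling the exact
        OU transition density (stochastic inter-event dynamics). For
        brevity this ignores threshold crossings inside the interval,
        which the paper handles via numerically evaluated statistics."""
        dt = t - self.t_last
        decay = math.exp(-dt / TAU)
        mean = self.v * decay + MU * (1.0 - decay)
        var = SIGMA ** 2 * TAU / 2.0 * (1.0 - decay ** 2)
        self.v = random.gauss(mean, math.sqrt(var))
        self.t_last = t

def simulate(neurons, targets, weights, initial_events, t_end):
    """Event-driven loop: a neuron's state is updated only when a spike
    reaches it, never on a fixed time grid."""
    queue = list(initial_events)  # (time, neuron index) pairs
    heapq.heapify(queue)
    spikes = []
    while queue:
        t, i = heapq.heappop(queue)
        if t > t_end:
            break
        n = neurons[i]
        n.advance(t)           # stochastic evolution since the last event
        n.v += weights[i]      # apply the spike (one weight per target: a simplification)
        if n.v >= V_THRESH:    # threshold crossing -> emit a new event
            n.v = 0.0
            spikes.append((t, i))
            for j in targets[i]:
                heapq.heappush(queue, (t + DELAY, j))
    return spikes
```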

    Model-driven analysis of eyeblink classical conditioning reveals the underlying structure of cerebellar plasticity and neuronal activity

    The cerebellum plays a critical role in sensorimotor control. However, how the specific circuits and plastic mechanisms of the cerebellum are engaged in closed-loop processing is still unclear. We developed an artificial sensorimotor control system embedding a detailed spiking cerebellar microcircuit with three bidirectional plasticity sites. This system proved able to reproduce a cerebellar-driven associative paradigm, eyeblink classical conditioning (EBCC), in which a precise time relationship between an unconditioned stimulus (US) and a conditioned stimulus (CS) is established. We challenged the spiking model to fit an experimental data set from human subjects. Two subsequent sessions of EBCC acquisition and extinction were recorded, and transcranial magnetic stimulation (TMS) was applied over the cerebellum to alter circuit function and plasticity. Evolutionary algorithms were used to find near-optimal model parameters reproducing the behavior of subjects in the different sessions of the protocol. The main finding is that the optimized cerebellar model learned to anticipate (predict) conditioned responses with accurate timing and success rate, demonstrating fast acquisition, memory stabilization, rapid extinction, and faster reacquisition, as in EBCC in humans. The firing of Purkinje cells (PCs) and deep cerebellar nuclei (DCN) changed during learning under the control of synaptic plasticity, which evolved at different rates, with faster acquisition in the cerebellar cortex than at DCN synapses. Eventually, reduced PC activity released DCN discharge just after the CS, precisely anticipating the US and causing the eyeblink. Moreover, a specific alteration in cortical plasticity explained the EBCC changes induced by cerebellar TMS in humans. This paper shows, for the first time, how closed-loop simulations using detailed cerebellar microcircuit models can successfully fit real experimental data sets. The changes in model parameters across the different sessions of the protocol thus unveil how implicit microcircuit mechanisms can generate normal and altered associative behaviors.
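
    As a rough illustration of the fitting procedure described above, the sketch below runs a generic evolutionary search over a parameter vector scored against recorded conditioned-response (CR) rates. The simulator stub, the target data, and all parameter names are placeholders, not the paper's cerebellar model or data set.

```python
import random

TARGET_CR = [0.1, 0.4, 0.7, 0.8]   # hypothetical CR rates per training block

def simulate_ebcc(params):
    """Placeholder for the spiking cerebellar simulation: maps two
    plasticity parameters to per-block CR rates."""
    ltp, ltd = params
    return [min(1.0, ltp * (block + 1) - ltd * 0.1) for block in range(4)]

def fitness(params):
    # Negative squared error between simulated and recorded CR rates
    sim = simulate_ebcc(params)
    return -sum((s - t) ** 2 for s, t in zip(sim, TARGET_CR))

def evolve(pop_size=40, generations=50, sigma=0.05):
    pop = [(random.uniform(0, 0.5), random.uniform(0, 0.5))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 4]               # truncation selection
        pop = elite + [
            tuple(max(0.0, g + random.gauss(0, sigma))
                  for g in random.choice(elite))  # Gaussian mutation
            for _ in range(pop_size - len(elite))
        ]
    return max(pop, key=fitness)

best = evolve()
print("best parameters:", best, "fitness:", fitness(best))
```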

    GPU-based implementation of real-time system for spiking neural networks

    Real-time simulations of biological neural networks (BNNs) provide a natural platform for applications in a variety of fields: data classification and pattern recognition, prediction and estimation, signal processing, control and robotics, prosthetics, and neurological and neuroscientific modeling. BNNs possess an inherently parallel architecture and operate in a continuous signal domain. Spiking neural networks (SNNs) are a type of BNN with reduced signal dynamic range: communication between neurons occurs by means of time-stamped events (spikes). SNNs reduce algorithmic complexity and communication data size at the price of a small loss in accuracy. Simulating SNNs on traditional sequential computer architectures incurs a significant time penalty, which prohibits their application in real-time systems. Graphics processing units (GPUs) are cost-effective devices specifically designed to exploit parallel, shared-memory-based floating-point operations, applied not only to computer graphics but also to scientific computation. This makes them an attractive solution for SNN simulation compared to FPGA, ASIC, and cluster message-passing computing systems. Successful implementations of GPU-based SNN simulations have already been reported. The contribution of this thesis is the development of a scalable GPU-based real-time system that provides an initial framework for the design and application of SNNs in various domains. The system delivers an interface that establishes communication with neurons in the network and visualizes the outcome the network produces. Accuracy of the simulation is emphasized because of its importance in systems that exploit spike-timing-dependent plasticity, classical conditioning, and learning. As a result, a small network of 3840 Izhikevich neurons, implemented as a hybrid system with the Parker-Sochacki numerical integration method, achieves real-time operation on a GTX260 device. An application case study of the system modeling the receptor layer of the retina is reviewed.
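
    The reason SNN updates map well to GPUs is that every neuron applies the same arithmetic independently, so the per-timestep work is data-parallel. The sketch below shows this structure with a vectorized Izhikevich update in Python/NumPy; the time step, input currents, and the use of simple Euler integration (in place of the thesis's Parker-Sochacki method) are illustrative assumptions.

```python
import numpy as np

N = 3840                 # network size reported in the abstract
DT = 0.5                 # time step (ms), an illustrative choice

# Izhikevich regular-spiking parameters (standard published values)
a, b, c, d = 0.02, 0.2, -65.0, 8.0

v = np.full(N, -65.0)    # membrane potentials
u = b * v                # recovery variables

def step(v, u, i_syn):
    """One update of all N neurons; on a GPU each neuron would map to a thread."""
    v = v + DT * (0.04 * v**2 + 5.0 * v + 140.0 - u + i_syn)
    u = u + DT * a * (b * v - u)
    fired = v >= 30.0    # spike threshold of the Izhikevich model
    v = np.where(fired, c, v)       # reset fired neurons
    u = np.where(fired, u + d, u)   # bump their recovery variable
    return v, u, fired

v, u, fired = step(v, u, np.random.uniform(0.0, 10.0, N))
print("spikes this step:", int(fired.sum()))
```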

    Real-Time Computing Platform for Spiking Neurons (RT-Spike)

    Intrinsically Evolvable Artificial Neural Networks

    Dedicated hardware implementations of neural networks promise faster, lower-power operation than software implementations executing on processors. Unfortunately, most custom hardware implementations do not support intrinsic training of these networks on-chip. Training is typically done using offline software simulations, and the resulting network is synthesized and targeted to the hardware offline. The FPGA design presented here facilitates on-chip intrinsic training of artificial neural networks. Block-based neural networks (BbNNs), the type of artificial neural network implemented here, are grid-based networks of neuron blocks. These networks are trained using genetic algorithms that simultaneously optimize the network structure and the internal synaptic parameters. The design supports online structure and parameter updates and is an intrinsically evolvable BbNN platform supporting functional-level hardware evolution. Functional-level evolvable hardware (EHW) uses evolutionary algorithms to evolve the interconnections and internal parameters of functional modules in reconfigurable computing (RC) systems such as FPGAs. Functional modules can be any hardware modules, such as multipliers, adders, and trigonometric functions. In the implementation presented, the functional module is a neuron block. The designed platform is suitable for applications in dynamic environments and can be adapted and retrained online. The online training capability has been demonstrated using a case study. A performance characterization model for RC implementations of BbNNs has also been presented.
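
    The joint structure-and-parameter search can be sketched as follows: each genome holds, per block of the grid, a discrete mode gene and a vector of synaptic weights, and the genetic algorithm mutates both. The grid size, block encoding, and fitness function below are illustrative placeholders; on the actual platform, fitness would be measured intrinsically on-chip.

```python
import random

ROWS, COLS, MODES, N_WEIGHTS = 2, 2, 4, 4   # hypothetical grid and block encoding

def random_block():
    # One gene per block: (structure mode, synaptic weights)
    return (random.randrange(MODES),
            [random.uniform(-1.0, 1.0) for _ in range(N_WEIGHTS)])

def evaluate(genome):
    """Placeholder fitness: stands in for running the evolved network
    on-chip against a training signal."""
    return -sum(abs(w) for _, ws in genome for w in ws)

def mutate(genome, p_mode=0.1, sigma=0.1):
    out = []
    for mode, ws in genome:
        if random.random() < p_mode:                   # structural mutation
            mode = random.randrange(MODES)
        ws = [w + random.gauss(0, sigma) for w in ws]  # parameter mutation
        out.append((mode, ws))
    return out

def ga(pop_size=20, generations=30):
    pop = [[random_block() for _ in range(ROWS * COLS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=evaluate, reverse=True)
        survivors = pop[:pop_size // 2]                # truncation selection
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=evaluate)

best = ga()
print("best fitness:", evaluate(best))
```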