    A Conformal Fractional Derivative-based Leaky Integrate-and-Fire Neuron Model

    Neuron models have been extensively studied and many different models have been proposed. The Hodgkin-Huxley model, developed by the Nobel laureates Hodgkin and Huxley, is physiologically relevant and can reproduce a variety of neural behaviors, but it is mathematically complex. For this reason, simplified neuron models such as the integrate-and-fire model and its derivatives are more popular in the literature for studying neural populations. Lapicque's integrate-and-fire model was proposed in 1907, and its leaky integrate-and-fire version remains very popular due to its simplicity. To improve this simple model and capture additional aspects of neuronal behavior, many variants of it have been proposed. Fractional-order derivative-based neuron models are one such variant; they can exhibit adaptation without requiring additional differential equations. However, fractional-order derivatives can be computationally costly. Recently, the conformal fractional derivative (CFD) was suggested in the literature; it is easy to understand and implement compared to other fractional-derivative definitions. In this study, a CFD-based leaky integrate-and-fire neuron model is proposed. The model captures the adaptation of firing rate under sustained current injection. The results suggest that it could be used to implement network models easily and efficiently, as well as to model different sensory afferents.
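
    As an illustration of the underlying idea (a sketch under assumed parameters, not the authors' exact formulation), the conformable fractional derivative satisfies T_alpha f(t) = t^(1-alpha) f'(t), so a CFD-based LIF equation T_alpha V = (-(V - V_rest) + R*I)/tau can be integrated as an ordinary ODE with a t^(alpha-1) prefactor. The following Python sketch shows how this produces spike-rate adaptation under constant current:

    import numpy as np

    # Conformable-fractional LIF sketch: T_alpha V = (-(V - V_rest) + R*I) / tau,
    # where T_alpha f(t) = t**(1 - alpha) * f'(t), hence
    # dV/dt = t**(alpha - 1) * (-(V - V_rest) + R*I) / tau.
    # All parameter values are illustrative assumptions.
    alpha, tau, R = 0.8, 10.0, 1.0               # order, membrane time constant (ms), resistance
    V_rest, V_th, V_reset = -65.0, -50.0, -65.0  # rest, threshold, reset potentials (mV)
    dt, T, I = 0.01, 200.0, 20.0                 # time step (ms), duration (ms), constant input

    V, spikes = V_rest, []
    t = dt                                       # start just after 0: t**(alpha-1) diverges at t=0
    while t < T:
        dV = t**(alpha - 1.0) * (-(V - V_rest) + R * I) / tau
        V += dt * dV                             # forward-Euler step of the conformable ODE
        if V >= V_th:                            # threshold crossing: record spike, reset membrane
            spikes.append(t)
            V = V_reset
        t += dt

    # For alpha < 1 the inter-spike intervals lengthen over time: firing-rate adaptation.
    print(np.diff(spikes))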

    Hardware design of LIF with Latency neuron model with memristive STDP synapses

    In this paper, the hardware implementation of a neuromorphic system is presented. The system is composed of a Leaky Integrate-and-Fire with Latency (LIFL) neuron and a Spike-Timing Dependent Plasticity (STDP) synapse. The LIFL neuron model can encode more information than the common integrate-and-fire models typically considered for neuromorphic implementations. In our system, the LIFL neuron is implemented using CMOS circuits, while a memristor is used to implement the STDP synapse. A description of the entire circuit is provided. Finally, the capabilities of the proposed architecture are evaluated by simulating a motif composed of three neurons and two synapses. The simulation results confirm the validity of the proposed system and its suitability for the design of more complex spiking neural networks.
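
    For background on the plasticity rule involved, the sketch below shows a generic pair-based STDP model with assumed parameters, serving as a behavioral stand-in for the memristive synapse rather than the paper's device physics:

    import math

    # Generic pair-based STDP rule (behavioral sketch; parameter values are assumptions,
    # and this is not the paper's memristor device model).
    A_plus, A_minus = 0.01, 0.012     # potentiation / depression amplitudes
    tau_plus, tau_minus = 20.0, 20.0  # exponential window time constants (ms)

    def stdp_dw(t_pre, t_post):
        """Weight change for a single pre/post spike pair (times in ms)."""
        dt = t_post - t_pre
        if dt > 0:    # pre before post: long-term potentiation
            return A_plus * math.exp(-dt / tau_plus)
        if dt < 0:    # post before pre: long-term depression
            return -A_minus * math.exp(dt / tau_minus)
        return 0.0

    w = 0.5                                                  # normalized memristive conductance
    for t_pre, t_post in [(10.0, 15.0), (40.0, 38.0)]:
        w = min(1.0, max(0.0, w + stdp_dw(t_pre, t_post)))   # clip to the device's range
    print(w)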

    Fractals in the Nervous System: Conceptual Implications for Theoretical Neuroscience

    This essay is presented with two principal objectives in mind: first, to document the prevalence of fractals at all levels of the nervous system, giving credence to the notion of their functional relevance; and second, to draw attention to the still unresolved issues of the detailed relationships among power-law scaling, self-similarity, and self-organized criticality. As regards criticality, I will document that it has become a pivotal reference point in neurodynamics. Furthermore, I will emphasize the not yet fully appreciated significance of allometric control processes. For dynamic fractals, I will assemble reasons for attributing to them the capacity to adapt task execution to contextual changes across a range of scales. The final section consists of general reflections on the implications of the reviewed data and identifies what appear to be issues of fundamental importance for future research in the rapidly evolving topic of this review.
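
    To make the notion of power-law scaling concrete (a generic illustration, not drawn from the essay), a distribution P(s) proportional to s^(-gamma) appears as a straight line of slope -gamma on log-log axes, and a rough exponent estimate can be read off by linear regression on binned data:

    import numpy as np

    # Rough power-law exponent estimate via log-log regression (illustrative only;
    # rigorous analyses use maximum-likelihood estimators, e.g. Clauset et al., 2009).
    rng = np.random.default_rng(0)
    gamma = 1.5                                   # assumed true exponent
    # Synthetic "avalanche sizes" drawn from P(s) ~ s**(-gamma), s >= 1 (inverse transform).
    sizes = (1.0 - rng.random(10_000)) ** (-1.0 / (gamma - 1.0))

    dens, edges = np.histogram(sizes, bins=np.logspace(0, 3, 30), density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])     # geometric bin centers
    mask = dens > 0
    slope, _ = np.polyfit(np.log(centers[mask]), np.log(dens[mask]), 1)
    print(f"estimated exponent: {-slope:.2f}")    # close to gamma for a genuine power law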

    Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware

    SpiNNaker is a digital neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning, since the rule requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network, which we simulate at scales of up to 2.0 × 10⁴ neurons and 5.1 × 10⁷ plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer and find that, to match the run-time of our SpiNNaker simulation, the supercomputer uses approximately 45× more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.
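
    The core trick of an event-based implementation (shown here as a generic single-trace sketch with assumed names, not the paper's actual BCPNN equations, which involve several coupled traces per synapse) is to replace per-timestep updates with a closed-form decay evaluated only at spike events:

    import math

    # Event-driven state update sketch: instead of decaying a synaptic trace every
    # simulation time-step, use the analytical solution of dz/dt = -z/tau to jump
    # the state forward whenever a spike event actually arrives.
    class EventDrivenTrace:
        """Low-pass-filtered spike trace z(t), updated only at spike events."""
        def __init__(self, tau):
            self.tau = tau       # decay time constant (ms)
            self.z = 0.0         # trace value at the time of the last update
            self.t_last = 0.0    # time of the last update (ms)

        def value(self, t):
            # Closed-form solution between events: z(t) = z(t_last) * exp(-(t - t_last)/tau).
            return self.z * math.exp(-(t - self.t_last) / self.tau)

        def on_spike(self, t):
            # Bring the trace up to date analytically, then add the spike increment.
            self.z = self.value(t) + 1.0
            self.t_last = t

    trace = EventDrivenTrace(tau=50.0)
    for t in (10.0, 30.0, 90.0):
        trace.on_spike(t)
    print(trace.value(100.0))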

    Linear response for spiking neuronal networks with unbounded memory

    We establish a general linear response relation for spiking neuronal networks, based on chains with unbounded memory. This relation allows us to predict the influence of a weak-amplitude, time-dependent external stimulus on spatio-temporal spike correlations from the spontaneous statistics (without stimulus), in a general context where the memory in the spike dynamics can extend arbitrarily far into the past. Using this approach, we show how the linear response is explicitly related to neuronal dynamics with an example, the gIF model introduced by M. Rudolph and A. Destexhe. This example illustrates the combined effect of the stimulus, intrinsic neuronal dynamics, and network connectivity on spike statistics. We illustrate our results with numerical simulations. (Comment: 60 pages, 8 figures)
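
    Schematically (in a generic fluctuation-dissipation form with assumed notation, not the paper's exact statement), a first-order linear response relation expresses the stimulus-induced change in the expectation of an observable f as a convolution of the stimulus with a susceptibility kernel built from the spontaneous spike correlations:

    % Generic first-order linear response (notation assumed for illustration):
    % delta mu_t[f] is the change in the expectation of the observable f at time t
    % under a weak stimulus S; the kernel kappa_f is determined by the spontaneous
    % (stimulus-free) spike correlations.
    \[
      \delta\mu_t[f] \;=\; \sum_{s \le t} \kappa_f(t - s)\, S(s) \;+\; O\!\left(\|S\|^{2}\right).
    \]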

    Statistics of spike trains in conductance-based neural networks: Rigorous results

    We consider a conductance-based neural network inspired by the generalized Integrate-and-Fire model introduced by Rudolph and Destexhe. We show the existence and uniqueness of a Gibbs distribution characterizing the spike train statistics. The corresponding Gibbs potential is explicitly computed. These results hold in the presence of a time-dependent stimulus and therefore apply to non-stationary dynamics. (Comment: 42 pages, 1 figure, to appear in the Journal of Mathematical Neuroscience)
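
    In schematic form (generic thermodynamic-formalism notation, assumed for illustration), a Gibbs distribution mu over spike trains omega controls the probability of a spike block through a potential phi, up to a normalization and uniformly bounded constants:

    % Schematic Gibbs property for spike-train statistics (notation assumed):
    % sigma denotes the time shift on spike trains; the normalization (topological
    % pressure) is absorbed into the two-sided bounds implied by \asymp.
    \[
      \mu\!\left[\omega_0^{n}\right]
      \;\asymp\;
      \exp\!\Big(\sum_{k=0}^{n} \varphi\big(\sigma^{k}\omega\big)\Big).
    \]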

    Time Fractional Cable Equation and Applications in Neurophysiology

    We propose an extension of the cable equation by introducing a Caputo time-fractional derivative. The fundamental solutions of the most common boundary problems are derived analytically via the Laplace transform, and the results can be written in terms of known special functions. This generalization could be useful for describing anomalous diffusion phenomena with leakage, such as signal conduction in spiny dendrites. The presented solutions are computed in Matlab and plotted. (Comment: 10 figures. arXiv admin note: substantial text overlap with arXiv:1702.0532)
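
    For orientation (standard textbook forms, which may differ in detail from the paper's formulation), the dimensionless cable equation and its Caputo time-fractional extension read:

    % Classical cable equation and a Caputo time-fractional extension
    % (standard forms, assumed for illustration):
    \[
      \frac{\partial V}{\partial T} = \frac{\partial^{2} V}{\partial X^{2}} - V
      \qquad\longrightarrow\qquad
      {}^{C}\!D_{T}^{\gamma} V = \frac{\partial^{2} V}{\partial X^{2}} - V,
      \quad 0 < \gamma \le 1,
    \]
    % where the Caputo derivative of order gamma is
    \[
      {}^{C}\!D_{T}^{\gamma} V(X,T)
      = \frac{1}{\Gamma(1-\gamma)} \int_{0}^{T} (T-s)^{-\gamma}\,
        \frac{\partial V}{\partial s}(X,s)\, \mathrm{d}s .
    \]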

    Beyond LIF neurons on neuromorphic hardware

    Neuromorphic systems aim to provide accelerated, low-power simulation of Spiking Neural Networks (SNNs), typically featuring simple and efficient neuron models such as the Leaky Integrate-and-Fire (LIF) model. Biologically plausible neuron models developed by neuroscientists are largely ignored in neuromorphic computing due to their increased computational costs. This work bridges this gap through the implementation and evaluation of a single-compartment Hodgkin-Huxley (HH) neuron and a multi-compartment neuron incorporating dendritic computation on the SpiNNaker and prototype SpiNNaker2 neuromorphic systems. The numerical accuracy of the model implementations is benchmarked against reference models in the NEURON simulation environment, with excellent agreement achieved by both the fixed- and floating-point SpiNNaker implementations. The computational cost is evaluated through timing measurements profiling neural state updates. While the additional model complexity understandably increases computation times relative to LIF models, a wall-clock time increase of only 8× was observed for the HH neuron (11× for the multi-compartment model), demonstrating the potential of the hardware accelerators in next-generation neuromorphic systems to optimize the implementation of complex neuron models. The benefits of models directly corresponding to biophysiological data are demonstrated: HH neurons can express a range of output behaviors not captured by LIF neurons, and the dendritic compartment provides the first implementation of a spiking multi-compartment neuron model with XOR-solving capabilities on neuromorphic hardware. The work paves the way for the inclusion of more biologically representative neuron models in neuromorphic systems and showcases the benefits of the hardware accelerators included in the next-generation SpiNNaker2 architecture.
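
    For reference, the single-compartment HH model consists of the standard membrane equation plus three gating variables, which is what makes each state update roughly an order of magnitude more expensive than a LIF update:

    % Standard Hodgkin-Huxley single-compartment equations:
    \[
      C_m \frac{dV}{dt} = I_{\mathrm{ext}}
        - \bar{g}_{\mathrm{Na}}\, m^{3} h\,(V - E_{\mathrm{Na}})
        - \bar{g}_{\mathrm{K}}\, n^{4}\,(V - E_{\mathrm{K}})
        - \bar{g}_{L}\,(V - E_{L}),
    \]
    \[
      \frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\, x,
      \qquad x \in \{m, h, n\}.
    \]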