33 research outputs found

    Low-cost hardware implementations for discrete-time spiking neural networks

    In this paper, both GPU (Graphics Processing Unit) and FPGA (Field Programmable Gate Array) hardware implementations of a discrete-time spiking neuron model are presented. This generalized model is well suited to large-scale neural network implementations, since its dynamics are entirely represented by a spike train (binary code): at the microscopic scale, the membrane potentials have a one-to-one correspondence with the spike train in the asymptotic dynamics. The model also allows us to reproduce complex spiking dynamics such as those obtained with generalized Integrate-and-Fire (gIF) models. The FPGA design, coded in Handel-C and VHDL, is based on a fixed-point reconfigurable architecture, while the GPU spiking neuron kernel is coded in C++ and CUDA. Numerical verifications are provided.
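    The kind of discrete-time dynamics described above can be sketched as a reset-on-spike, leaky threshold update. The snippet below is a generic BMS-like rule for illustration only, not the paper's implementation; the network size, leak factor, threshold, and weight statistics are all arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 8, 50                 # neurons, time steps (arbitrary)
gamma, theta = 0.9, 1.0      # leak factor and firing threshold (assumed)
W = rng.normal(0.0, 0.5, (N, N))   # synaptic weights
I = rng.uniform(0.0, 0.3, N)       # constant external input

V = np.zeros(N)                    # membrane potentials
spikes = np.zeros((T, N), dtype=int)
for t in range(T):
    Z = (V >= theta).astype(int)   # binary spike vector: the "code"
    spikes[t] = Z
    # leaky integration with reset on spike, plus synaptic input
    V = gamma * V * (1 - Z) + W @ Z + I
```

    The raster `spikes` is exactly the binary code the abstract refers to: under such dynamics, the asymptotic membrane potentials are determined by the spike train itself.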

    Parametric estimation of spike train statistics by Gibbs distributions : an application to bio-inspired and experimental data

    We review here the basics of the formalism of Gibbs distributions and its numerical implementation (details are published elsewhere \cite{vasquez-cessac-etal:10}), in order to characterize the statistics of multi-unit spike trains. We present this with the aim of analyzing and modeling synthetic data, especially bio-inspired simulated data, e.g. from Virtual Retina \cite{wohrer-kornprobst:09}, but also experimental data: Multi-Electrode Array (MEA) recordings from the retina obtained by Adrian Palacios. We remark that Gibbs distributions allow us not only to estimate the spike statistics, given a design choice, but also to compare different models, thus answering comparative questions about the neural code.
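    As a loose illustration of the kind of statistics involved (not the authors' estimation procedure), the sketch below computes empirical probabilities of spatio-temporal spike blocks from a synthetic raster and compares them, via a Kullback-Leibler divergence, against the simplest model, independent Bernoulli firing; the raster size, rate, and block length are made up.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
spikes = (rng.random((2, 1000)) < 0.2).astype(int)  # 2 units, 1000 time bins

R = 2  # block length: spatio-temporal words span R consecutive bins
words = Counter(
    tuple(spikes[:, t:t + R].flatten())
    for t in range(spikes.shape[1] - R + 1)
)
total = sum(words.values())
P = {w: c / total for w, c in words.items()}  # empirical block probabilities

# reference model: independent Bernoulli firing at the mean rate
rate = spikes.mean()
def q(w):  # model probability of a block under independence
    return np.prod([rate if b else 1.0 - rate for b in w])

# KL divergence between empirical block statistics and the model
kl = sum(p * np.log(p / q(w)) for w, p in P.items())
```

    Comparing `kl` across candidate models is the spirit of the model-comparison use of Gibbs distributions mentioned above, here reduced to its crudest form.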

    Reverse-engineering in spiking neural networks parameters: exact deterministic parameters estimation

    We consider the deterministic evolution of a time-discretized network of spiking neurons with delayed synaptic transmission, modeled as a neural network of the generalized integrate-and-fire (gIF) type. The purpose is to study a class of algorithmic methods for calculating the proper parameters to reproduce exactly a given spike train generated by a hidden (unknown) neural network. This standard problem is known to be NP-hard when delays have to be calculated. We propose a reformulation as a Linear Programming (LP) problem, which allows an efficient resolution. This lets us “reverse engineer” a neural network, i.e., find out, given a set of initial conditions, which parameters (here, synaptic weights) allow the network spike dynamics to be simulated. More precisely, we make explicit the fact that reverse engineering a spike train is a Linear (L) problem if the membrane potentials are observed, and an LP problem if only spike times are observed. Numerical robustness is discussed. We also explain how the use of a generalized IF neuron model, instead of a leaky IF model, allows us to derive this algorithm. Furthermore, we point out that the L or LP adjustment mechanism is local to each unit and has the same structure as a “Hebbian” rule. Going a step further, this paradigm generalizes easily to the design of input-output spike train transformations. This means we have a practical method to “program” a spiking network, i.e., find a set of parameters that exactly reproduces the network output for a given input. Numerical verifications and illustrations are provided.
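    The LP reformulation can be illustrated on a toy instance. The sketch below is a deliberate simplification, no leak, no delays, and full reset at every step, so that each firing decision becomes a single linear constraint on one neuron's incoming weights; it then recovers consistent weights with `scipy.optimize.linprog`. All sizes, thresholds, and the margin construction are assumptions for the demo, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
N, T = 4, 200
theta = 1.0                              # firing threshold (assumed)
W_true = rng.normal(0.0, 1.0, (N, N))    # hidden network to recover

# generate a raster with a memoryless threshold rule (full reset each step)
Z = np.zeros((T, N), dtype=int)
Z[0] = rng.integers(0, 2, N)
for t in range(T - 1):
    Z[t + 1] = (W_true @ Z[t] >= theta).astype(int)

# separation margin: half the smallest distance of any potential to theta
pots = Z[:-1] @ W_true.T
eps = 0.5 * np.abs(pots - theta).min()

# one LP per neuron: find weights consistent with its firing decisions
W_est = np.zeros((N, N))
for i in range(N):
    A, b = [], []
    for t in range(T - 1):
        if Z[t + 1, i] == 1:             # fired: potential must exceed theta
            A.append(-Z[t]); b.append(-(theta + eps))
        else:                            # silent: potential must stay below
            A.append(Z[t]);  b.append(theta - eps)
    res = linprog(c=np.zeros(N), A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(-5, 5)] * N, method="highs")
    W_est[i] = res.x
```

    Because a margin is enforced, any feasible solution reproduces the observed raster exactly from the same initial condition, which mirrors, in this stripped-down setting, the exact-reproduction claim of the abstract. Note also that each LP involves only one neuron's incoming weights, the locality property pointed out above.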

    Spiking Central Pattern Generators through Reverse Engineering of Locomotion Patterns

    In robotics, methods have been proposed for the locomotion of non-wheeled robots based on artificial neural networks; those built with biologically plausible neurons are called spiking central pattern generators (SCPGs). In this chapter, we present a generalization of previously reported deterministic and stochastic reverse-engineering methods for automatically designing SCPG-based locomotion systems for legged robots: given a spiking neuron model and one or more locomotion gaits as inputs, these methods create a spiking neural network capable of endogenously and periodically replicating one or several rhythmic signal sets. The designed SCPGs have been implemented in different robotic controllers for a variety of robotic platforms. Finally, some aspects that could improve and/or complement these SCPG-based locomotion systems are pointed out.
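    To give a feel for what a CPG produces, the sketch below simulates a classical rate-based Matsuoka half-center oscillator, two mutually inhibiting units with adaptation, which is not the spiking SCPG design of the chapter, only a minimal stand-in; the parameters are common textbook values and the time step is an arbitrary choice.

```python
import numpy as np

# Matsuoka half-center oscillator (rate-based CPG sketch, assumed parameters)
tau_r, tau_a = 0.5, 1.0      # rise and adaptation time constants
beta, w, u = 2.5, 2.5, 1.0   # adaptation gain, mutual inhibition, tonic drive

dt, T = 0.01, 2000
x = np.array([0.1, 0.0])     # membrane states; asymmetric start breaks symmetry
v = np.zeros(2)              # adaptation (fatigue) states
y_hist = np.zeros((T, 2))

for t in range(T):
    y = np.maximum(x, 0.0)   # rectified firing rates
    y_hist[t] = y
    # each unit is inhibited by the other (y[::-1]) and by its own fatigue
    dx = (-x - beta * v - w * y[::-1] + u) / tau_r
    dv = (-v + y) / tau_a
    x += dt * dx
    v += dt * dv
# with these parameters the two units typically burst in alternation,
# the basic rhythm a legged-robot gait controller would track
```

    The reverse-engineering methods above go the other way: starting from recorded rhythmic gait signals, they construct a spiking network whose output replicates them.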

    A Methodology for Classifying Search Operators as Intensification or Diversification Heuristics

    Selection hyper-heuristics are generic search tools that dynamically choose, from a given pool, the most promising operator (low-level heuristic) to apply at each iteration of the search process. The performance of these methods depends on the quality of the heuristic pool. Two types of heuristics can be part of the pool: diversification heuristics, which help to escape from local optima, and intensification heuristics, which effectively exploit promising regions in the vicinity of good solutions. An effective search strategy needs a balance between these two types. However, it is not straightforward to categorize an operator as an intensification or a diversification heuristic in complex domains. We therefore propose an automated methodology for this classification, which brings methodological rigor to the configuration of an iterated local search hyper-heuristic featuring diversification and intensification stages. The methodology ranks the heuristics empirically, based on an estimate of their capacity to either diversify or intensify the search. We incorporate the proposed approach into a state-of-the-art hyper-heuristic on two domains, course timetabling and vehicle routing. Our results indicate improved performance, including new best-known solutions for the course timetabling problem.
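    The classification idea can be illustrated on a toy domain. The sketch below profiles two hypothetical operators on OneMax by their average jump distance and fitness change, then applies an arbitrary threshold to label each one; the real methodology uses an empirical ranking over the target domain, so the operators, objective, and threshold here are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def fitness(x):                    # toy objective: OneMax (count of ones)
    return int(x.sum())

def flip_one(x):                   # candidate operator: single random bit flip
    y = x.copy(); y[rng.integers(len(x))] ^= 1; return y

def flip_many(x):                  # candidate operator: flip a third of the bits
    y = x.copy()
    idx = rng.choice(len(x), size=len(x) // 3, replace=False)
    y[idx] ^= 1; return y

def profile(op, trials=500, n=60):
    """Empirical signature of an operator: mean fitness change and
    mean Hamming distance from the incumbent solution."""
    gains, dists = [], []
    for _ in range(trials):
        x = rng.integers(0, 2, n)
        y = op(x)
        gains.append(fitness(y) - fitness(x))
        dists.append(int(np.sum(x != y)))
    return float(np.mean(gains)), float(np.mean(dists))

# large average jump distance -> diversification; small, targeted
# moves -> intensification (threshold chosen arbitrarily here)
for name, op in [("flip_one", flip_one), ("flip_many", flip_many)]:
    g, d = profile(op)
    label = "diversification" if d > 5 else "intensification"
    print(name, round(d, 1), label)
```

    In the actual methodology the ranking is relative, heuristics are compared against each other on the domain of interest, rather than against a fixed cutoff as in this sketch.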

    Improving EEG-Based Motor Imagery Classification for Real-Time Applications Using the QSA Method

    We present an improvement to the quaternion-based signal analysis (QSA) technique for extracting electroencephalography (EEG) signal features, with a view to developing real-time applications, particularly for motor imagery (MI) cognitive processes. The proposed methodology (iQSA, improved QSA) extracts features such as the average, variance, homogeneity, and contrast of EEG signals related to motor imagery more efficiently than the original QSA technique, i.e., it reduces the number of samples needed to classify the signal and improves the classification percentage. Specifically, we can sample the signal over variable time periods (from 0.5 s to 3 s, in half-second intervals) to determine the relationship between the number of samples and their effectiveness in classifying signals. In addition, to strengthen the classification process, a number of decision trees combined with a boosting technique were implemented. The results show an 82.30% accuracy rate for 0.5 s samples and 73.16% for 3 s samples. This is a significant improvement over the original QSA technique, which achieved 33.31% to 40.82% without a sampling window and 33.44% to 41.07% with a sampling window, respectively. We can thus conclude that iQSA is better suited to developing real-time applications.
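    As a rough illustration only (the actual iQSA feature definitions are in the paper), the sketch below treats four EEG channels as the components of a quaternion-valued signal and computes window-level mean, variance, and simple homogeneity/contrast statistics over the 0.5 s to 3 s windows mentioned above; the feature formulas, channel count, and sampling rate are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 256                                  # assumed sampling rate (Hz)
eeg = rng.normal(0, 1, (4, fs * 3))       # 4 channels, 3 s of synthetic EEG

def window_features(q, win_s, fs):
    """Per-window statistics of a quaternion-valued signal q (4 x T):
    mean, variance, plus homogeneity/contrast computed on the quaternion
    norm -- an assumed, simplified stand-in for the iQSA features."""
    step = int(win_s * fs)
    feats = []
    for start in range(0, q.shape[1] - step + 1, step):
        w = q[:, start:start + step]
        norm = np.linalg.norm(w, axis=0)       # |q(t)| per sample
        d = np.abs(np.diff(norm))              # sample-to-sample variation
        feats.append([w.mean(), w.var(),
                      1.0 / (1.0 + d.mean()),  # homogeneity: high when smooth
                      (d ** 2).mean()])        # contrast: high when jagged
    return np.array(feats)

# sample windows from 0.5 s to 3 s in half-second steps, as in the abstract
for win in [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]:
    F = window_features(eeg, win, fs)
    # in the full pipeline, rows of F would feed boosted decision trees
```

    The trade-off the paper measures is visible in the shapes alone: shorter windows yield more feature vectors per recording (more, smaller samples), which is what enables the real-time setting.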

    Computing with spikes, architecture, properties and implementation of emerging paradigms

    Get PDF
    In this thesis we study, at a concrete practical level, how computation with action potentials (spikes) can be performed. We address the problem of programming a dynamical system modeled as a neural network, considering both hardware and software implementations. For this, we use a discrete-time spiking neuron model introduced in Soula et al. (2006), called BMS in the sequel, whose dynamics is rather rich (see section 1.2.4). On one hand, we propose an efficient method to properly estimate the parameters (delayed synaptic weights) of a neural network from the observation of its spiking dynamics. The idea is to avoid the underlying NP-complete problem, which arises when both weights and inter-neural transmission delays are considered in the parameter estimation. Our method instead defines a Linear Programming (LP) system to perform the parameter estimation. Another aspect considered in this part of the work is the inclusion of a reservoir computing mechanism (hidden network), which, as we show, increases the computational power and adds robustness to the system. Furthermore, these ideas are applied to implement input-output transformations, with a method that learns the implicit parameters of the corresponding transfer function. On the other hand, we have worked on the development of numerical implementations to validate our algorithms. We also contributed code for spike train statistics analysis and for simulations of spiking neural networks. Thus, we co-developed a C++ library, called EnaS, which is distributed under the CeCILL-C free license; it is compatible with other simulators and can be used as a plugin. Finally, we consider the emerging field of bio-inspired hardware implementations, where FPGA (Field Programmable Gate Array) and GPU (Graphics Processing Unit) technologies are studied. In this sense, we evaluate hardware implementations of the proposed neuron models (gIF-type neuron models) under periodic and chaotic activity regimes. The FPGA-based implementation was achieved using a precision analysis, and its performance was compared with that of the GPU.