14 research outputs found

    Author index volume 261 (2001)

    Back-engineering of spiking neural networks parameters

    We consider the deterministic evolution of a time-discretized spiking network of neurons with delayed connection weights, modeled as a discretized neural network of the generalized integrate-and-fire (gIF) type. The purpose is to study a class of algorithmic methods allowing one to calculate the proper parameters to reproduce exactly a given spike train generated by a hidden (unknown) neural network. This standard problem is known to be NP-hard when delays are to be calculated. We propose here a reformulation, now expressed as a Linear-Programming (LP) problem, thus allowing an efficient resolution. This allows us to "back-engineer" a neural network, i.e. to find out, given a set of initial conditions, which parameters (i.e., connection weights in this case) allow us to simulate the network spike dynamics. More precisely, we make explicit the fact that the back-engineering of a spike train is a Linear (L) problem if the membrane potentials are observed and an LP problem if only spike times are observed, with a gIF model. Numerical robustness is discussed. We also explain how it is the use of a generalized IF neuron model, instead of a leaky IF model, that allows us to derive this algorithm. Furthermore, we point out how the L or LP adjustment mechanism is local to each unit and has the same structure as a "Hebbian" rule. A step further, this paradigm is easily generalizable to the design of input-output spike train transformations. This means that we have a practical method to "program" a spiking network, i.e. find a set of parameters allowing us to exactly reproduce the network output, given an input. Numerical verifications and illustrations are provided.
    Comment: 30 pages, 17 figures, submitted
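
    To make the L/LP distinction concrete, here is a minimal sketch of the constraint structure, assuming one common discrete-time gIF update (the notation for the leak, threshold and delays is illustrative, not taken verbatim from the paper):

```latex
% Assumed discrete-time gIF dynamics: leak gamma, threshold theta,
% delays d_ij, external current I_i; Z_i[k] in {0,1} is the spike raster.
\[
  V_i[k+1] = \gamma\, V_i[k]\,\bigl(1 - Z_i[k]\bigr)
           + \sum_j W_{ij}\, Z_j[k - d_{ij}] + I_i,
  \qquad
  Z_i[k] = \mathbf{1}\{V_i[k] \ge \theta\}.
\]
% With the raster Z observed and the delays fixed, each V_i[k] is affine
% in the weights W_{ij}. Observed potentials therefore give linear
% equalities in W (an L problem); observed spikes alone give linear
% inequalities, V_i[k] >= theta when Z_i[k] = 1 and V_i[k] < theta when
% Z_i[k] = 0, i.e. an LP feasibility problem in W.
```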

    Using event-based metric for event-based neural network weight adjustment

    The problem of adjusting the parameters of an event-based network model is addressed here at the programmatic level. Considering temporal processing, the goal is to adjust the network units' weights so that the outgoing events correspond to what is desired. The present work proposes, in the deterministic and discrete case, a way to adapt usual alignment metrics in order to derive suitable adjustment rules. At the numerical level, the stability and unbiasedness of the method are verified.
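
    As an illustration of the kind of alignment metric the abstract refers to, the sketch below computes the standard Victor-Purpura spike-train distance by dynamic programming (a common choice; the paper's exact metric and adjustment rule may differ). A weight-adjustment rule can then, for instance, nudge weights in the direction that decreases this distance between the produced and desired event trains.

```python
def victor_purpura(s1, s2, q):
    """Victor-Purpura distance between two sorted spike-time lists.

    Edit costs: 1 to insert or delete a spike, q*|dt| to shift one.
    Standard dynamic program over prefixes of the two trains.
    """
    n, m = len(s1), len(s2)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = float(i)                  # delete all spikes of s1
    for j in range(1, m + 1):
        D[0][j] = float(j)                  # insert all spikes of s2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = min(
                D[i - 1][j] + 1.0,          # delete s1[i-1]
                D[i][j - 1] + 1.0,          # insert s2[j-1]
                D[i - 1][j - 1] + q * abs(s1[i - 1] - s2[j - 1]),  # shift
            )
    return D[n][m]

# Example: identical trains have distance 0.
assert victor_purpura([1.0, 2.0], [1.0, 2.0], q=0.5) == 0.0
```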

    An investigation into adaptive power reduction techniques for neural hardware

    In light of the growing applicability of Artificial Neural Networks (ANNs) in the signal processing field [1] and the present thrust of the semiconductor industry towards low-power SoCs for mobile devices [2], the power consumption of ANN hardware has become a very important implementation issue. Adaptability is a powerful and useful feature of neural networks, yet all current approaches to low-power ANN hardware are 'non-adaptive' with respect to the power consumption of the network (i.e. power reduction is not an objective of the adaptation/learning process). The research work presented in this thesis investigates possible adaptive power reduction techniques, which attempt to exploit the adaptability of neural networks in order to reduce power consumption. Three separate approaches for such adaptive power reduction are proposed: adaptation of size, adaptation of network weights and adaptation of calculation precision. Initial case studies exhibit promising results with significant power reduction.
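
    A minimal sketch of the third approach (adaptation of calculation precision), assuming uniform fixed-point quantization and a hypothetical task-error function `evaluate`; neither name comes from the thesis:

```python
import numpy as np

def quantize(w, bits):
    # Uniform symmetric fixed-point quantization to `bits` bits (illustrative).
    scale = (2 ** (bits - 1) - 1) / max(np.max(np.abs(w)), 1e-12)
    return np.round(w * scale) / scale

def adapt_precision(w, evaluate, tol, start_bits=16, min_bits=2):
    """Lower the weight precision while the task error stays within `tol`
    of the full-precision baseline; returns the chosen bit width and the
    quantized weights. `evaluate(weights) -> error` is a hypothetical
    user-supplied function (e.g. validation error of the network)."""
    baseline = evaluate(w)
    bits = start_bits
    while bits > min_bits and evaluate(quantize(w, bits - 1)) - baseline <= tol:
        bits -= 1
    return bits, quantize(w, bits)
```

    Fewer bits per weight translate directly into narrower datapaths and memories in hardware, which is where the power saving comes from.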

    Reverse-engineering in spiking neural networks parameters: exact deterministic parameters estimation

    We consider the deterministic evolution of a time-discretized network with spiking neurons, where synaptic transmission has delays, modeled as a neural network of the generalized integrate-and-fire (gIF) type. The purpose is to study a class of algorithmic methods allowing one to calculate the proper parameters to reproduce exactly a given spike train generated by a hidden (unknown) neural network. This standard problem is known to be NP-hard when delays are to be calculated. We propose here a reformulation, now expressed as a Linear-Programming (LP) problem, thus allowing us to provide an efficient resolution. This allows us to "reverse engineer" a neural network, i.e. to find out, given a set of initial conditions, which parameters (i.e., synaptic weights in this case) allow us to simulate the network spike dynamics. More precisely, we make explicit the fact that the reverse engineering of a spike train is a Linear (L) problem if the membrane potentials are observed and an LP problem if only spike times are observed. Numerical robustness is discussed. We also explain how it is the use of a generalized IF neuron model, instead of a leaky IF model, that allows us to derive this algorithm. Furthermore, we point out how the L or LP adjustment mechanism is local to each unit and has the same structure as a "Hebbian" rule. A step further, this paradigm is easily generalizable to the design of input-output spike train transformations. This means that we have a practical method to "program" a spiking network, i.e. find a set of parameters allowing us to exactly reproduce the network output, given an input. Numerical verifications and illustrations are provided.
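
    A minimal sketch of the LP feasibility formulation using scipy.optimize.linprog, for a single neuron with zero delays and an assumed simplified gIF update (the update rule, the initial condition V[0] = 0 and the strict-inequality margin eps are illustrative assumptions, not the paper's exact setting):

```python
import numpy as np
from scipy.optimize import linprog

def fit_weights(Z, gamma, theta, I, eps=1e-6):
    """Find incoming weights w for neuron 0 such that the assumed update
        V[k+1] = gamma * V[k] * (1 - z0[k]) + w @ Z[:, k] + I
    reproduces neuron 0's observed spikes (threshold theta, V[0] = 0).
    Z is the observed 0/1 spike raster, shape (neurons, timesteps)."""
    N, T = Z.shape
    z0 = Z[0]                      # spikes of the neuron being fitted
    V_coeff = np.zeros(N)          # V[k] = V_coeff @ w + V_const (affine in w)
    V_const = 0.0
    A_ub, b_ub = [], []
    for k in range(T - 1):
        keep = gamma * (1.0 - z0[k])   # reset after the neuron's own spike
        V_coeff = keep * V_coeff + Z[:, k]
        V_const = keep * V_const + I
        if z0[k + 1] == 1:             # spike  =>  V[k+1] >= theta
            A_ub.append(-V_coeff.copy())
            b_ub.append(V_const - theta)
        else:                          # silent =>  V[k+1] <= theta - eps
            A_ub.append(V_coeff.copy())
            b_ub.append(theta - eps - V_const)
    res = linprog(c=np.zeros(N), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] * N)   # pure feasibility problem
    return res.x if res.success else None
```

    The locality noted in the abstract shows up here as the fact that each neuron's weights can be fitted independently from its own constraint rows.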
