6 research outputs found

    Implementing a multi-model estimation method

    This work is part of a general attempt to understand parametric adaptation in visual perception. The key idea is to analyze how multi-model parametric estimation may be used as a first step towards categorization. More generally, the goal is to formalize how the notion of "objects" or "events" in an application may be reduced to a choice within a hierarchy of parametric models used to estimate the underlying data categorization. These mechanisms are to be linked with what occurs in the cerebral cortex, where object recognition corresponds to a parametric neuronal estimation (see for instance Page 2000 for a discussion and Freedman et al. 2001 for an example regarding the primate visual cortex). We thus hope to contribute an algorithmic element related to the "grandmother" neuron model. We revisit the problem of parameter estimation in computer vision, presented here as a simple optimization problem, considering (i) non-linear implicit measurement equations and parameter constraints, (ii) robust estimation in the presence of outliers, and (iii) multi-model comparisons. Here, (1) a projection algorithm based on generalizations of square-root decompositions allows an efficient and numerically stable local resolution of a set of non-linear equations, and (2) a robust estimation module for a hierarchy of non-linear models has been designed and validated. Finally, the software architecture of the estimation module is discussed with the goal of integrating it into reactive software environments or applications with time constraints.
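The combination of robust estimation and a hierarchy of model comparisons can be illustrated with a small sketch. This is not the paper's implementation: the IRLS weighting, the BIC-like penalty, and all names below are illustrative assumptions. Each model of a polynomial hierarchy is fitted robustly, and the simplest model with the best penalized residual is kept.

```python
import numpy as np

def robust_fit(x, y, degree, n_iter=20, c=4.685):
    """Robust polynomial fit via iteratively reweighted least squares
    (IRLS) with Tukey's biweight, which down-weights outliers."""
    X = np.vander(x, degree + 1)                    # columns: x^degree ... x, 1
    w = np.ones_like(y)
    coef = np.zeros(degree + 1)
    for _ in range(n_iter):
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
        r = y - X @ coef
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale (MAD)
        u = np.clip(r / (c * s), -1.0, 1.0)
        w = (1.0 - u**2) ** 2                       # Tukey biweight
    return coef, r, w

def select_model(x, y, max_degree=3):
    """Compare a hierarchy of polynomial models and keep the one with the
    best BIC-like score on the robustly weighted residuals."""
    n = len(y)
    best = None
    for d in range(max_degree + 1):
        coef, r, w = robust_fit(x, y, d)
        mse = np.sum(w * r**2) / max(np.sum(w), 1e-12)
        score = n * np.log(mse + 1e-12) + (d + 1) * np.log(n)
        if best is None or score < best[0]:
            best = (score, d, coef)
    return best[1], best[2]

# A noisy line with a few gross outliers: the hierarchy should settle on a
# low-degree model whose trailing coefficients recover slope and intercept.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + 0.01 * rng.standard_normal(50)
y[::10] += 5.0                                      # inject 5 outliers
degree, coef = select_model(x, y)
```

The penalty term plays the role of the "choice in a hierarchy of parametric models": a richer model is accepted only when its residual improvement outweighs its extra parameters.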

    Back-engineering of spiking neural networks parameters

    We consider the deterministic evolution of a time-discretized spiking network of neurons with delayed connection weights, modeled as a discretized neural network of the generalized integrate-and-fire (gIF) type. The purpose is to study a class of algorithmic methods allowing one to calculate the proper parameters to reproduce exactly a given spike train generated by a hidden (unknown) neural network. This standard problem is known to be NP-hard when delays are to be calculated. We propose here a reformulation, now expressed as a Linear-Programming (LP) problem, thus allowing an efficient resolution. This allows us to "back-engineer" a neural network, i.e. to find out, given a set of initial conditions, which parameters (connection weights in this case) allow one to simulate the network spike dynamics. More precisely, we make explicit the fact that, with a gIF model, the back-engineering of a spike train is a Linear (L) problem if the membrane potentials are observed and an LP problem if only spike times are observed. Numerical robustness is discussed. We also explain how it is the use of a generalized IF neuron model, instead of a leaky IF model, that allows us to derive this algorithm. Furthermore, we point out that the L or LP adjustment mechanism is local to each unit and has the same structure as a "Hebbian" rule. A step further, this paradigm generalizes easily to the design of input-output spike train transformations. This means that we have a practical method to "program" a spiking network, i.e. find a set of parameters allowing us to exactly reproduce the network output, given an input. Numerical verifications and illustrations are provided.
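The LP reformulation can be sketched as follows. This is a simplified illustration, not the authors' algorithm: it assumes no delays, reset-to-zero after a spike, and a known leak `gamma`, threshold `theta`, and input; all names are illustrative. The key observation is that, once the spike train is observed, the reset pattern is fixed, so each membrane potential is an affine function of the unknown incoming weights; each time step then yields one linear inequality, and maximizing a common margin gives an LP (here solved with `scipy.optimize.linprog`).

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, T = 3, 60
gamma, theta = 0.7, 1.0                   # leak and threshold (assumed known)
W_true = rng.uniform(-1.0, 1.0, (N, N))   # hidden weights to recover
I = rng.uniform(0.0, 0.8, (T, N))         # known external input

def simulate(W):
    """Discrete-time gIF-like dynamics: leak, reset-to-zero after a spike,
    synaptic input from the previous step's spikes."""
    v, zprev = np.zeros(N), np.zeros(N)
    Z = np.zeros((T, N))
    for t in range(T):
        v = gamma * v * (1.0 - zprev) + W @ zprev + I[t]
        Z[t] = v >= theta
        zprev = Z[t]
    return Z

def recover_weights(Z):
    """One LP per neuron: V_i(t) = alpha(t) + beta(t) @ W_i is affine in
    the unknown row W_i because the observed spikes fix the resets."""
    W_hat = np.zeros((N, N))
    for i in range(N):
        alpha, beta = 0.0, np.zeros(N)
        zprev, zprev_i = np.zeros(N), 0.0
        A, b = [], []
        for t in range(T):
            keep = gamma * (1.0 - zprev_i)
            alpha = keep * alpha + I[t, i]
            beta = keep * beta + zprev
            if Z[t, i] == 1:   # spiking step: alpha + beta@W_i >= theta + m
                A.append(np.append(-beta, 1.0)); b.append(alpha - theta)
            else:              # silent step:  alpha + beta@W_i <= theta - m
                A.append(np.append(beta, 1.0)); b.append(theta - alpha)
            zprev, zprev_i = Z[t], Z[t, i]
        c = np.zeros(N + 1); c[-1] = -1.0  # maximize the margin m
        res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                      bounds=[(-2.0, 2.0)] * N + [(0.0, 1.0)])
        W_hat[i] = res.x[:N]
    return W_hat

Z_obs = simulate(W_true)
W_hat = recover_weights(Z_obs)
# With a strictly positive optimal margin, W_hat reproduces Z_obs exactly.
```

Note that each LP involves only one unit's incoming weights, which is the locality property the abstract compares to a "Hebbian" rule; the size of the achievable margin is one way to quantify the numerical robustness discussed.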

    Reverse-engineering in spiking neural networks parameters: exact deterministic parameters estimation

    We consider the deterministic evolution of a time-discretized network of spiking neurons, where synaptic transmission has delays, modeled as a neural network of the generalized integrate-and-fire (gIF) type. The purpose is to study a class of algorithmic methods allowing one to calculate the proper parameters to reproduce exactly a given spike train generated by a hidden (unknown) neural network. This standard problem is known to be NP-hard when delays are to be calculated. We propose here a reformulation, now expressed as a Linear-Programming (LP) problem, thus allowing us to provide an efficient resolution. This allows us to "reverse engineer" a neural network, i.e. to find out, given a set of initial conditions, which parameters (synaptic weights in this case) allow one to simulate the network spike dynamics. More precisely, we make explicit the fact that the reverse engineering of a spike train is a Linear (L) problem if the membrane potentials are observed and an LP problem if only spike times are observed. Numerical robustness is discussed. We also explain how it is the use of a generalized IF neuron model, instead of a leaky IF model, that allows us to derive this algorithm. Furthermore, we point out that the L or LP adjustment mechanism is local to each unit and has the same structure as a "Hebbian" rule. A step further, this paradigm generalizes easily to the design of input-output spike train transformations. This means that we have a practical method to "program" a spiking network, i.e. find a set of parameters allowing us to exactly reproduce the network output, given an input. Numerical verifications and illustrations are provided.
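The Linear (L) case, where membrane potentials are observed, can be sketched separately. This is a simplified illustration under assumptions of my own (no delays, reset-to-zero, known leak `gamma`, threshold `theta`, and input; all names hypothetical): each time step then gives one exact linear equation per neuron in its incoming weights, so the weights follow from ordinary least squares rather than an LP.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 4, 80
gamma, theta = 0.7, 1.0                   # leak and threshold (assumed known)
W_true = rng.uniform(-1.0, 1.0, (N, N))   # hidden weights to recover
I = rng.uniform(0.0, 0.8, (T, N))         # known external input

def simulate_observed(W):
    """Run the discrete-time gIF-like dynamics, recording both the
    membrane potentials V and the spikes Z."""
    v, zprev = np.zeros(N), np.zeros(N)
    V, Z = np.zeros((T, N)), np.zeros((T, N))
    for t in range(T):
        v = gamma * v * (1.0 - zprev) + W @ zprev + I[t]
        V[t] = v
        Z[t] = v >= theta
        zprev = Z[t]
    return V, Z

def weights_from_potentials(V, Z):
    """Observed potentials turn each step into a linear equation in W_i:
    V_i(t) - gamma*(1-Z_i(t-1))*V_i(t-1) - I_i(t) = sum_j W_ij Z_j(t-1)."""
    W_hat = np.zeros((N, N))
    Zprev = np.vstack([np.zeros(N), Z[:-1]])   # Z(t-1) for t = 0..T-1
    Vprev = np.vstack([np.zeros(N), V[:-1]])   # V(t-1) for t = 0..T-1
    for i in range(N):
        rhs = V[:, i] - gamma * (1.0 - Zprev[:, i]) * Vprev[:, i] - I[:, i]
        W_hat[i], *_ = np.linalg.lstsq(Zprev, rhs, rcond=None)
    return W_hat

V_obs, Z_obs = simulate_observed(W_true)
W_hat = weights_from_potentials(V_obs, Z_obs)
```

As in the LP case, the adjustment is local: each neuron's weight row is estimated from its own potentials and the presynaptic spikes, independently of the other rows.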
