
    Supervised Associative Learning in Spiking Neural Network

    In this paper, we propose a simple supervised associative learning approach for spiking neural networks. In an excitatory-inhibitory network paradigm with Izhikevich spiking neurons, synaptic plasticity is implemented on excitatory-to-excitatory synapses and depends on both spike emission rates and spike timings. As a result of learning, the network is able to associate not only familiar stimuli but also novel stimuli, as observed through synchronised activity within the same subpopulation and between two associated subpopulations.
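    The Izhikevich neuron model mentioned above can be sketched in a few lines of Python. This is a generic illustration of the model, not the paper's network: the parameters a, b, c, d are the standard "regular spiking" preset from Izhikevich (2003), and the constant input current I is an arbitrary choice.

```python
# Minimal Izhikevich neuron, Euler-integrated with 1 ms steps.
# a, b, c, d: "regular spiking" preset; I: illustrative constant input.

def simulate_izhikevich(I=10.0, steps=1000, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Return the list of time steps (ms) at which the neuron fires."""
    v = -65.0            # membrane potential (mV)
    u = b * v            # recovery variable
    spikes = []
    for t in range(steps):
        if v >= 30.0:    # spike threshold reached: record and reset
            spikes.append(t)
            v, u = c, u + d
        # Izhikevich's two coupled equations, one Euler step of 1 ms
        v += 0.04 * v * v + 5.0 * v + 140.0 - u + I
        u += a * (b * v - u)
    return spikes

spikes = simulate_izhikevich()
print(len(spikes), spikes[:3])
```

    With a sustained suprathreshold input the neuron fires tonically, which is the regime the regular-spiking preset is meant to produce.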

    Cell Microscopic Segmentation with Spiking Neuron Networks

    Spiking Neuron Networks (SNNs) exceed the computational power of neural networks made of threshold or sigmoidal units. Indeed, SNNs add a new dimension, the temporal axis, to the representation capacity and the processing abilities of neural networks. In this paper, we present how SNNs can be applied effectively to cell microscopic image segmentation. The results obtained confirm the validity of the approach. The strategy is performed on cytological color images, and quantitative measures are used to evaluate the resulting segmentations.

    Image Processing with Spiking Neuron Networks

    Artificial neural networks are by now well developed. The first two generations of neural networks have had many successful applications. Spiking Neuron Networks (SNNs) are often referred to as the third generation of neural networks, with the potential to solve problems related to biological stimuli. They derive their strength and interest from an accurate modeling of synaptic interactions between neurons, taking into account the time of spike emission. SNNs exceed the computational power of neural networks made of threshold or sigmoidal units. Based on dynamic event-driven processing, they open up new horizons for developing models with an exponential capacity for memorizing and a strong ability for fast adaptation. Moreover, SNNs add a new dimension, the temporal axis, to the representation capacity and the processing abilities of neural networks. In this chapter, we present how SNNs can be applied effectively to image clustering, segmentation and edge detection. The results obtained confirm the validity of the approach.

    Multilayer Neural Networks and Polyhedral Dichotomies

    We study the number of hidden layers required by a multilayer neural network with threshold units to compute a dichotomy f from R^d to {0, 1}, defined by a finite set of hyperplanes. We show that this question is far more intricate than computing Boolean functions, although that well-known problem underlies our research. We present recent advances on the characterization of dichotomies, from R^2 to {0, 1}, which require two hidden layers to be realized exactly.
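    For contrast with the two-hidden-layer dichotomies studied above, a convex polyhedral dichotomy (an intersection of half-planes) is realizable with a single hidden layer of threshold units: one unit per hyperplane, plus an output unit computing their AND. The sketch below is a generic illustration, not the paper's construction; the three half-planes (a triangle in R^2) are an invented example.

```python
# One hidden layer of threshold units realizes a convex polyhedral
# dichotomy from R^2 to {0, 1}: each hidden unit tests one half-plane
# w.p + b >= 0, and the output unit fires iff all hidden units fire.

def threshold(x):
    return 1 if x >= 0 else 0

HYPERPLANES = [
    (( 1.0,  0.0), 0.0),   # x >= 0
    (( 0.0,  1.0), 0.0),   # y >= 0
    ((-1.0, -1.0), 1.0),   # x + y <= 1
]

def dichotomy(p):
    """1 iff the point p lies in the intersection of all half-planes."""
    hidden = [threshold(w[0]*p[0] + w[1]*p[1] + b) for w, b in HYPERPLANES]
    # AND unit: sum of hidden activations minus the number of units
    return threshold(sum(hidden) - len(HYPERPLANES))

print(dichotomy((0.2, 0.2)), dichotomy((2.0, 2.0)))  # → 1 0
```

    The intricacy the paper addresses begins when the positive region is non-convex, e.g. a union of such polyhedra whose XOR-like structure cannot be captured by a single AND/OR layer.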

    Computing with Spiking Neuron Networks

    Abstract. Spiking Neuron Networks (SNNs) are often referred to as the 3rd generation of neural networks. Highly inspired by natural computing in the brain and recent advances in neurosciences, they derive their strength and interest from an accurate modeling of synaptic interactions between neurons, taking into account the time of spike firing. SNNs exceed the computational power of neural networks made of threshold or sigmoidal units. Based on dynamic event-driven processing, they open up new horizons for developing models with an exponential capacity for memorizing and a strong ability for fast adaptation. Today, the main challenge is to discover efficient learning rules that might take advantage of the specific features of SNNs while keeping the nice properties (general-purpose, easy-to-use, available simulators, etc.) of traditional connectionist models. This chapter relates the history of the "spiking neuron" in Section 1 and summarizes the most currently-in-use models of neurons and synaptic plasticity in Section 2. The computational power of SNNs is addressed in Section 3 and the problem of learning in networks of spiking neurons is tackled in Section 4, with insights into the tracks currently explored for solving it. Finally, Section 5 discusses application domains, implementation issues and proposes several simulation frameworks.
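    Among the synaptic plasticity models such a survey typically covers, pair-based spike-timing-dependent plasticity (STDP) is the most common and is easy to sketch. The amplitudes and time constants below are illustrative defaults, not values taken from the chapter.

```python
# Pair-based STDP: the weight change depends on the interval between
# one presynaptic and one postsynaptic spike. Constants are illustrative.
import math

A_PLUS, A_MINUS = 0.05, 0.06      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # decay time constants (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post: causal pairing, potentiate
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    elif dt < 0:  # post before pre: anti-causal pairing, depress
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

print(stdp_dw(10.0, 15.0) > 0, stdp_dw(15.0, 10.0) < 0)  # → True True
```

    The exponential windows make the rule local in time, which is one reason STDP fits the event-driven processing style the abstract emphasizes.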

    Algorithms for Structural and Dynamical Polychronous Groups Detection

    Abstract. Polychronization has been proposed as a possible way to investigate the notion of cell assemblies and to understand their role as memory supports for information coding. In a spiking neuron network, polychronous groups (PGs) are small subsets of neurons that can be activated in a chain reaction according to a specific time-locked pattern. PGs can be detected in a neural network with known connection delays and visualized on a spike raster plot. In this paper, we specify the definition of PGs, making a distinction between structural and dynamical polychronous groups. We propose two algorithms to scan for structural PGs supported by a given network topology, one based on the distribution of connection delays and the other taking into account the synaptic weight values. Finally, we propose a third algorithm to scan for the PGs that are actually activated in the network dynamics during a given time window.
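    The delay-based chain reaction can be sketched as follows. This is a strongly simplified scan in the spirit of the first (delay-only) algorithm, not the paper's procedure: a neuron is assumed to fire when at least THRESHOLD presynaptic spikes arrive within JITTER ms of each other, and the toy topology, delays and constants are all invented for illustration.

```python
# Grow a structural polychronous group from a seed set of
# (neuron, firing time) pairs, propagating through known delays.
from collections import defaultdict

THRESHOLD, JITTER = 2, 1.0

# DELAYS[pre] = list of (post, delay_ms); a toy feed-forward topology
DELAYS = {0: [(2, 4.0), (3, 5.0)],
          1: [(2, 3.0), (3, 5.0)],
          2: [(3, 2.0)]}

def scan_pg(seed):
    """Return the time-locked pattern [(neuron, t), ...] grown from seed."""
    fired = dict(seed)               # neuron -> firing time
    arrivals = defaultdict(list)     # post neuron -> spike arrival times
    frontier = list(seed)
    while frontier:
        pre, t = frontier.pop()
        for post, d in DELAYS.get(pre, []):
            if post in fired:
                continue
            arrivals[post].append(t + d)
            # coincident inputs: arrivals within JITTER of this one
            near = [a for a in arrivals[post] if abs(a - (t + d)) <= JITTER]
            if len(near) >= THRESHOLD:
                fired[post] = max(near)
                frontier.append((post, fired[post]))
    return sorted(fired.items())

# Neurons 0 and 1, fired 1 ms apart, drive 2 and then 3 in a locked chain.
print(scan_pg([(0, 0.0), (1, 1.0)]))
```

    Because activation depends on the relative timing of the seed spikes, not just on which neurons are seeded, the same topology can support several distinct PGs, which is what makes exhaustive scanning non-trivial.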

    Comparative Study of Three Connectionist Models on a Classification Problem

    In this paper, we compare three neural learning models on the sonar target classification problem of Gorman and Sejnowski, in two ways. First, we compare performance on the training and testing sets, in the usual sense. Second, we examine the database in order to compare the examples that are hard to learn with the patterns that are ill-classified in generalization. The neural networks involved in these comparisons are a classical multilayered network, a wavelet network, and an evolutive architecture named "Monoplane". In spite of several differences in the characteristics of their learning algorithms, these neural networks perform similarly on the classification problem, and the patterns that are hard to classify are nearly the same, whatever the model. Several distance measures are then computed on the database, but we show that none of them explains the misclassification phenomenon. We conclude that hard-to-classify patterns depend on the datab…