9 research outputs found

    Rapid, parallel path planning by propagating wavefronts of spiking neural activity

    Get PDF
    Efficient path planning and navigation are critical for animals, robotics, logistics and transportation. We study a model in which spatial navigation problems can be solved rapidly in the brain by parallel mental exploration of alternative routes using propagating waves of neural activity. A wave of spiking activity propagates through a hippocampus-like network, altering the synaptic connectivity. The resulting vector field of synaptic change then guides a simulated animal to the selected target locations. We demonstrate that the navigation problem can be solved using realistic, local synaptic plasticity rules during a single passage of a wavefront. Our model can find optimal solutions for competing possible targets, and it can learn and navigate in multiple environments. The model provides a hypothesis on the possible computational mechanisms for optimal path planning in the brain; at the same time it is useful for neuromorphic implementations, where the parallelism of the information processing proposed here can be fully harnessed in hardware.
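    The paper's spiking-network model is not reproduced in this listing, but the core idea can be illustrated with a much simpler, non-spiking sketch (the function names, grid layout and unit step cost below are illustrative assumptions): a wavefront expands outward from the target, the arrival time at each cell stands in for the plasticity trace left by the passing wave, and descending that arrival time reproduces the vector field that guides the agent to the target.

    from collections import deque

    import numpy as np

    def wavefront_times(grid, target):
        """Propagate a wavefront (breadth-first) from the target through free cells.
        The arrival time stored for each cell plays the role of the trace that the
        passing wave of activity leaves behind in the plasticity-based model."""
        rows, cols = grid.shape
        times = np.full((rows, cols), np.inf)
        times[target] = 0
        queue = deque([target])
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == 0 and times[nr, nc] == np.inf:
                    times[nr, nc] = times[r, c] + 1
                    queue.append((nr, nc))
        return times

    def follow_vector_field(times, start):
        """At every cell, step towards the neighbour the wave reached earliest,
        i.e. descend the arrival-time field until the target (time 0) is reached."""
        path, (r, c) = [start], start
        while times[r, c] > 0:
            neighbours = [(r + dr, c + dc)
                          for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                          if 0 <= r + dr < times.shape[0] and 0 <= c + dc < times.shape[1]]
            r, c = min(neighbours, key=lambda p: times[p])
            path.append((r, c))
        return path

    grid = np.zeros((5, 5), dtype=int)    # 0 = free cell, 1 = obstacle
    grid[2, 1:4] = 1                      # a wall the wavefront must flow around
    arrival = wavefront_times(grid, target=(4, 4))
    print(follow_vector_field(arrival, start=(0, 0)))

    In this abstraction, the competing-target case the abstract mentions would correspond to seeding the wavefront at several target cells at once, so that each cell is claimed by whichever wave arrives first.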

    Comparison of supervised learning methods for spike time coding in spiking neural networks

    No full text
    In this review we focus our attention on supervised learning methods for spike time coding in Spiking Neural Networks (SNNs). This study is motivated by recent experimental results regarding information coding in biological neural systems, which suggest that precise timing of individual spikes may be essential for efficient computation in the brain. We are concerned with the fundamental question: What paradigms of neural temporal coding can be implemented with the recent learning methods? In order to answer this question, we discuss various approaches to the learning task considered. We briefly describe the particular learning algorithms and report the results of experiments. Finally, we discuss the properties, assumptions and limitations of each method. We complete this review with a comprehensive list of pointers to the literature.
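    As a concrete illustration of the "spike time coding" the review is concerned with, the fragment below implements plain first-spike latency coding (stronger inputs fire earlier, silent inputs never fire). This is a generic scheme used only for illustration, not a method compared in the review, and the names and time constant are assumptions.

    import numpy as np

    def latency_encode(x, t_max=100.0):
        """Encode analog values in [0, 1] as first-spike times (ms):
        the stronger the input, the earlier the spike; zero inputs stay silent."""
        x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
        return np.where(x > 0.0, (1.0 - x) * t_max, np.inf)

    print(latency_encode([0.9, 0.5, 0.1, 0.0]))   # -> [10. 50. 90. inf]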

    Comparison Of Supervised Learning Methods For Spike Time Coding in . . .

    No full text
    In this review we focus our attention on the supervised learning methods for spike time coding in Spiking Neural Networks (SNN). This study is motivated by the recent experimental results on information coding in biological neural systems, which suggest that precise timing of individual spikes may be essential for efficient computation in the brain. We pose

    Generalization Properties of Spiking Neurons Trained with ReSuMe Method

    No full text
    In this paper we demonstrate the generalization property of spiking neurons trained with the ReSuMe method. We show in a set of experiments that the learning neuron can approximate the input-output transformations defined by another (reference) neuron with high precision, and that the learning process converges very quickly. We discuss the relationship between the neuron I/O properties and the weight distribution of its input connections. Finally, we discuss the conditions under which the neuron can approximate some given I/O transformations.
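    A minimal, discrete-time sketch of a ReSuMe-style update in the teacher-student setting described above is given below; the exponential presynaptic trace, the constants and the variable names are simplifying assumptions rather than the exact formulation used in the paper. Weights are strengthened whenever the reference (teacher) neuron fires and weakened whenever the learning neuron fires, in proportion to recent presynaptic activity, so the learner's spike times are pulled towards the teacher's.

    import numpy as np

    def resume_like_update(w, pre_spikes, desired, actual,
                           lr=0.01, a=0.1, tau=5.0, dt=1.0):
        """One pass of a simplified ReSuMe-style weight update.
        pre_spikes: (T, N) binary presynaptic spike trains
        desired, actual: length-T binary teacher / learner output spike trains
        Potentiation at teacher spikes and depression at learner spikes are both
        scaled by an exponentially decaying trace of presynaptic activity."""
        w = np.asarray(w, dtype=float).copy()
        trace = np.zeros_like(w)
        for t in range(pre_spikes.shape[0]):
            trace = trace * np.exp(-dt / tau) + pre_spikes[t]
            err = desired[t] - actual[t]      # +1: missing spike, -1: spurious spike
            if err != 0:
                w += lr * err * (a + trace)
        return w

    # toy run: 3 presynaptic inputs, 5 time steps
    rng = np.random.default_rng(0)
    pre = rng.integers(0, 2, size=(5, 3))
    print(resume_like_update(np.zeros(3), pre, desired=[0, 1, 0, 1, 0], actual=[0, 0, 0, 1, 1]))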

    ReSuMe - New Supervised Learning Method

    No full text
    In this report I introduce ReSuMe - a new supervised learning method for Spiking Neural Networks. The research on ReSuMe has been primarily motivated by the need for an efficient learning method for movement control for the physically disabled. However, a thorough analysis of the ReSuMe method reveals its suitability not only for the task of movement control, but also for other real-life applications, including modeling, identification and control of diverse non-stationary, nonlinear objects.

    with the ReSuMe method

    No full text
    Poznań, 2006. Supervised learning in Spiking Neural Networks (SNN) is considered in this dissertation. Spiking networks represent a special class of artificial neural networks in which neuron models communicate by sending spikes (action potentials). SNN are investigated here in the context of their potential applications to movement control in neuroprostheses, i.e. systems that aim to substitute or restore locomotor functions in human subjects with movement disorders. From the control point of view, the human limb is a multidimensional, nonlinear, nonstationary object characterized by rich embedded dynamics. This is the reason for the low effectiveness of traditional control approaches in neuroprostheses. One of the new concepts to solve this problem is to mimic functions of the Central Nervous System with SNN-based controllers. However, analysis of the recent supervised learning methods for SNN revealed that the existing algorithms are not suitable for the task at hand. Investiga

    Spiking Neural Computing in Memristive Neuromorphic Platforms

    Get PDF
    Neuromorphic computation using Spiking Neural Networks (SNN) is proposed as an alternative solution for the future of computation, to conquer the memory bottleneck issue in recent computer architectures. Different spike codings have been discussed to improve data transfer and data processing in neuro-inspired computation paradigms. Choosing the appropriate neural network topology can result in better performance of computation, recognition and classification. The neuron model is another important factor in designing and implementing SNN systems. The speed of simulation and implementation, the ability to integrate with the other elements of the network, and suitability for scalable networks are the factors for selecting a neuron model. The learning algorithms are a significant consideration in training the neural network for weight modification. Improving learning in neuromorphic architectures is feasible by improving the quality of the artificial synapse as well as the learning algorithm, such as STDP. In this chapter we propose a new synapse box that can remember and forget. Furthermore, as the most frequently used unsupervised method for network training in SNN is STDP, we analyze and review the various methods of STDP. The sequential order of pre- or postsynaptic spikes occurring across a synapse in an interval of time leads to defining different STDP methods. Based on the importance of stability as well as Hebbian or anti-Hebbian competition, one of these methods is used for weight modification. We survey the most significant projects that have produced neuromorphic platforms. The advantages and disadvantages of each neuromorphic platform are introduced in this chapter.
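    As a point of reference for the STDP variants surveyed in the chapter, the fragment below implements the classic pair-based STDP window, in which the sign and magnitude of the weight change depend only on the time difference between a single pre- and postsynaptic spike; the amplitudes and time constants here are illustrative choices, not values from the chapter.

    import numpy as np

    def pair_stdp(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
        """Pair-based STDP window.
        delta_t = t_post - t_pre (ms): positive values (pre before post) potentiate,
        negative values (post before pre) depress, both decaying exponentially in |delta_t|."""
        delta_t = np.asarray(delta_t, dtype=float)
        return np.where(delta_t >= 0,
                        a_plus * np.exp(-delta_t / tau_plus),
                        -a_minus * np.exp(delta_t / tau_minus))

    print(pair_stdp([-40.0, -5.0, 5.0, 40.0]))

    The anti-Hebbian variants the abstract mentions essentially invert the sign of this window, so that post-before-pre pairings strengthen the synapse instead of weakening it.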