148 research outputs found

    Are probabilistic spiking neural networks suitable for reservoir computing?

    This study employs networks of stochastic spiking neurons as reservoirs for liquid state machines (LSMs). We experimentally investigate the separation property of these reservoirs and show their ability to generalize across classes of input signals. Like traditional LSMs, probabilistic LSMs (pLSMs) have the separation property, enabling them to distinguish between different classes of input stimuli. Furthermore, our results indicate potential advantages of non-deterministic LSMs, which can improve the separation ability of the liquid. Three non-deterministic neuron models are considered, and several parameter configurations are explored for each. We demonstrate some of the characteristics of pLSMs and compare them to their deterministic counterparts. pLSMs offer more flexibility thanks to their probabilistic parameters, which yield better performance for some parameter values.
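    A minimal sketch of the idea, not the paper's implementation: a reservoir of stochastically firing leaky neurons driven by two input classes, with separation measured as the distance between the classes' mean liquid states. All parameters and the sigmoid firing probability are illustrative assumptions.

```python
# Sketch of a probabilistic spiking reservoir and a separation measure.
# Neuron model, sizes, and constants are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 200                        # reservoir size, time steps
W = rng.normal(0, 0.1, (N, N))         # fixed random recurrent weights
W_in = rng.normal(0, 1.0, N)           # fixed random input weights

def liquid_state(u):
    """Run a stochastic LIF-like reservoir: each neuron fires with
    probability sigmoid(potential - threshold)."""
    v = np.zeros(N)                    # membrane potentials
    s = np.zeros(N)                    # spikes at the previous step
    states = []
    for t in range(T):
        v = 0.9 * v + W @ s + W_in * u[t]            # leaky integration
        p = 1.0 / (1.0 + np.exp(-(v - 1.0)))         # firing probability
        s = (rng.random(N) < p).astype(float)        # stochastic spikes
        v = np.where(s > 0, 0.0, v)                  # reset fired neurons
        states.append(s)
    return np.mean(states, axis=0)     # time-averaged firing rates

# Two input classes: slow vs. fast sinusoids with random phases.
t = np.arange(T)
class_a = [np.sin(0.05 * t + rng.uniform(0, 1)) for _ in range(10)]
class_b = [np.sin(0.20 * t + rng.uniform(0, 1)) for _ in range(10)]

mu_a = np.mean([liquid_state(u) for u in class_a], axis=0)
mu_b = np.mean([liquid_state(u) for u in class_b], axis=0)
print("separation (class-centre distance):", np.linalg.norm(mu_a - mu_b))
```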

    Improved Spike-Timed Mappings using a Tri-Phasic Spike Timing-Dependent Plasticity Rule

    Reservoir computing and the liquid state machine model have received much attention in the literature in recent years. In this paper we investigate using a reservoir composed of a network of spiking neurons with synaptic delays, whose synapses are allowed to evolve under a tri-phasic spike-timing-dependent plasticity (STDP) rule. The networks are trained to produce specific spike trains in response to spatio-temporal input patterns. The effects of the tri-phasic STDP rule on the network properties are compared to those found using the more common exponential form of the rule. It is found that each rule causes the synaptic weights to evolve in significantly different fashions, giving rise to different network dynamics. It is also found that the networks evolved with the tri-phasic rule are more capable of mapping input spatio-temporal patterns to the output spike trains.
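    A hedged sketch of the two learning windows being contrasted: the standard asymmetric exponential STDP rule, and a tri-phasic window written here as a difference of Gaussians (one common parameterisation; the paper's exact form and constants may differ).

```python
# Compare weight changes under exponential vs. tri-phasic STDP windows
# as a function of the pre-post spike time difference dt (ms).
import numpy as np

def exponential_stdp(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Classic asymmetric rule: potentiate when pre precedes post (dt > 0),
    depress otherwise."""
    return np.where(dt > 0, a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

def triphasic_stdp(dt, a=0.25, tau=20.0):
    """Mexican-hat-shaped window: potentiation near coincidence,
    depression on both flanks (hence 'tri-phasic')."""
    return a * (np.exp(-dt**2 / tau**2)
                - 0.5 * np.exp(-dt**2 / (4 * tau**2)))

for dt in np.linspace(-100, 100, 9):
    print(f"dt={dt:+6.1f} ms  exp={float(exponential_stdp(dt)):+.4f}  "
          f"tri={float(triphasic_stdp(dt)):+.4f}")
```

    Because the tri-phasic window depresses synapses on both sides of coincidence, it drives weight distributions (and hence network dynamics) in a qualitatively different direction from the exponential rule, which is the comparison the abstract describes.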

    Artificial Neurogenesis: An Introduction and Selective Review

    In this introduction and review, as in the book which follows, we explore the hypothesis that adaptive growth is a means of producing brain-like machines. The emulation of neural development can incorporate desirable characteristics of natural neural systems into engineered designs. The introduction begins with a review of neural development and neural models. Next, artificial development, the use of a developmentally inspired stage in engineering design, is introduced. Several strategies for performing this "meta-design" for artificial neural systems are reviewed. This work is divided into three main categories: bio-inspired representations; developmental systems; and epigenetic simulations. Several specific network biases and their benefits to neural network design are identified in these contexts. In particular, several recent studies show a strong synergy, sometimes interchangeability, between developmental and epigenetic processes, a topic that has remained largely under-explored in the literature.
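    A toy illustration of the "developmental stage" idea (my assumption, not an example from the review): a compact genome of growth rules is unrolled into a full network topology before any training takes place, so design effort goes into the growth rules rather than the final wiring.

```python
# Grow a network topology by repeated cell division: each step, every
# neuron spawns a child wired to its parent and, with a genome-controlled
# probability, to the parent's neighbours. Purely illustrative.
import numpy as np

rng = np.random.default_rng(1)

def develop(genome, steps=5):
    adj = np.zeros((1, 1))                     # start from one neuron
    for _ in range(steps):
        n = adj.shape[0]
        grown = np.zeros((2 * n, 2 * n))
        grown[:n, :n] = adj                    # keep existing wiring
        for parent in range(n):
            child = n + parent
            grown[parent, child] = grown[child, parent] = 1   # parent link
            for nb in np.flatnonzero(adj[parent]):
                if rng.random() < genome["inherit_p"]:        # inherited link
                    grown[child, nb] = grown[nb, child] = 1
        adj = grown
    return adj

net = develop({"inherit_p": 0.5})
print("grown network:", net.shape[0], "neurons,",
      int(net.sum() // 2), "undirected connections")
```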

    Dense Hebbian neural networks: a replica symmetric picture of supervised learning

    We consider dense associative neural networks trained by a teacher (i.e., with supervision) and investigate their computational capabilities analytically, via the statistical mechanics of spin glasses, and numerically, via Monte Carlo simulations. In particular, we obtain a phase diagram summarizing their performance as a function of control parameters such as the quality and quantity of the training dataset, network storage, and noise, valid in the limit of large network size and structureless datasets: these networks may work in an ultra-storage regime (where they can handle a huge number of patterns compared with shallow neural networks) or in an ultra-detection regime (where they can perform pattern recognition at prohibitive signal-to-noise ratios compared with shallow neural networks). Guided by the random theory as a reference framework, we also numerically test the learning, storing, and retrieval capabilities of these networks on structured datasets such as MNIST and Fashion-MNIST. As technical remarks: on the analytic side, we implement large-deviations and stability analyses within Guerra's interpolation to tackle the non-Gaussian distributions involved in the post-synaptic potentials; on the computational side, we insert the Plefka approximation into the Monte Carlo scheme to speed up the evaluation of the synaptic tensors. Overall, this yields a novel and broad approach to investigating supervised learning in neural networks beyond the shallow limit.
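    A minimal sketch of the setting, under my own illustrative assumptions rather than the paper's formalism: a dense (order-p) Hebbian network stores noisy labelled examples of archetype patterns, then runs higher-order retrieval dynamics s_i <- sign( sum_mu xi_i^mu (xi^mu . s / N)^(p-1) ) on a corrupted cue.

```python
# Dense Hebbian associative memory trained "with a teacher": store noisy
# examples of archetypes, then retrieve an archetype from a probe.
import numpy as np

rng = np.random.default_rng(2)
N, K, M, p = 200, 5, 40, 4          # neurons, archetypes, examples each, order

archetypes = rng.choice([-1, 1], (K, N))

def noisy(xi, r=0.85):
    """Supervised example: archetype with each bit kept with prob. r."""
    return xi * rng.choice([1, -1], N, p=[r, 1 - r])

examples = np.array([[noisy(xi) for _ in range(M)] for xi in archetypes])
stored = examples.reshape(-1, N)    # Hebbian store of all examples

def update(s, patterns):
    """One synchronous step of order-p retrieval dynamics."""
    overlaps = patterns @ s / N               # one overlap per stored pattern
    field = (overlaps ** (p - 1)) @ patterns  # higher-order post-synaptic field
    return np.sign(field + 1e-12)

s = noisy(archetypes[0], r=0.7)               # corrupted cue of archetype 0
for _ in range(10):
    s = update(s, stored)
print("overlap with archetype 0:", float(archetypes[0] @ s / N))
```

    With p > 2 the overlaps are amplified non-linearly, which is the mechanism behind the enhanced storage and noise tolerance of dense networks relative to shallow (pairwise) Hopfield-style models.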

    A Survey on Reservoir Computing and its Interdisciplinary Applications Beyond Traditional Machine Learning

    Reservoir computing (RC), first applied to temporal signal processing, is a recurrent neural network in which neurons are randomly connected. Once initialized, the connection strengths remain unchanged. This simple structure turns RC into a non-linear dynamical system that maps low-dimensional inputs into a high-dimensional space. The model's rich dynamics, linear separability, and memory capacity then enable a simple linear readout to generate adequate responses for various applications. RC spans areas far beyond machine learning, since its complex dynamics can be realized in various physical hardware implementations and biological devices, yielding greater flexibility and shorter computation times. Moreover, the neuronal responses triggered by the model's dynamics shed light on brain mechanisms that exploit similar dynamical processes. While the literature on RC is vast and fragmented, here we conduct a unified review of RC's recent developments across machine learning, physics, biology, and neuroscience. We first review the early RC models and then survey the state-of-the-art models and their applications. We further introduce studies that model the brain's mechanisms with RC. Finally, we offer new perspectives on RC development, including reservoir design, unification of coding frameworks, physical RC implementations, and the interaction between RC, cognitive neuroscience, and evolution.
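    A compact echo-state-network sketch of the recipe the survey describes: a fixed random recurrent reservoir expands the input into a high-dimensional state, and only a linear readout is trained (here by ridge regression on a toy one-step prediction task). Sizes, the spectral-radius value, and the ridge constant are illustrative assumptions.

```python
# Echo state network: fixed random reservoir + trained linear readout.
import numpy as np

rng = np.random.default_rng(3)
N, T = 300, 1000

W = rng.normal(0, 1, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))    # scale spectral radius below 1
W_in = rng.uniform(-0.5, 0.5, N)

u = np.sin(0.2 * np.arange(T + 1))           # toy task: predict u[t+1]
x = np.zeros(N)
X = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])         # reservoir update (never trained)
    X[t] = x

ridge = 1e-6                                  # only the readout is learned
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ u[1:])
pred = X @ W_out
print("train NRMSE:", np.sqrt(np.mean((pred - u[1:]) ** 2)) / np.std(u))
```

    Because training touches only W_out, the same recipe carries over to physical reservoirs: any fixed non-linear dynamical system with a readable state can stand in for the tanh network above.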

    Neuromorphic Engineering Editors' Pick 2021

    This collection showcases well-received spontaneous articles from the past couple of years, specially handpicked by our Chief Editors, Profs. André van Schaik and Bernabé Linares-Barranco. The work presented here highlights the broad diversity of research performed across the section and aims to put a spotlight on its main areas of interest. All research presented here displays strong advances in theory, experiment, and methodology, with applications to compelling problems. This collection aims to further support Frontiers' strong community by recognizing highly deserving authors.