2 research outputs found

    Computing with noise in spiking neural networks

    Trial-to-trial variability is a ubiquitous characteristic of neural firing patterns and is often regarded as a side effect of intrinsic noise. Increasing evidence indicates that this variability is a signature of network computation. The computational role of noise is, however, not yet clear, and existing frameworks rely on abstract models of stochastic computation. In this work, we use networks of spiking neurons to perform stochastic inference by sampling. We provide a novel analytical description of the neural response function with an unprecedented range of validity. This description enables an implementation of spiking networks in simulations that sample from Boltzmann distributions. We show the robustness of these networks to parameter variations and highlight the substantial advantages of short-term plasticity in our framework. We demonstrate accelerated inference on neuromorphic hardware with a speed-up of 10^4 compared to biological networks, regardless of network size. We further explore the role of noise as a computational component in our sampling networks and identify a functional equivalence between synaptic connections and mutually shared noise. Based on this, we implement interconnected sampling ensembles that exploit their own activity as a noise resource to maintain a stochastic firing regime.
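
    To make the sampling objective concrete, the following minimal sketch draws samples from a Boltzmann distribution over binary units by Gibbs sampling. This is only the reference distribution that the spiking networks described above are configured to approximate, not the thesis's neuron-level implementation; the parameters W and b and the function gibbs_sample are illustrative assumptions.

    import numpy as np

    # Hypothetical parameters of a Boltzmann distribution over binary units z,
    # p(z) ~ exp(0.5 * z^T W z + b^T z), with symmetric W and zero diagonal.
    rng = np.random.default_rng(0)
    n = 5
    W = rng.normal(0.0, 0.5, (n, n))
    W = 0.5 * (W + W.T)
    np.fill_diagonal(W, 0.0)
    b = rng.normal(0.0, 0.5, n)

    def gibbs_sample(W, b, n_steps=5000):
        """Draw samples from the Boltzmann distribution by sequential Gibbs updates."""
        z = rng.integers(0, 2, W.shape[0])
        samples = []
        for _ in range(n_steps):
            for k in range(len(z)):
                # Local field of unit k; its logistic gives the conditional
                # probability of being active given all other units.
                u_k = W[k] @ z + b[k]
                z[k] = rng.random() < 1.0 / (1.0 + np.exp(-u_k))
            samples.append(z.copy())
        return np.array(samples)

    samples = gibbs_sample(W, b)
    print("mean activity per unit:", samples.mean(axis=0))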

    Harnessing function from form: towards bio-inspired artificial intelligence in neuronal substrates

    Despite the recent success of deep learning, the mammalian brain is still unrivaled when it comes to interpreting complex, high-dimensional data streams such as visual, auditory and somatosensory stimuli. However, the computational principles that allow the brain to deal with unreliable, high-dimensional and often incomplete data while consuming only a few watts of power are still largely unknown. In this work, we investigate how specific functionalities emerge from simple structures observed in the mammalian cortex, and how these might be utilized in non-von-Neumann devices such as “neuromorphic hardware”. First, we show that an ensemble of deterministic, spiking neural networks can be shaped by a simple, local learning rule to perform sampling-based Bayesian inference. This suggests a coding scheme in which spikes (or “action potentials”) represent samples of a posterior distribution, constrained by sensory input, without the need for any explicit source of stochasticity. Second, we introduce a top-down framework in which neuronal and synaptic dynamics are derived from a least-action principle via gradient-based minimization. Combined, these neurosynaptic dynamics approximate real-time error backpropagation and can be mapped onto mechanistic components of cortical networks, whose dynamics can in turn be described within the proposed framework. The presented models narrow the gap between well-defined, functional algorithms and their biophysical implementation, improving our understanding of the computational principles the brain might employ. Furthermore, such models translate naturally to hardware that mimics the massively parallel neural structure of the brain, promising a strongly accelerated and energy-efficient implementation of powerful learning and inference algorithms, which we demonstrate for the physical model system “BrainScaleS–1”.
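
    As a generic illustration of how neuronal dynamics can be derived from an energy functional by gradient-based minimization, the sketch below relaxes the hidden potentials of a toy two-layer network toward the minimum of a prediction-error energy and then applies local weight updates; at the settled state the local error signals approximate the gradients that backpropagation would compute for this network. This is a standard energy-based construction, not the thesis's specific least-action framework, and all names, sizes and constants are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    n_in, n_hid, n_out = 4, 6, 2

    # Illustrative weights, input and target for a toy two-layer network.
    W1 = rng.normal(0.0, 0.3, (n_hid, n_in))
    W2 = rng.normal(0.0, 0.3, (n_out, n_hid))
    x = rng.normal(size=n_in)
    y = rng.normal(size=n_out)

    phi = np.tanh                  # neuronal transfer function

    def dphi(u):
        """Derivative of the transfer function."""
        return 1.0 - np.tanh(u) ** 2

    # Prediction-error energy:
    #   E(u) = 0.5*||y - W2 phi(u)||^2 + 0.5*||W1 x - u||^2
    # Hidden potentials u relax by gradient descent on E (the "neuronal dynamics").
    u = W1 @ x
    lr_u, lr_w = 0.1, 0.01
    for _ in range(200):
        e_out = y - W2 @ phi(u)    # output-layer prediction error
        e_hid = W1 @ x - u         # hidden-layer prediction error
        u += lr_u * (e_hid + dphi(u) * (W2.T @ e_out))   # du/dt = -dE/du

    # Local, Hebbian-like weight updates with the settled activities; at the
    # energy minimum these errors approximate the backpropagated gradients.
    W2 += lr_w * np.outer(e_out, phi(u))
    W1 += lr_w * np.outer(e_hid, x)
    print("residual output error:", np.linalg.norm(y - W2 @ phi(u)))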