    Reliable Computation in Noisy Backgrounds Using Real-Time Neuromorphic Hardware

    Wang H-P, Chicca E, Indiveri G, Sejnowski TJ. Reliable Computation in Noisy Backgrounds Using Real-Time Neuromorphic Hardware. Presented at the Biomedical Circuits and Systems Conference (BIOCAS), Montreal, Que.
    Spike-time based coding of neural information, in contrast to rate coding, requires that neurons fire spikes reliably and precisely in response to repeated identical inputs, despite a high degree of noise from stochastic synaptic firing and extraneous background inputs. We investigated the degree of reliability and precision achievable under various noisy background conditions using real-time neuromorphic VLSI hardware that models integrate-and-fire spiking neurons and dynamic synapses. To do so, we varied two properties of the inputs to a single neuron: synaptic weight and synchrony magnitude (the number of synchronously firing pre-synaptic neurons). Thanks to the real-time response properties of the VLSI system we could carry out an extensive exploration of the parameter space and measure the neuron's firing rate and reliability in real time. Reliability of output spiking was influenced primarily by the amount of synchrony in the synaptic input rather than by the synaptic weight of those synapses. These results highlight possible regimes in which real-time neuromorphic systems might be better able to compute reliably with spikes despite noisy input.
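    A rough software analogue of the protocol described above, offered as a minimal sketch rather than the authors' VLSI setup: a leaky integrate-and-fire neuron driven by Poisson background noise plus periodic volleys from a group of synchronous inputs, with reliability measured as the fraction of volleys that evoke an output spike. All parameter names and values (membrane time constant, background rate, volley schedule) are illustrative assumptions.

```python
# Minimal sketch of the reliability-vs-synchrony experiment (assumed parameters,
# not the authors' hardware): a leaky integrate-and-fire neuron receives Poisson
# background spikes plus periodic volleys from n_sync synchronous inputs.
import numpy as np

rng = np.random.default_rng(0)

def simulate_lif(n_sync, w, t_max=1.0, dt=1e-4):
    """Return output spike times (s) for one trial.

    n_sync : synchrony magnitude (number of synchronously firing presynaptic neurons)
    w      : synaptic weight (membrane jump per input spike; threshold = 1.0)
    """
    tau_m, v_th, v_reset = 0.02, 1.0, 0.0
    n_bg, bg_rate = 100, 5.0                      # background inputs and their rate (Hz)
    volley_times = np.arange(0.1, t_max, 0.1)     # repeated synchronous volleys
    v, spikes = 0.0, []
    for step in range(int(t_max / dt)):
        t = step * dt
        v -= v / tau_m * dt                       # leak
        v += w * rng.poisson(n_bg * bg_rate * dt) # stochastic background drive
        if np.any(np.abs(volley_times - t) < dt / 2):
            v += w * n_sync                       # synchronous volley arrives
        if v >= v_th:
            spikes.append(t)
            v = v_reset
    return np.array(spikes)

def reliability(n_sync, w, n_trials=20, window=5e-3):
    """Fraction of volleys followed by an output spike within `window` seconds."""
    hits = total = 0
    for _ in range(n_trials):
        spikes = simulate_lif(n_sync, w)
        for t_v in np.arange(0.1, 1.0, 0.1):
            total += 1
            hits += np.any(np.abs(spikes - t_v) < window)
    return hits / total

for n_sync in (5, 20, 50):
    print(f"n_sync={n_sync:2d}  reliability={reliability(n_sync, w=0.05):.2f}")
```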

    Hidden Traveling Waves bind Working Memory Variables in Recurrent Neural Networks

    Traveling waves are a fundamental phenomenon in the brain, playing a crucial role in short-term information storage. In this study, we leverage the concept of traveling-wave dynamics within a neural lattice to formulate a theoretical model of neural working memory, study its properties, and examine its real-world implications in AI. The proposed model diverges from traditional approaches, which assume information is stored in static, register-like locations updated by interference. Instead, the model stores data as waves that are updated by the wave's boundary conditions. We rigorously examine the model's capabilities in representing and learning state histories, which are vital for learning history-dependent dynamical systems. The findings reveal that the model reliably stores external information and enhances the learning process by addressing the diminishing gradient problem. To understand the model's real-world applicability, we explore two cases: a linear boundary condition (LBC) and a non-linear, self-attention-driven boundary condition (SBC). The model with the linear boundary condition results in a shift matrix plus a low-rank matrix, the form currently used in the H3 state-space RNN. Further, our experiments with the LBC reveal that this matrix is effectively learned by Recurrent Neural Networks (RNNs) through backpropagation when modeling history-dependent dynamical systems. Conversely, the SBC parallels the autoregressive loop of an attention-only transformer, with the context vector representing the wave substrate. Collectively, our findings suggest the broader relevance of traveling waves in AI and their potential for advancing neural network architectures.
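    To make the "shift matrix plus low-rank matrix" form of the linear-boundary-condition case concrete, here is a minimal sketch of a linear recurrence whose state matrix is a shift (the wave traveling across the lattice) plus an explicit low-rank term. The dimensions, the zero low-rank term, and the boundary-injection vector are illustrative assumptions, not the paper's learned model.

```python
# Minimal sketch (illustration only, not the paper's model): a linear recurrence
# whose state matrix is a shift matrix plus a low-rank term. The shift moves the
# stored "wave" one lattice site per step, so the state holds a sliding window of
# recent inputs; new information enters at the boundary.
import numpy as np

k = 8                                    # lattice length / memory horizon
A_shift = np.eye(k, k=-1)                # shift matrix: advances the wave one site per step
u, v = np.zeros((k, 1)), np.zeros((k, 1))
A = A_shift + u @ v.T                    # shift plus low-rank term (zero here for clarity)
B = np.zeros(k); B[0] = 1.0              # inputs are injected at the wave's boundary

h = np.zeros(k)
for x in np.arange(1.0, 13.0):
    h = A @ h + B * x                    # wave advances, boundary injects the new input

print(h)   # the last k inputs, newest first: [12. 11. 10.  9.  8.  7.  6.  5.]
```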

    Energy-based General Sequential Episodic Memory Networks at the Adiabatic Limit

    The General Associative Memory Model (GAMM) has a constant, state-dependent energy surface that leads the output dynamics to fixed points, retrieving single memories from a collection of memories that can be asynchronously preloaded. We introduce a new class of General Sequential Episodic Memory Models (GSEMM) that, in the adiabatic limit, exhibit a temporally changing energy surface, leading to a series of meta-stable states that are sequential episodic memories. The dynamic energy surface is enabled by newly introduced asymmetric synapses with signal propagation delays in the network's hidden layer. We study the theoretical and empirical properties of two memory models from the GSEMM class that differ in their activation functions: LISEM has non-linearities in the feature layer, whereas DSEM has them in the hidden layer. In principle, DSEM has a storage capacity that grows exponentially with the number of neurons in the network. We introduce a learning rule for the synapses based on the energy minimization principle and show that it can learn single memories and their sequential relationships online. This rule is similar to the Hebbian learning algorithm and Spike-Timing Dependent Plasticity (STDP), which describe conditions under which synapses between neurons change strength. Thus, GSEMM combines the static and dynamic properties of episodic memory under a single theoretical framework and bridges neuroscience, machine learning, and artificial intelligence.
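    The sketch below illustrates the classical asymmetric-Hebbian idea the abstract builds on: a symmetric term stabilizes each stored pattern while an asymmetric term pushes the state toward the next pattern in the sequence. It is a plain Hopfield-style toy with synchronous updates in place of the paper's propagation delays, so it is an analogy to GSEMM, not an implementation of it; all sizes and the coupling strength are assumptions.

```python
# Toy illustration of sequence retrieval with asymmetric Hebbian synapses
# (synchronous updates stand in for the paper's propagation delays; this is an
# analogy to GSEMM, not the energy-based model itself).
import numpy as np

rng = np.random.default_rng(1)
n, n_pat = 200, 5
patterns = rng.choice([-1.0, 1.0], size=(n_pat, n))     # random binary memories

# Symmetric Hebbian term stores each pattern as an attractor;
# asymmetric term links pattern i to pattern i+1 (cyclically).
W_sym = sum(np.outer(p, p) for p in patterns) / n
W_asym = sum(np.outer(patterns[(i + 1) % n_pat], patterns[i]) for i in range(n_pat)) / n
lam = 2.0                                                # strength of the sequential drive

x = patterns[0].copy()
for step in range(12):
    field = W_sym @ x + lam * (W_asym @ x)
    x = np.where(field >= 0, 1.0, -1.0)                  # synchronous sign update
    overlaps = patterns @ x / n
    print(f"step {step + 1:2d}: closest stored pattern = {np.argmax(overlaps)}")
```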

    The effect of neural adaptation on population coding accuracy

    Most neurons in the primary visual cortex initially respond vigorously when a preferred stimulus is presented, but adapt as stimulation continues. The functional consequences of adaptation are unclear. Typically, a reduction in firing rate would reduce single-neuron accuracy, as fewer spikes are available for decoding, but it has been suggested that at the population level adaptation increases coding accuracy. This question requires careful analysis, as adaptation not only changes the firing rates of neurons but also the neural variability and the correlations between neurons, which affect coding accuracy as well. We calculate the coding accuracy using a computational model that implements two forms of adaptation: spike-frequency adaptation and synaptic adaptation in the form of short-term synaptic plasticity. We find that the net effect of adaptation is subtle and heterogeneous. Depending on the adaptation mechanism and the test stimulus, adaptation can either increase or decrease coding accuracy. We discuss the neurophysiological and psychophysical implications of the findings and relate them to published experimental data. (35 pages, 8 figures)
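    As a small illustration of what "coding accuracy" means here, the sketch below computes linear Fisher information for a population of Gaussian tuning curves with independent Poisson-like noise, modeling adaptation crudely as a gain reduction. The tuning shapes, noise model, and numbers are assumptions, not the paper's model; they capture only the simplest case, in which lower firing rates reduce accuracy, whereas the paper shows the net effect also depends on changes in variability and correlations.

```python
# Minimal sketch (assumed tuning curves and noise, not the paper's model):
# linear Fisher information I(s) = f'(s)^T C^{-1} f'(s) for a population with
# Gaussian tuning and independent Poisson-like noise (C diagonal, variance = mean).
import numpy as np

def fisher_info(stim, prefs, gain, width=0.3, eps=1e-9):
    f = gain * np.exp(-(stim - prefs) ** 2 / (2 * width ** 2))   # tuning curves at `stim`
    df = f * (prefs - stim) / width ** 2                          # derivative wrt the stimulus
    var = f + eps                                                 # Poisson-like: variance = mean
    return np.sum(df ** 2 / var)

prefs = np.linspace(-1.0, 1.0, 50)    # preferred stimuli of 50 neurons
s = 0.2                               # test stimulus

print("before adaptation:", fisher_info(s, prefs, gain=20.0))
print("after adaptation :", fisher_info(s, prefs, gain=10.0))    # adaptation as halved gain
```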

    Rapid Temporal Modulation of Synchrony by Competition in Cortical Interneuron Networks

    The synchrony of neurons in extrastriate visual cortex is modulated by selective attention, even when there are only small changes in firing rate (Fries, Reynolds, Rorie, & Desimone, 2001). We used Hodgkin-Huxley-type models of cortical neurons to investigate the mechanism by which the degree of synchrony can be modulated independently of changes in firing rates.

    A Learning Algorithm for Boltzmann Machines

    The Computational Brain.

    Keywords: reductionism, neural networks, distributed coding, Karl Pribram, computational neuroscience, receptive field
    1.1 The broad goal of this book, expressed at the start, is "to understand how neurons give rise to a mental life." A mental reductionism is assumed in this seductively simple formulation. Indeed, the book represents reductionism at its best, as the authors guide the reader through the many intermediate levels that link neurons with mental life. In so doing they attack a problem that has persisted for some decades in the neurosciences, since the development of single-cell recording methods. The problem is that millions of neurons participate in every behaviorally meaningful activity, but we normally record from only one neuron at a time, or at best a handful. The temptation is great to overestimate the one-millionth sample obtained from a single neuron, to interpret its activity as detecting a perceptual situation or driving a motor response. This approach, seemingly inescapable in the 1960s, became untenable, but there were no concrete alternatives. Evoked potential techniques gave only a gross average of activity, too vague to pin down mechanisms, and early PDP (parallel distributed processing, or artificial neural network) models were too biologically unrealistic to provide viable interpretations of the single-cell data. Churchland and Sejnowski show how distributed models can now attack this problem, providing significant insights into brain function in a number of domains.
    1.2 The book has several parts. First, the authors introduce their approach, combining anatomical, physiological, behavioral and modelling methods in an integrated interdisciplinary attack on specific functional systems. There follows a review of enough anatomy and neurophysiology to make the authors' viewpoint clear and to provide a background for integrating PDP modelling into specific problems in the neurosciences. The heart of the book is a series of chapters reviewing particular models that have been successful in increasing our understanding of the functioning of biological brains. Models of reflex reactions in invertebrates, of locomotion, the vestibulo-ocular reflex in primates

    Geometry unites synchrony, chimeras, and waves in nonlinear oscillator networks

    One of the simplest mathematical models in the study of nonlinear systems is the Kuramoto model, which describes synchronization in systems ranging from swarms of insects to superconductors. We have recently found a connection between the original, real-valued nonlinear Kuramoto model and a corresponding complex-valued system that permits describing the system in terms of a linear operator and an iterative update rule. We now use this description to investigate three major synchronization phenomena in Kuramoto networks (phase synchronization, chimera states, and traveling waves), not only in terms of steady-state solutions but also in terms of transient dynamics and individual simulations. These results provide new mathematical insight into how sophisticated behaviors arise from connection patterns in nonlinear networked systems.
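    For readers unfamiliar with the model, here is a minimal sketch of the standard real-valued Kuramoto system and the order parameter used to quantify phase synchronization. It does not implement the paper's complex-valued linear-operator formulation, and the coupling strength, frequency distribution, and all-to-all network are illustrative assumptions.

```python
# Minimal sketch: all-to-all Kuramoto model with Euler integration; the order
# parameter r measures phase synchronization (0 = incoherent, 1 = fully locked).
import numpy as np

rng = np.random.default_rng(2)
N, K, dt, steps = 100, 2.0, 0.01, 2000
omega = rng.normal(0.0, 1.0, N)          # natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, N)   # initial phases

for _ in range(steps):
    # d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
    coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta += dt * (omega + (K / N) * coupling)

r = np.abs(np.exp(1j * theta).mean())
print(f"order parameter r = {r:.2f}")
```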