
    State-dependent sensory processing in networks of VLSI spiking neurons

    Neftci E, Chicca E, Cook M, Indiveri G, Douglas R. State-dependent sensory processing in networks of VLSI spiking neurons. Presented at the International Symposium on Circuits and Systems (ISCAS), Paris. An increasing number of research groups are developing dedicated hybrid analog/digital very large scale integration (VLSI) devices implementing hundreds of spiking neurons with biophysically realistic dynamics. However, despite the significant progress in their design, there is still little insight into how to translate circuits of neural assemblies into desired (non-trivial) functions. In this work, we propose to use neural circuits implementing the soft Winner-Take-All (WTA) function. By showing that recurrently connected instances of them can have persistent activity states, which can be used as a form of working memory, we argue that such circuits can perform state-dependent computation. We demonstrate such a network in a distributed neuromorphic system consisting of two multi-neuron chips implementing soft WTA, stimulated by an event-based vision sensor. The resulting network is able to track and remember the position of a localized stimulus along a trajectory previously encoded in the system.
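
    The persistent-activity mechanism lends itself to a compact rate-based illustration. The sketch below (plain NumPy, with purely illustrative parameters rather than the chip's calibrated biases) shows a soft WTA ring in which local recurrent excitation and uniform global inhibition sustain a localized activity bump after a transient stimulus is removed, so the network "remembers" the stimulated position.

    import numpy as np

    # Soft WTA ring: local excitation (von Mises kernel) plus uniform global inhibition.
    N, tau, dt, r_max = 64, 0.02, 0.001, 100.0
    theta = 2 * np.pi * np.arange(N) / N
    dtheta = theta[:, None] - theta[None, :]
    J_exc, J_inh, kappa = 20.0, 8.0, 4.0              # illustrative values only
    W = (J_exc * np.exp(kappa * (np.cos(dtheta) - 1.0)) - J_inh) / N

    def step(r, I_ext):
        """One Euler step of tau*dr/dt = -r + f(W r + I_ext), f = clipped linear."""
        return r + dt / tau * (-r + np.clip(W @ r + I_ext, 0.0, r_max))

    r = np.zeros(N)
    stim_pos = np.pi / 2
    stim = 80.0 * np.exp(kappa * (np.cos(theta - stim_pos) - 1.0))
    for t in range(1000):                             # stimulus on for 0.5 s, then off for 0.5 s
        r = step(r, stim if t < 500 else np.zeros(N))
    print("remembered position:", theta[np.argmax(r)], "stimulus was at:", stim_pos)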

    Contraction Properties of VLSI Cooperative Competitive Neural Networks of Spiking Neurons

    Neftci E, Chicca E, Indiveri G, Slotine J-J, Douglas R. Contraction Properties of VLSI Cooperative Competitive Neural Networks of Spiking Neurons. Presented at Advances in Neural Information Processing Systems 20 (NIPS), Vancouver, British Columbia, Canada. A non-linear dynamic system is called contracting if initial conditions are forgotten exponentially fast, so that all trajectories converge to a single trajectory. We use contraction theory to derive an upper bound on the strength of recurrent connections that guarantees contraction for complex neural networks. Specifically, we apply this theory to a special class of recurrent networks, often called Cooperative Competitive Networks (CCNs), which are an abstract representation of the cooperative-competitive connectivity observed in cortex. This specific type of network is believed to play a major role in shaping cortical responses and selecting the relevant signal among distractors and noise. In this paper, we analyze contraction of combined CCNs of linear threshold units and verify the results of our analysis in a hybrid analog/digital VLSI CCN comprising spiking neurons and dynamic synapses.
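
    As a hedged illustration of how a bound of this kind arises (a generic contraction-theory argument for threshold-linear networks, not necessarily the exact bound derived in the paper), consider the rate dynamics

        \tau \dot{x} = -x + W\,g(x) + b, \qquad g_i(x_i) = \max(0, x_i).

    The Jacobian is J(x) = \frac{1}{\tau}\bigl(-I + W\,G(x)\bigr) with G(x) = \mathrm{diag}\bigl(g_i'(x_i)\bigr) and 0 \le g_i' \le 1, so its symmetric part satisfies

        \tfrac{1}{2}\bigl(J + J^{\top}\bigr) \preceq \tfrac{1}{\tau}\bigl(\lVert W \rVert_2 - 1\bigr) I.

    Hence, if the largest singular value of the recurrent weight matrix satisfies \lVert W \rVert_2 < 1, the symmetric part of the Jacobian is uniformly negative definite, the network is contracting, and all trajectories converge exponentially at a rate of at least (1 - \lVert W \rVert_2)/\tau.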

    A systematic method for configuring VLSI networks of spiking neurons

    Neftci E, Chicca E, Indiveri G, Douglas RJ. A systematic method for configuring VLSI networks of spiking neurons. Neural Computation. 2011;23(10):2457-2497. An increasing number of research groups are developing custom hybrid analog/digital very large scale integration (VLSI) chips and systems that implement hundreds to thousands of spiking neurons with biophysically realistic dynamics, with the intention of emulating brainlike real-world behavior in hardware and robotic systems rather than simply simulating their performance on general-purpose digital computers. Although the electronic engineering aspects of these emulation systems are proceeding well, progress toward the actual emulation of brainlike tasks is restricted by the lack of suitable high-level configuration methods of the kind that have already been developed over many decades for simulations on general-purpose computers. The key difficulty is that the dynamics of the CMOS electronic analogs are determined by transistor biases that do not map simply to the parameter types and values used in typical abstract mathematical models of neurons and their networks. Here we provide a general method for resolving this difficulty. We describe a parameter mapping technique that permits automatic configuration of VLSI neural networks so that their electronic emulation conforms to a higher-level neuronal simulation. We show that the neurons configured by our method exhibit spike timing statistics and temporal dynamics that are the same as those observed in the software-simulated neurons and, in particular, that the key parameters of recurrent VLSI neural networks (e.g., those implementing soft winner-take-all) can be precisely tuned. The proposed method permits seamless integration of software simulations with hardware emulations and intertranslatability between the parameters of abstract neuronal models and their emulation counterparts. Most importantly, our method offers a route toward a high-level task configuration language for neuromorphic VLSI systems.
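
    The bias-to-parameter mapping can be pictured as a calibrate-and-invert loop. The placeholder sketch below illustrates the general idea of sweeping a circuit bias, fitting the measured population response, and inverting the fit to hit a target value of an abstract model parameter; the function names (set_bias, measure_mean_rate) are hypothetical stand-ins rather than an actual chip API, and the linear fit is only an example of a monotonic response model.

    import numpy as np

    def set_bias(dac_code):
        """Placeholder: write a bias-generator / DAC code to the chip."""
        pass

    def measure_mean_rate(dac_code):
        """Placeholder: stimulate the population and return its mean firing rate (Hz)."""
        set_bias(dac_code)
        return 0.05 * dac_code + 2.0          # stand-in for a real measurement

    # 1) Sweep a few bias codes and record the population's mean firing rate.
    codes = np.linspace(100, 900, 9)
    rates = np.array([measure_mean_rate(c) for c in codes])

    # 2) Fit a monotonic response model (here, linear: rate = a*code + b).
    a, b = np.polyfit(codes, rates, 1)

    # 3) Invert the fit to obtain the bias that realizes a target model firing rate.
    target_rate = 30.0                        # rate prescribed by the abstract neuron model (Hz)
    set_bias((target_rate - b) / a)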

    Function approximation with uncertainty propagation in a VLSI spiking neural network

    The brain combines and integrates multiple cues to take coherent, context-dependent action using distributed, event-based computational primitives. Computational models that use these principles in software simulations of recurrently coupled spiking neural networks have been demonstrated in the past, but their implementation in hybrid analog/digital Very Large Scale Integration (VLSI) spiking neural networks remains challenging. Here, we demonstrate a distributed spiking neural network architecture comprising multiple neuromorphic VLSI chips that is able to reproduce these types of cue combination and integration operations. This is achieved by encoding cues as population activities of input nodes in a network of recurrently coupled VLSI Integrate-and-Fire (I&F) neurons. The value of the cue is place-encoded, while its uncertainty is represented by the width of the population activity profile. Relationships among different cues are specified through bidirectional connectivity matrices, shared between the individual input node populations and an intermediate node population. The resulting network dynamics bidirectionally relate not only the values of three variables according to a specified relation, but also their uncertainties. When cues on two populations are specified, the standard deviation of the activity in the unspecified population varies approximately linearly with the widths of the two input cues, and has less than 6% error in position compared to the value specified by the inputs. The results suggest a mechanism for recurrently relating cues such that missing information can both be recovered and assigned a level of certainty.
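
    A small numerical sketch makes the uncertainty-propagation claim concrete. Assuming, purely for illustration, that the embedded relation is z = x + y (the actual relation is specified by the network's connectivity matrices, not by this particular choice), the profile of the unspecified population corresponds to the convolution of the two specified cue profiles, so its width grows with the widths of the inputs.

    import numpy as np

    N = 256
    u = np.linspace(-1.0, 1.0, N)             # preferred values (place-code axis)

    def bump(center, sigma):
        """Population profile: the cue value sets the center, its uncertainty the width."""
        b = np.exp(-0.5 * ((u - center) / sigma) ** 2)
        return b / b.sum()

    px = bump(0.2, 0.05)                      # cue x: value 0.2, uncertainty 0.05
    py = bump(-0.1, 0.08)                     # cue y: value -0.1, uncertainty 0.08

    pz = np.convolve(px, py, mode="same")     # inferred profile for the relation z = x + y
    pz /= pz.sum()
    z_hat = np.sum(u * pz)
    z_sig = np.sqrt(np.sum(pz * (u - z_hat) ** 2))
    print(z_hat, z_sig)                       # ~0.1 and ~sqrt(0.05**2 + 0.08**2) ~ 0.094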

    Real-time inference in a VLSI spiking neural network

    The ongoing motor output of the brain depends on its remarkable ability to rapidly transform and fuse a variety of sensory streams in real time. The brain processes these data using networks of neurons that communicate by asynchronous spikes, a technology that is dramatically different from conventional electronic systems. We report here a step towards constructing electronic systems with performance analogous to that of the brain. Our VLSI spiking neural network combines in real time three distinct sources of input data; each is place-encoded on an individual neuronal population that expresses soft Winner-Take-All dynamics. These arrays are combined according to a user-specified function that is embedded in the reciprocal connections between the soft Winner-Take-All populations and an intermediate shared population. The overall network is able to perform function approximation (missing data can be inferred from the available streams) and cue integration (when all input streams are present, they enhance one another synergistically). The network performs these tasks with about 80% and 90% reliability, respectively. Our results suggest that with further technical improvement, it may be possible to implement more complex probabilistic models such as Bayesian networks in neuromorphic electronic systems.
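
    The cue-integration behaviour can likewise be illustrated statistically: combining two population profiles that encode the same variable yields a sharper, hence more reliable, estimate than either stream alone. The sketch below simply multiplies the profiles as a stand-in for the recurrent soft WTA dynamics that realize this on the chips; the numbers are illustrative.

    import numpy as np

    N = 128
    x = np.linspace(0.0, 1.0, N)

    def profile(center, sigma):
        p = np.exp(-0.5 * ((x - center) / sigma) ** 2)
        return p / p.sum()

    cue_a = profile(0.42, 0.10)               # first sensory stream
    cue_b = profile(0.50, 0.10)               # second, slightly discrepant stream

    combined = cue_a * cue_b                  # stand-in for the recurrent integration
    combined /= combined.sum()

    est = np.sum(x * combined)
    width = np.sqrt(np.sum(combined * (x - est) ** 2))
    print(est, width)                         # ~0.46, width ~0.07 (narrower than either cue's 0.10)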

    Synthesizing cognition in neuromorphic electronic systems

    Neftci E, Binas J, Rutishauser U, Chicca E, Indiveri G, Douglas RJ. Synthesizing cognition in neuromorphic electronic systems. Proceedings of the National Academy of Sciences of the United States of America. 2013;110(37):E3468-E3476. The quest to implement intelligent processing in electronic neuromorphic systems lacks methods for achieving reliable behavioral dynamics on substrates of inherently imprecise and noisy neurons. Here we report a solution to this problem that involves first mapping an unreliable hardware layer of spiking silicon neurons into an abstract computational layer composed of generic reliable subnetworks of model neurons, and then composing the target behavioral dynamics as a “soft state machine” running on these reliable subnets. In the first step, the neural networks of the abstract layer are realized on the hardware substrate by mapping the neuron circuit bias voltages to the model parameters. This mapping is obtained by an automatic method in which the electronic circuit biases are calibrated against the model parameters by a series of population activity measurements. The abstract computational layer is formed by configuring neural networks as generic soft winner-take-all subnetworks that provide reliable processing by virtue of their active gain, signal restoration, and multistability. The necessary states and transitions of the desired high-level behavior are then easily embedded in the computational layer by introducing only sparse connections between some neurons of the various subnets. We demonstrate this synthesis method for a neuromorphic sensory agent that performs real-time, context-dependent classification of motion patterns observed by a silicon retina.
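
    The "soft state machine" construction can be caricatured with two WTA-like state units plus transition units that fire only when their source state coincides with an input symbol. The toy sketch below uses hand-picked illustrative weights (not calibrated chip parameters) and shows the state-dependent response that the synthesis method targets: the same symbol switches the machine only when it is in the appropriate source state.

    import numpy as np

    f = lambda u: float(np.clip(u, 0.0, 1.0))   # threshold-linear unit with saturation

    def step(x1, x2, g, r):
        """Synchronous update; symbol g triggers 1->2, symbol r triggers 2->1."""
        t12 = f(2*x1 + 2*g - 3)                 # transition unit: state 1 AND symbol g
        t21 = f(2*x2 + 2*r - 3)                 # transition unit: state 2 AND symbol r
        new_x1 = f(1.5*x1 - 2*x2 + 3*t21 - 3*t12)
        new_x2 = f(1.5*x2 - 2*x1 + 3*t12 - 3*t21)
        return new_x1, new_x2

    x1, x2 = 1.0, 0.0                           # start in state 1
    for g, r in [(0, 0), (1, 0), (0, 0), (1, 0), (0, 1), (0, 0)]:
        x1, x2 = step(x1, x2, g, r)
        print(f"g={g} r={r} -> state", 1 if x1 > x2 else 2)
    # g switches 1->2 only from state 1; a second g is ignored; r switches back to state 1.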

    Artificial cognitive systems: From VLSI networks of spiking neurons to neuromorphic cognition

    Neuromorphic engineering (NE) is an emerging research field that attempts to identify the computational principles used by neural systems by implementing biophysically realistic models of those systems in Very Large Scale Integration (VLSI) technology. Remarkable progress has been made recently, and complex artificial neural sensory-motor systems can now be built using this technology. Today, however, NE stands before a large conceptual challenge that must be met before there will be significant progress toward an age of genuinely intelligent neuromorphic machines. The challenge is to bridge the gap from reactive systems to ones that are cognitive in quality. In this paper, we describe recent advances in NE and present examples of neuromorphic circuits that can be used as tools to address this challenge. Specifically, we show how VLSI networks of spiking neurons with spike-based plasticity mechanisms and soft winner-take-all architectures represent important building blocks for implementing artificial neural systems able to exhibit basic cognitive abilities.
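
    As a pointer to what "spike-based plasticity" refers to here, the sketch below implements a generic pair-based STDP rule with presynaptic and postsynaptic traces; it is a textbook rule used for illustration, not the particular plasticity circuit of any of the chips discussed.

    import numpy as np

    tau_plus, tau_minus = 0.020, 0.020     # trace time constants (s)
    A_plus, A_minus = 0.01, 0.012          # potentiation / depression amplitudes
    dt = 0.001

    def stdp_step(w, x_pre, x_post, pre_spike, post_spike):
        """Decay the traces, bump them on spikes, and update the weight:
        pre-before-post potentiates, post-before-pre depresses."""
        x_pre += -dt / tau_plus * x_pre + (1.0 if pre_spike else 0.0)
        x_post += -dt / tau_minus * x_post + (1.0 if post_spike else 0.0)
        if post_spike:
            w += A_plus * x_pre
        if pre_spike:
            w -= A_minus * x_post
        return float(np.clip(w, 0.0, 1.0)), x_pre, x_post

    # Example: a presynaptic spike followed 5 ms later by a postsynaptic spike -> potentiation.
    w, x_pre, x_post = 0.5, 0.0, 0.0
    for t in range(10):
        w, x_pre, x_post = stdp_step(w, x_pre, x_post, pre_spike=(t == 0), post_spike=(t == 5))
    print("weight after pre->post pairing:", w)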