
    Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems

    Neuromorphic chips embody, in microelectronic devices, computational principles operating in the nervous system. In this domain it is important to identify computational primitives that theory and experiments suggest as generic and reusable cognitive elements. One such element is provided by attractor dynamics in recurrent networks. Point attractors are equilibrium states of the dynamics (up to fluctuations), determined by the synaptic structure of the network; a 'basin' of attraction comprises all initial states leading to a given attractor upon relaxation, which makes attractor dynamics suitable for implementing robust associative memory. The initial network state is dictated by the stimulus, and relaxation to the attractor state implements the retrieval of the corresponding memorized prototypical pattern. In previous work we demonstrated that a neuromorphic recurrent network of spiking neurons with suitably chosen, fixed synapses supports attractor dynamics. Here we focus on learning: activating on-chip synaptic plasticity and using a theory-driven strategy for choosing network parameters, we show that autonomous learning, following repeated presentation of simple visual stimuli, shapes a synaptic connectivity supporting stimulus-selective attractors. Associative memory develops on chip as the result of the coupled stimulus-driven neural activity and the ensuing synaptic dynamics, with no artificial separation between learning and retrieval phases.
    Comment: submitted to Scientific Reports
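    To make the retrieval principle concrete, the following minimal sketch shows point-attractor retrieval in a binary, Hopfield-style recurrent network (network size, stored patterns, and noise level are illustrative; this shows the abstract principle, not the chip's spiking implementation or its plasticity rule).

```python
# Minimal sketch of attractor-based associative memory in a binary
# recurrent (Hopfield-style) network: stimuli inside a basin of
# attraction relax to the corresponding stored prototype.
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 3                        # neurons, stored prototype patterns
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian synaptic matrix: each stored pattern becomes a point attractor.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

# Stimulus = noisy version of pattern 0 (an initial state in its basin).
state = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
state[flip] *= -1

# Relaxation: asynchronous single-neuron updates until the network settles.
for _ in range(10 * N):
    i = rng.integers(N)
    state[i] = 1 if W[i] @ state >= 0 else -1

# Overlap close to 1.0 means the prototype was retrieved.
print("overlap with stored prototype:", (state @ patterns[0]) / N)
```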

    Memory and information processing in neuromorphic systems

    A striking difference between brain-inspired neuromorphic processors and current von Neumann processor architectures is the way in which memory and processing are organized. While Information and Communication Technologies continue to address the need for increased computational power by increasing the number of cores within a digital processor, neuromorphic engineers and scientists can complement this approach by building processor architectures in which memory is distributed together with the processing. In this paper we present a survey of brain-inspired processor architectures that support models of cortical networks and deep neural networks. These architectures range from serial, clocked implementations of multi-neuron systems to massively parallel asynchronous ones, and from purely digital systems to mixed analog/digital systems that implement more biologically realistic models of neurons and synapses, together with a suite of adaptation and learning mechanisms analogous to those found in biological nervous systems. We describe the advantages of the different approaches being pursued and present the challenges that need to be addressed for building artificial neural processing systems that can display the richness of behaviors seen in biological systems.
    Comment: submitted to Proceedings of the IEEE; review of recently proposed neuromorphic computing platforms and systems
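    As an aside to make the memory-with-processing idea concrete: below is a minimal, hypothetical sketch (not the API of any surveyed chip; all names and values are invented) of an event-driven neuron core that stores its synaptic weights locally and computes only when spike events arrive, instead of fetching weights from a separate memory block on every clock cycle.

```python
# Hypothetical sketch of co-located memory and processing: the neuron
# core holds its synaptic weights in local memory and is driven by
# asynchronous spike events rather than a global clock. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class NeuronCore:
    threshold: float = 1.0
    v: float = 0.0                                # local membrane state
    weights: dict = field(default_factory=dict)   # local synaptic memory

    def on_spike(self, pre_id: int) -> bool:
        """Integrate one incoming spike; return True if the core fires."""
        self.v += self.weights.get(pre_id, 0.0)
        if self.v >= self.threshold:
            self.v = 0.0                          # reset after firing
            return True
        return False

# Two pre-synaptic events are enough to drive this core past threshold.
core = NeuronCore(weights={0: 0.6, 1: 0.5})
for pre in (0, 1):
    if core.on_spike(pre):
        print("core fired on event from pre-synaptic neuron", pre)
```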

    Networks of spiking neurons and plastic synapses: implementation and control

    The brain is an incredible system with a computational power that goes far beyond that of our standard computers. It consists of a network of about 10^11 neurons connected by about 10^14 synapses: a massively parallel architecture suggesting that the brain performs computation according to completely new strategies which we are far from understanding. A reasonable starting point for studying the nervous system is to model its basic units, neurons and synapses, extract their key features, and try to put them together in simple, controllable networks. The research group I have been working in focuses on network dynamics and models neurons and synapses at a functional level: in this work I consider networks of integrate-and-fire neurons connected through synapses that are plastic and bistable. A synapse is said to be plastic when, according to some internal dynamics, it is able to change the “strength”, the efficacy, of the connection between the pre- and post-synaptic neuron. The adjective bistable refers to the number of stable efficacy states a synapse can have; we consider synapses with two stable states: potentiated (high efficacy) or depressed (low efficacy). The synaptic model considered is also endowed with a new stop-learning mechanism that is particularly relevant when dealing with highly correlated patterns. The ability of this kind of system to reproduce in simulation behaviors observed in biological networks motivates the attempt to implement the studied network in hardware. This is where this thesis is situated: the goal of this work is to design, control, and test hybrid analog-digital, biologically inspired hardware systems that behave in agreement with theoretical and simulation predictions. This class of devices typically goes under the name of neuromorphic VLSI (Very-Large-Scale Integration). Neuromorphic engineering was born from the idea of designing bio-mimetic devices; it is a useful research strategy that contributes to inspiring new models, stimulates theoretical research, and offers an effective way of implementing stand-alone, power-efficient devices. In this work I present two chips, a prototype and a larger device, that are a step towards endowing neuromorphic VLSI systems with autonomous learning capabilities adequate for stimuli with non-trivial statistics. The main novel features of these chips are the type of synaptic plasticity implemented and the configurability of the synaptic connectivity. The reported experimental results demonstrate that the circuits behave in agreement with theoretical predictions, and show the advantages of the stop-learning synaptic plasticity when highly correlated patterns have to be learnt. The high degree of flexibility of these chips in the definition of the synaptic connectivity is relevant in the perspective of using such devices as building blocks of parallel, distributed multi-chip architectures that will allow scaling up the network dimensions to systems with interesting computational abilities, capable of interacting with real-world stimuli
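    As a reading aid, the stop-learning, bistable synapse can be sketched schematically as follows; the thresholds, step sizes, and the use of a post-synaptic rate as the gating signal are illustrative assumptions, not the thesis circuits.

```python
# Schematic toy model of a bistable synapse with stop-learning gating.
# All names and numerical values are illustrative.
def update_synapse(x, pre_spike, post_rate,
                   theta_low=0.2, theta_high=0.8,  # stop-learning window
                   jump=0.1, drift=0.02, x_mid=0.5):
    """One update of the internal analog variable x in [0, 1].

    Returns (new_x, binary_efficacy). Learning is gated: when the
    post-synaptic rate is very low or very high (outside the window),
    the pattern is either irrelevant or already learned, so plastic
    updates stop and only the bistability drift acts.
    """
    if pre_spike and theta_low < post_rate < theta_high:
        # Hebbian-like jump: potentiate for higher post-synaptic
        # activity, depress for lower activity.
        x += jump if post_rate >= x_mid else -jump
    # Drift toward the nearer of the two stable states (bistability).
    x += drift if x > x_mid else -drift
    x = min(max(x, 0.0), 1.0)
    return x, (1 if x > x_mid else 0)   # 1 = potentiated, 0 = depressed

# Example: repeated pre-synaptic spikes at a moderate post-synaptic rate
# drive the synapse across the bistability boundary into potentiation.
x = 0.3
for _ in range(5):
    x, eff = update_synapse(x, pre_spike=True, post_rate=0.6)
print(x, eff)  # x has crossed 0.5 -> efficacy 1
```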

    The effect of heterogeneity on decorrelation mechanisms in spiking neural networks: a neuromorphic-hardware study

    High-level brain functions such as memory, classification, or reasoning can be realized by means of recurrent networks of simplified model neurons. Analog neuromorphic hardware constitutes a fast and energy-efficient substrate for implementing such neural computing architectures in technical applications and neuroscientific research. The functional performance of neural networks is often critically dependent on the level of correlations in the neural activity. In finite networks, correlations are typically inevitable due to shared presynaptic input. Recent theoretical studies have shown that inhibitory feedback, abundant in biological neural networks, can actively suppress these shared-input correlations and thereby enable neurons to fire nearly independently. For networks of spiking neurons, the decorrelating effect of inhibitory feedback has so far been explicitly demonstrated only for homogeneous networks of neurons with linear sub-threshold dynamics. Theory, however, suggests that the effect is a general phenomenon, present in any system with sufficient inhibitory feedback, irrespective of the details of the network structure or the neuronal and synaptic properties. Here, we investigate the effect of network heterogeneity on correlations in sparse, random networks of inhibitory neurons with non-linear, conductance-based synapses. Emulations of these networks on the analog neuromorphic hardware system Spikey allow us to test the efficiency of decorrelation by inhibitory feedback in the presence of hardware-specific heterogeneities. The configurability of the hardware substrate enables us to modulate the extent of heterogeneity in a systematic manner. We selectively study the effects of shared input and recurrent connections on correlations in membrane potentials and spike trains. Our results confirm ...
    Comment: 20 pages, 10 figures, supplement
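    To illustrate the mechanism numerically, here is a toy rate-based sketch (assumed parameters throughout; it is not the Spikey hardware, nor a spiking model): two leaky units receive a common input, and feeding their summed activity back with a negative sign cancels the shared fluctuations, reducing the output correlation.

```python
# Toy rate-based illustration of decorrelation by inhibitory feedback.
# Time constant, gains, and duration are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
T = 20_000
shared = rng.normal(size=T)          # common (shared) input to both units
private = rng.normal(size=(2, T))    # independent private inputs

def simulate(g_fb):
    """Two leaky units; g_fb is the inhibitory population-feedback gain."""
    x = np.zeros(2)
    out = np.empty((2, T))
    for t in range(T):
        fb = g_fb * x.sum()          # summed activity fed back negatively
        x = x + 0.1 * (-x + shared[t] + private[:, t] - fb)
        out[:, t] = x
    return out

for g in (0.0, 2.0):
    a, b = simulate(g)
    # Without feedback the shared input induces positive correlations;
    # with feedback the common fluctuations are actively suppressed.
    print(f"feedback gain {g}: output correlation = {np.corrcoef(a, b)[0, 1]:+.2f}")
```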

    2022 roadmap on neuromorphic computing and engineering

    Modern computation based on von Neumann architecture is now a mature cutting-edge science. In the von Neumann architecture, processing and memory units are implemented as separate blocks interchanging data intensively and continuously. This data transfer is responsible for a large part of the power consumption. The next generation computer technology is expected to solve problems at the exascale with 10^18 calculations each second. Even though these future computers will be incredibly powerful, if they are based on von Neumann-type architectures, they will consume between 20 and 30 megawatts of power and will not have intrinsic physically built-in capabilities to learn or deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to be used for the storage and processing of large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving the control from data centers to edge devices. The aim of this roadmap is to present a snapshot of the present state of neuromorphic technology and provide an opinion on the challenges and opportunities that the future holds in the major areas of neuromorphic technology, namely materials, devices, neuromorphic circuits, neuromorphic algorithms, applications, and ethics. The roadmap is a collection of perspectives where leading researchers in the neuromorphic community provide their own view about the current state and the future challenges for each research area. We hope that this roadmap will be a useful resource by providing a concise yet comprehensive introduction to readers outside this field, for those who are just entering the field, as well as providing future perspectives for those who are well established in the neuromorphic computing community