
    Memory and information processing in neuromorphic systems

    Full text link
    A striking difference between brain-inspired neuromorphic processors and current von Neumann processor architectures is the way in which memory and processing are organized. While Information and Communication Technologies continue to address the need for increased computational power by increasing the number of cores within a digital processor, neuromorphic engineers and scientists can complement this approach by building processor architectures in which memory is distributed with the processing. In this paper we present a survey of brain-inspired processor architectures that support models of cortical networks and deep neural networks. These architectures range from serial clocked implementations of multi-neuron systems to massively parallel asynchronous ones, and from purely digital systems to mixed analog/digital systems that implement more biologically realistic models of neurons and synapses together with a suite of adaptation and learning mechanisms analogous to the ones found in biological nervous systems. We describe the advantages of the different approaches being pursued and present the challenges that need to be addressed for building artificial neural processing systems that can display the richness of behaviors seen in biological systems. Comment: Submitted to Proceedings of the IEEE; review of recently proposed neuromorphic computing platforms and systems.

    Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems

    Full text link
    Neuromorphic chips embody, in microelectronic devices, computational principles operating in the nervous system. In this domain it is important to identify computational primitives that theory and experiments suggest as generic and reusable cognitive elements. One such element is provided by attractor dynamics in recurrent networks. Point attractors are equilibrium states of the dynamics (up to fluctuations), determined by the synaptic structure of the network; a `basin' of attraction comprises all initial states leading to a given attractor upon relaxation, which makes attractor dynamics suitable for implementing robust associative memory. The initial network state is dictated by the stimulus, and relaxation to the attractor state implements the retrieval of the corresponding memorized prototypical pattern. In previous work we demonstrated that a neuromorphic recurrent network of spiking neurons with suitably chosen, fixed synapses supports attractor dynamics. Here we focus on learning: activating on-chip synaptic plasticity and using a theory-driven strategy for choosing network parameters, we show that autonomous learning, following repeated presentation of simple visual stimuli, shapes a synaptic connectivity supporting stimulus-selective attractors. Associative memory develops on chip as the result of the coupled stimulus-driven neural activity and ensuing synaptic dynamics, with no artificial separation between learning and retrieval phases. Comment: Submitted to Scientific Reports.
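
    To make the attractor-retrieval idea above concrete, here is a minimal Hopfield-style sketch in Python; it illustrates point attractors and basins of attraction only, not the chip's spiking implementation, and all sizes are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        N, P = 100, 5                       # neurons and stored prototypes (illustrative)
        patterns = rng.choice([-1, 1], size=(P, N))

        # Hebbian prescription: the synaptic structure determines the attractors.
        W = (patterns.T @ patterns) / N
        np.fill_diagonal(W, 0.0)

        # A corrupted stimulus sets the initial state inside a basin of attraction...
        state = patterns[0].copy()
        flipped = rng.choice(N, size=20, replace=False)
        state[flipped] *= -1

        # ...and relaxation to the attractor implements retrieval of the prototype.
        for _ in range(10):
            state = np.where(W @ state >= 0, 1, -1)

        print("overlap with stored prototype:", state @ patterns[0] / N)  # ~1.0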

    A Comprehensive Workflow for General-Purpose Neural Modeling with Highly Configurable Neuromorphic Hardware Systems

    Full text link
    In this paper we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from taking this platform into operation. More specifically, we aim to establish this neuromorphic system as a flexible and neuroscientifically valuable modeling tool that can be used by non-hardware-experts. We consider various functional aspects to be crucial for this purpose, and we introduce a consistent workflow with detailed descriptions of all involved modules that implement the suggested steps: the integration of the hardware interface into the simulator-independent model description language PyNN; a fully automated translation between the PyNN domain and appropriate hardware configurations; an executable specification of the future neuromorphic system that can be seamlessly integrated into this biology-to-hardware mapping process as a test bench for all software layers and possible hardware design modifications; and an evaluation scheme that deploys models from a dedicated benchmark library, compares the results generated by virtual or prototype hardware devices with reference software simulations, and analyzes the differences. The integration of these components into one hardware-software workflow provides an ecosystem for ongoing preparative studies that support the hardware design process and represents the basis for the maturity of the model-to-hardware mapping software. The functionality and flexibility of the latter are demonstrated with a variety of experimental results.
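
    For readers unfamiliar with PyNN, the snippet below sketches the kind of simulator-independent model description such a workflow builds on; the NEST backend stands in here for a hardware backend, and all population sizes and parameters are illustrative assumptions rather than models from the benchmark library.

        # In PyNN, switching backend is (ideally) just switching this import,
        # which is what makes a biology-to-hardware mapping process possible.
        import pyNN.nest as sim

        sim.setup(timestep=0.1)  # ms

        noise = sim.Population(20, sim.SpikeSourcePoisson(rate=30.0))
        cells = sim.Population(50, sim.IF_cond_exp(tau_m=20.0, v_thresh=-50.0))

        sim.Projection(noise, cells,
                       sim.FixedProbabilityConnector(p_connect=0.2),
                       sim.StaticSynapse(weight=0.005, delay=1.0))

        cells.record('spikes')
        sim.run(1000.0)  # ms

        spiketrains = cells.get_data().segments[0].spiketrains
        print(sum(len(st) for st in spiketrains), "spikes recorded")
        sim.end()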

    Artificial Cognitive Systems: From VLSI Networks of Spiking Neurons to Neuromorphic Cognition

    Get PDF
    Neuromorphic engineering (NE) is an emerging research field that has been attempting to identify neural computational principles by implementing biophysically realistic models of neural systems in Very Large Scale Integration (VLSI) technology. Remarkable progress has been made recently, and complex artificial neural sensory-motor systems can be built using this technology. Today, however, NE stands before a large conceptual challenge that must be met before there will be significant progress toward an age of genuinely intelligent neuromorphic machines. The challenge is to bridge the gap from reactive systems to ones that are cognitive in quality. In this paper, we describe recent advancements in NE and present examples of neuromorphic circuits that can be used as tools to address this challenge. Specifically, we show how VLSI networks of spiking neurons with spike-based plasticity mechanisms and soft winner-take-all architectures represent important building blocks for implementing artificial neural systems able to exhibit basic cognitive abilities.
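
    As an illustration of the soft winner-take-all building block mentioned above, here is a rate-based caricature in Python; the chips implement this with spiking neurons and analog circuits, and the gains below are illustrative assumptions chosen for stability.

        import numpy as np

        # Soft WTA: excitatory units share a global inhibitory feedback signal,
        # so the contrast between inputs is enhanced and the weakest inputs are
        # suppressed, without a single hard winner being forced.
        def soft_wta(inputs, w_exc=0.5, w_inh=0.5, dt=0.05, steps=400):
            x = np.zeros_like(inputs)                     # excitatory rates
            for _ in range(steps):
                drive = inputs + w_exc * x - w_inh * x.sum()
                x += dt * (-x + np.maximum(drive, 0.0))   # rectified rate dynamics
            return x

        print(soft_wta(np.array([1.0, 0.8, 0.3])))        # ~[0.8, 0.4, 0.0]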

    The importance of space and time in neuromorphic cognitive agents

    Full text link
    Artificial neural networks and computational neuroscience models have made tremendous progress, allowing computers to achieve impressive results in artificial intelligence (AI) applications such as image recognition, natural language processing, and autonomous driving. Despite this remarkable progress, biological neural systems consume orders of magnitude less energy than today's artificial neural networks and are much more agile and adaptive. This efficiency and adaptivity gap is partially explained by the computing substrate of biological neural processing systems, which is fundamentally different from the way today's computers are built. Biological systems use in-memory computing elements operating in a massively parallel way, rather than time-multiplexed computing units that are reused in a sequential fashion. Moreover, the activity of biological neurons follows continuous-time dynamics in real, physical time, instead of operating on discrete temporal cycles abstracted away from real time. Here, we present neuromorphic processing devices that emulate the biological style of processing by using parallel instances of mixed-signal analog/digital circuits that operate in real time. We argue that this approach brings significant advantages in efficiency of computation, and we show examples of embodied neuromorphic agents that use such devices to interact with the environment and exhibit autonomous learning.
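
    The contrast between clocked cycles and continuous-time dynamics can be made concrete with the leaky integrate-and-fire equation that such mixed-signal circuits emulate in physical time; the numerical sketch below uses illustrative parameters and is, of course, itself only a discrete-time approximation of what the analog hardware solves directly.

        # Leaky integrate-and-fire: tau * dV/dt = -(V - V_rest) + R * I(t).
        # An analog neuromorphic circuit solves this equation physically, in
        # real time; here we integrate it numerically for illustration.
        tau, R = 20.0, 10.0                               # ms, MOhm (illustrative)
        v_rest, v_thresh, v_reset = -70.0, -50.0, -70.0   # mV
        dt, T = 0.1, 200.0                                # ms
        v, spikes = v_rest, []

        for step in range(int(T / dt)):
            t = step * dt
            i_in = 2.5 if 50.0 <= t < 150.0 else 0.0      # nA current pulse
            v += dt / tau * (-(v - v_rest) + R * i_in)
            if v >= v_thresh:
                spikes.append(t)
                v = v_reset

        print(f"{len(spikes)} spikes, first at t = {spikes[0]:.1f} ms")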

    Compensating Inhomogeneities of Neuromorphic VLSI Devices Via Short-Term Synaptic Plasticity

    Get PDF
    Recent developments in neuromorphic hardware engineering make mixed-signal VLSI neural network models promising candidates for neuroscientific research tools and massively parallel computing devices, especially for tasks that exhaust the computing power of software simulations. Still, like all analog hardware systems, neuromorphic models suffer from restricted configurability and production-related fluctuations of device characteristics. Since future systems involving ever-smaller structures will also inevitably exhibit such inhomogeneities at the unit level, self-regulation becomes a crucial requirement for their successful operation. By applying a cortically inspired self-adjusting network architecture, we show that the activity of generic spiking neural networks emulated on a neuromorphic hardware system can be kept within a biologically realistic firing regime and gains remarkable robustness against transistor-level variations. In a first approach of this kind in engineering practice, the short-term synaptic depression and facilitation mechanisms implemented within an analog VLSI model of integrate-and-fire (I&F) neurons are functionally utilized for network-level stabilization. We present experimental data acquired both from the hardware model and from comparative software simulations which demonstrate the applicability of the employed paradigm to neuromorphic VLSI devices.
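
    Short-term synaptic depression and facilitation of this kind are commonly described by the Tsodyks-Markram phenomenological model; the Python sketch below shows its event-driven form with illustrative parameters, whereas the chip's analog circuits realize the same behavior differently at the transistor level.

        import numpy as np

        # Each presynaptic spike releases a fraction u of the available
        # resources x; x recovers with tau_rec (depression) while u relaxes
        # toward U with tau_facil (facilitation). Efficacy ~ u * x.
        def tm_efficacies(spike_times, U=0.05, tau_rec=50.0, tau_facil=600.0):
            u, x, last_t, eff = U, 1.0, None, []
            for t in spike_times:
                if last_t is not None:
                    dt = t - last_t
                    x = 1.0 - (1.0 - x) * np.exp(-dt / tau_rec)   # recovery
                    u = U + (u - U) * np.exp(-dt / tau_facil)     # relaxation
                u = u + U * (1.0 - u)      # facilitation jump on the spike
                eff.append(u * x)          # efficacy seen by this spike
                x = x * (1.0 - u)          # resources consumed
                last_t = t
            return eff

        # A facilitating parameter set: efficacy grows over a regular 20 Hz
        # train before saturating; larger U and slower tau_rec would instead
        # give a depressing synapse from the same equations.
        print([f"{e:.3f}" for e in tm_efficacies(np.arange(0.0, 500.0, 50.0))])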

    Hardware-Amenable Structural Learning for Spike-based Pattern Classification using a Simple Model of Active Dendrites

    Full text link
    This paper presents a spike-based model which employs neurons with functionally distinct dendritic compartments for classifying high-dimensional binary patterns. The synaptic inputs arriving on each dendritic subunit are nonlinearly processed before being linearly integrated at the soma, giving the neuron the capacity to perform a large number of input-output mappings. The model utilizes sparse synaptic connectivity in which each synapse takes a binary value. The optimal connection pattern of a neuron is learned by using a simple hardware-friendly, margin-enhancing learning algorithm inspired by the mechanism of structural plasticity in biological neurons. The learning algorithm groups correlated synaptic inputs on the same dendritic branch. Since the learning results in modified connection patterns, it can be incorporated into current event-based neuromorphic systems with little overhead. This work also presents a branch-specific spike-based version of this structural plasticity rule. The proposed model is evaluated on benchmark binary classification problems and its performance is compared against that achieved using Support Vector Machine (SVM) and Extreme Learning Machine (ELM) techniques. Our proposed method attains comparable performance while utilizing 10-50% fewer computational resources than the other reported techniques. Comment: Accepted for publication in Neural Computation.
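
    A toy Python sketch of the structural-learning idea follows: binary synapses feeding nonlinear dendritic subunits, with poorly performing connections swapped for better candidates. The squaring branch nonlinearity and the correlation-based fitness score are simplified assumptions standing in for the paper's margin-enhancing, spike-based rule.

        import numpy as np

        rng = np.random.default_rng(1)
        D, K, N = 4, 8, 100    # branches, synapses per branch, input lines

        X = rng.integers(0, 2, size=(500, N)).astype(float)  # binary patterns
        y = X[:, :5].sum(axis=1) >= 3                        # toy target rule

        conn = rng.choice(N, size=(D, K), replace=False)     # binary connectivity

        def soma(conn, X):
            # Each branch sums its synaptic inputs, applies a nonlinearity
            # (squaring, an assumption), and the soma integrates linearly.
            return (X[:, conn].sum(axis=2) ** 2).sum(axis=1)

        # Structural learning: score each input line by its correlation with
        # the target, then repeatedly replace the worst connected line with
        # the best unused one (connections change, not analog weights).
        yc = y.astype(float)
        score = ((X - X.mean(axis=0)) * (yc - yc.mean())[:, None]).mean(axis=0)
        for _ in range(D * K):
            flat = conn.ravel()
            worst = flat[np.argmin(score[flat])]
            unused = np.setdiff1d(np.arange(N), flat)
            best = unused[np.argmax(score[unused])]
            if score[best] <= score[worst]:
                break
            conn[conn == worst] = best

        out = soma(conn, X)
        print("training accuracy:", ((out > np.median(out)) == y).mean())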

    Networks of spiking neurons and plastic synapses: implementation and control

    Get PDF
    The brain is an incredible system with a computational power that goes far beyond that of our standard computers. It consists of a network of about 10^11 neurons connected by some 10^14 synapses: a massively parallel architecture suggesting that the brain performs computation according to strategies completely different from those we understand today. A reasonable starting point for studying the nervous system is to model its basic units, neurons and synapses, extract their key features, and put them together in simple, controllable networks. The research group I have been working in focuses on network dynamics and models neurons and synapses at a functional level: in this work I consider networks of integrate-and-fire neurons connected through synapses that are plastic and bistable. A synapse is said to be plastic when, according to some internal dynamics, it is able to change the "strength", the efficacy, of the connection between the pre- and post-synaptic neuron. The adjective bistable refers to the number of stable efficacy states a synapse can have; we consider synapses with two stable states: potentiated (high efficacy) or depressed (low efficacy). The considered synaptic model is also endowed with a stop-learning mechanism that is particularly relevant when dealing with highly correlated patterns. The ability of such systems to reproduce in simulation behaviors observed in biological networks motivates the attempt to implement the studied network in hardware. This is where this thesis is situated: its goal is to design, control and test hybrid analog-digital, biologically inspired hardware systems that behave in agreement with theoretical and simulation predictions. This class of devices typically goes under the name of neuromorphic VLSI (Very-Large-Scale Integration). Neuromorphic engineering was born from the idea of designing bio-mimetic devices; it represents a useful research strategy that inspires new models, stimulates theoretical research, and offers an effective way of implementing stand-alone, power-efficient devices. In this work I present two chips, a prototype and a larger device, that are a step towards endowing neuromorphic VLSI systems with autonomous learning capabilities adequate for stimuli with non-trivial statistics. The main novel features of these chips are the implemented type of synaptic plasticity and the configurability of the synaptic connectivity. The reported experimental results demonstrate that the circuits behave in agreement with theoretical predictions, and they show the advantages of the stop-learning synaptic plasticity when highly correlated patterns have to be learned. The high degree of flexibility of these chips in the definition of the synaptic connectivity is relevant in the perspective of using such devices as building blocks of parallel, distributed multi-chip architectures that will allow the network dimensions to be scaled up to systems with interesting computational abilities, capable of interacting with real-world stimuli.
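
    A minimal Python sketch of a bistable, stop-learning synapse in the spirit of the models this thesis builds on: an internal variable drifts toward one of two stable states, and updates are gated off when postsynaptic activity indicates the pattern is already learned or irrelevant. All thresholds and rates are illustrative assumptions, not the chips' circuit parameters.

        import numpy as np

        def update(w, post_rate, pre_spike, dt=1.0, theta=0.5,
                   drift=0.01, jump=0.2, stop_low=5.0, stop_high=50.0):
            # Stop-learning: no jump if postsynaptic activity is too low
            # (irrelevant stimulus) or too high (already learned).
            if pre_spike and stop_low < post_rate < stop_high:
                w += jump if post_rate > 20.0 else -jump  # Hebbian up/down
            # Bistability: w drifts toward 0 or 1 depending on which side of
            # theta it sits on, so only two efficacy states persist.
            w += dt * (drift if w > theta else -drift)
            return float(np.clip(w, 0.0, 1.0))

        w = 0.3
        for _ in range(100):   # repeated pairing at a moderate post rate
            w = update(w, post_rate=30.0, pre_spike=True)
        print(f"after pairing: w = {w:.2f} (potentiated stable state)")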

    Spiking Neural Networks for Inference and Learning: A Memristor-based Design Perspective

    Get PDF
    On metrics of density and power efficiency, neuromorphic technologies have the potential to surpass mainstream computing technologies in tasks where real-time functionality, adaptability, and autonomy are essential. While algorithmic advances in neuromorphic computing are proceeding successfully, the potential of memristors to improve neuromorphic computing has not yet borne fruit, primarily because they are often used as a drop-in replacement for conventional memory. However, interdisciplinary approaches anchored in machine learning theory suggest that multifactor plasticity rules matching neural and synaptic dynamics to the device capabilities can take better advantage of memristor dynamics and stochasticity. Furthermore, such plasticity rules generally show much higher performance than classical Spike-Timing-Dependent Plasticity (STDP) rules. This chapter reviews recent developments in learning with spiking neural network models and their possible implementation with memristor-based hardware.
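
    To illustrate the distinction drawn above, the Python sketch below contrasts a pair-based STDP kernel with a generic three-factor rule in which pre/post coincidences only build an eligibility trace and a task-level error signal gates the actual weight change; all time constants and rates are illustrative assumptions.

        import numpy as np

        # Pair-based STDP: the update depends only on relative spike timing.
        def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau=20.0):
            if dt_ms > 0:                                  # pre before post: LTP
                return a_plus * np.exp(-dt_ms / tau)
            return -a_minus * np.exp(dt_ms / tau)          # post before pre: LTD

        # Three-factor rule: coincidences mark an eligibility trace; a third,
        # error/reward-like factor decides whether the trace becomes a weight
        # change, which is where noisy device dynamics can be tolerated.
        def three_factor_step(w, elig, pre_trace, post_spike, error,
                              lr=0.001, tau_e=100.0, dt=1.0):
            elig += -dt / tau_e * elig + pre_trace * post_spike
            return w + lr * error * elig, elig

        w, elig, pre_trace = 0.5, 0.0, 0.0
        for t in range(200):
            pre_trace = pre_trace * np.exp(-1.0 / 20.0) + (t % 20 == 0)
            post = 1.0 if t % 20 == 1 else 0.0   # post fires just after pre
            error = 1.0 if t < 100 else 0.0      # third factor, early only
            w, elig = three_factor_step(w, elig, pre_trace, post, error)
        print(f"stdp_dw(+5 ms) = {stdp_dw(5.0):.4f}, w = {w:.3f}")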