
    Six networks on a universal neuromorphic computing substrate

    In this study, we present a highly configurable neuromorphic computing substrate and use it for emulating several types of neural networks. At the heart of this system lies a mixed-signal chip, with analog implementations of neurons and synapses and digital transmission of action potentials. Major advantages of this emulation device, which has been explicitly designed as a universal neural network emulator, are its inherent parallelism and high acceleration factor compared to conventional computers. Its configurability allows the realization of almost arbitrary network topologies and the use of widely varied neuronal and synaptic parameters. Fixed-pattern noise inherent to analog circuitry is reduced by calibration routines. An integrated development environment allows neuroscientists to operate the device without any prior knowledge of neuromorphic circuit design. As a showcase for the capabilities of the system, we describe the successful emulation of six different neural networks which cover a broad spectrum of both structure and functionality.
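    Systems in this line of work are typically scripted through the PyNN network-description API, which is what allows users without circuit-level knowledge to operate the device. The following is a minimal sketch of that style of use, assuming a PyNN-compatible backend; the software simulator `pyNN.nest` is used here only as a stand-in, since the abstract does not name the hardware backend module.

```python
import pyNN.nest as sim  # stand-in backend; the hardware would expose its own PyNN module

sim.setup(timestep=0.1)  # ms

# A small population of leaky integrate-and-fire neurons with conductance-based synapses.
neurons = sim.Population(64, sim.IF_cond_exp(tau_m=20.0, v_thresh=-55.0, v_reset=-70.0))

# Poisson background input, connected all-to-all with fixed synaptic weights.
noise = sim.Population(16, sim.SpikeSourcePoisson(rate=20.0))
sim.Projection(noise, neurons, sim.AllToAllConnector(),
               sim.StaticSynapse(weight=0.002, delay=1.0))

neurons.record("spikes")
sim.run(1000.0)  # biological milliseconds; an accelerated substrate emulates this far faster
spikes = neurons.get_data()
sim.end()
```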

    NimbleAI: towards neuromorphic sensing-processing 3D-integrated chips

    The NimbleAI Horizon Europe project leverages key principles of energy-efficient visual sensing and processing in biological eyes and brains, and harnesses the latest advances in 3D stacked silicon integration, to create an integral sensing-processing neuromorphic architecture that efficiently and accurately runs computer vision algorithms in area-constrained endpoint chips. The rationale behind the NimbleAI architecture is: sense only data with high information value and discard data as soon as they are found not to be useful for the application (in a given context). The NimbleAI sensing-processing architecture is to be specialized after deployment by tuning system-level trade-offs for each particular computer vision algorithm and deployment environment. The objectives of NimbleAI are: (1) 100x performance per mW gains compared to state-of-the-practice solutions (i.e., CPUs/GPUs processing frame-based video); (2) 50x processing latency reduction compared to CPUs/GPUs; (3) energy consumption on the order of tens of mW; and (4) silicon area of approx. 50 mm². NimbleAI has received funding from the EU’s Horizon Europe Research and Innovation programme (Grant Agreement 101070679) and from UK Research and Innovation (UKRI) under the UK government’s Horizon Europe funding guarantee (Grant Agreement 10039070). Peer reviewed.
    Article signed by 49 authors: Xabier Iturbe, IKERLAN, Basque Country (Spain); Nassim Abderrahmane, MENTA, France; Jaume Abella, Barcelona Supercomputing Center (BSC), Catalonia, Spain; Sergi Alcaide, Barcelona Supercomputing Center (BSC), Catalonia, Spain; Eric Beyne, IMEC, Belgium; Henri-Pierre Charles, CEA-LIST, University Grenoble Alpes, France; Christelle Charpin-Nicolle, CEA-LETI, Univ. Grenoble Alpes, France; Lars Chittka, Queen Mary University of London, UK; Angélica Dávila, IKERLAN, Basque Country (Spain); Arne Erdmann, Raytrix, Germany; Carles Estrada, IKERLAN, Basque Country (Spain); Ander Fernández, IKERLAN, Basque Country (Spain); Anna Fontanelli, Monozukuri (MZ Technologies), Italy; José Flich, Universitat Politecnica de Valencia, Spain; Gianluca Furano, ESA ESTEC, Netherlands; Alejandro Hernán Gloriani, Viewpointsystem, Austria; Erik Isusquiza, ULMA Medical Technologies, Basque Country (Spain); Radu Grosu, TU Wien, Austria; Carles Hernández, Universitat Politecnica de Valencia, Spain; Daniele Ielmini, Politecnico Milano, Italy; David Jackson, University of Manchester, UK; Maha Kooli, CEA-LIST, University Grenoble Alpes, France; Nicola Lepri, Politecnico Milano, Italy; Bernabé Linares-Barranco, CSIC, Spain; Jean-Loup Lachese, MENTA, France; Eric Laurent, MENTA, France; Menno Lindwer, GrAI Matter Labs (GML), Netherlands; Frank Linsenmaier, Viewpointsystem, Austria; Mikel Luján, University of Manchester, UK; Karel Masařík, CODASIP, Czech Republic; Nele Mentens, Universiteit Leiden, Netherlands; Orlando Moreira, GrAI Matter Labs (GML), Netherlands; Chinmay Nawghane, IMEC, Belgium; Luca Peres, University of Manchester, UK; Jean-Philippe Noel, CEA-LIST, University Grenoble Alpes, France; Arash Pourtaherian, GrAI Matter Labs (GML), Netherlands; Christoph Posch, PROPHESEE, France; Peter Priller, AVL List, Austria; Zdenek Prikryl, CODASIP, Czech Republic; Felix Resch, TU Wien, Austria; Oliver Rhodes, University of Manchester, UK; Todor Stefanov, Universiteit Leiden, Netherlands; Moritz Storring, IMEC, Belgium; Michele Taliercio, Monozukuri (MZ Technologies), Italy; Rafael Tornero, Universitat Politecnica de Valencia, Spain; Marcel van de Burgwal, IMEC, Belgium; Geert van der Plas, IMEC, Belgium; Elisa Vianello, CEA-LETI, Univ. Grenoble Alpes, France; Pavel Zaykov, CODASIP, Czech Republic.
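    The "sense only data with high information value" rationale is closely related to event-based change detection, where a pixel produces output only when its (log-)intensity changes by more than a threshold. The sketch below illustrates that general principle on ordinary frames; it is an illustration of the idea only, not of the NimbleAI architecture, and the threshold and image sizes are arbitrary.

```python
import numpy as np

def change_events(prev_frame, frame, threshold=0.15):
    """Return sparse (row, col, polarity) events where log-intensity changed
    by more than `threshold`; unchanged pixels produce no data at all."""
    delta = np.log1p(frame.astype(float)) - np.log1p(prev_frame.astype(float))
    rows, cols = np.nonzero(np.abs(delta) > threshold)
    polarity = np.sign(delta[rows, cols]).astype(int)
    return np.stack([rows, cols, polarity], axis=1)

rng = np.random.default_rng(0)
f0 = rng.integers(0, 255, size=(120, 160))
f1 = f0.copy()
f1[40:60, 50:90] = 255          # only a small region of the scene actually changes
events = change_events(f0, f1)
print(f"{events.shape[0]} events generated from {f1.size} pixels")
```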

    Synthesizing cognition in neuromorphic electronic systems

    The quest to implement intelligent processing in electronic neuromorphic systems lacks methods for achieving reliable behavioral dynamics on substrates of inherently imprecise and noisy neurons. Here we report a solution to this problem that involves first mapping an unreliable hardware layer of spiking silicon neurons into an abstract computational layer composed of generic reliable subnetworks of model neurons and then composing the target behavioral dynamics as a “soft state machine” running on these reliable subnets. In the first step, the neural networks of the abstract layer are realized on the hardware substrate by mapping the neuron circuit bias voltages to the model parameters. This mapping is obtained by an automatic method in which the electronic circuit biases are calibrated against the model parameters by a series of population activity measurements. The abstract computational layer is formed by configuring neural networks as generic soft winner-take-all subnetworks that provide reliable processing by virtue of their active gain, signal restoration, and multistability. The necessary states and transitions of the desired high-level behavior are then easily embedded in the computational layer by introducing only sparse connections between some neurons of the various subnets. We demonstrate this synthesis method for a neuromorphic sensory agent that performs real-time context-dependent classification of motion patterns observed by a silicon retina.
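    As a rough illustration of why a soft winner-take-all subnetwork provides active gain, signal restoration, and multistability, the following is a minimal rate-based sketch of the principle (not the spiking hardware implementation described above, and with arbitrary gain parameters): recurrent self-excitation amplifies the strongest input while a shared inhibitory signal suppresses the others.

```python
import numpy as np

def soft_wta(inputs, steps=500, dt=0.2, w_exc=1.2, w_inh=1.0):
    """Rate-based soft winner-take-all: each unit excites itself and all units
    feed a shared inhibitory signal. The largest input is amplified (active
    gain / signal restoration) while the rest are driven towards zero."""
    x = np.zeros_like(inputs, dtype=float)
    for _ in range(steps):
        drive = inputs + w_exc * x - w_inh * x.sum()
        x += dt * (-x + np.maximum(drive, 0.0))
    return x

print(soft_wta(np.array([0.9, 1.0, 0.8])))   # the middle unit wins and is amplified
```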

    Domain-Specific Computing Architectures and Paradigms

    We live in an exciting era where artificial intelligence (AI) is fundamentally shifting the dynamics of industries and businesses around the world. AI algorithms such as deep learning (DL) have drastically advanced the state of the art in cognition and learning capabilities. However, the power of modern AI algorithms can only be enabled if the underlying domain-specific computing hardware can deliver orders of magnitude more performance and energy efficiency. This work focuses on this goal and explores three parts of the domain-specific computing acceleration problem, encapsulating specialized hardware and software architectures and paradigms that support the ever-growing processing demand of modern AI applications from the edge to the cloud. The first part of this work investigates the optimizations of a sparse spatio-temporal (ST) cognitive system-on-a-chip (SoC). This design extracts ST features from videos and leverages sparse inference and kernel compression to efficiently perform action classification and motion tracking. The second part of this work explores the significance of dataflows and reduction mechanisms for sparse deep neural network (DNN) acceleration. This design features a dynamic, look-ahead index matching unit in hardware to efficiently discover fine-grained parallelism, achieving high energy efficiency and low control complexity for a wide variety of DNN layers. Lastly, this work expands the scope to real-time machine learning (RTML) acceleration. A new high-level architecture modeling framework is proposed. Specifically, this framework consists of a set of high-performance RTML-specific architecture design templates and a Python-based high-level modeling and compiler tool chain for efficient cross-stack architecture design and exploration. PhD thesis, Electrical and Computer Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/162870/1/lchingen_1.pd
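    The "index matching" mentioned for sparse DNN acceleration refers to pairing up the non-zero weights and activations that share a coordinate, so that only useful multiplies are issued. The following is a scalar software sketch of that idea; the thesis describes a parallel, look-ahead hardware unit, and none of that parallelism is captured here.

```python
def sparse_dot(activations, weights):
    """Compressed-format dot product: both operands are lists of
    (index, value) pairs sorted by index; only matching indices multiply."""
    i = j = 0
    total = 0.0
    while i < len(activations) and j < len(weights):
        a_idx, a_val = activations[i]
        w_idx, w_val = weights[j]
        if a_idx == w_idx:          # match found: issue a multiply-accumulate
            total += a_val * w_val
            i += 1
            j += 1
        elif a_idx < w_idx:         # no partner: skip without doing any work
            i += 1
        else:
            j += 1
    return total

print(sparse_dot([(0, 1.5), (3, -2.0), (7, 0.5)], [(3, 4.0), (5, 1.0), (7, 2.0)]))  # -7.0
```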

    Characterization and Compensation of Network-Level Anomalies in Mixed-Signal Neuromorphic Modeling Platforms

    Advancing the size and complexity of neural network models leads to an ever-increasing demand for computational resources for their simulation. Neuromorphic devices offer a number of advantages over conventional computing architectures, such as high emulation speed or low power consumption, but this usually comes at the price of reduced configurability and precision. In this article, we investigate the consequences of several such factors that are common to neuromorphic devices, more specifically limited hardware resources, limited parameter configurability and parameter variations. Our final aim is to provide an array of methods for coping with such inevitable distortion mechanisms. As a platform for testing our proposed strategies, we use an executable system specification (ESS) of the BrainScaleS neuromorphic system, which has been designed as a universal emulation back-end for neuroscientific modeling. We address the most essential limitations of this device in detail and study their effects on three prototypical benchmark network models within a well-defined, systematic workflow. For each network model, we start by defining quantifiable functionality measures by which we then assess the effects of typical hardware-specific distortion mechanisms, both in idealized software simulations and on the ESS. For those effects that cause unacceptable deviations from the original network dynamics, we suggest generic compensation mechanisms and demonstrate their effectiveness. Both the suggested workflow and the investigated compensation mechanisms are largely back-end independent and do not require additional hardware configurability beyond the one required to emulate the benchmark networks in the first place. We hereby provide a generic methodological environment for configurable neuromorphic devices that are targeted at emulating large-scale, functional neural networks.
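    As a concrete toy example of the kind of distortion and compensation being discussed (the numbers and the specific mechanism below are illustrative assumptions, not those studied in the article): limited weight resolution and fixed-pattern variation change a network's effective drive, and a generic compensation can rescale the programmed parameters until a chosen functionality measure matches the ideal specification again.

```python
import numpy as np

rng = np.random.default_rng(42)

# Ideal synaptic weights of a small feed-forward projection (arbitrary units).
w_ideal = rng.uniform(0.5, 1.5, size=(100, 100))

# Toy hardware-like distortions: 4-bit weight resolution plus multiplicative
# fixed-pattern variation (20% spread) -- both values are illustrative only.
levels, w_max = 15, w_ideal.max()
w_discrete = np.round(w_ideal / w_max * levels) / levels * w_max
w_hw = w_discrete * rng.normal(1.0, 0.2, size=w_ideal.shape)

# Functionality measure: mean summed input drive per target neuron.
target = w_ideal.sum(axis=0).mean()
distorted = w_hw.sum(axis=0).mean()

# Generic compensation: rescale programmed weights so the measure matches again.
w_comp = w_hw * (target / distorted)
print(f"ideal {target:.2f}, distorted {distorted:.2f}, "
      f"compensated {w_comp.sum(axis=0).mean():.2f}")
```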

    Neuromorphic analogue VLSI

    Neuromorphic systems emulate the organization and function of nervous systems. They are usually composed of analogue electronic circuits that are fabricated in the complementary metal-oxide-semiconductor (CMOS) medium using very large-scale integration (VLSI) technology. However, these neuromorphic systems are not another kind of digital computer in which abstract neural networks are simulated symbolically in terms of their mathematical behavior. Instead, they directly embody, in the physics of their CMOS circuits, analogues of the physical processes that underlie the computations of neural systems. The significance of neuromorphic systems is that they offer a method of exploring neural computation in a medium whose physical behavior is analogous to that of biological nervous systems and that operates in real time irrespective of size. The implications of this approach are both scientific and practical. The study of neuromorphic systems provides a bridge between levels of understanding. For example, it provides a link between the physical processes of neurons and their computational significance. In addition, the synthesis of neuromorphic systems transposes our knowledge of neuroscience into practical devices that can interact directly with the real world in the same way that biological nervous systems do.
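    A frequently cited physical analogy from the analog VLSI literature (a standard result, not stated in this abstract) is the exponential dependence of a MOS transistor's subthreshold drain current on its terminal voltages, which mirrors the exponential voltage dependence of ion-channel conductances:

$$ I_{ds} \;\approx\; I_0 \, e^{\kappa V_g / U_T} \left( e^{-V_s / U_T} - e^{-V_d / U_T} \right), $$

    where the terminal voltages are referenced to the bulk, $\kappa$ is the gate coupling coefficient, and $U_T = kT/q \approx 25\,\mathrm{mV}$ is the thermal voltage at room temperature. Operating in this exponential regime is what lets a handful of transistors embody neuron-like dynamics directly in the physics of the circuit.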

    Review of recent advances in frequency-domain near-infrared spectroscopy technologies [Invited]

    Over the past several decades, near-infrared spectroscopy (NIRS) has become a popular research and clinical tool for non-invasively measuring the oxygenation of biological tissues, with particular emphasis on applications to the human brain. In most cases, NIRS studies are performed using continuous-wave NIRS (CW-NIRS), which can only provide information on relative changes in chromophore concentrations, such as oxygenated and deoxygenated hemoglobin, as well as estimates of tissue oxygen saturation. Another type of NIRS known as frequency-domain NIRS (FD-NIRS) has significant advantages: it can directly measure optical pathlength and thus quantify the scattering and absorption coefficients of sampled tissues and provide direct measurements of absolute chromophore concentrations. This review describes the current status of FD-NIRS technologies, their performance, their advantages, and their limitations as compared to other NIRS methods. Significant landmarks of technological progress include the development of both benchtop and portable/wearable FD-NIRS technologies, sensitive front-end photonic components, and high-frequency phase measurements. Clinical applications of FD-NIRS technologies are discussed to provide context on current applications and needed areas of improvement. The review concludes by providing a roadmap toward the next generation of fully wearable, low-cost FD-NIRS systems.
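    For context on why CW-NIRS yields only relative concentration changes: the usual CW analysis applies the modified Beer-Lambert law with an assumed differential pathlength factor (DPF), whereas FD-NIRS can measure the pathlength itself. The sketch below solves the two-wavelength modified Beer-Lambert system; every numeric value in it is an illustrative placeholder, not a value taken from the review.

```python
import numpy as np

# Modified Beer-Lambert law:
#   delta_OD(lambda) = sum_i eps_i(lambda) * delta_c_i * d * DPF(lambda)
# Placeholder numbers only: the extinction coefficients, DPFs, and measured
# optical-density changes below are illustrative, not literature values.
eps = np.array([[1.5, 3.8],    # [HbO2, HbR] extinction coefficients at wavelength 1
                [2.5, 1.8]])   # ... and at wavelength 2 (arbitrary units)
d = 3.0                        # source-detector separation (cm)
dpf = np.array([6.0, 5.0])     # assumed differential pathlength factors

delta_od = np.array([0.012, 0.018])   # measured optical-density changes

# Solve the 2x2 linear system for the chromophore concentration changes.
A = eps * (d * dpf)[:, None]
delta_c = np.linalg.solve(A, delta_od)
print("delta[HbO2], delta[HbR] =", delta_c)
```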