
    Opening the “Black Box” of Silicon Chip Design in Neuromorphic Computing

    Neuromorphic computing, a bio-inspired computing architecture that maps neuroscience principles onto silicon chips, has the potential to reach the computational capability and energy efficiency of mammalian brains. Meanwhile, three-dimensional (3D) integrated circuit (IC) design with non-volatile memory crossbar arrays is uniquely suited to neuromorphic designs, since a crossbar intrinsically performs vector-matrix computation with parallel computing capability. This chapter introduces state-of-the-art research trends in electronic circuit design for neuromorphic computing, and then discusses a practical bio-inspired spiking neural network with a delay-feedback topology. In the endeavor to imitate how human beings process information, our fabricated spiking neural network chip can process analog signals directly, resulting in high energy efficiency at a small hardware implementation cost. Mimicking the neurological structure of mammalian brains, the potential of 3D-IC implementation with memristive synapses is investigated. Finally, applications to chaotic time-series prediction and video frame recognition are demonstrated.
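    The vector-matrix computation attributed to the crossbar above follows from Ohm's and Kirchhoff's laws: input voltages applied to the rows are weighted by the cross-point conductances, and each column current sums the products. Below is a minimal numerical sketch of this principle; the array sizes and value ranges are illustrative assumptions, not the fabricated chip's parameters.

```python
import numpy as np

# Illustrative sketch (not the fabricated chip's design): a memristive
# crossbar computes a vector-matrix product in a single step. Input
# voltages drive the rows; each cross-point conductance G[i, j] acts as
# a synaptic weight; the current collected on column j is the dot
# product of the input vector with column j (Kirchhoff's current law).

rng = np.random.default_rng(0)

n_rows, n_cols = 4, 3                           # 4 inputs, 3 output neurons
G = rng.uniform(1e-6, 1e-4, (n_rows, n_cols))   # conductances in siemens (assumed range)
v = rng.uniform(0.0, 0.5, n_rows)               # input voltages in volts (assumed range)

# Column currents: I_j = sum_i V_i * G[i, j]
I = v @ G

print("column currents (A):", I)
```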

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as low-latency, high-speed, and high-dynamic-range settings. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
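    The working principle summarized above admits a compact illustration: a pixel emits an event whenever its log-brightness has changed by more than a contrast threshold since that pixel's last event. The following is a minimal simulation sketch of that model; the threshold value and synthetic brightness data are assumptions for illustration, and real event cameras are asynchronous and frame-free.

```python
import numpy as np

# Minimal sketch of the event-generation model: a pixel fires an event
# (t, x, y, polarity) whenever its log-brightness changes by more than a
# contrast threshold C since the pixel's last event. Frame-sampled input
# is used here purely for illustration.

C = 0.2                                    # contrast threshold (assumed)
rng = np.random.default_rng(1)
# Synthetic positive brightness signal: a slow random walk per pixel.
frames = np.cumsum(rng.normal(0, 0.1, (50, 8, 8)), axis=0) + 5.0

log_ref = np.log(frames[0])                # per-pixel log-brightness reference
events = []                                # list of (t, x, y, polarity)
for t in range(1, len(frames)):
    diff = np.log(frames[t]) - log_ref
    fired = np.abs(diff) >= C
    for x, y in zip(*np.nonzero(fired)):
        events.append((t, x, y, int(np.sign(diff[x, y]))))
    log_ref[fired] = np.log(frames[t])[fired]  # reset reference where fired

print(f"{len(events)} events, first few: {events[:3]}")
```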

    NimbleAI: towards neuromorphic sensing-processing 3D-integrated chips

    The NimbleAI Horizon Europe project leverages key principles of energy-efficient visual sensing and processing in biological eyes and brains, and harnesses the latest advances in 3D stacked silicon integration, to create an integral sensing-processing neuromorphic architecture that efficiently and accurately runs computer vision algorithms in area-constrained endpoint chips. The rationale behind the NimbleAI architecture is to sense only data with high information value and to discard data as soon as they are found not to be useful for the application in a given context. The NimbleAI sensing-processing architecture is to be specialized after deployment by tuning system-level trade-offs for each particular computer vision algorithm and deployment environment. The objectives of NimbleAI are: (1) 100x performance per mW gains compared to state-of-the-practice solutions (i.e., CPUs/GPUs processing frame-based video); (2) 50x processing latency reduction compared to CPUs/GPUs; (3) energy consumption on the order of tens of mW; and (4) silicon area of approximately 50 mm². NimbleAI has received funding from the EU's Horizon Europe Research and Innovation programme (Grant Agreement 101070679) and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee (Grant Agreement 10039070).
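    The sense-only-high-value-data rationale can be pictured as a two-stage pipeline in which a cheap activity check gates whether data ever reach the expensive vision algorithm. The sketch below is a schematic of the principle only, not the NimbleAI architecture; the activity proxy, threshold, and data are invented for illustration.

```python
import numpy as np

# Toy schematic of "sense only high-value data, discard early" (not the
# NimbleAI architecture): a cheap activity check gates each patch, and
# only patches that pass are handed to the costly downstream algorithm.

def activity(patch: np.ndarray) -> float:
    """Cheap proxy for information value: mean absolute signal."""
    return float(np.mean(np.abs(patch)))

def expensive_vision(patch: np.ndarray) -> float:
    """Stand-in for the costly algorithm we want to run rarely."""
    return float(patch.sum())

rng = np.random.default_rng(2)
# 100 patches; about 90% carry no signal and should be discarded early.
patches = rng.normal(0, 1, (100, 16, 16)) * (rng.random(100) < 0.1)[:, None, None]

THRESH = 0.1                                   # gating threshold (assumed)
kept = [p for p in patches if activity(p) > THRESH]
results = [expensive_vision(p) for p in kept]
print(f"processed {len(kept)} of {len(patches)} patches")
```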

    Quantized Neural Networks and Neuromorphic Computing for Embedded Systems

    Deep learning techniques have achieved great success in areas such as computer vision, speech recognition, and natural language processing, and these breakthroughs are changing every aspect of our lives. However, deep learning has not realized its full potential in embedded systems such as mobile devices and vehicles, because its high performance comes at the cost of high computational and energy demands, while such systems have very limited computation resources and power budgets. This makes deploying deep learning models in embedded systems very challenging. Extensive research on this problem has been conducted and considerable progress has been made. In this book chapter, we introduce two approaches. The first is model compression, one of the most popular approaches proposed in recent years. The second is neuromorphic computing, a novel computing paradigm that mimics the human brain.
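    To make the model-compression approach concrete, the sketch below shows one common variant, symmetric per-tensor quantization of float32 weights to int8. The bit width, scaling scheme, and tensor shape are illustrative assumptions; production toolchains add calibration, per-channel scales, and quantization-aware training.

```python
import numpy as np

# Symmetric per-tensor quantization of float32 weights to int8: cuts
# weight memory 4x and enables integer arithmetic on embedded hardware.

def quantize(w: np.ndarray, num_bits: int = 8):
    qmax = 2 ** (num_bits - 1) - 1             # 127 for int8
    scale = np.max(np.abs(w)) / qmax           # single scale for the tensor
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(3)
w = rng.normal(0, 0.05, (64, 64)).astype(np.float32)  # toy weight tensor
q, scale = quantize(w)
err = np.max(np.abs(w - dequantize(q, scale)))
print(f"scale={scale:.6f}, max abs reconstruction error={err:.6f}")
```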

    The Integration of Neuromorphic Computing in Autonomous Robotic Systems

    Deep Neural Networks (DNNs) have made great strides on many cognitive tasks by training on large, labeled datasets. However, this approach struggles where data and energy are limited, such as in planetary robotics or edge computing [1]. In contrast to this data-heavy approach, animals demonstrate an innate ability to learn by interacting with their environment and forming associative memories among events and entities, a process known as associative learning [2-4]. For instance, rats in a T-maze learn to associate different stimuli with outcomes through exploration, without needing labeled data [5]. This learning paradigm is crucial to overcoming the challenges deep learning faces in environments where data and energy are limited. Taking inspiration from this natural learning process, recent advancements [6, 7] have implemented associative learning in artificial systems. This work introduces a pioneering approach that integrates associative learning on an Unmanned Ground Vehicle (UGV) with neuromorphic hardware, specifically the XyloA2TestBoard from SynSense, to enable online learning scenarios. The system reproduces standard associative learning, like the spatial and memory learning observed in rats in a T-maze environment, without any pretraining or labeled datasets. The UGV, akin to a rat in a T-maze, autonomously learns cause-and-effect relationships between stimuli, such as visual cues and vibration, or audio and visual cues, and demonstrates the learned responses through movement. The neuromorphic robot, equipped with SynSense's neuromorphic chip, processes audio signals with a specialized Spiking Neural Network (SNN) and neural assembly, employing the Hebbian learning rule to adjust synaptic weights throughout the learning period. The XyloA2TestBoard draws little power (17.96 µW on average for the logic and Analog Front End (AFE), and 213.94 µW for the IO circuitry), showing that neuromorphic chips can operate well in energy-limited settings and offering a promising direction for advancing associative learning in artificial systems.
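    The Hebbian rule underpinning the associative learning above can be sketched in a few lines: a weight grows when pre- and post-synaptic activity coincide (dw = eta * pre * post), so pairing a conditioned stimulus with one that already drives the response eventually lets the conditioned stimulus trigger the response on its own. The toy loop below illustrates the principle only; it is not the XyloA2TestBoard's SNN, and the learning rate, threshold, and trial count are assumptions.

```python
# Toy Hebbian associative conditioning (not the XyloA2TestBoard's SNN):
# a conditioned stimulus (CS, e.g. an audio cue) is paired with an
# unconditioned stimulus (US, e.g. vibration) that always drives the
# response neuron. The rule dw = eta * pre * post strengthens the
# CS->response weight until the CS alone triggers the response.

eta = 0.1          # learning rate (assumed)
w_cs, w_us = 0.0, 1.0   # CS synapse starts unlearned; US synapse is innate
threshold = 0.5    # response-neuron firing threshold (assumed)

for trial in range(20):                     # paired CS + US presentations
    cs, us = 1.0, 1.0
    post = 1.0 if cs * w_cs + us * w_us >= threshold else 0.0
    w_cs += eta * cs * post                 # Hebbian update on the CS synapse
    # (A real system would bound or normalize w_cs; omitted in this toy.)

# Test: present the CS alone, without the US.
responds = 1.0 * w_cs >= threshold
print(f"learned CS weight={w_cs:.2f}, responds to CS alone: {responds}")
```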

    2022 roadmap on neuromorphic computing and engineering

    Modern computation based on von Neumann architecture is now a mature cutting-edge science. In the von Neumann architecture, processing and memory units are implemented as separate blocks interchanging data intensively and continuously. This data transfer is responsible for a large part of the power consumption. The next generation computer technology is expected to solve problems at the exascale with 10^18 calculations each second. Even though these future computers will be incredibly powerful, if they are based on von Neumann type architectures, they will consume between 20 and 30 megawatts of power and will not have intrinsic physically built-in capabilities to learn or deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to be used for the storage and processing of large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving the control from data centers to edge devices. The aim of this roadmap is to present a snapshot of the present state of neuromorphic technology and provide an opinion on the challenges and opportunities that the future holds in the major areas of neuromorphic technology, namely materials, devices, neuromorphic circuits, neuromorphic algorithms, applications, and ethics. The roadmap is a collection of perspectives where leading researchers in the neuromorphic community provide their own view about the current state and the future challenges for each research area. We hope that this roadmap will be a useful resource by providing a concise yet comprehensive introduction to readers outside this field, for those who are just entering the field, as well as providing future perspectives for those who are well established in the neuromorphic computing community.