
    Exploration of rank order coding with spiking neural networks for speech recognition

    Speech recognition is very difficult in the context of noisy and corrupted speech. Most conventional techniques need huge databases to estimate speech (or noise) probability densities in order to perform recognition. We discuss the potential of perceptive speech analysis and processing in combination with biologically plausible neural network processors, and we illustrate such non-linear processing of speech by means of a preliminary test: the recognition of French spoken digits from a small speech database.
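    As a rough illustration of the rank order coding referenced above, the sketch below encodes an input frame by the order in which its channels fire and weights earlier spikes more heavily; it is a minimal sketch, not the paper's implementation, and the channel values, modulation factor, and readout weights are made-up placeholders.

```python
import numpy as np

def rank_order_encode(frame):
    """Encode an analog frame as a spike order: stronger channels fire first,
    so only the firing order (not exact timing or magnitude) carries information."""
    return np.argsort(-frame)

def rank_order_response(spike_order, weights, modulation=0.9):
    """Accumulate a readout neuron's activation from a rank-ordered spike train.
    Each later spike is attenuated by modulation**rank, so early spikes dominate."""
    activation = 0.0
    for rank, channel in enumerate(spike_order):
        activation += (modulation ** rank) * weights[channel]
    return activation

# Toy example: a 5-channel "spectral frame" of a spoken digit (values made up).
frame = np.array([0.2, 0.9, 0.4, 0.7, 0.1])
order = rank_order_encode(frame)                 # array([1, 3, 2, 0, 4])
weights = np.array([0.1, 0.8, 0.3, 0.6, 0.2])    # hypothetical learned weights
print(order, rank_order_response(order, weights))
```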

    The Roadmap to Realize Memristive Three-Dimensional Neuromorphic Computing System

    Neuromorphic computing, an emerging non-von Neumann paradigm that mimics the physical structure and signal-processing techniques of mammalian brains, has the potential to reach the computing and power efficiency of those brains. This chapter discusses the state-of-the-art research trends in neuromorphic computing with memristors as electronic synapses. Furthermore, it introduces a novel three-dimensional (3D) neuromorphic computing architecture that combines memristors with monolithic 3D integration technology; such an architecture can reduce system power consumption, provide high connectivity, resolve routing congestion issues, and offer massively parallel data processing. Finally, the chapter discusses a design methodology that uses the capacitance formed by through-silicon vias (TSVs) to generate the membrane potential in a 3D neuromorphic computing system.
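    The idea of letting a TSV capacitance build up a membrane potential can be sketched as a standard leaky integrate-and-fire update; the capacitance, leak resistance, threshold, and input current below are assumed values for illustration, not the chapter's design parameters.

```python
# Hypothetical parameters, not taken from the chapter.
C_TSV = 50e-15    # TSV capacitance used as the membrane capacitance (50 fF)
R_LEAK = 1e9      # leak resistance (1 GOhm), membrane time constant ~50 us
V_TH = 0.5        # firing threshold (V)
DT = 1e-6         # simulation time step (1 us)

def lif_step(v, i_syn):
    """One Euler step of C*dV/dt = I_syn - V/R: the TSV capacitance integrates
    synaptic current into a membrane potential, which resets after a spike."""
    v += (i_syn - v / R_LEAK) * DT / C_TSV
    if v >= V_TH:
        return 0.0, True
    return v, False

v, n_spikes = 0.0, 0
for t in range(1000):
    i_syn = 1e-9 if 200 <= t < 800 else 0.0   # 1 nA input current pulse
    v, fired = lif_step(v, i_syn)
    n_spikes += fired
print(f"final membrane potential: {v:.3f} V, spikes emitted: {n_spikes}")
```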

    Trajectory Prediction with Event-Based Cameras for Robotics Applications

    This thesis presents the study, analysis, and implementation of a framework for trajectory prediction using an event-based camera in robotics applications. Event-based perception is a novel computation paradigm based on unconventional sensing technology that promises data acquisition, transmission, and processing at very low latency and power consumption, both crucial for the future of robotics. An event-based camera, in particular, is a sensor that responds to light changes in the scene, producing an asynchronous and sparse output over a wide illumination dynamic range. It captures only relevant spatio-temporal information, mostly driven by motion, at high rate, avoiding the inherent redundancy of static areas of the field of view. For these reasons, such a device is a potential key tool for robots that must operate in highly dynamic and/or rapidly changing scenarios, or where resource optimisation is fundamental, as in robots with on-board systems. Prediction is a skill humans rely on daily, even unconsciously, for instance when driving, playing sports, or collaborating with other people. In the same way, predicting the trajectory or end-point of a moving target allows a robot to plan appropriate actions and their timing in advance and to interact with the target in many different ways. Prediction also helps compensate for the robot's internal delays in the perception-action chain, due for instance to limited sensors and/or actuators. The question addressed in this work is whether event-based cameras are advantageous for trajectory prediction in robotics: in particular, whether the classical deep learning architectures used for this task can accommodate event-based data while working asynchronously, and what benefit they bring with respect to standard cameras. The a priori hypothesis is that, since the sampling of the scene is driven by motion, such a device allows more meaningful information acquisition, improving prediction accuracy and processing data only when needed, without information loss or redundant acquisition. To test this hypothesis, experiments are mostly carried out with the neuromorphic iCub, a custom version of the iCub humanoid platform that mounts two event-based cameras in the eyeballs alongside standard RGB cameras. To further motivate the work on iCub, a preliminary step evaluates the robot's internal delays, a value that the prediction should compensate for in order to interact in real time with the perceived object. The first part of this thesis implements the event-based prediction framework, answering the question of whether Long Short-Term Memory neural networks, the architecture used in this work, can be combined with event-based cameras. The task considered is a handover human-robot interaction, during which the trajectory of the object in the human's hand must be inferred. Results show that the proposed pipeline predicts both the spatial and temporal coordinates of the incoming trajectory with higher accuracy than model-based regression methods, and that it exhibits fast recovery from failure cases and an adaptive prediction horizon. Subsequently, I investigated how advantageous the event-based sampling approach is with respect to the classical fixed-rate approach. The test case is the trajectory prediction of a bouncing ball, implemented with the pipeline introduced previously. The two sampling methods are compared in terms of error at different working rates, showing how the spatial sampling of the event-based approach achieves lower error and adapts the computational load dynamically, depending on the motion in the scene. Results from both works show that combining event-based data with Long Short-Term Memory networks is promising for spatio-temporal feature prediction in highly dynamic tasks, and they pave the way to further studies on the temporal aspect and to a wide range of applications beyond robotics. Ongoing work focuses on the robot control side, finding the best way to exploit the spatio-temporal information provided by the predictor and defining the optimal robot behavior. Future work will shift the full pipeline, prediction and robot control, to a spiking implementation. First steps in this direction have already been made through a collaboration with a group from the University of Zurich, with whom I propose a closed-loop motor controller implemented on a mixed-signal analog/digital neuromorphic processor, emulating a classical PID controller by means of spiking neural networks.
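    A minimal sketch of how an LSTM could consume event-driven trajectory samples is given below; the (x, y, dt) input format, the hidden size, and the use of PyTorch are assumptions for illustration rather than the thesis's exact architecture.

```python
import torch
import torch.nn as nn

class EventTrajectoryPredictor(nn.Module):
    """Minimal LSTM predictor for event-driven trajectory samples.

    Each sample is (x, y, dt): a target position extracted from the event
    stream plus the time elapsed since the previous sample, so the irregular,
    motion-driven sampling rate is itself part of the input."""
    def __init__(self, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 3)   # next (x, y, dt)

    def forward(self, samples, state=None):
        out, state = self.lstm(samples, state)
        # Predict from the most recent sample; the returned state can be carried
        # over so new samples are processed asynchronously as they arrive.
        return self.head(out[:, -1]), state

# Toy usage: one partial trajectory of 20 irregularly spaced samples.
model = EventTrajectoryPredictor()
trajectory = torch.rand(1, 20, 3)               # synthetic (x, y, dt) values
prediction, hidden = model(trajectory)
print(prediction.shape)                          # torch.Size([1, 3])
```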

    Speech Enhancement Using a Spiking Neural Network

    Speech enhancement is essential to guarantee the reliability of communication tools and the robustness of speech recognition systems. Although conventional neural networks, known as artificial neural networks (ANNs), have demonstrated remarkable performance in this domain, their use requires considerable computing power and incurs high energy costs. These costs stem from several factors, such as the size of the network, the volume of the dataset used, and the number of iterations required for training. This research project proposes a speech enhancement approach using a spiking neural network (SNN) based on a U-Net architecture. SNNs are well suited to processing data with a temporal dimension, such as speech, and are known for their energy-efficient implementation on neuromorphic processors. They are therefore attractive candidates for real-time applications on resource-constrained devices. The main objective of this work is to develop an SNN-based model whose performance is comparable to that of an ANN-based model for speech enhancement. The proposed SNN is trained with surrogate-gradient optimisation. The model's performance is evaluated with objective perceptual tests, covering different signal-to-noise ratios and real noise conditions. The results show that the proposed model outperforms the baseline solution of the Intel Neuromorphic Deep Noise Suppression Challenge. It also stands out against several non-neuromorphic state-of-the-art approaches and reaches acceptable performance compared with an ANN model of similar architecture. In conclusion, this work highlights the promise of SNNs as a high-performing alternative to ANNs for speech enhancement.
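    A common way to realise the surrogate-gradient training mentioned above is a hard threshold in the forward pass and a smooth surrogate derivative in the backward pass; the PyTorch sketch below shows one such formulation, with the fast-sigmoid slope, layer sizes, leak factor, and single-step update chosen purely for illustration (it is not the thesis's U-Net model).

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Hard threshold in the forward pass, smooth fast-sigmoid surrogate in the
    backward pass, so the non-differentiable spike can be trained by gradient descent."""
    @staticmethod
    def forward(ctx, u):
        ctx.save_for_backward(u)
        return (u > 0.0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (u,) = ctx.saved_tensors
        slope = 10.0                              # assumed surrogate slope
        return grad_output / (1.0 + slope * u.abs()) ** 2

spike = SurrogateSpike.apply

# One leaky integrate-and-fire step over a single input frame (sizes made up).
torch.manual_seed(0)
w = torch.randn(16, 32, requires_grad=True)       # hypothetical synaptic weights
x = torch.rand(32)                                # one noisy "spectral" frame
u_prev = torch.zeros(16)                          # membrane state from the previous step
u = 0.9 * u_prev + w @ x                          # leaky integration (leak factor 0.9)
s = spike(u - 1.0)                                # spikes, differentiable via the surrogate
s.sum().backward()                                # gradients flow back into w
print(s, w.grad.norm())
```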

    Inspiration from Neurosciences to emulate Cognitive Tasks at different Levels of Time

    Our team has been working for more than ten years on the modelling of biologically inspired artificial neural networks. Today, our models are applied to various cognitive tasks, such as autonomous behavior and exploration for a robot, planning, reasoning, and other tasks linked to memory and the building of internal representations. We present the framework that underlies these models through the time delays associated with several fundamental properties, such as information coding, learning, planning, and motivation.

    Cortical Software Re-Use: A Computational Principle for Cognitive Development in Robots

    The goal of this paper is to propose a candidate computational principle for cognitive development in autonomous robots. The candidate in question is the theory of Cortical Software Re-Use (CSRU), and we make the case that it provides a mechanism for the incremental construction of cognitive and language systems from simpler sensory-motor components.

    2022 roadmap on neuromorphic computing and engineering

    Modern computation based on von Neumann architecture is now a mature cutting-edge science. In the von Neumann architecture, processing and memory units are implemented as separate blocks interchanging data intensively and continuously. This data transfer is responsible for a large part of the power consumption. The next generation computer technology is expected to solve problems at the exascale with 10^18 calculations each second. Even though these future computers will be incredibly powerful, if they are based on von Neumann type architectures, they will consume between 20 and 30 megawatts of power and will not have intrinsic physically built-in capabilities to learn or deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to be used for the storage and processing of large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving the control from data centers to edge devices. The aim of this roadmap is to present a snapshot of the present state of neuromorphic technology and provide an opinion on the challenges and opportunities that the future holds in the major areas of neuromorphic technology, namely materials, devices, neuromorphic circuits, neuromorphic algorithms, applications, and ethics. The roadmap is a collection of perspectives where leading researchers in the neuromorphic community provide their own view about the current state and the future challenges for each research area. We hope that this roadmap will be a useful resource by providing a concise yet comprehensive introduction to readers outside this field, for those who are just entering the field, as well as providing future perspectives for those who are well established in the neuromorphic computing community.
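    As a back-of-the-envelope check of the figures quoted above, 10^18 operations per second at 20 to 30 MW corresponds to a few tens of picojoules per operation:

```python
# Energy per operation implied by the exascale figures quoted above
# (the brain's often-quoted ~20 W budget is orders of magnitude lower).
OPS_PER_SECOND = 1e18
for power_watts in (20e6, 30e6):
    picojoules_per_op = power_watts / OPS_PER_SECOND * 1e12
    print(f"{power_watts / 1e6:.0f} MW -> {picojoules_per_op:.0f} pJ per operation")
# 20 MW -> 20 pJ per operation
# 30 MW -> 30 pJ per operation
```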