
    Parallel computing for brain simulation

    [Abstract] Background: The human brain is the most complex system in the known universe and therefore one of its greatest mysteries. It provides human beings with extraordinary abilities, yet how and why most of these abilities arise is still not understood. Aims: For decades, researchers have been trying to make computers reproduce these abilities, focusing both on understanding the nervous system and on processing data more efficiently than before, with the goal of making computers process information similarly to the brain. Important technological developments and vast multidisciplinary projects have made it possible to create the first simulations with a number of neurons similar to that of a human brain. Conclusion: This paper presents an up-to-date review of the main research projects that are trying to simulate and/or emulate the human brain. They employ different types of computational models using parallel computing: digital, analog, and hybrid models. The review covers the current applications of these works as well as future trends. It focuses both on works seeking advanced progress in Neuroscience and on others seeking new discoveries in Computer Science (neuromorphic hardware, machine learning techniques). Their most outstanding characteristics are summarized, and the latest advances and future plans are presented. In addition, this review points out the importance of considering not only neurons: computational models of the brain should also include glial cells, given the proven importance of astrocytes in information processing.
    Funding: Galicia. Consellería de Cultura, Educación e Ordenación Universitaria (GRC2014/049, R2014/039); Instituto de Salud Carlos III (PI13/0028)

    Neuromorphic audio processing through real-time embedded spiking neural networks

    In this work, novel speech recognition and audio processing systems based on a spiking artificial cochlea and neural networks are proposed and implemented. First, the biological behavior of the animal's auditory system is analyzed and studied, along with the classical mechanisms of audio signal processing for sound classification, including Deep Learning techniques. Based on these studies, novel audio processing and automatic audio signal recognition systems are proposed, using a bio-inspired auditory sensor as input. A desktop software tool called NAVIS (Neuromorphic Auditory VIsualizer) for post-processing the information obtained from spiking cochleae was implemented, allowing these data to be analyzed for further research. Next, using a 4-chip SpiNNaker hardware platform and Spiking Neural Networks, a system is proposed for classifying different time-independent audio signals, making use of a Neuromorphic Auditory Sensor and frequency studies obtained with NAVIS. To prove the robustness and analyze the limitations of the system, the input audios were disturbed, simulating extremely noisy environments. Deep Learning mechanisms, particularly Convolutional Neural Networks, are trained and used to differentiate between healthy persons and pathological patients by detecting murmurs in heart recordings, after integrating the spike information from the signals using a neuromorphic auditory sensor. A similar approach is then used to train Spiking Convolutional Neural Networks for speech recognition tasks. A novel SCNN architecture for time-dependent signal classification is proposed, using a buffered layer that adapts the information from a real-time input domain to a static domain. The system was deployed on a 48-chip SpiNNaker platform. Finally, the performance and efficiency of these systems were evaluated, drawing conclusions and proposing improvements for future works.
    Premio Extraordinario de Doctorado U
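The buffered layer mentioned above bridges a continuous spike stream and the static input a conventional network layer expects. A minimal sketch of that idea, accumulating timestamped spike events into fixed-duration count frames (function name, binning scheme, and window size are illustrative assumptions, not the thesis implementation):

```python
import math

def spikes_to_frames(events, n_channels, window, t_end):
    """Accumulate (timestamp, channel) spike events into per-window count frames.

    Each frame is a static vector of spike counts that a conventional
    (convolutional) layer can consume as an ordinary input.
    """
    n_frames = math.ceil(t_end / window)
    frames = [[0] * n_channels for _ in range(n_frames)]
    for t, ch in events:
        frames[int(t // window)][ch] += 1
    return frames

# Two cochlea channels firing over 1 s, binned into 0.5 s frames
events = [(0.1, 0), (0.2, 0), (0.3, 1), (0.7, 1), (0.9, 1)]
frames = spikes_to_frames(events, n_channels=2, window=0.5, t_end=1.0)
# frames[0] == [2, 1]  (spikes in [0.0, 0.5));  frames[1] == [0, 2]
```

Downstream, each frame can be fed to a static classifier exactly as if it were an image row, which is the adaptation the buffered layer performs in hardware.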

    Neuromorphic auditory computing: towards a digital, event-based implementation of the hearing sense for robotics

    This work aims to advance the development of neuromorphic audio processing systems in robots through the implementation of an open-source neuromorphic cochlea, event-based models of the primary auditory nuclei, and their potential use in real-time robotics applications. First, the main gaps in working with neuromorphic cochleae were identified. Among them, the accessibility and usability of such sensors can be considered a critical aspect. Silicon cochleae may not be as flexible as desired for some applications; FPGA-based sensors, however, can be considered an alternative for fast prototyping and proof-of-concept applications. Therefore, a software tool was implemented for generating open-source, user-configurable Neuromorphic Auditory Sensor models that can be deployed on any FPGA, removing the aforementioned barriers for the neuromorphic research community. Next, the biological principles of the animal auditory system were studied with the aim of continuing the development of the Neuromorphic Auditory Sensor. More specifically, the principles of binaural hearing were studied in depth in order to implement event-based models that perform real-time sound source localization tasks. Two different approaches were followed to extract inter-aural time differences from event-based auditory signals. On the one hand, a digital, event-based design of the Jeffress model was implemented. On the other hand, a novel digital implementation of the Time Difference Encoder model was designed and implemented on FPGA. Finally, three different robotic platforms were used to evaluate the performance of the proposed real-time neuromorphic audio processing architectures. An audio-guided central pattern generator was used to control a hexapod robot in real time using spiking neural networks on SpiNNaker. Then, a sensory integration application was implemented, combining sound source localization and obstacle avoidance for autonomous robot navigation.
Lastly, the Neuromorphic Auditory Sensor was integrated within the iCub robotic platform, being the first time that an event-based cochlea is used in a humanoid robot. Then, the conclusions obtained are presented and new features and improvements are proposed for future works.
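The Jeffress model described above localizes a sound source by finding the inter-aural delay at which spikes from the two ears coincide. A minimal software sketch of that coincidence search over binned binary spike trains (a gross simplification of the event-based FPGA designs in the thesis; names and data are illustrative):

```python
def estimate_itd(left, right, max_lag):
    """Return the lag (in bins) that maximizes spike coincidences.

    A positive result means `right` lags `left`, i.e. the sound reached
    the left ear first. Each delay line counts coincident 1s, mimicking
    the coincidence-detector array of the Jeffress model.
    """
    best_lag, best_score = 0, -1
    for lag in range(-max_lag, max_lag + 1):
        # Align the trains with `right` shifted back by `lag` bins
        score = sum(l & r for l, r in zip(left[max(-lag, 0):],
                                          right[max(lag, 0):]))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# The right-ear train is the left-ear train delayed by 2 bins
left  = [1, 0, 0, 1, 0, 1, 0, 0, 0, 0]
right = [0, 0, 1, 0, 0, 1, 0, 1, 0, 0]
itd = estimate_itd(left, right, max_lag=3)  # -> 2
```

With a known bin width and inter-ear distance, the winning lag converts directly into an angle of arrival, which is how the robotic platforms above steer toward a source.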

    Brain-inspired methods for achieving robust computation in heterogeneous mixed-signal neuromorphic processing systems

    Neuromorphic processing systems implementing spiking neural networks with mixed-signal analog/digital electronic circuits and/or memristive devices are a promising technology for edge computing applications that require low power and low latency and that cannot connect to the cloud for off-line processing, whether due to lack of connectivity or to privacy concerns. However, these circuits are typically noisy and imprecise, because they are affected by device-to-device variability and operate with extremely small currents. Achieving reliable computation and high accuracy with this approach therefore remains an open challenge, one that has both hampered progress and limited widespread adoption of the technology. By construction, these hardware processing systems have many biologically plausible constraints, such as heterogeneity and non-negativity of parameters, and growing evidence shows that applying such constraints to artificial neural networks, including those used in artificial intelligence, promotes robustness in learning and improves their reliability. Here we delve further into neuroscience and present network-level, brain-inspired strategies that improve reliability and robustness in these neuromorphic systems: we quantify, with chip measurements, to what extent population averaging reduces variability in neural responses; we demonstrate experimentally how the neural coding strategies of cortical models allow silicon neurons to produce reliable signal representations; and we show how to exploit these strategies to robustly implement essential computational primitives such as selective amplification, signal restoration, working memory, and relational networks.
We argue that these strategies can be instrumental in guiding the design of robust and reliable ultra-low-power electronic neural processing systems implemented using noisy and imprecise computing substrates such as subthreshold neuromorphic circuits and emerging memory technologies.
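Population averaging works because uncorrelated device mismatch cancels out: the standard deviation of the mean of N independent neuron responses shrinks roughly as 1/sqrt(N). A quick numerical sketch of that effect on synthetic data (the Gaussian mismatch model and its magnitude are assumptions for illustration, not the paper's chip measurements):

```python
import random

random.seed(0)  # deterministic run for reproducibility

def population_readout(signal, n_neurons, mismatch_sd=0.3):
    """Average the responses of n_neurons noisy 'silicon neurons' to one input."""
    responses = [signal + random.gauss(0.0, mismatch_sd) for _ in range(n_neurons)]
    return sum(responses) / n_neurons

def readout_sd(n_neurons, trials=2000, signal=1.0):
    """Empirical standard deviation of the population-averaged readout."""
    vals = [population_readout(signal, n_neurons) for _ in range(trials)]
    mean = sum(vals) / trials
    return (sum((v - mean) ** 2 for v in vals) / trials) ** 0.5

# Averaging over 100 neurons cuts readout variability roughly tenfold
sd_single = readout_sd(1)
sd_pooled = readout_sd(100)
```

The 1/sqrt(N) scaling is exactly why reading a population of mismatched subthreshold neurons can be far more reliable than any individual device, at the cost of silicon area.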

    Modeling the Bat Spatial Navigation System: A Neuromorphic VLSI Approach

    Autonomously navigating robots have long been a tough challenge for engineers. The recent push to develop micro-aerial vehicles for practical military, civilian, and industrial use has added significant power and time constraints to the challenge. In contrast, animals, from insects to humans, have been navigating successfully for millennia using a wide range of variants of the ultra-low-power computational system known as the brain. For this reason, we look to biological systems to inspire a solution suitable for autonomously navigating micro-aerial vehicles. This dissertation focuses on the neurobiological structures involved in mammalian spatial navigation. The cell types widely believed to contribute directly to navigation tasks are the head direction cells, grid cells, and place cells found in the post-subiculum, the medial entorhinal cortex, and the hippocampus, respectively. In addition to studying the neurobiological structures involved in navigation, we investigate various neural models that seek to explain the operation of these structures, and we adapt them to neuromorphic VLSI circuits and systems. We choose the neuromorphic approach because we are interested in understanding the interaction between the real-time, physical implementation of the algorithms and the real-world problem (robot and environment). By utilizing both analog and asynchronous digital circuits to mimic similar computations in neural systems, we envision very-low-power VLSI implementations suitable for providing practical solutions for spatial navigation in micro-aerial vehicles.

    Chemical Bionics - a novel design approach using ion sensitive field effect transistors

    In the late 1980s, Carver Mead introduced neuromorphic engineering, in which various aspects of the body's neural systems were modelled using VLSI circuits. As a result, most bio-inspired systems to date concentrate on modelling the electrical behaviour of neural systems such as the eyes, ears, and brain. In reality, however, biological systems rely on chemical as well as electrical principles in order to function. This thesis introduces chemical bionics, in which the chemically dependent physiology of specific cells in the body is implemented for the development of novel bio-inspired therapeutic devices. The glucose-dependent pancreatic beta cell is shown to be one such cell, and it is designed and fabricated here to form the first silicon metabolic cell. By replicating the bursting behaviour of biological beta cells, which respond to changes in blood glucose, a bio-inspired prosthetic for glucose homeostasis in Type I diabetes is demonstrated. To complement this, research further developing the Ion Sensitive Field Effect Transistor (ISFET) on unmodified CMOS is also presented, for use as a monolithic sensor in chemical bionic systems. Problems arising from using the native passivation of CMOS as a sensing surface are described, and methods of compensation are presented. A model for the operation of the device in weak inversion is also proposed, allowing its physical primitives to be exploited in novel monolithic solutions. Functional implementations in various technologies are also detailed to enable future implementations of chemical bionic circuits. Finally, the ISFET integrate-and-fire neuron, the first of its kind, is presented as a chemically driven building block for many existing neuromorphic circuits. As examples, a chemical imager is described for spatio-temporal monitoring of chemical species, and an acid-base discriminator for monitoring changes in concentration around a fixed threshold is also proposed.
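The behaviour of an integrate-and-fire neuron driven by a sensed ion concentration can be sketched in a few lines. This is a hypothetical software analogue of the ISFET neuron idea, not the circuit itself; the pH-to-current mapping, gain, and threshold are all illustrative assumptions:

```python
def chemical_if_neuron(ph_samples, dt=1e-3, gain=5.0, threshold=1.0, ph_ref=7.0):
    """Integrate-and-fire unit driven by the deviation of pH from a reference.

    Spike rate grows with acidity (pH below ph_ref); the linear mapping from
    pH to drive current is an illustrative stand-in for the ISFET response.
    """
    v, spikes = 0.0, []
    for step, ph in enumerate(ph_samples):
        current = max(0.0, gain * (ph_ref - ph))  # acidic input -> positive drive
        v += current * dt                          # non-leaky integration
        if v >= threshold:                         # fire and reset
            spikes.append(step * dt)
            v = 0.0
    return spikes

# A more acidic solution drives spiking; a neutral one is silent
acidic  = chemical_if_neuron([5.0] * 1000)
neutral = chemical_if_neuron([7.0] * 1000)
```

Encoding concentration as spike rate is what lets such a chemical front end plug directly into existing event-driven neuromorphic circuits, as in the chemical imager described above.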

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of those changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that challenge traditional cameras, such as those demanding low latency, high speed, or high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, through the actual sensors that are available, to the tasks they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
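The per-pixel operating principle above can be simulated in a few lines: a pixel emits an event whenever its log-brightness has changed by more than a contrast threshold since the last event, with the sign of the change giving the polarity. A simplified frame-based simulation of that rule (the threshold value and frame data are illustrative; real sensors operate asynchronously in continuous time):

```python
import math

def frames_to_events(frames, timestamps, threshold=0.2):
    """Convert grayscale frames (lists of rows) into (t, x, y, polarity) events.

    Each pixel keeps a log-intensity reference; every crossing of the
    contrast threshold emits one event and moves the reference, so a large
    change yields multiple events of the same polarity.
    """
    events = []
    ref = [[math.log(p + 1e-6) for p in row] for row in frames[0]]
    for frame, t in zip(frames[1:], timestamps[1:]):
        for y, row in enumerate(frame):
            for x, p in enumerate(row):
                diff = math.log(p + 1e-6) - ref[y][x]
                while abs(diff) >= threshold:
                    pol = 1 if diff > 0 else -1
                    events.append((t, x, y, pol))
                    ref[y][x] += pol * threshold  # update reference level
                    diff -= pol * threshold
    return events

# A single pixel doubling in brightness yields a burst of ON (+1) events:
# log(2) ~ 0.69, so three 0.2-threshold crossings
frames = [[[1.0]], [[2.0]]]
events = frames_to_events(frames, timestamps=[0.0, 0.01])
```

This also illustrates where the dynamic-range advantage comes from: thresholding *log* intensity makes the pixel respond to relative contrast, independent of absolute illumination.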