2 research outputs found

    A Survey of Neuromorphic Computing and Neural Networks in Hardware

    Neuromorphic computing has come to refer to a variety of brain-inspired computers, devices, and models that contrast with the pervasive von Neumann architecture. This biologically inspired approach has produced highly connected synthetic neurons and synapses that can be used to model neuroscience theories and to solve challenging machine learning problems. The promise of the technology is a brain-like ability to learn and adapt, but the technical challenges are significant: an accurate neuroscience model of how the brain works, materials and engineering breakthroughs to build devices that support such models, a programming framework so the systems can learn, and applications with brain-like capabilities. In this work, we provide a comprehensive survey of the research on, and motivations for, neuromorphic computing over its history. We begin with a 35-year review of the motivations and drivers of neuromorphic computing, then examine the major research areas of the field, which we define as neuro-inspired models, algorithms and learning approaches, hardware and devices, supporting systems, and applications. We conclude with a broad discussion of the major research topics that must be addressed in the coming years for the promise of neuromorphic computing to be fulfilled. The goals of this work are to provide an exhaustive review of the research conducted in neuromorphic computing since the inception of the term, and to motivate further work by illuminating gaps in the field where new research is needed.

    Neuromorphic auditory computing: towards a digital, event-based implementation of the hearing sense for robotics

    This work aims to advance the development of neuromorphic audio processing systems in robots through the implementation of an open-source neuromorphic cochlea, event-based models of the primary auditory nuclei, and their use in real-time robotics applications. First, the main gaps in working with neuromorphic cochleae were identified; among them, the accessibility and usability of such sensors is a critical aspect. Silicon cochleae may not be as flexible as desired for some applications, but FPGA-based sensors are an alternative for fast prototyping and proof-of-concept work. A software tool was therefore implemented for generating open-source, user-configurable Neuromorphic Auditory Sensor (NAS) models that can be deployed on any FPGA, removing the aforementioned barriers for the neuromorphic research community. Next, the biological principles of the animal auditory system were studied with the aim of continuing the development of the NAS. More specifically, the principles of binaural hearing were studied in depth in order to implement event-based models that perform real-time sound source localization. Two different approaches were followed to extract inter-aural time differences from event-based auditory signals: on the one hand, a digital, event-based design of the Jeffress model was implemented; on the other hand, a novel digital implementation of the Time Difference Encoder model was designed and implemented on FPGA. Finally, three different robotic platforms were used to evaluate the performance of the proposed real-time neuromorphic audio processing architectures. An audio-guided central pattern generator was used to control a hexapod robot in real time with spiking neural networks on SpiNNaker. Then, a sensory integration application combining sound source localization and obstacle avoidance was implemented for autonomous robot navigation. Lastly, the NAS was integrated into the iCub robotic platform, the first time an event-based cochlea has been used in a humanoid robot. The conclusions obtained are presented, and new features and improvements are proposed for future work.
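    The two ITD-extraction approaches named above rest on the same idea: a spike arriving earlier at one ear can be re-aligned with its counterpart at the other ear by an appropriate delay. As a rough, software-level illustration of the first approach, the Python sketch below implements Jeffress-style coincidence detection: a bank of delay lines is applied to one channel, and the delay that accumulates the most coincident spike pairs estimates the ITD. This is a minimal sketch under assumed parameters (delay range, coincidence window), not the digital, event-based FPGA design from the thesis; the name jeffress_itd is illustrative.

        import numpy as np

        def jeffress_itd(left_spikes, right_spikes, max_itd=600e-6,
                         n_delays=17, window=50e-6):
            """Estimate the inter-aural time difference (ITD) between two
            spike-time arrays (in seconds) with a Jeffress-style bank of
            delay lines and coincidence detectors."""
            delays = np.linspace(-max_itd, max_itd, n_delays)
            counts = np.zeros(n_delays, dtype=int)
            for i, d in enumerate(delays):
                # Delay the left channel by d and count spike pairs that
                # land within the coincidence window of a right-ear spike.
                for t in left_spikes + d:
                    if np.any(np.abs(right_spikes - t) < window):
                        counts[i] += 1
            # The delay line that best cancels the acoustic ITD wins.
            return delays[np.argmax(counts)]

        # Toy usage: the right ear hears the same train 300 us later, so
        # the estimate should come out near +300 us.
        rng = np.random.default_rng(0)
        left = np.sort(rng.uniform(0.0, 1.0, size=200))
        right = left + 300e-6
        print(f"estimated ITD: {jeffress_itd(left, right) * 1e6:+.0f} us")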
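    The second approach, the Time Difference Encoder (TDE), maps the interval between a spike on a facilitatory input and a subsequent spike on a trigger input onto the size of an output burst: the shorter the interval, the more output spikes. The sketch below captures that behavior at a purely functional level; the decay constant tau, the gain, and the function name tde_response are illustrative assumptions, and the digital FPGA implementation described in the thesis is not reproduced here.

        import numpy as np

        def tde_response(fac_times, trig_times, tau=1e-3, gain=5.0):
            """Behavioral sketch of a Time Difference Encoder unit: a
            facilitatory spike starts an exponentially decaying trace, and
            a later trigger spike reads it out as a burst whose spike
            count shrinks as the time difference grows. Inputs are sorted
            spike-time lists in seconds."""
            fac = list(fac_times)
            last_fac = -np.inf  # no facilitation seen yet -> zero trace
            events = []
            for t in sorted(trig_times):
                # Most recent facilitatory spike at or before this trigger.
                while fac and fac[0] <= t:
                    last_fac = fac.pop(0)
                trace = np.exp(-(t - last_fac) / tau)  # decayed facilitation
                events.append((t, int(round(gain * trace))))  # (time, burst)
            return events

        # Toy usage: a 0.2 ms gap yields a larger burst than a 2 ms gap.
        print(tde_response(fac_times=[0.0], trig_times=[0.2e-3, 2.0e-3]))
        # -> [(0.0002, 4), (0.002, 1)]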