
    Finding the Gap: Neuromorphic Motion Vision in Cluttered Environments

    Many animals meander through their environments while avoiding collisions. How the underlying neuronal machinery can yield robust behaviour across a variety of environments remains unclear. In the fly brain, motion-sensitive neurons indicate the presence of nearby objects, and directional cues are integrated within an area known as the central complex. Such neuronal machinery, in contrast with the traditional stream-based approach to signal processing, uses an event-based approach, with events occurring when the animal senses changes. Unlike von Neumann computing architectures, event-based neuromorphic hardware is designed to process information in an asynchronous and distributed manner. Inspired by the fly brain, we model, for the first time, a neuromorphic closed-loop system mimicking essential behaviours observed in flying insects, such as meandering in clutter and gap crossing, which are highly relevant for autonomous vehicles. We implemented our system both in software and on neuromorphic hardware. While moving through an environment, our agent perceives changes in its surroundings and uses this information for collision avoidance. The agent's manoeuvres result from a closed action-perception loop implementing probabilistic decision-making processes. This loop closure is thought to have driven the development of neural circuitry in biological agents since the Cambrian explosion. In the fundamental quest to understand neural computation in artificial agents, we come closer to understanding and modelling biological intelligence by also closing the loop in neuromorphic systems. As a closed-loop system, ours deepens our understanding of processing in neural networks and of computation in biological and artificial systems. With these investigations, we aim to lay the foundations for neuromorphic intelligence, moving towards leveraging the full potential of neuromorphic systems.
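    As a rough illustration of the probabilistic action-perception loop described above, the hypothetical sketch below lets an agent compare recent event counts in its left and right visual hemifields (a crude proxy for nearby objects) and draw a steering decision from a softmax over the two counts. The hemifield split, the softmax rule, and the `beta` parameter are assumptions for illustration, not the paper's actual central-complex model.

```python
import math
import random

def steer(left_events: int, right_events: int, beta: float = 0.1) -> str:
    """Turn away from the hemifield with more apparent motion.

    A softmax over the two event counts makes the choice probabilistic,
    so the agent can still explore cluttered scenes rather than
    deterministically fleeing the busier side.
    """
    # Softmax over the two counts; subtract the max for numerical stability.
    m = max(left_events, right_events)
    w_left = math.exp(beta * (right_events - m))   # more events on the right
    w_right = math.exp(beta * (left_events - m))   # -> favour turning left
    p_left = w_left / (w_left + w_right)
    return "turn_left" if random.random() < p_left else "turn_right"
```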

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of those changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those demanding low latency, high speed, or high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle to the sensors that are available and the tasks they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
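    As a minimal sketch of how such an event stream can be handled in software, the code below defines a per-event record carrying the time, location, and polarity described above, and accumulates a time window of events into a frame-like array. The field names and the accumulation scheme are illustrative assumptions, not a specific camera driver's API.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class Event:
    t: float   # timestamp in microseconds
    x: int     # pixel column
    y: int     # pixel row
    p: int     # polarity: +1 brightness increase, -1 decrease


def accumulate(events, width, height, t_start, t_end):
    """Sum event polarities per pixel over a time window,
    yielding a simple frame-like representation of the stream."""
    frame = np.zeros((height, width), dtype=np.int32)
    for e in events:
        if t_start <= e.t < t_end:
            frame[e.y, e.x] += e.p
    return frame
```

    Accumulation is only one of many event representations discussed in the survey; it trades the microsecond timing of individual events for compatibility with standard frame-based algorithms.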

    A silicon implementation of the fly's optomotor control system

    Flies are capable of stabilizing their body during free flight by using visual motion information to estimate self-rotation. We have built a hardware model of this optomotor control system in a standard CMOS VLSI process. The result is a small, low-power chip that receives input directly from the real world through on-board photoreceptors and generates motor commands in real time. The chip was tested under closed-loop conditions typically used for insect studies. The silicon system exhibited stable control sufficiently analogous to the biological system to allow for quantitative comparisons.
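    The optomotor principle described above amounts to negative feedback: wide-field visual motion is taken as an estimate of self-rotation, and the motor command opposes it. A minimal sketch follows, in which the gain and the averaging of elementary-motion-detector (EMD) outputs are assumptions for illustration rather than details of the chip:

```python
def optomotor_command(emd_outputs, gain=0.5):
    """Average elementary motion detector (EMD) outputs to estimate
    self-rotation, then command a torque opposing it."""
    estimated_rotation = sum(emd_outputs) / len(emd_outputs)
    return -gain * estimated_rotation  # negative feedback stabilizes flight
```

    A positive wide-field rotation estimate thus yields a negative corrective torque, mirroring the stabilizing optomotor response of the fly.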

    Neuromorphic Sensory Integration for Combining Sound Source Localization and Collision Avoidance

    Animals combine various sensory cues with previously acquired knowledge to travel safely towards a target destination. In close analogy to biological systems, we propose a neuromorphic system which decides, based on auditory and visual input, how to reach a sound source without collisions. The development of this sensory integration system, which identifies the shortest possible path, is a key achievement towards autonomous robotics. The proposed neuromorphic system comprises two event-based sensors (the eDVS for vision and the NAS for audition) and the SpiNNaker processor. Open-loop experiments were performed to evaluate the system's performance. In the presence of acoustic stimulation alone, the heading direction points towards the sound source with a Pearson correlation coefficient of 0.89. When visual input is introduced into the network, the heading direction always points in the direction of null optical flow closest to the sound source. Hence, the sensory integration network is able to find the shortest path to the sound source while avoiding obstacles. This work shows that a simple, task-dependent mapping of sensory information can lead to highly complex and robust decisions.
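    The task-dependent mapping described above can be paraphrased as: among candidate headings with near-zero optical flow (i.e., no looming obstacle), pick the one closest to the auditory estimate of the source direction. Below is a hypothetical sketch under that reading; the discrete heading grid and the flow threshold are assumptions, not the spiking network actually run on SpiNNaker.

```python
def choose_heading(sound_angle, flow_by_angle, flow_threshold=0.05):
    """Pick the obstacle-free heading closest to the auditory estimate.

    sound_angle: estimated source azimuth in degrees.
    flow_by_angle: dict mapping candidate heading (degrees) to measured
    optical-flow magnitude; low flow suggests a collision-free path.
    """
    free = [a for a, f in flow_by_angle.items() if f < flow_threshold]
    if not free:
        return None  # no collision-free heading available
    return min(free, key=lambda a: abs(a - sound_angle))


# Example: source at 40 degrees, but that heading is blocked (high flow),
# so the nearest low-flow heading (60 degrees) is chosen instead.
print(choose_heading(40, {0: 0.01, 40: 0.80, 60: 0.02}))  # -> 60
```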

    Closed-loop sound source localization in neuromorphic systems

    Sound source localization (SSL) is used in applications such as industrial noise control, speech detection in mobile phones, speech enhancement in hearing aids, and many more. The newest video-conferencing setups also use SSL: the position of a speaker is detected from differences in the audio waves received by a microphone array, and the camera then focuses on the speaker's location. The human brain is likewise able to detect the location of a speaker from auditory signals. It uses, among other cues, the differences in amplitude and arrival time of the sound wave at the two ears, called the interaural level and time differences. However, the substrate and computational primitives of our brain differ from those of classical digital computing. Due to its low power consumption of around 20 W and its real-time performance, the human brain has become a great source of inspiration for emerging technologies. One of these technologies is neuromorphic hardware, which implements the fundamental principles of brain computing identified to date using complementary metal-oxide-semiconductor technologies and new devices. In this work we propose the first neuromorphic closed-loop robotic system that uses the interaural time difference for SSL in real time. Our system can successfully locate sound sources such as human speech. In a closed-loop experiment, the robotic platform turned immediately towards the sound source with a turning velocity linearly proportional to the angle between the sound source and the binaural microphones. After this initial turn, the robotic platform remained oriented towards the sound source. Even though the system uses very few of the available hardware resources, consumes around 1 W, and was tuned only by hand, meaning it contains no learning at all, it already reaches performance comparable to other neuromorphic approaches. The SSL system presented in this article brings us one step closer to neuromorphic event-based systems for robotics and embodied computing.
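    As a sketch of the two ingredients described above, an interaural-time-difference estimate and a turning velocity proportional to the angle error, the code below cross-correlates the two microphone signals, converts the lag to an azimuth, and scales it into a turn command. The sampling rate, microphone spacing, and gain are illustrative assumptions, not the article's hand-tuned values, and the spiking implementation on the actual hardware is not modelled here.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
MIC_SPACING = 0.15       # m, assumed distance between the two microphones
FS = 48_000              # Hz, assumed audio sampling rate


def estimate_angle(left: np.ndarray, right: np.ndarray) -> float:
    """Estimate source azimuth (radians) from the interaural time
    difference found by cross-correlating the two microphone signals."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)       # delay in samples
    itd = lag / FS                                 # delay in seconds
    # Far-field geometry: itd = spacing * sin(angle) / speed of sound.
    s = np.clip(itd * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return float(np.arcsin(s))


def turning_velocity(angle_error: float, gain: float = 1.0) -> float:
    """Turn with a velocity linearly proportional to the angle between
    the sound source and the robot's current heading."""
    return gain * angle_error
```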

    Neuromorphic object localization using resistive memories and ultrasonic transducers

    Real-world sensory-processing applications require compact, low-latency, and low-power computing systems. Enabled by their in-memory, event-driven computing abilities, hybrid memristive-CMOS (complementary metal-oxide-semiconductor) neuromorphic architectures provide an ideal hardware substrate for such tasks. To demonstrate the full potential of such systems, we propose and experimentally demonstrate an end-to-end sensory processing solution for a real-world object localization application. Drawing inspiration from the barn owl's neuroanatomy, we developed a bio-inspired, event-driven object localization system that couples state-of-the-art piezoelectric micromachined ultrasound transducer sensors to a computational map based on neuromorphic resistive memories. We present measurement results from the fabricated system, comprising resistive-memory-based coincidence detectors, delay-line circuits, and a full-custom ultrasound sensor. We use these experimental results to calibrate our system-level simulations, which are then used to estimate the angular resolution and energy efficiency of the object localization model. The results reveal the potential of our approach, which achieves orders-of-magnitude greater energy efficiency than a microcontroller performing the same task.
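    The delay-line-plus-coincidence-detector map described above follows the classic Jeffress scheme: each detector pairs a differently delayed copy of one channel's spike train with the other channel's train and fires on coincidence, so the most active detector encodes the arrival-time difference. Below is a plain-Python sketch, with the delay steps and coincidence window as assumptions standing in for the memristive circuits:

```python
def coincidence_map(spikes_left, spikes_right, delays, window=1e-5):
    """Jeffress-style coincidence detection over a bank of delays.

    spikes_left, spikes_right: sorted spike times in seconds.
    delays: per-detector delays (seconds) applied to the left channel.
    Returns per-detector coincidence counts; the argmax encodes the
    interaural time difference and hence the object's angle.
    """
    counts = []
    for d in delays:
        shifted = [t + d for t in spikes_left]
        n = sum(1 for tl in shifted
                for tr in spikes_right if abs(tl - tr) <= window)
        counts.append(n)
    return counts
```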