A review of current neuromorphic approaches for vision, auditory, and olfactory sensors
Conventional vision, auditory, and olfactory sensors generate large volumes of redundant data and, as a result, tend to consume excessive power. To address these shortcomings, neuromorphic sensors have been developed. These sensors mimic the neuro-biological architecture of sensory organs using analog Very Large Scale Integration (aVLSI) and generate asynchronous spiking output that represents sensing information in a way similar to neural signals. This allows for much lower power consumption, since useful sensory information can be extracted from sparse captured data. The foundations for research in neuromorphic sensors were laid more than two decades ago, but recent advances in the understanding of biological sensing and in electronics have stimulated research on sophisticated neuromorphic sensors that offer numerous advantages over conventional ones. In this paper, we review the current state of the art in neuromorphic implementations of vision, auditory, and olfactory sensors and identify key contributions across these fields. Bringing these contributions together, we suggest future research directions for the neuromorphic sensing field.
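As a concrete illustration of the change-driven readout described above, the sketch below contrasts a dense frame readout with an event-style readout. The (timestamp, x, y, polarity) tuple is a common convention for such events, not the exact output format of any specific sensor, and the threshold value is an assumption.

```python
# Illustrative sketch: frame-based vs. event-based (change-driven) readout.
import numpy as np

def frame_readout(frames):
    """Conventional sensor: every pixel is reported every frame."""
    return [f.copy() for f in frames]  # dense, mostly redundant data

def event_readout(frames, threshold=10):
    """Neuromorphic-style sensor: emit an event only where intensity
    changes by more than a threshold, so static scenes produce no data."""
    events = []
    prev = frames[0].astype(int)
    for t, f in enumerate(frames[1:], start=1):
        diff = f.astype(int) - prev
        ys, xs = np.nonzero(np.abs(diff) > threshold)
        for y, x in zip(ys, xs):
            polarity = 1 if diff[y, x] > 0 else -1
            events.append((t, x, y, polarity))   # sparse, change-driven
            prev[y, x] = f[y, x]
    return events

frames = [np.zeros((4, 4), dtype=np.uint8) for _ in range(5)]
frames[3][1, 2] = 200                            # one pixel changes once
print(len(event_readout(frames)))                # -> 2 events (ON, then OFF)
                                                 #    instead of 5 dense frames
```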
Interfacing of neuromorphic vision, auditory and olfactory sensors with digital neuromorphic circuits
The conventional von Neumann architecture imposes strict constraints on the development of intelligent adaptive systems. Its requirement for substantial computing power to process and analyse complex data makes such an approach impractical for implementing smart systems.
Neuromorphic engineering has produced promising results in applications such as electronic sensing, networking architectures and complex data processing. This interdisciplinary field takes inspiration from neurobiological architecture and emulates its characteristics using analogue Very Large Scale Integration (VLSI). The unconventional approach of exploiting the non-linear current characteristics of transistors has aided the development of low-power adaptive systems that can be used in intelligent systems. The neuromorphic approach is widely applied in electronic sensing, particularly in vision, auditory, tactile and olfactory sensors. While conventional sensors generate a huge amount of redundant output data, neuromorphic sensors implement the biological concept of spike-based output to generate sparse output data that corresponds to a particular sensing event. The operating principle applied in these sensors supports reduced power consumption with efficiency comparable to that of conventional sensors. Although neuromorphic sensors such as the Dynamic Vision Sensor (DVS), the Dynamic and Active pixel Vision Sensor (DAVIS) and AEREAR2 are steadily expanding their scope of application in real-world systems, the lack of spike-based data processing algorithms and the complexity of interfacing methods restrict their application in low-cost standalone autonomous systems.
This research addresses the issue of interfacing between neuromorphic sensors and digital neuromorphic circuits. Current interfacing methods for these sensors depend on computers for output data processing. This approach restricts the portability of the sensors, limits their application in standalone systems and increases the overall cost of such systems. The proposed methodology simplifies the interfacing of these sensors with digital neuromorphic processors by utilizing AER communication protocols and neuromorphic hardware developed under the Convolution AER Vision Architecture for Real-time (CAVIAR) project. The proposed interface is simulated using a Java model that emulates the typical spike-based output of a neuromorphic sensor, in this case an olfactory sensor, together with functions that process this data based on supervised learning. The successful implementation of this simulation suggests that the methodology is a practical solution that can be implemented in hardware. The Java simulation is compared with a similar model developed in Nengo, a standard large-scale neural simulation tool.
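To make the interfacing idea concrete, here is a minimal software analogy of an AER (Address-Event Representation) link with a toy olfactory front end. The class names, field names and queue-based "bus" are illustrative assumptions; the real CAVIAR-style hardware uses a parallel bus with request/acknowledge handshaking rather than a software queue.

```python
# Software analogy of an AER link: each spike is sent as the address of
# the channel that fired, in firing order.
from collections import deque
from dataclasses import dataclass

@dataclass
class AddressEvent:
    timestamp_us: int   # time of the spike
    address: int        # which sensor channel fired

class AERBus:
    def __init__(self):
        self._queue = deque()

    def send(self, event: AddressEvent):        # sensor side
        self._queue.append(event)

    def receive(self):                          # processor side
        return self._queue.popleft() if self._queue else None

# A toy "olfactory sensor" front end: a channel spikes when its reading
# crosses a per-channel threshold.
def sensor_scan(readings, thresholds, t_us, bus):
    for ch, (value, th) in enumerate(zip(readings, thresholds)):
        if value > th:
            bus.send(AddressEvent(t_us, ch))

bus = AERBus()
sensor_scan([0.2, 0.9, 0.7], [0.5, 0.5, 0.5], t_us=1000, bus=bus)
while (ev := bus.receive()) is not None:
    print(f"t={ev.timestamp_us}us channel={ev.address} fired")
```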
The successful completion of this research contributes towards expanding the scope of application of neuromorphic sensors in standalone intelligent systems. The simple interfacing method proposed in this thesis promotes the portability of these sensors by eliminating the dependency on computers for output data processing. The inclusion of a neuromorphic Field Programmable Gate Array (FPGA) board allows reconfiguration and deployment of learning algorithms to implement adaptable systems. These low-power systems can be widely applied in biosecurity and environmental monitoring. With this thesis, we suggest directions for future research in neuromorphic standalone systems based on neuromorphic olfaction.
Design of a silicon cochlea system with biologically faithful response
This paper presents the design and simulation results of a silicon cochlea system whose behavior closely matches that of the real cochlea. A cochlea filter bank based on an improved three-stage filter-cascade structure models the frequency-decomposition function of the basilar membrane; a filter tuning block models the adaptive response of the cochlea; and an asynchronous event-triggered spike codec serves as the system interface to back-end spiking neural networks. The simulation results show that the system has biologically faithful frequency response, impulse response, and active adaptation behavior; the system also outputs multiple band-pass channels of spikes from which the original sound input can be recovered. The proposed silicon cochlea is feasible for analog VLSI implementation, so it not only emulates the way sounds are preprocessed in human ears but can also match the compact physical size of a real cochlea.
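The cascade principle can be sketched in a few lines. The sketch below is a generic low-pass cascade with differenced taps, not the paper's improved three-stage circuit; the corner frequencies and filter order are assumptions chosen only for illustration.

```python
# Cochlea-style filter cascade: sound passes through a chain of low-pass
# sections with decreasing corner frequencies, and each channel output is
# the difference between successive taps, yielding band-pass responses
# ordered from high to low frequency like positions along the basilar membrane.
import numpy as np
from scipy.signal import butter, lfilter

fs = 16000
corner_freqs = [4000, 2000, 1000, 500, 250]      # descending, base -> apex

def cochlea_cascade(x, fs, corners):
    taps = [x]
    for fc in corners:
        b, a = butter(2, fc / (fs / 2))          # 2nd-order low-pass stage
        taps.append(lfilter(b, a, taps[-1]))
    # band-pass channels: what each successive stage removed
    return [taps[i] - taps[i + 1] for i in range(len(corners))]

t = np.arange(0, 0.05, 1 / fs)
tone = np.sin(2 * np.pi * 1200 * t)              # 1.2 kHz test tone
channels = cochlea_cascade(tone, fs, corner_freqs)
energies = [float(np.sum(c ** 2)) for c in channels]
print("strongest channel:", int(np.argmax(energies)))  # -> 2 (the 1-2 kHz band)
```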
Event-Driven Deep Neural Network Hardware System for Sensor Fusion
This paper presents a real-time multi-modal spiking Deep Neural Network (DNN) implemented on an FPGA platform. The hardware DNN system, called n-Minitaur, demonstrates a 4-fold improvement in computational speed over the previous DNN FPGA system. The proposed system directly interfaces two different event-based sensors: a Dynamic Vision Sensor (DVS) and a Dynamic Audio Sensor (DAS). The DNN for this bimodal hardware system is trained on the MNIST digit dataset and a set of unique audio tones for each digit. When tested on the spikes produced by each sensor alone, the classification accuracy is around 70% for DVS spikes generated in response to displayed MNIST images, and 60% for DAS spikes generated in response to noisy tones. The accuracy increases to 98% when spikes from both modalities are provided simultaneously. In addition, the system shows a low latency of only 5 ms.
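The fusion idea can be illustrated with a small sketch (this is not n-Minitaur's hardware pipeline): spikes from each sensor are binned into fixed-length count vectors for one time window, and the bimodal input is simply their concatenation, so a downstream classifier can lean on whichever modality is cleaner. Bin counts and the toy spike lists are assumptions.

```python
# Sketch of bimodal spike-feature fusion by concatenation.
import numpy as np

def spike_histogram(events, n_bins):
    """events: channel/address indices of spikes in one time window."""
    hist = np.zeros(n_bins)
    for addr in events:
        hist[addr % n_bins] += 1
    return hist / max(len(events), 1)            # normalize by spike count

dvs_spikes = [3, 3, 7, 12, 3]                    # toy visual events
das_spikes = [1, 1, 2]                           # toy auditory events

visual = spike_histogram(dvs_spikes, n_bins=16)
audio = spike_histogram(das_spikes, n_bins=8)
fused = np.concatenate([visual, audio])          # 24-dim bimodal feature
print(fused.shape)                               # (24,)
```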
Neuromorphic Sensory Integration for Combining Sound Source Localization and Collision Avoidance
Animals combine various sensory cues with previously acquired knowledge to safely travel towards a target destination. In close analogy to biological systems, we propose a neuromorphic system which decides, based on auditory and visual input, how to reach a sound source without collisions. The development of this sensory integration system, which identifies the shortest possible path, is a key achievement towards autonomous robotics. The proposed neuromorphic system comprises two event-based sensors (the eDVS for vision and the NAS for audition) and the SpiNNaker processor. Open-loop experiments were performed to evaluate the system's performance. In the presence of acoustic stimulation alone, the heading direction points to the direction of the sound source with a Pearson correlation coefficient of 0.89. When visual input is introduced into the network, the heading direction always points at the direction of null optical flow closest to the sound source. Hence, the sensory integration network is able to find the shortest path to the sound source while avoiding obstacles. This work shows that a simple, task-dependent mapping of sensory information can lead to highly complex and robust decisions.
Ministerio de Economía y Competitividad TEC2016-77785-
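A toy version of the decision rule just described might look as follows (the real system runs as a spiking network on SpiNNaker; the candidate headings, flow values and threshold here are assumptions): among headings with near-null optical flow, pick the one closest to the estimated sound direction.

```python
# Choose a heading: null optical flow marks obstacle-free directions.
import numpy as np

def choose_heading(headings_deg, optical_flow, sound_dir_deg, flow_eps=0.05):
    headings = np.asarray(headings_deg, dtype=float)
    flow = np.asarray(optical_flow, dtype=float)
    free = np.abs(flow) < flow_eps               # directions with ~null flow
    if not free.any():
        free = np.ones_like(flow, dtype=bool)    # fall back: all candidates
    candidates = headings[free]
    return candidates[np.argmin(np.abs(candidates - sound_dir_deg))]

headings = [-60, -30, 0, 30, 60]
flow =     [0.00, 0.40, 0.30, 0.01, 0.00]        # obstacles at -30 and 0 deg
print(choose_heading(headings, flow, sound_dir_deg=-10))  # -> 30.0
```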
T-NGA: Temporal Network Grafting Algorithm for Learning to Process Spiking Audio Sensor Events
Spiking silicon cochlea sensors encode sound as an asynchronous stream of spikes from different frequency channels. The lack of labeled training datasets for spiking cochleas makes it difficult to train deep neural networks on the outputs of these sensors. This work proposes a self-supervised method called the Temporal Network Grafting Algorithm (T-NGA), which grafts a recurrent network pretrained on spectrogram features so that the network works with cochlea event features. T-NGA training requires only temporally aligned audio spectrograms and event features. Our experiments show that the accuracy of the grafted network was similar to that of a supervised network trained from scratch on a speech recognition task using events from a software spiking cochlea model. Despite the circuit non-idealities of the spiking silicon cochlea, the grafted network's accuracy on the silicon cochlea spike recordings was only about 5% lower than the supervised network's accuracy on the N-TIDIGITS18 dataset. T-NGA can train networks to process spiking audio sensor events in the absence of large labeled spike datasets.
Comment: 5 pages, 4 figures; accepted at IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, 202
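A conceptual sketch of the grafting step follows; the GRU encoders, feature dimensions and loss are assumptions for illustration, not the paper's exact networks. The new event-feature encoder is trained so that its outputs match the frozen hidden features that the spectrogram-pretrained network produces on time-aligned audio, which is why no labels are needed.

```python
# Self-supervised grafting: regress the student's features onto a frozen teacher.
import torch
import torch.nn as nn

feat_dim, hid = 64, 128
pretrained_encoder = nn.GRU(feat_dim, hid, batch_first=True)   # spectrogram side
event_encoder = nn.GRU(feat_dim, hid, batch_first=True)        # cochlea-event side

for p in pretrained_encoder.parameters():
    p.requires_grad = False                      # teacher stays frozen

opt = torch.optim.Adam(event_encoder.parameters(), lr=1e-3)
mse = nn.MSELoss()

# One toy training step on temporally aligned pairs (batch, time, feat):
spectrogram_feats = torch.randn(8, 100, feat_dim)
event_feats = torch.randn(8, 100, feat_dim)      # aligned cochlea events

opt.zero_grad()
with torch.no_grad():
    target, _ = pretrained_encoder(spectrogram_feats)
student, _ = event_encoder(event_feats)
loss = mse(student, target)                      # no labels needed
loss.backward()
opt.step()
print(float(loss))
```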
An event-driven probabilistic model of sound source localization using cochlea spikes
This work presents a probabilistic model that estimates the location of sound sources using the output spikes of a silicon cochlea such as the Dynamic Audio Sensor. Unlike previous work, which estimated the source locations directly from the interaural time differences (ITDs) extracted from the timing of the cochlea spikes, the spikes are instead used to support a distribution model of the ITDs representing possible locations of sound sources. Results on noisy single-speaker recordings show average accuracies of approximately 80% in detecting the correct source location and an estimation lag of <100 ms.
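The core idea can be sketched as follows (the azimuth grid, coincidence window and the crude linear ITD-to-azimuth mapping are all illustrative assumptions, not the paper's model): each pair of near-coincident left/right cochlea spikes yields one ITD sample, and rather than trusting any single sample, the spikes accumulate evidence in a distribution over candidate directions.

```python
# Accumulate ITD evidence from spike pairs into a distribution over azimuths.
import numpy as np

azimuth_bins = np.linspace(-90, 90, 37)          # candidate directions (deg)
MAX_ITD_US = 700                                 # ~human head width, illustrative

def itd_to_azimuth(itd_us):
    """Crude linear ITD -> azimuth mapping, just for this sketch."""
    return np.clip(itd_us / MAX_ITD_US, -1, 1) * 90

def localize(left_spikes_us, right_spikes_us, window_us=1000):
    evidence = np.zeros(len(azimuth_bins))
    for tl in left_spikes_us:
        for tr in right_spikes_us:
            itd = tl - tr
            if abs(itd) <= window_us:            # near-coincident pair
                az = itd_to_azimuth(itd)
                evidence[np.argmin(np.abs(azimuth_bins - az))] += 1
    posterior = evidence / max(evidence.sum(), 1)
    return azimuth_bins[np.argmax(posterior)], posterior

left = [1000, 2500, 4000]                        # spike times (microseconds)
right = [1300, 2800, 4310]                       # lagging: source on the left
est, _ = localize(left, right)
print(f"estimated azimuth: {est:.0f} deg")       # -> about -40 deg
```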
Real-time neuro-inspired sound source localization and tracking architecture applied to a robotic platform
This paper proposes a real-time sound source localization and tracking architecture based on the ability of the mammalian auditory system to use the interaural intensity difference (IID). We used an innovative binaural Neuromorphic Auditory Sensor to obtain spike rates similar to those generated by the inner hair cells of the human auditory system. The design of the component that obtains the IID is inspired by the lateral superior olive. The spike stream that represents the IID is used to turn a robotic platform towards the sound source direction. The architecture was implemented on FPGA devices using general-purpose FPGA resources and was tested with pure tones (1 kHz, 2.5 kHz and 5 kHz) with an average error of 2.32°. Our architecture demonstrates a potential practical application of sound localization for robots, and can be used to test paradigms for sound localization in the mammalian brain.
Ministerio de Economía y Competitividad TEC2016-77785-
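A toy model of the lateral-superior-olive-style comparison is sketched below; the gain and the rate-to-angle mapping are assumptions, not the paper's FPGA design. The IID is read out as the difference in spike rate between the two ears, and its sign and magnitude drive the turn.

```python
# Map an interaural spike-rate difference to a turn command.
import numpy as np

def iid_turn_angle(left_rate_hz, right_rate_hz, deg_per_log_unit=30.0):
    """Positive angle = turn right (sound louder on the right)."""
    iid = np.log10(max(right_rate_hz, 1e-6)) - np.log10(max(left_rate_hz, 1e-6))
    return deg_per_log_unit * iid

# Sound source to the robot's right: right cochlea channels fire faster.
print(f"turn by {iid_turn_angle(left_rate_hz=80.0, right_rate_hz=160.0):.1f} deg")
# -> turn by 9.0 deg (log10(2) * 30)
```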
Lip Reading Deep Network Exploiting Multi-Modal Spiking Visual and Auditory Sensors
This work presents a lip reading deep neural network that fuses the asynchronous spiking outputs of two bio-inspired silicon multimodal sensors: the Dynamic Vision Sensor (DVS) and the Dynamic Audio Sensor (DAS). The fusion network is tested on the GRID audio-visual lip reading dataset. Classification is carried out using event-based features generated from the spikes of the DVS and DAS. Networks are trained separately on the two modalities and also jointly trained on both modalities. The jointly trained network, when tested on DVS spike frames alone, showed a relative increase in accuracy of around 23% over the single DVS-modality network.
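One way to picture testing a jointly trained bimodal network on a single modality is to feed zeros to the absent branch, as in the sketch below; the layer sizes and two-branch architecture are illustrative assumptions, not the paper's network.

```python
# Evaluate a jointly trained fusion network with one modality missing.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, vis_dim=256, aud_dim=64, n_classes=10):
        super().__init__()
        self.vis = nn.Linear(vis_dim, 128)       # visual (DVS) branch
        self.aud = nn.Linear(aud_dim, 128)       # audio (DAS) branch
        self.head = nn.Linear(256, n_classes)

    def forward(self, vis_feats, aud_feats):
        h = torch.cat([torch.relu(self.vis(vis_feats)),
                       torch.relu(self.aud(aud_feats))], dim=-1)
        return self.head(h)

net = FusionNet()
vis = torch.randn(1, 256)                        # DVS event features
aud_missing = torch.zeros(1, 64)                 # audio absent at test time
print(net(vis, aud_missing).shape)               # (1, 10) class scores
```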
Localization of sound sources: a systematic review
Sound localization is a broad field of research with many useful applications, including communication, radar, medical aids, and speech enhancement, to name but a few. Many different methods have been presented in this field in recent years, and various types of microphone arrays serve the purpose of sensing the incoming sound. This paper presents an overview of the importance of sound localization in different applications, along with the uses and limitations of ad-hoc microphones compared with other microphone configurations, and approaches for overcoming these limitations. A detailed explanation is given of some of the existing methods for sound localization using microphone arrays in the recent literature. Existing methods are studied in a comparative fashion, along with the factors that influence the choice of one method over the others. This review forms a basis for choosing the method best suited to our use.