10 research outputs found
Selective Attention in Multi-Chip Address-Event Systems
Selective attention is the strategy used by biological systems to cope with the inherent limits in their available computational resources, in order to efficiently process sensory information. The same strategy can be used in artificial systems that have to process vast amounts of sensory data with limited resources. In this paper we present a neuromorphic VLSI device, the “Selective Attention Chip” (SAC), which can be used to implement selective attention models in multi-chip address-event systems. We also describe a real-time sensory-motor system, which integrates the SAC with a dynamic vision sensor and a robotic actuator. We present experimental results from each component in the system, and demonstrate how the complete system implements a real-time stimulus-driven selective attention model.
Wireless Sensor Technologies and Applications
Recent years have witnessed tremendous advances in the design and applications of wirelessly networked and embedded sensors. Wireless sensor nodes are typically low-cost, low-power, small devices equipped with limited sensing, data processing and wireless communication capabilities, as well as power supplies. They leverage the concept of wireless sensor networks (WSNs), in which a large (possibly huge) number of collaborative sensor nodes could be deployed. As an outcome of the convergence of micro-electro-mechanical systems (MEMS) technology, wireless communications, and digital electronics, WSNs represent a significant improvement over traditional sensors. In fact, the rapid evolution of WSN technology has accelerated the development and deployment of various novel types of wireless sensors, e.g., multimedia sensors. Fulfilling Moore’s law, wireless sensors are becoming smaller and cheaper, and at the same time more powerful and ubiquitous. [...]
FPGA Implementation of An Event-driven Saliency-based Selective Attention Model
Artificial vision systems of autonomous agents face very difficult challenges, as their vision sensors are required to transmit vast amounts of information to the processing stages, and to process it in real-time. One first approach to reduce data transmission is to use event-based vision sensors, whose pixels produce events only when there are changes in the input. However, even for event-based vision, transmission and processing of visual data can be quite onerous. Currently, these challenges are solved by using high-speed communication links and powerful machine vision processing hardware. But if resources are limited, instead of processing all the sensory information in parallel, an effective strategyy is to divide the visual field into several small sub-regions, choose the region of highest saliency, process it, and serially shift the focus of attention to regions of decreasing saliency. This strategy, also commonly used by the visual system of many animals, is typically referred to as "selective attention". Here we present a digital architecture implementing a saliency-based selective visual attention model for processing asynchronous event-based sensory information received from a DVS. For ease of prototyping, we use a standard digital design flow and map the architecture onto an FPGA. We describe the architecture block diagram, highlighting the efficient use of the available hardware resources, as demonstrated through experimental results obtained with a hardware setup in which the FPGA is interfaced with the DVS camera.
Comment: 5 pages, 5 figures
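The attend-and-inhibit loop described in this abstract (bin events into sub-regions, pick the most salient region, then serially shift attention) can be sketched in a few lines of Python. This is a toy illustration, not the paper's FPGA architecture: the grid size, the event-count saliency measure, and the inhibition-of-return scheme are all simplifying assumptions.

```python
def attend_serially(events, grid=(4, 4), sensor=(128, 128), k=3):
    """Toy saliency-based serial attention: bin DVS events (x, y) into a
    coarse grid, then visit sub-regions in order of decreasing saliency
    (here simply the event count), suppressing each visited region."""
    gh, gw = grid
    sal = {}
    for x, y in events:
        # Map the pixel coordinate to its coarse sub-region.
        cell = (y * gh // sensor[1], x * gw // sensor[0])
        sal[cell] = sal.get(cell, 0) + 1
    focus = []
    for _ in range(min(k, len(sal))):
        best = max(sal, key=sal.get)   # region of highest saliency
        focus.append(best)
        del sal[best]                  # inhibition of return
    return focus
```

Suppressing each attended region before re-selecting is what turns a parallel saliency map into a serial scan of regions of decreasing saliency.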
Event-driven visual attention for the humanoid robot iCub.
Fast reaction to sudden and potentially interesting stimuli is a crucial feature for safe and reliable interaction with the environment. Here we present a biologically inspired attention system developed for the humanoid robot iCub. It is based on input from unconventional event-driven vision sensors and an efficient computational method. The resulting system shows low latency and fast determination of the location of the focus of attention. Its performance is benchmarked against a state-of-the-art artificial attention system used in robotics. Results show that the proposed system is two orders of magnitude faster than the benchmark in selecting a new stimulus to attend to.
Hardware-Amenable Structural Learning for Spike-based Pattern Classification using a Simple Model of Active Dendrites
This paper presents a spike-based model which employs neurons with functionally distinct dendritic compartments for classifying high-dimensional binary patterns. The synaptic inputs arriving on each dendritic subunit are nonlinearly processed before being linearly integrated at the soma, giving the neuron the capacity to perform a large number of input-output mappings. The model utilizes sparse synaptic connectivity, where each synapse takes a binary value. The optimal connection pattern of a neuron is learned by using a simple, hardware-friendly, margin-enhancing learning algorithm inspired by the mechanism of structural plasticity in biological neurons. The learning algorithm groups correlated synaptic inputs on the same dendritic branch. Since the learning results in modified connection patterns, it can be incorporated into current event-based neuromorphic systems with little overhead. This work also presents a branch-specific spike-based version of this structural plasticity rule. The proposed model is evaluated on benchmark binary classification problems and its performance is compared against that achieved using Support Vector Machine (SVM) and Extreme Learning Machine (ELM) techniques. Our proposed method attains comparable performance while utilizing 10 to 50% less computational resources than the other reported techniques.
Comment: Accepted for publication in Neural Computation
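The two-layer neuron described in this abstract (per-branch nonlinearity, linear integration at the soma) can be expressed very compactly. A minimal sketch, assuming a quadratic branch nonlinearity and hand-picked connectivity for illustration; the paper's structural-plasticity learning rule, which would choose which inputs share a branch, is not implemented here.

```python
def dendritic_neuron(x, branch_conn, g=lambda s: s * s):
    """Two-layer dendritic neuron model: each branch linearly sums its
    binary synaptic inputs, applies a nonlinearity g, and the soma
    linearly integrates the branch outputs.

    x           : binary input pattern (list of 0/1)
    branch_conn : list of branches, each a list of input indices
                  (standing in for the learned sparse binary connectivity)
    g           : branch nonlinearity (quadratic here, an illustrative choice)
    """
    return sum(g(sum(x[i] for i in branch)) for branch in branch_conn)
```

Because g is nonlinear, placing two correlated inputs on the same branch yields a larger somatic response than splitting them across branches, which is why grouping correlated inputs increases the neuron's discriminative capacity.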
Selective Change Driven Imaging: A Biomimetic Visual Sensing Strategy
Selective Change Driven (SCD) Vision is a biologically inspired strategy for acquiring, transmitting and processing images that significantly speeds up image sensing. SCD vision is based on a new CMOS image sensor which delivers the pixels that have changed since the last time they were read out, ordered by the absolute magnitude of their change. Moreover, as part of this biomimetic approach, the traditional full-frame processing hardware and programming methodology has to be replaced by a new processing paradigm based on pixel-by-pixel processing in a data-flow manner, instead of full-frame image processing.
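The readout behavior described here can be sketched in software. A toy model, not the CMOS sensor's circuitry: pixels are a flat list, and the number of pixels delivered per readout cycle is an assumed parameter.

```python
def scd_readout(prev, curr, n):
    """Toy Selective Change Driven readout: return the n pixels whose
    values changed most (in absolute magnitude) since the last readout,
    largest change first, as (index, new_value) pairs suitable for
    pixel-by-pixel data-flow processing."""
    changed = sorted((i for i in range(len(curr)) if curr[i] != prev[i]),
                     key=lambda i: abs(curr[i] - prev[i]), reverse=True)
    return [(i, curr[i]) for i in changed[:n]]
```

Delivering only the largest changes first is what lets downstream processing react to the most informative pixels without waiting for a full frame.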
Synthesizing cognition in neuromorphic electronic systems
The quest to implement intelligent processing in electronic neuromorphic systems lacks methods for achieving reliable behavioral dynamics on substrates of inherently imprecise and noisy neurons. Here we report a solution to this problem that involves first mapping an unreliable hardware layer of spiking silicon neurons into an abstract computational layer composed of generic reliable subnetworks of model neurons and then composing the target behavioral dynamics as a “soft state machine” running on these reliable subnets. In the first step, the neural networks of the abstract layer are realized on the hardware substrate by mapping the neuron circuit bias voltages to the model parameters. This mapping is obtained by an automatic method in which the electronic circuit biases are calibrated against the model parameters by a series of population activity measurements. The abstract computational layer is formed by configuring neural networks as generic soft winner-take-all subnetworks that provide reliable processing by virtue of their active gain, signal restoration, and multistability. The necessary states and transitions of the desired high-level behavior are then easily embedded in the computational layer by introducing only sparse connections between some neurons of the various subnets. We demonstrate this synthesis method for a neuromorphic sensory agent that performs real-time context-dependent classification of motion patterns observed by a silicon retina.
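The soft winner-take-all subnetworks at the heart of this method can be illustrated with a simple rate-based recurrent loop. A minimal sketch under toy assumptions (discrete-time Euler updates, hand-picked gain and inhibition constants); it is not the silicon network, but it shows the selective amplification that gives these subnets their signal-restoration property.

```python
def soft_wta(inputs, steps=50, self_exc=1.2, inhib=0.5, dt=0.1):
    """Toy rate-based soft winner-take-all: each unit integrates its
    external input plus recurrent self-excitation, minus inhibition
    proportional to the total population activity; rates clip at zero."""
    r = [0.0] * len(inputs)
    for _ in range(steps):
        total = sum(r)  # shared inhibitory feedback
        r = [max(0.0, ri + dt * (inp + self_exc * ri - inhib * total - ri))
             for ri, inp in zip(r, inputs)]
    return r
```

After the dynamics settle, the unit with the strongest input dominates while the shared inhibition suppresses its competitors, which is the gain-plus-restoration behavior the abstract computational layer relies on.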
A Bidirectional Brain-Machine Interface Featuring a Neuromorphic Hardware Decoder.
Bidirectional brain-machine interfaces (BMIs) establish a two-way direct communication link between the brain and the external world. A decoder translates recorded neural activity into motor commands and an encoder delivers sensory information collected from the environment directly to the brain, creating a closed-loop system. These two modules are typically integrated in bulky external devices. However, the clinical support of patients with severe motor and sensory deficits requires compact, low-power, and fully implantable systems that can decode neural signals to control external devices. As a first step toward this goal, we developed a modular bidirectional BMI setup that uses a compact neuromorphic processor as a decoder. On this chip we implemented a network of spiking neurons built using its ultra-low-power mixed-signal analog/digital circuits. On-chip, on-line spike-timing-dependent plasticity synapse circuits enabled the network to learn to decode neural signals recorded from the brain into motor outputs controlling the movements of an external device. The modularity of the BMI allowed us to tune the individual components of the setup without modifying the whole system. In this paper, we present the features of this modular BMI and describe how we configured the network of spiking neuron circuits to implement the decoder and to coordinate it with the encoder in an experimental BMI paradigm that bidirectionally connects the brain of an anesthetized rat with an external object. We show that the chip learned the decoding task correctly, allowing the interfaced brain to control the object's trajectories robustly. Based on our demonstration, we propose that neuromorphic technology is mature enough for the development of BMI modules that are sufficiently low-power and compact, while being highly computationally powerful and adaptive.
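The spike-timing-dependent plasticity that drives the on-chip decoder's learning follows a well-known pairing rule. A minimal pair-based software sketch; the amplitudes and time constant below are illustrative textbook-style values, not the chip's circuit parameters.

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes
    the postsynaptic one (causal pairing), depress otherwise. The update
    magnitude decays exponentially with the spike-time difference, and
    the weight is clipped to [w_min, w_max]."""
    dt = t_post - t_pre                    # > 0: pre fired before post
    if dt > 0:
        w += a_plus * math.exp(-dt / tau)  # causal pairing -> LTP
    else:
        w -= a_minus * math.exp(dt / tau)  # anti-causal pairing -> LTD
    return min(w_max, max(w_min, w))
```

Repeated causal pairings strengthen the synapses that predict the desired motor output, which is how a plastic spiking network can learn a decoding map on-line.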