147 research outputs found

    Event-driven visual attention for the humanoid robot iCub.

    Get PDF
    Fast reaction to sudden and potentially interesting stimuli is a crucial feature for safe and reliable interaction with the environment. Here we present a biologically inspired attention system developed for the humanoid robot iCub. It is based on input from unconventional event-driven vision sensors and an efficient computational method. The resulting system shows low latency and fast determination of the location of the focus of attention. The performance is benchmarked against a state-of-the-art artificial attention system used in robotics. Results show that the proposed system is two orders of magnitude faster than the benchmark in selecting a new stimulus to attend.

    Analog VLSI-Based Modeling of the Primate Oculomotor System

    Get PDF
    One way to understand a neurobiological system is by building a simulacrum that replicates its behavior in real time using similar constraints. Analog very large-scale integrated (VLSI) electronic circuit technology provides such an enabling technology. We here describe a neuromorphic system that is part of a long-term effort to understand the primate oculomotor system. It requires both fast sensory processing and fast motor control to interact with the world. A one-dimensional hardware model of the primate eye has been built that simulates the physical dynamics of the biological system. It is driven by two different analog VLSI chips, one mimicking cortical visual processing for target selection and tracking and another modeling brain stem circuits that drive the eye muscles. Our oculomotor plant demonstrates both smooth pursuit movements, driven by a retinal velocity error signal, and saccadic eye movements, controlled by retinal position error, and can reproduce several behavioral, stimulation, lesion, and adaptation experiments performed on primates.
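The division of labor described above, velocity error driving smooth pursuit and position error triggering saccades, can be sketched as a toy discrete-time controller. This is a hypothetical illustration, not the analog VLSI implementation: `saccade_threshold` and `pursuit_gain` are invented parameters chosen only to show the gating logic.

```python
def oculomotor_step(eye_pos, eye_vel, target_pos, target_vel, dt=0.001,
                    saccade_threshold=0.2, pursuit_gain=0.9):
    """Toy analogue of the described oculomotor plant: a large retinal
    *position* error triggers a ballistic saccade to the target, while a
    small one leaves smooth pursuit to cancel the retinal *velocity* error."""
    position_error = target_pos - eye_pos
    if abs(position_error) > saccade_threshold:
        return target_pos, target_vel, "saccade"   # ballistic jump to target
    velocity_error = target_vel - eye_vel
    eye_vel += pursuit_gain * velocity_error       # pursuit: match target velocity
    return eye_pos + eye_vel * dt, eye_vel, "pursuit"
```

A usage sketch: a target far from the fovea produces a saccade, while a nearby moving target is tracked by adjusting eye velocity.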

    FPGA Implementation of An Event-driven Saliency-based Selective Attention Model

    Full text link

    Artificial vision systems of autonomous agents face very difficult challenges, as their vision sensors are required to transmit vast amounts of information to the processing stages and to process it in real time. A first approach to reducing data transmission is to use event-based vision sensors, whose pixels produce events only when there are changes in the input. However, even for event-based vision, transmission and processing of visual data can be quite onerous. Currently, these challenges are solved by using high-speed communication links and powerful machine-vision processing hardware. But if resources are limited, instead of processing all the sensory information in parallel, an effective strategy is to divide the visual field into several small sub-regions, choose the region of highest saliency, process it, and serially shift the focus of attention to regions of decreasing saliency. This strategy, also commonly used by the visual system of many animals, is typically referred to as "selective attention". Here we present a digital architecture implementing a saliency-based selective visual attention model for processing asynchronous event-based sensory information received from a DVS. For ease of prototyping, we use a standard digital design flow and map the architecture onto an FPGA. We describe the architecture block diagram, highlighting the efficient use of the available hardware resources, demonstrated through experimental results on a hardware setup in which the FPGA is interfaced with the DVS camera. Comment: 5 pages, 5 figures.
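The serial sub-region strategy can be sketched as a small software model. This is a hypothetical illustration, not the FPGA architecture from the paper: the grid size, sensor resolution, and `(x, y, timestamp, polarity)` event layout are assumptions, and event counts stand in for the saliency measure.

```python
import numpy as np

def selective_attention(events, grid=(8, 8), sensor=(128, 128), n_shifts=3):
    """Accumulate DVS events into a coarse saliency map over sub-regions,
    then serially select the most salient region with inhibition of return."""
    gh, gw = grid
    sh, sw = sensor
    saliency = np.zeros(grid)
    # Each event is (x, y, timestamp, polarity); activity count acts as saliency.
    for x, y, _, _ in events:
        saliency[y * gh // sh, x * gw // sw] += 1
    foci = []
    for _ in range(n_shifts):
        r, c = np.unravel_index(np.argmax(saliency), saliency.shape)
        foci.append((r, c))
        saliency[r, c] = -np.inf  # inhibition of return: suppress attended region
    return foci
```

Feeding in a burst of events at three locations yields a scan path that visits them in order of decreasing activity, mirroring the serial shifting of attention described above.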

    SpikeSEG: Spiking segmentation via STDP saliency mapping

    Get PDF
    Taking inspiration from the structure and behaviour of the human visual system, and using the Transposed Convolution and Saliency Mapping methods of Convolutional Neural Networks (CNN), a spiking event-based image segmentation algorithm, SpikeSEG, is proposed. The approach makes use of both spike-based imaging and spike-based processing, where the images are either standard images converted to spiking images or are generated directly from a neuromorphic event-driven sensor, and are then processed using a spiking fully convolutional neural network. The spiking segmentation method uses the spike activations through time within the network to trace any outputs from saliency maps back to the exact pixel location. This not only gives exact pixel locations for spiking segmentation, but does so with low latency and computational overhead. SpikeSEG is the first spiking event-based segmentation network and, over three experimental tests, achieves promising results with 96% accuracy overall and a 74% mean intersection over union for the segmentation, all within an event-by-event-based framework.
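The trace-back idea, following a winning saliency unit back through the network to the input pixels that drove it, can be shown with a toy single-layer model. This is a hypothetical sketch, not the SpikeSEG network: one dense convolution stands in for the spiking fully convolutional network, and the receptive-field intersection stands in for the spike-activation trace through time.

```python
import numpy as np

def traceback_saliency(input_spikes, kernel, stride=1):
    """Toy SpikeSEG-style trace-back: convolve a binary spike image with a
    kernel, pick the peak saliency unit, and recover the exact input pixels
    that drove it (its receptive field intersected with the spiking pixels)."""
    kh, kw = kernel.shape
    H, W = input_spikes.shape
    oh, ow = (H - kh) // stride + 1, (W - kw) // stride + 1
    act = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            win = input_spikes[i*stride:i*stride+kh, j*stride:j*stride+kw]
            act[i, j] = (win * kernel).sum()
    i, j = np.unravel_index(act.argmax(), act.shape)
    # Trace back: flag only the pixels inside the winning receptive field
    # that actually spiked, giving an exact pixel-level segmentation mask.
    mask = np.zeros_like(input_spikes, dtype=bool)
    mask[i*stride:i*stride+kh, j*stride:j*stride+kw] = True
    return act, mask & (input_spikes > 0)
```

With a 2x2 block of spikes and a matching 2x2 kernel, the recovered mask covers exactly the spiking pixels, illustrating how trace-back yields pixel-exact segmentation.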

    Synthesizing cognition in neuromorphic electronic systems

    Get PDF
    The quest to implement intelligent processing in electronic neuromorphic systems lacks methods for achieving reliable behavioral dynamics on substrates of inherently imprecise and noisy neurons. Here we report a solution to this problem that involves first mapping an unreliable hardware layer of spiking silicon neurons into an abstract computational layer composed of generic reliable subnetworks of model neurons and then composing the target behavioral dynamics as a “soft state machine” running on these reliable subnets. In the first step, the neural networks of the abstract layer are realized on the hardware substrate by mapping the neuron circuit bias voltages to the model parameters. This mapping is obtained by an automatic method in which the electronic circuit biases are calibrated against the model parameters by a series of population activity measurements. The abstract computational layer is formed by configuring neural networks as generic soft winner-take-all subnetworks that provide reliable processing by virtue of their active gain, signal restoration, and multistability. The necessary states and transitions of the desired high-level behavior are then easily embedded in the computational layer by introducing only sparse connections between some neurons of the various subnets. We demonstrate this synthesis method for a neuromorphic sensory agent that performs real-time context-dependent classification of motion patterns observed by a silicon retina.
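The soft winner-take-all building block can be sketched as a minimal rate-based model: self-excitation plus shared inhibition amplifies the strongest input and suppresses the rest, providing the active gain and signal restoration mentioned above. The gain and inhibition values here are illustrative assumptions, not calibrated silicon-neuron parameters.

```python
import numpy as np

def soft_wta(inputs, steps=500, dt=0.1, exc=1.2, inh=1.0):
    """Rate-based soft winner-take-all: each unit receives its external
    input plus self-excitation, minus inhibition proportional to the total
    population activity; rates are rectified at zero."""
    r = np.zeros_like(inputs, dtype=float)
    for _ in range(steps):
        drive = inputs + exc * r - inh * r.sum()
        r += dt * (-r + np.maximum(drive, 0.0))  # rectified Euler update
    return r
```

Even with closely matched inputs, the recurrent competition settles on a single clear winner, which is the multistable, signal-restoring behavior the paper builds its reliable subnetworks on.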

    A Real-Time, Event Driven Neuromorphic System for Goal-Directed Attentional Selection

    Get PDF
    Computation with spiking neurons takes advantage of the abstraction of action potentials into streams of stereotypical events, which encode information through their timing. This approach both reduces power consumption and alleviates communication bottlenecks. A number of such spiking custom mixed-signal address event representation (AER) chips have been developed in recent years. In this paper, we present i) a flexible event-driven platform consisting of the integration of a visual AER sensor and the SpiNNaker system, a programmable massively parallel digital architecture oriented to the simulation of spiking neural networks; ii) the implementation of a neural network for feature-based attentional selection on this platform.

    PCA-RECT: An Energy-efficient Object Detection Approach for Event Cameras

    Full text link
    We present the first purely event-based, energy-efficient approach for object detection and categorization using an event camera. Compared to traditional frame-based cameras, choosing event cameras results in high temporal resolution (order of microseconds), low power consumption (few hundred mW) and wide dynamic range (120 dB) as attractive properties. However, event-based object recognition systems are far behind their frame-based counterparts in terms of accuracy. To this end, this paper presents an event-based feature extraction method devised by accumulating local activity across the image frame and then applying principal component analysis (PCA) to the normalized neighborhood region. Subsequently, we propose a backtracking-free k-d tree mechanism for efficient feature matching by taking advantage of the low dimensionality of the feature representation. Additionally, the proposed k-d tree mechanism allows for feature selection to obtain a lower-dimensional dictionary representation when hardware resources are limited to implement dimensionality reduction. Consequently, the proposed system can be realized on a field-programmable gate array (FPGA) device, leading to a high performance-over-resource ratio. The proposed system is tested on real-world event-based datasets for object categorization, showing superior classification performance and relevance to state-of-the-art algorithms. Additionally, we verified the object detection method and real-time FPGA performance in lab settings under non-controlled illumination conditions with limited training data and ground truth annotations. Comment: Accepted in ACCV 2018 Workshops, to appear.
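The extract-then-match pipeline can be sketched in a few lines of NumPy. This is a hypothetical simplification, not the paper's implementation: the normalization, descriptor dimensionality, and dictionary are assumptions, and a brute-force nearest-neighbour search stands in for the backtracking-free k-d tree used on the FPGA.

```python
import numpy as np

def extract_and_match(patches, dictionary, n_components=4):
    """Sketch of a PCA-RECT-style pipeline: normalize local event-count
    neighborhoods, reduce them with PCA, and assign each low-dimensional
    descriptor to its nearest dictionary atom."""
    X = patches.astype(float)
    X /= np.linalg.norm(X, axis=1, keepdims=True) + 1e-9  # normalize regions
    X -= X.mean(axis=0)                                   # center for PCA
    # PCA via SVD: principal axes are the leading rows of Vt.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    desc = X @ Vt[:n_components].T            # low-dimensional descriptors
    # Nearest-neighbour assignment to dictionary atoms (k-d tree stand-in).
    d2 = ((desc[:, None, :] - dictionary[None, :, :]) ** 2).sum(-1)
    return desc, d2.argmin(axis=1)
```

The low dimensionality of `desc` is what makes a backtracking-free k-d tree traversal attractive in hardware: with few components, a single root-to-leaf descent already lands near the true nearest atom.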