
    A bio-inspired image coder with temporal scalability

    We present a novel bio-inspired and dynamic coding scheme for static images. Our coder aims at reproducing the main steps of visual stimulus processing in the mammalian retina, taking into account its time behavior. The main novelty of this work is to show how to exploit the time behavior of the retina cells to ensure, in a simple way, scalability and bit allocation. To do so, our main source of inspiration is the biologically plausible retina model called Virtual Retina. Following a similar structure, our model has two stages. The first stage is an image transform, performed by the outer layers of the retina; here it is modelled by filtering the image with a bank of differences of Gaussians with time-delays. The second stage is a time-dependent analog-to-digital conversion, performed by the inner layers of the retina. Thanks to its conception, our coder enables scalability and bit allocation across time. Also, our decoded images do not show annoying artefacts such as ringing and block effects. As a whole, this article shows how to capture the main properties of a biological system, here the retina, in order to design a new efficient coder.
    Comment: 12 pages; Advanced Concepts for Intelligent Vision Systems (ACIVS 2011)
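    To make the two-stage structure concrete, here is a minimal, hypothetical Python sketch (not the authors' code) of a bank of difference-of-Gaussians filters whose subbands are released progressively over time, coarse scales first; the sigma values, the centre/surround ratio and the release order are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_bank(image, sigmas=(1.0, 2.0, 4.0, 8.0), ratio=1.6):
    """Bank of difference-of-Gaussians (DoG) filters.

    Returns one band-pass subband per centre sigma. The sigma values and
    the centre/surround ratio are illustrative assumptions.
    """
    return [gaussian_filter(image, s) - gaussian_filter(image, ratio * s)
            for s in sigmas]

def progressive_stream(subbands):
    """Release subbands one per 'time step', coarse scales first,
    mimicking time-delayed coefficients that provide scalability."""
    for t, band in enumerate(reversed(subbands)):
        yield t, band

if __name__ == "__main__":
    img = np.random.rand(64, 64)              # stand-in for a static image
    partial = np.zeros_like(img)
    for t, band in progressive_stream(dog_bank(img)):
        partial += band                        # crude progressive build-up
        print(f"t={t}: accumulated energy {np.abs(partial).sum():.2f}")
```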

    Synthetic retina for AER systems development

    Neuromorphic engineering tries to mimic biology in information processing. Address-Event Representation (AER) is a neuromorphic communication protocol for spiking neurons between different layers. AER bio-inspired image sensors are called "retinas". These sensors measure visual information directly from the scene, without relying on frames, and generate the corresponding events. In this paper we provide an alternative to these image sensors, based on a cheap FPGA, that takes images provided by an analog video source (composite video signal), digitizes them, and generates AER streams for testing purposes.
    Funding: Junta de Andalucía P06-TIC-01417; Ministerio de Educación y Ciencia TEC2006-11730-C03-0
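    As a rough software analogue (not the FPGA design described above), the following hypothetical Python sketch shows how intensity changes between two digitized video frames could be turned into a stream of address events; the threshold and the address packing scheme are assumptions.

```python
import numpy as np

def frame_to_aer(prev_frame, frame, threshold=15):
    """Toy model of AER event generation from two digitized frames.

    Pixels whose intensity changed by more than `threshold` emit an event
    (x, y, polarity). The threshold value is an assumption.
    """
    diff = frame.astype(np.int16) - prev_frame.astype(np.int16)
    ys, xs = np.nonzero(np.abs(diff) > threshold)
    return [(int(x), int(y), 1 if diff[y, x] > 0 else 0)
            for x, y in zip(xs, ys)]

def pack_address(x, y, polarity, width=128):
    """Pack one event into a single address word (illustrative packing)."""
    return ((y * width + x) << 1) | polarity

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f0 = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
    f1 = f0.astype(np.int16)
    f1[10:20, 30:40] += 60                    # simulate a bright moving patch
    f1 = np.clip(f1, 0, 255).astype(np.uint8)
    events = frame_to_aer(f0, f1)
    print(len(events), "events; first word:",
          pack_address(*events[0]) if events else None)
```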

    Real-time motor rotation frequency detection with event-based visual and spike-based auditory AER sensory integration for FPGA

    Multisensory integration is commonly used in various robotic areas to collect more environmental information using different and complementary types of sensors. Neuromorphic engineers mimic the behavior of biological systems to improve system performance, solving engineering problems with low power consumption. This work presents a neuromorphic sensory integration scenario for measuring the rotation frequency of a motor using an AER DVS128 retina chip (Dynamic Vision Sensor) and a completely event-based stereo auditory system on an FPGA. Both of them transmit information with Address-Event Representation (AER). This integration system uses a new AER monitor hardware interface, based on a Spartan-6 FPGA, that allows two operational modes: real-time (up to 5 Mevps through USB2.0) and data logger mode (up to 20 Mevps for 33.5 Mev stored in onboard DDR RAM). The sensory integration reduces the prediction error of the rotation speed of the motor, since the audio processing narrows the speed down to a concrete range of rpm while the DVS can be much more accurate.
    Funding: Ministerio de Economía y Competitividad TEC2012-37868-C04-02/0
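    The fusion idea can be pictured with a small hypothetical sketch (not the FPGA implementation): the auditory pathway restricts the rotation speed to a coarse rpm interval, and the more precise DVS estimate is kept only when it is consistent with that interval. The rule and the numbers below are illustrative assumptions.

```python
def fuse_rpm(audio_range, dvs_estimate):
    """Illustrative fusion rule (an assumption, not the paper's method):
    the auditory pathway narrows the speed to a coarse rpm interval and the
    finer DVS estimate is accepted only when it falls inside that interval,
    otherwise the interval midpoint is returned."""
    lo, hi = audio_range
    if lo <= dvs_estimate <= hi:
        return dvs_estimate
    return (lo + hi) / 2.0

if __name__ == "__main__":
    print(fuse_rpm((1400.0, 1600.0), 1523.7))  # consistent -> keep DVS value
    print(fuse_rpm((1400.0, 1600.0), 2100.0))  # inconsistent -> coarse guess
```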

    Streaming an image through the eye: The retina seen as a dithered scalable image coder

    We propose the design of an original scalable image coder/decoder that is inspired by the mammalian retina. Our coder accounts for the time-dependent and also non-deterministic behavior of the actual retina. The present work brings two main contributions: as a first step, (i) we design a deterministic image coder mimicking most of the retinal processing stages, and then (ii) we introduce a retinal noise in the coding process, which we model here as a dither signal, to gain interesting perceptual features. Regarding our first contribution, our main source of inspiration is the biologically plausible model of the retina called Virtual Retina. The main novelty of this coder is to show that the time-dependent behavior of the retina cells can ensure, in an implicit way, scalability and bit allocation. Regarding our second contribution, we reconsider the inner layers of the retina. We offer a possible interpretation for the non-determinism observed by neurophysiologists in their output. To this end, we model the retinal noise that occurs in these layers by a dither signal. The dithering process that we propose adds several interesting features to our image coder. The dither noise whitens the reconstruction error and decorrelates it from the input stimuli. Furthermore, integrating the dither noise in our coder allows a faster recognition of the fine details of the image during the decoding process. The goal of the present paper is twofold. First, we aim at mimicking the retina as closely as possible in the design of a novel image coder while keeping encouraging performance. Second, we bring a new insight concerning the non-deterministic behavior of the retina.
    Comment: arXiv admin note: substantial text overlap with arXiv:1104.155
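    As an illustration of the dithering argument (a sketch under assumed parameters, not the retinal coder itself), subtractive dithered quantization in Python shows how adding a uniform dither before quantization and removing it at reconstruction yields an error that is nearly uncorrelated with the input.

```python
import numpy as np

def dithered_quantize(x, step=0.05, rng=None):
    """Subtractive dithered quantization sketch.

    A uniform dither in [-step/2, step/2) is added before uniform
    quantization and subtracted at reconstruction, which whitens the error
    and decorrelates it from the signal. Step size and dither law are
    illustrative assumptions.
    """
    rng = np.random.default_rng() if rng is None else rng
    dither = rng.uniform(-step / 2, step / 2, size=np.shape(x))
    code = np.round((x + dither) / step)       # integer codes to transmit
    x_hat = code * step - dither               # decoder reuses the same dither
    return x_hat, x_hat - x

if __name__ == "__main__":
    x = np.sin(np.linspace(0, 2 * np.pi, 1000))
    _, err = dithered_quantize(x, rng=np.random.default_rng(1))
    print("error/signal correlation:", np.corrcoef(err, x)[0, 1])
```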

    Improving Texture Categorization with Biologically Inspired Filtering

    Within the domain of texture classification, a lot of effort has been spent on local descriptors, leading to many powerful algorithms. However, preprocessing techniques have received much less attention despite their important potential for improving the overall classification performance. We address this question by proposing a novel, simple, yet very powerful biologically-inspired filtering (BF) which simulates the performance of the human retina. In the proposed approach, given a texture image, after applying a DoG filter to detect the "edges", we first split the filtered image into two "maps" alongside the sides of its edges. The feature extraction step is then carried out on the two "maps" instead of the input image. Our algorithm has several advantages, such as simplicity, robustness to illumination and noise, and discriminative power. Experimental results on three large texture databases show that, with an extremely low computational cost, the proposed method significantly improves the performance of many texture classification systems, notably in noisy environments. The source code of the proposed algorithm can be downloaded from https://sites.google.com/site/nsonvu/code.
    Comment: 11 pages
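    One plausible reading of the splitting step (an assumption, not the released source code) is that the positive and negative parts of the DoG response form the two "maps", one for each side of an edge; a minimal Python sketch follows.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def biologically_inspired_filtering(image, sigma_c=1.0, sigma_s=2.0):
    """DoG filtering followed by a split into two maps, taken here as the
    positive and negative parts of the response (one per side of each
    edge). The sigmas and the splitting rule are assumptions."""
    dog = gaussian_filter(image, sigma_c) - gaussian_filter(image, sigma_s)
    on_map = np.maximum(dog, 0.0)    # responses on one side of the edges
    off_map = np.maximum(-dog, 0.0)  # responses on the other side
    return on_map, off_map           # feed both maps to the texture descriptor

if __name__ == "__main__":
    img = np.random.rand(64, 64)
    on_map, off_map = biologically_inspired_filtering(img)
    print(on_map.mean(), off_map.mean())
```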

    From Vision Sensor to Actuators, Spike Based Robot Control through Address-Event-Representation

    One field of neuroscience is neuroinformatics, whose aim is to develop auto-reconfigurable systems that mimic the human body and brain. In this paper we present a neuro-inspired, spike-based mobile robot: from cheap commercial vision sensors whose output is converted into spike information, through spike filtering for object recognition, to spike-based motor control models. A two-wheel mobile robot powered by DC motors can be autonomously controlled to follow a line drawn on the floor. This spike system has been developed around the well-known Address-Event-Representation mechanism to communicate between the different neuro-inspired layers of the system. RTC lab has developed all the components presented in this work, from the vision sensor to the robot platform and the FPGA-based platforms for AER processing.
    Funding: Ministerio de Ciencia e Innovación TEC2006-11730-C03-02; Junta de Andalucía P06-TIC-0141
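    For intuition only, a hypothetical rate-based steering rule (not the paper's motor control model) shows how spike counts from the left and right halves of the visual field could drive a differential two-wheel robot toward the line; the gains and the linear mapping are assumptions.

```python
def steer_from_spike_rates(left_rate, right_rate, base_speed=0.5, gain=0.01):
    """Toy differential-drive rule for a line follower driven by spike
    rates: the wheel on the side producing more spikes slows down, turning
    the robot toward the line. Gains and the linear mapping are
    illustrative assumptions."""
    error = left_rate - right_rate              # spikes/s imbalance
    return base_speed - gain * error, base_speed + gain * error

if __name__ == "__main__":
    print(steer_from_spike_rates(120.0, 80.0))  # line seen mostly on the left
```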

    Building Blocks for Spikes Signals Processing

    Neuromorphic engineers study models and implementations of systems that mimic the behavior of neurons in the brain. Neuro-inspired systems commonly use spikes to represent information. This representation has several advantages: robustness to noise thanks to repetition, a continuous and analog representation of information using digital pulses, and the capacity for pre-processing during transmission time. Furthermore, spikes are an efficient way, found by nature, to codify, transmit and process information. In this paper we propose, design, and analyze neuro-inspired building blocks that can perform spike-based analog filters used in signal processing. We present a VHDL implementation for FPGA. The presented building blocks take advantage of the spike-rate coded representation to perform massively parallel processing without complex hardware units, such as floating-point arithmetic units, or a large memory. These low hardware requirements allow the integration of a high number of blocks inside an FPGA, allowing several spike-coded signals to be processed fully in parallel.
    Funding: Junta de Andalucía P06-TIC-01417; Ministerio de Ciencia e Innovación TEC2009-10639-C04-02; Ministerio de Ciencia e Innovación TEC2006-11730-C03-0
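    In the same spirit (a hypothetical Python sketch, not the paper's VHDL blocks), a spike-rate low-pass filter can be built from integer additions and shifts alone, which is what makes such blocks cheap enough to replicate many times on an FPGA; the state update and the shift amount are assumptions.

```python
def spike_lowpass(spikes, shift=4):
    """Spike-rate low-pass filter using only integer adds and shifts
    (no floating point, no large memory). The state update and the shift
    amount are illustrative assumptions."""
    acc = 0                                    # integrator state, integer only
    out = []
    for s in spikes:                           # s is 0 or 1 per time step
        acc += (s << shift) - (acc >> shift)   # leaky integration
        out.append(acc >> shift)               # coarse spike-rate estimate
    return out

if __name__ == "__main__":
    burst = [1, 1, 1, 0] * 20 + [0] * 40       # high firing rate, then silence
    print(spike_lowpass(burst)[::10])
```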