84 research outputs found

    An AER Spike-Processing Filter Simulator and Automatic VHDL Generator Based on Cellular Automata

    Spike-based systems are neuro-inspired circuit implementations traditionally used for sensory systems or sensor signal processing. Address-Event-Representation (AER) is a neuromorphic communication protocol for transferring asynchronous events between VLSI spike-based chips. These neuro-inspired implementations allow developing complex, multilayer, multichip neuromorphic systems and have been used to design sensor chips, such as retinas and cochleas, processing chips, e.g. filters, and learning chips. Furthermore, Cellular Automata (CA) is a bio-inspired processing model for problem solving. This approach divides the processing into synchronous cells which change their states at the same time in order to reach the solution. This paper presents a software simulator able to gather several spike-based elements into the same workspace in order to test a CA architecture based on AER before a hardware implementation. Furthermore, this simulator produces VHDL for testing the AER-CA architecture on the FPGA of the USBAER AER-tool. Ministerio de Ciencia e Innovación TEC2009-10639-C04-0
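    To make the synchronous update model concrete, the following is a minimal Python sketch of a CA-style spike filter: incoming AER events accumulate charge in a grid of cells, and all cells then update their states simultaneously from their neighbours' previous states. The grid size, the event format and the update rule are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Minimal sketch of a synchronous cellular-automaton step driven by AER events.
# The update rule and the (x, y) event format are illustrative assumptions.

GRID = (64, 64)
state = np.zeros(GRID, dtype=np.uint8)   # current cell states
charge = np.zeros(GRID, dtype=np.int32)  # spike charge accumulated per cell

def inject_events(events):
    """Accumulate incoming AER events; each event is a pixel address (x, y)."""
    for x, y in events:
        charge[y, x] += 1

def ca_step(threshold=4):
    """Synchronous update: every cell looks only at its neighbours' *old* states."""
    global state, charge
    # Count active neighbours (4-neighbourhood) using the previous state.
    neigh = (np.roll(state, 1, 0) + np.roll(state, -1, 0) +
             np.roll(state, 1, 1) + np.roll(state, -1, 1))
    new_state = ((charge >= threshold) | (neigh >= 2)).astype(np.uint8)
    state, charge = new_state, np.zeros_like(charge)  # all cells switch together

inject_events([(10, 10), (10, 10), (10, 10), (10, 10), (11, 10)])
ca_step()
print(state[10, 10])  # -> 1 once the cell's accumulated charge crosses the threshold
```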

    Spike Events Processing for Vision Systems

    In this paper we briefly summarize the fundamental properties of spike event processing applied to artificial vision systems. This sensing and processing technology is capable of very high-speed throughput, because it does not rely on sensing and processing sequences of frames, and because it allows for complex, hierarchically structured, cortical-like layers for sophisticated processing. The paper includes a few examples that have demonstrated the potential of this technology for high-speed vision processing, such as a multilayer event-processing network of five sequential cortical-like layers, and a recognition system capable of discriminating propellers of different shape rotating at 5000 revolutions per second (300000 revolutions per minute).

    From Vision Sensor to Actuators, Spike Based Robot Control through Address-Event-Representation

    One field of neuroscience is neuroinformatics, whose aim is to develop auto-reconfigurable systems that mimic the human body and brain. In this paper we present a neuro-inspired, spike-based mobile robot: from cheap commercial vision sensors whose output is converted into spike information, through spike filtering for object recognition, to spike-based motor control models. A two-wheel mobile robot powered by DC motors can be autonomously controlled to follow a line drawn on the floor. This spike system has been developed around the well-known Address-Event-Representation mechanism to communicate between the different neuro-inspired layers of the system. The RTC lab has developed all the components presented in this work, from the vision sensor to the robot platform and the FPGA-based platforms for AER processing. Ministerio de Ciencia e Innovación TEC2006-11730-C03-02; Junta de Andalucía P06-TIC-0141

    On the AER Convolution Processors for FPGA

    Image convolution operations in digital computer systems are usually very expensive operations in terms of resource consumption (processor resources and processing time) for an efficient real-time application. In these scenarios the visual information is divided into frames, and each one has to be completely processed before the next frame arrives in order to guarantee real-time operation. A spike-based philosophy for computing convolutions based on the neuro-inspired Address-Event-Representation (AER) is achieving high performance. In this paper we present two FPGA implementations of AER-based convolution processors for relatively small Xilinx FPGAs (Spartan-II 200 and Spartan-3 400), which process 64x64 images with 11x11 convolution kernels. The maximum equivalent operation rate that can be reached is 163.51 MOPS for 11x11 kernels, in a Xilinx Spartan-3 400 FPGA with a 50 MHz clock. Formulations, hardware architecture, operation examples and a performance comparison with frame-based convolution processors are presented and discussed. Ministerio de Ciencia e Innovación TEC2006-11730-C03-02; Ministerio de Ciencia e Innovación TEC2009-10639-C04-02; Junta de Andalucía P06-TIC-0141
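    As a rough software analogue of the AER convolution scheme, the sketch below adds the kernel around each incoming event's address to an array of integrators and emits output events when integrators cross a threshold. The image and kernel sizes follow the 64x64/11x11 figures above; the threshold value and the reset behaviour are assumptions.

```python
import numpy as np

# Sketch of AER event-driven convolution: each incoming spike at address (x, y)
# adds the kernel, centred on (x, y), to an array of integrators; integrators
# that cross a threshold emit an output spike and are reset.

SIZE, K, THRESHOLD = 64, 11, 100
integrators = np.zeros((SIZE, SIZE), dtype=np.int32)
kernel = np.ones((K, K), dtype=np.int32)  # programmable kernel (here: all ones)

def process_event(x, y):
    """Handle one input AER event and return the output events it triggers."""
    r = K // 2
    x0, x1 = max(0, x - r), min(SIZE, x + r + 1)
    y0, y1 = max(0, y - r), min(SIZE, y + r + 1)
    # Add the (clipped) kernel around the event address.
    integrators[y0:y1, x0:x1] += kernel[r - (y - y0): r + (y1 - y),
                                        r - (x - x0): r + (x1 - x)]
    fired = np.argwhere(integrators >= THRESHOLD)
    integrators[integrators >= THRESHOLD] = 0        # reset fired integrators
    return [(int(fx), int(fy)) for fy, fx in fired]  # output AER events

out = []
for _ in range(100):          # 100 input events at the same address
    out += process_event(32, 32)
print(len(out))               # number of output events emitted so far
```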

    FPGA Implementations Comparison of Neuro-cortical Inspired Convolution Processors for Spiking Systems

    Image convolution operations in digital computer systems are usually very expensive operations in terms of resource consumption (processor resources and processing time) for an efficient real-time application. In these scenarios the visual information is divided into frames, and each one has to be completely processed before the next frame arrives. Recently, a new method for computing convolutions based on the neuro-inspired philosophy of spiking systems (Address-Event-Representation systems, AER) has been achieving high performance. In this paper we present two FPGA implementations of AER-based convolution processors that are able to work with 64x64 images and programmable kernels of up to 11x11 elements. The main difference between them is the use of RAM-based integrators in one solution and the absence of integrators in the second solution, which is based on mapping operations. The maximum equivalent operation rate is 163.51 MOPS for 11x11 kernels, in a Xilinx Spartan-3 400 FPGA with a 50 MHz clock. Formulations, hardware architecture, operation examples and a performance comparison with frame-based convolution processors are presented and discussed. Ministerio de Ciencia e Innovación TEC2006-11730-C03-02; Junta de Andalucía P06-TIC-0141

    Selective Change Driven Imaging: A Biomimetic Visual Sensing Strategy

    Selective Change Driven (SCD) Vision is a biologically inspired strategy for acquiring, transmitting and processing images that significantly speeds up image sensing. SCD vision is based on a new CMOS image sensor which delivers the pixels that have changed since the last time they were read out, ordered by the absolute magnitude of their change. Moreover, as part of this biomimetic approach, the traditional full-frame processing hardware and programming methodology has to be changed to a new processing paradigm based on per-pixel processing in a data-flow manner, instead of full-frame image processing.
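    The readout principle can be illustrated with a short Python sketch: only the pixels with the largest absolute change since their last readout are delivered, largest change first. The sensor resolution and the number of pixels delivered per readout are illustrative assumptions.

```python
import numpy as np

# Sketch of Selective Change Driven (SCD) readout: only the N pixels with the
# largest absolute change since their last readout are delivered, largest first.

H, W, N = 128, 128, 32
last_read = np.zeros((H, W))  # value each pixel had when it was last read out

def scd_readout(frame):
    """Return up to N (y, x, value) tuples ordered by |change|, updating state."""
    change = np.abs(frame - last_read)
    # Indices of the N largest changes, in descending order of magnitude.
    flat = np.argsort(change, axis=None)[::-1][:N]
    ys, xs = np.unravel_index(flat, change.shape)
    out = []
    for y, x in zip(ys, xs):
        if change[y, x] == 0:
            break                       # nothing else has changed
        last_read[y, x] = frame[y, x]   # pixel is read out, so remember its value
        out.append((int(y), int(x), float(frame[y, x])))
    return out

frame = np.zeros((H, W))
frame[10, 20] = 5.0                     # a single pixel changes strongly
print(scd_readout(frame)[0])            # -> (10, 20, 5.0)
```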

    Fast vision through frameless event-based sensing and convolutional processing: Application to texture recognition

    Address-event representation (AER) is an emergent hardware technology which shows a high potential for providing in the near future a solid technological substrate for emulating brain-like processing structures. When used for vision, AER sensors and processors are not restricted to capturing and processing still image frames, as in commercial frame-based video technology, but sense and process visual information in a pixel-level, event-based, frameless manner. As a result, vision processing is practically simultaneous to vision sensing, since there is no need to wait for sensing full frames. Also, only meaningful information is sensed, communicated, and processed. Of special interest for brain-like vision processing are some already reported AER convolutional chips, which have revealed a very high computational throughput as well as the possibility of assembling large convolutional neural networks in a modular fashion. It is expected that in the near future we may witness the appearance of large-scale convolutional neural networks with hundreds or thousands of individual modules. In the meantime, some research is needed to investigate how to assemble and configure such large-scale convolutional networks for specific applications. In this paper, we analyze AER spiking convolutional neural networks for texture recognition hardware applications. Based on the performance figures of already available individual AER convolution chips, we emulate large-scale networks using a custom-made event-based behavioral simulator. We have developed a new event-based processing architecture that emulates with AER hardware Manjunath's frame-based feature recognition software algorithm, and have analyzed its performance using our behavioral simulator. Recognition rate performance is not degraded. However, regarding speed, we show that recognition can be achieved before an equivalent frame is fully sensed and transmitted. Ministerio de Educación y Ciencia TEC-2006-11730-C03-01; Junta de Andalucía P06-TIC-01417; European Union IST-2001-34124, 21677
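    The kind of event-based behavioral simulation mentioned above can be sketched as a simple discrete-event loop: timestamped AER events are processed in time order by a chain of modules, each of which may emit new, later events. The module behaviour and delays below are illustrative assumptions and not the custom simulator described in the paper.

```python
import heapq
import itertools

# Minimal sketch of an event-driven behavioural simulation loop: timestamped
# AER events are popped in time order and handed to processing modules, which
# may emit new (later) events themselves.

class Delay:
    """Toy module: forwards every event to the next module after a fixed delay."""
    def __init__(self, delay_us, target=None):
        self.delay_us, self.target = delay_us, target

    def handle(self, t, addr):
        if self.target is not None:
            return [(t + self.delay_us, self.target, addr)]
        print(f"t={t}us sink received event {addr}")
        return []

def run(initial_events):
    """initial_events: list of (timestamp_us, module, address) triples."""
    counter = itertools.count()               # tie-breaker for equal timestamps
    queue = [(t, next(counter), m, a) for t, m, a in initial_events]
    heapq.heapify(queue)
    while queue:
        t, _, module, addr = heapq.heappop(queue)
        for nt, nm, na in module.handle(t, addr):
            heapq.heappush(queue, (nt, next(counter), nm, na))

sink = Delay(0)
layer1 = Delay(5, target=sink)                # one "cortical-like" stage
run([(0, layer1, (12, 7)), (3, layer1, (12, 8))])
```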

    A Comprehensive Workflow for General-Purpose Neural Modeling with Highly Configurable Neuromorphic Hardware Systems

    In this paper we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from taking this platform into operation. More specifically, we aim at the establishment of this neuromorphic system as a flexible and neuroscientifically valuable modeling tool that can be used by non-hardware-experts. We consider various functional aspects to be crucial for this purpose, and we introduce a consistent workflow with detailed descriptions of all involved modules that implement the suggested steps: the integration of the hardware interface into the simulator-independent model description language PyNN; a fully automated translation between the PyNN domain and appropriate hardware configurations; an executable specification of the future neuromorphic system that can be seamlessly integrated into this biology-to-hardware mapping process as a test bench for all software layers and possible hardware design modifications; and an evaluation scheme that deploys models from a dedicated benchmark library, compares the results generated by virtual or prototype hardware devices with reference software simulations, and analyzes the differences. The integration of these components into one hardware-software workflow provides an ecosystem for ongoing preparative studies that support the hardware design process and represents the basis for the maturity of the model-to-hardware mapping software. The functionality and flexibility of the latter are demonstrated with a variety of experimental results.
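    Since the workflow is anchored in PyNN, a simulator-independent model description looks roughly like the sketch below, written against the generic PyNN 0.8-style API with a software (NEST) backend; the hardware backend module and all parameter values are assumptions for illustration only, not details from the paper.

```python
# Minimal PyNN sketch of a simulator-independent model description.
# Swapping "pyNN.nest" for a hardware backend module would, in principle,
# retarget the same script; backend name and parameters are assumptions.
import pyNN.nest as sim

sim.setup(timestep=0.1)  # ms

stim = sim.Population(20, sim.SpikeSourcePoisson(rate=15.0))
neurons = sim.Population(20, sim.IF_cond_exp(tau_m=20.0, v_thresh=-50.0))

sim.Projection(stim, neurons,
               sim.OneToOneConnector(),
               synapse_type=sim.StaticSynapse(weight=0.01, delay=1.0))

neurons.record("spikes")
sim.run(1000.0)          # simulate 1 second of biological time

spiketrains = neurons.get_data().segments[0].spiketrains
print([len(st) for st in spiketrains])   # spike count per neuron

sim.end()
```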

    On Real-Time AER 2-D Convolutions Hardware for Neuromorphic Spike-Based Cortical Processing

    In this paper, a chip that performs real-time image convolutions with programmable kernels of arbitrary shape is presented. The chip is a first experimental prototype of reduced size to validate the implemented circuits and system-level techniques. The convolution processing is based on the Address-Event-Representation (AER) technique, which is a spike-based, biologically inspired image and video representation technique that favors communication bandwidth for pixels with more information. As a first test prototype, a pixel array of 16x16 has been implemented with a programmable kernel size of up to 16x16. The chip has been fabricated in a standard 0.35-µm complementary metal-oxide-semiconductor (CMOS) process. The technique also allows processing larger images by assembling 2-D arrays of such chips. Pixel operation exploits low-power mixed analog-digital circuit techniques. Because of the low currents involved (down to nanoamperes or even picoamperes), an important amount of pixel area is devoted to mismatch calibration. The rest of the chip uses digital circuit techniques, both synchronous and asynchronous. The fabricated chip has been thoroughly tested, both at the pixel level and at the system level. Specific computer interfaces have been developed for generating AER streams from conventional computers and feeding them as inputs to the convolution chip, and for grabbing AER streams coming out of the convolution chip and storing and analyzing them on computers. Extensive experimental results are provided. At the end of this paper, we provide discussions and results on scaling up the approach for larger pixel arrays and multilayer cortical AER systems. Commission of the European Communities IST-2001-34124 (CAVIAR); Commission of the European Communities 216777 (NABAB); Ministerio de Educación y Ciencia TIC-2000-0406-P4; Ministerio de Educación y Ciencia TIC-2003-08164-C03-01; Ministerio de Educación y Ciencia TEC2006-11730-C03-01; Junta de Andalucía TIC-141

    A spatial contrast retina with on-chip calibration for neuromorphic spike-based AER vision systems

    We present a 32x32 pixel contrast retina microchip that provides its output as an Address-Event-Representation (AER) stream. Spatial contrast is computed as the ratio between the pixel photocurrent and a local average of neighboring pixels obtained with a diffuser network. This current-based computation produces an important amount of mismatch between neighboring pixels, because the currents can be as low as a few picoamperes. Consequently, compact calibration circuitry has been included to trim each pixel. Measurements show a reduction in mismatch standard deviation from 57% to 6.6% (indoor light). The paper describes the design of the pixel with its spatial contrast computation and calibration sections. About one third of the pixel area is used for a 5-bit calibration circuit. The pixel area is 58 µm x 56 µm, while its current consumption is about 20 nA at a 1-kHz event rate. Extensive experimental results are provided for a prototype fabricated in a standard 0.35-µm CMOS process. This work was supported by Spanish Research Grants TIC2003-08164-C03-01 (SAMANTA), TEC2006-11730-C03-01 (SAMANTA-II), and EU grant IST-2001-34124 (CAVIAR). JCS was supported by the I3P program of the Spanish Research Council. RSG was supported by a national grant from the Spanish Ministry of Education and Science.
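    The per-pixel contrast computation can be illustrated numerically: each pixel's output is the ratio of its photocurrent to a local average of its neighbours. In the sketch below the diffuser network is approximated by a uniform 3x3 neighbourhood average, which is an assumption and not the actual circuit behaviour.

```python
import numpy as np

# Sketch of the spatial-contrast computation: each pixel's output is the ratio
# of its photocurrent to a local average of its neighbours.  The diffuser
# network is approximated by a uniform 3x3 neighbourhood average (assumption).

def spatial_contrast(photocurrent, eps=1e-12):
    """photocurrent: 2-D array of pixel photocurrents (e.g. in pA)."""
    padded = np.pad(photocurrent, 1, mode="edge")
    local_avg = np.zeros_like(photocurrent, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            local_avg += padded[1 + dy: 1 + dy + photocurrent.shape[0],
                                1 + dx: 1 + dx + photocurrent.shape[1]]
    local_avg /= 9.0
    return photocurrent / (local_avg + eps)   # >1 means brighter than surround

img = np.full((32, 32), 10.0)   # uniform scene -> contrast ~ 1 everywhere
img[16, 16] = 30.0              # one bright pixel
contrast = spatial_contrast(img)
print(round(contrast[16, 16], 2), round(contrast[0, 0], 2))   # ~2.45 and 1.0
```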