
    Programmable 2D image filter for AER vision processing

    A VLSI architecture is proposed for the realization of real-time 2D image filtering in an address-event-representation (AER) vision system. The architecture is capable of implementing any convolutional kernel F(x, y) as long as it is decomposable into x-axis and y-axis components, i.e. F(x, y) = H(x)V(y), for some rotated coordinate system {x, y}, and as long as this product can be safely approximated by a signed minimum operation. The proposed architecture is intended to be used in a complete vision system known as the boundary-contour-system and feature-contour-system (BCS-FCS) vision model.
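
    A minimal sketch of the separable-kernel approximation follows, assuming the common reading in which the product H(x)V(y) is replaced by sign(H(x)V(y)) * min(|H(x)|, |V(y)|); the kernel components below are illustrative and not those used in the paper.

    import numpy as np

    def signed_min_kernel(h, v):
        # Approximate the separable kernel F(x, y) = H(x)V(y) by the signed
        # minimum sign(H*V) * min(|H|, |V|) (assumed interpretation).
        H, V = np.meshgrid(h, v, indexing="ij")
        return np.sign(H * V) * np.minimum(np.abs(H), np.abs(V))

    x = np.arange(-3, 4)
    h = np.exp(-x**2 / 2) - 0.5 * np.exp(-x**2 / 8)   # horizontal component H(x)
    v = np.exp(-x**2 / 2)                             # vertical component V(y)
    F_exact = np.outer(h, v)                          # exact product kernel
    F_approx = signed_min_kernel(h, v)                # signed-minimum approximation
    print(abs(F_exact - F_approx).max())              # worst-case deviation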

    An AER Spike-Processing Filter Simulator and Automatic VHDL Generator Based on Cellular Automata

    Spike-based systems are neuro-inspired circuit implementations traditionally used for sensory systems or sensor signal processing. Address-Event-Representation (AER) is a neuromorphic communication protocol for transferring asynchronous events between VLSI spike-based chips. These neuro-inspired implementations allow developing complex, multilayer, multichip neuromorphic systems and have been used to design sensor chips, such as retinas and cochleas, processing chips, e.g. filters, and learning chips. Furthermore, Cellular Automata (CA) is a bio-inspired processing model for problem solving. This approach divides the processing into synchronous cells which change their states at the same time in order to obtain the solution. This paper presents a software simulator able to gather several spike-based elements into the same workspace in order to test a CA architecture based on AER before a hardware implementation. Furthermore, this simulator produces VHDL for testing the AER-CA architecture on the FPGA of the USBAER AER-tool.
    Ministerio de Ciencia e Innovación TEC2009-10639-C04-0
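
    As a rough illustration of the synchronous cellular-automata idea (not of the simulator's actual architecture), the sketch below lets every cell of a grid integrate the AER spikes addressed to it and updates all cells in the same step; the grid size and threshold are assumptions.

    import numpy as np

    GRID, THRESHOLD = 32, 4   # illustrative values

    def ca_step(state, spikes_in):
        # One synchronous update: add incoming spikes, let cells at or above
        # threshold emit an output AER event (x, y) and reset.
        state = state + spikes_in
        fired = [tuple(xy) for xy in np.argwhere(state >= THRESHOLD)]
        state[state >= THRESHOLD] = 0
        return state, fired

    state = np.zeros((GRID, GRID), dtype=int)
    spikes = np.zeros((GRID, GRID), dtype=int)
    spikes[10, 10] = 5                       # a burst of input events at one address
    state, out_events = ca_step(state, spikes)
    print(out_events)                        # -> [(10, 10)]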

    On the AER Stereo-Vision Processing: A Spike Approach to Epipolar Matching

    Image processing in digital computer systems usually considers visual information as a sequence of frames. These frames come from cameras that capture reality for a short period of time. They are renewed and transmitted at a rate of 25-30 fps (typical real-time scenario). Digital video processing has to process each frame in order to detect a feature in the input. In stereo vision, existing algorithms use frames from two digital cameras and process them pixel by pixel until they find a pattern match in a section of both stereo frames. To process stereo vision information, an image matching process is essential, but it involves a very high computational cost. Moreover, the more information is processed, the more time the matching algorithm takes and the more inefficient it becomes. Spike-based processing is a relatively new approach that implements processing by manipulating spikes one by one at the time they are transmitted, like a human brain. The mammal nervous system is able to solve much more complex problems, such as visual recognition, by manipulating neuron spikes. The spike-based philosophy for visual information processing based on the neuro-inspired Address-Event-Representation (AER) is nowadays achieving very high performance. The aim of this work is to study the viability of a matching mechanism in a stereo-vision system using AER codification. This kind of mechanism has not been applied to an AER system before. To do that, the epipolar geometry basis applied to AER systems is studied, and several tests are run using recorded data and a computer. The results and the average error are shown (error less than 2 pixels per point), and the viability is proved.
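
    The epipolar constraint used in the matching can be sketched as follows for a rectified pair of retinas: a left event (x, y, t) is only compared against recent right events on the same row y, and the candidate closest in time is taken. The time window and tie-breaking rule are illustrative assumptions, not the paper's exact procedure.

    from collections import deque

    TIME_WINDOW = 1_000                      # microseconds (assumed)
    right_rows = {}                          # row y -> recent right-retina events (x, t)

    def push_right(x, y, t):
        right_rows.setdefault(y, deque()).append((x, t))

    def match_left(x, y, t):
        # Candidates are restricted to the same epipolar row y.
        candidates = [(abs(t - tr), xr) for xr, tr in right_rows.get(y, ())
                      if abs(t - tr) <= TIME_WINDOW]
        return min(candidates)[1] if candidates else None

    push_right(120, 40, 5_000)
    print(match_left(128, 40, 5_200))        # -> 120, i.e. a disparity of 8 pixels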

    A programmable VLSI filter architecture for application in real-time vision processing systems

    An architecture is proposed for the realization of a real-time edge-extraction filtering operation in an Address-Event-Representation (AER) vision system. Furthermore, the approach is valid for any 2D filtering operation as long as the convolutional kernel F(p,q) is decomposable into an x-axis and a y-axis component, i.e. F(p,q) = H(p)V(q), for some rotated coordinate system [p,q]. If it is possible to find a coordinate system [p,q], rotated with respect to the absolute coordinate system by a certain angle, for which the above decomposition is possible, then the proposed architecture is able to perform the filtering operation for any angle we would like the kernel to be rotated. This is achieved by taking advantage of the AER and manipulating the addresses in real time. The proposed architecture, however, requires one approximation: the product operation between the horizontal component H(p) and the vertical component V(q) should be able to be approximated by a signed minimum operation without significant performance degradation. It is shown that for edge-extraction applications this filter does not produce performance degradation. The proposed architecture is intended to be used in a complete vision system known as the Boundary-Contour-System and Feature-Contour-System Vision Model, proposed by Grossberg and collaborators. The present paper proposes the architecture, provides a circuit implementation using MOS transistors operated in weak inversion, and shows behavioral simulation results at the system level of operation, as well as electrical simulation and experimental results at the circuit level for some critical subcircuits.
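
    A minimal sketch of the address manipulation mentioned above: each incoming AER address (x, y) is rotated into the kernel's coordinate system [p, q] before the separable filter H(p)V(q) is applied, so the kernel can be oriented at an arbitrary angle. The rounding to integer addresses is an illustrative assumption.

    import math

    def rotate_address(x, y, angle_deg):
        # Map an absolute address (x, y) to the rotated system [p, q].
        a = math.radians(angle_deg)
        p = round(x * math.cos(a) + y * math.sin(a))
        q = round(-x * math.sin(a) + y * math.cos(a))
        return p, q

    print(rotate_address(10, 0, 90))   # -> (0, -10): a horizontal offset becomes vertical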

    Bio-Inspired Stereo Vision Calibration for Dynamic Vision Sensors

    Many advances have been made in the field of computer vision. Several recent research trends have focused on mimicking human vision by using a stereo vision system. In multi-camera systems, a calibration process is usually implemented to improve the accuracy of the results. However, these systems generate a large amount of data to be processed; therefore, a powerful computer is required and, in many cases, this cannot be done in real time. Neuromorphic Engineering attempts to create bio-inspired systems that mimic the information processing that takes place in the human brain. This information is encoded using pulses (or spikes), and the resulting systems are much simpler (in computational operations and resources), which allows them to perform similar tasks with much lower power consumption; thus, these processes can be implemented on specialized hardware with real-time processing. In this work, a bio-inspired stereo vision system is presented, and a calibration mechanism for this system is implemented and evaluated using several tests. The result is a novel calibration technique for a neuromorphic stereo vision system, implemented on specialized hardware (FPGA, Field-Programmable Gate Array), which allows reduced latencies in the hardware implementation for stand-alone systems, and real-time operation.
    Ministerio de Economía y Competitividad TEC2016-77785-P
    Ministerio de Economía y Competitividad TIN2016-80644-

    Visual Spike-based Convolution Processing with a Cellular Automata Architecture

    This paper presents a first approach to implementations which fuse Address-Event-Representation (AER) processing with Cellular Automata using FPGAs and AER-tools. This new strategy applies spike-based convolution filters inspired by Cellular Automata for AER vision processing. Spike-based systems are neuro-inspired circuit implementations traditionally used for sensory systems or sensor signal processing. AER is a neuromorphic communication protocol for transferring asynchronous events between VLSI spike-based chips. These neuro-inspired implementations allow developing complex, multilayer, multichip neuromorphic systems and have been used to design sensor chips, such as retinas and cochleas, processing chips, e.g. filters, and learning chips. Furthermore, Cellular Automata is a bio-inspired processing model for problem solving. This approach divides the processing into synchronous cells which change their states at the same time in order to obtain the solution.
    Ministerio de Educación y Ciencia TEC2006-11730-C03-02
    Ministerio de Ciencia e Innovación TEC2009-10639-C04-02
    Junta de Andalucía P06-TIC-0141

    An Approach to Distance Estimation with Stereo Vision Using Address-Event-Representation

    Image processing in digital computer systems usually considers visual information as a sequence of frames. These frames come from cameras that capture reality for a short period of time. They are renewed and transmitted at a rate of 25-30 fps (typical real-time scenario). Digital video processing has to process each frame in order to obtain a result or detect a feature. In stereo vision, existing algorithms used for distance estimation take frames from two digital cameras and process them pixel by pixel to obtain similarities and differences between both frames; after that, depending on the scene and the features extracted, an estimate of the distance to the different objects in the scene is calculated. Spike-based processing is a relatively new approach that implements processing by manipulating spikes one by one at the time they are transmitted, like a human brain. The mammal nervous system is able to solve much more complex problems, such as visual recognition, by manipulating neuron spikes. The spike-based philosophy for visual information processing based on the neuro-inspired Address-Event-Representation (AER) is nowadays achieving very high performance. In this work we propose a two-DVS-retina system, together with other elements in a chain, which allows us to obtain a distance estimate of the moving objects in a close environment. We analyze each element of this chain and propose a Multi Hold&Fire algorithm that obtains the differences between both retinas.
    Ministerio de Ciencia e Innovación TEC2009-10639-C04-0
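
    The Multi Hold&Fire algorithm itself is specific to the paper, but once events from the two retinas have been matched, distance follows from the standard rectified-stereo relation Z = f·B/d. The focal length (in pixels) and baseline below are illustrative values, not the parameters of the system described here.

    FOCAL_PX = 400.0     # focal length expressed in pixels (assumed)
    BASELINE_M = 0.10    # separation between the two DVS retinas in metres (assumed)

    def distance_from_disparity(x_left, x_right):
        # Estimate object distance (metres) from the horizontal disparity
        # of a matched pair of events.
        d = abs(x_left - x_right)
        return float("inf") if d == 0 else FOCAL_PX * BASELINE_M / d

    print(distance_from_disparity(128, 120))   # 8-pixel disparity -> 5.0 m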

    AER Building Blocks for Multi-Layer Multi-Chip Neuromorphic Vision Systems

    A 5-layer neuromorphic vision processor whose components communicate spike events asynchronously using the address-event-representation (AER) is demonstrated. The system includes a retina chip, two convolution chips, a 2D winner-take-all chip, a delay line chip, a learning classifier chip, and a set of PCBs for computer interfacing and address space remappings. The components use a mixture of analog and digital computation and will learn to classify trajectories of a moving object. A complete experimental setup and measurement results are shown.
    Unión Europea IST-2001-34124 (CAVIAR)
    Ministerio de Ciencia y Tecnología TIC-2003-08164-C0

    Stereo Matching in Address-Event-Representation (AER) Bio-Inspired Binocular Systems in a Field-Programmable Gate Array (FPGA)

    In stereo-vision processing, the image-matching step is essential for the results, although it involves a very high computational cost. Moreover, the more information is processed, the more time is spent by the matching algorithm, and the more inefficient it is. Spike-based processing is a relatively new approach that implements processing methods by manipulating spikes one by one at the time they are transmitted, like a human brain. The mammal nervous system can solve much more complex problems, such as visual recognition, by manipulating neuron spikes. The spike-based philosophy for visual information processing based on the neuro-inspired address-event-representation (AER) is currently achieving very high performance. The aim of this work was to study the viability of a matching mechanism in stereo-vision systems, using AER codification and its implementation in a field-programmable gate array (FPGA). Some studies have been done before in an AER system with monitored data using a computer; however, this kind of mechanism has not been implemented directly on hardware. To this end, an epipolar geometry basis applied to AER systems was studied and implemented, together with other restrictions, in order to achieve good results in a real-time scenario. The results and conclusions are shown, and the viability of its implementation is proven.
    Ministerio de Economía y Competitividad TEC2016-77785-

    An Event-Driven Multi-Kernel Convolution Processor Module for Event-Driven Vision Sensors

    Event-Driven vision sensing is a new way of sensing visual reality in a frame-free manner. That is, the vision sensor (camera) does not capture a sequence of still frames, as in conventional video and computer vision systems. In Event-Driven sensors each pixel autonomously and asynchronously decides when to send its address out. This way, the sensor output is a continuous stream of address events representing reality dynamically and continuously, without being constrained to frames. In this paper we present an Event-Driven Convolution Module for computing 2D convolutions on such event streams. The Convolution Module has been designed so that many of them can be assembled to build modular and hierarchical Convolutional Neural Networks for robust, shape- and pose-invariant object recognition. The Convolution Module has multi-kernel capability; that is, it selects the convolution kernel depending on the origin of the event. A proof-of-concept test prototype has been fabricated in a 0.35 μm CMOS process and extensive experimental results are provided. The Convolution Processor has also been combined with an Event-Driven Dynamic Vision Sensor (DVS) for high-speed recognition examples. The chip can discriminate propellers rotating at 2k revolutions per second, detect symbols on a 52-card deck when browsing all cards in 410 ms, or detect and follow the center of a phosphor oscilloscope trace rotating at 5 kHz.
    Unión Europea 216777 (NABAB)
    Ministerio de Ciencia e Innovación TEC2009-10639-C04-0
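
    The event-driven, multi-kernel operation can be sketched as follows: each incoming address event adds the kernel selected by its origin around its (x, y) position in an array of integrators, and any integrator crossing threshold emits an output event and resets. Array size, kernels and threshold are illustrative assumptions, not the parameters of the fabricated chip.

    import numpy as np

    SIZE, THRESHOLD = 64, 10.0
    state = np.zeros((SIZE, SIZE))          # one integrator per output pixel
    kernels = {                             # kernel chosen by the event's origin
        0: np.ones((3, 3)),                 # e.g. events coming from source 0
        1: -np.ones((3, 3)),                # e.g. events coming from source 1
    }

    def on_event(x, y, source):
        # Project the selected kernel around (x, y), clipped to the array,
        # and return the output events triggered by this input event.
        k = kernels[source]
        r = k.shape[0] // 2
        x0, x1 = max(x - r, 0), min(x + r + 1, SIZE)
        y0, y1 = max(y - r, 0), min(y + r + 1, SIZE)
        kx0, ky0 = x0 - (x - r), y0 - (y - r)
        state[x0:x1, y0:y1] += k[kx0:kx0 + (x1 - x0), ky0:ky0 + (y1 - y0)]
        fired = [tuple(xy) for xy in np.argwhere(state >= THRESHOLD)]
        state[state >= THRESHOLD] = 0
        return fired

    out = []
    for _ in range(10):                     # ten events at the same address
        out = on_event(32, 32, source=0)
    print(out)                              # the 3x3 neighbourhood fires on the tenth event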