
    Bio-inspired vision mimetics towards next generation collision avoidance automation

    The current “deep learning + large-scale data + strong supervised labeling” technology framework of collision avoidance for ground robots and aerial drones is becoming saturated. Its development increasingly faces challenges from real open-scene applications, including small data, weak annotation, and cross-scene generalization. Inspired by the neural structures and processes underlying human cognition (e.g., the human visual, auditory, and tactile systems) and by the knowledge learned from daily driving tasks, a high-level cognitive system is developed that integrates collision sensing and collision avoidance. This bio-inspired cognitive approach offers the advantages of good robustness, high self-adaptability, and low computational cost in practical driving scenes.

    An AER Spike-Processing Filter Simulator and Automatic VHDL Generator Based on Cellular Automata

    Spike-based systems are neuro-inspired circuit implementations traditionally used for sensory systems or sensor signal processing. Address-Event-Representation (AER) is a neuromorphic communication protocol for transferring asynchronous events between VLSI spike-based chips. These neuro-inspired implementations allow developing complex, multilayer, multichip neuromorphic systems and have been used to design sensor chips, such as retinas and cochleas, processing chips, e.g. filters, and learning chips. Furthermore, Cellular Automata (CA) is a bio-inspired processing model for problem solving. This approach divides the processing into synchronous cells that change their states at the same time in order to reach the solution. This paper presents a software simulator able to gather several spike-based elements into the same workspace in order to test a CA architecture based on AER before a hardware implementation. Furthermore, this simulator produces VHDL for testing the AER-CA architecture on the FPGA of the USBAER AER-tool. Ministerio de Ciencia e Innovación TEC2009-10639-C04-0
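
    As a rough illustration only (not from the paper; the data structures and the toy update rule below are assumptions), a minimal Python sketch of the two ideas the simulator combines: events carried as address/timestamp records in the AER style, and a synchronous Cellular Automata update in which all cells change state at the same time.

        from dataclasses import dataclass

        @dataclass
        class AEREvent:
            """One Address-Event-Representation event: the address (x, y) of
            the spiking cell and the time at which it fired."""
            x: int
            y: int
            t: float

        def ca_step(grid):
            """One synchronous Cellular Automata step: every cell computes its
            next state from its current neighbourhood, then all cells switch
            at once. Toy rule: a cell becomes 1 when at least two of its
            4-neighbours are 1."""
            h, w = len(grid), len(grid[0])
            nxt = [[0] * w for _ in range(h)]
            for y in range(h):
                for x in range(w):
                    neigh = sum(grid[(y + dy) % h][(x + dx) % w]
                                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)))
                    nxt[y][x] = 1 if neigh >= 2 else 0
            return nxt

        def step_events(old, t):
            """Run one CA step and emit an AER event for every cell that changed."""
            new = ca_step(old)
            events = [AEREvent(x, y, t)
                      for y in range(len(new)) for x in range(len(new[0]))
                      if new[y][x] != old[y][x]]
            return new, events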

    A Bio-Inspired Two-Layer Mixed-Signal Flexible Programmable Chip for Early Vision

    A bio-inspired model for an analog programmable array processor (APAP), based on studies on the vertebrate retina, has permitted the realization of complex programmable spatio-temporal dynamics in VLSI. This model mimics the way in which images are processed in the visual pathway, which makes it a feasible alternative for the implementation of early vision tasks in standard technologies. A prototype chip has been designed and fabricated in 0.5 Όm CMOS. It delivers a computing power per silicon area and power consumption that is amongst the highest reported for a single chip. The details of the bio-inspired network model, the analog building block design challenges and trade-offs, and some functional test results are presented in this paper. Office of Naval Research (USA) N-000140210884, European Commission IST-1999-19007, Ministerio de Ciencia y Tecnología TIC1999-082

    Performance Study of Software AER-Based Convolutions on a Parallel Supercomputer

    This paper is based on the simulation of a convolution model for bio-inspired neuromorphic systems using the Address-Event-Representation (AER) philosophy, implemented on the CRS supercomputer of the University of Cadiz (UCA). In this work we improve the runtime of the simulation by dividing an image into smaller parts before AER convolution and running each operation on a node of the cluster. This research involves the design of test cases in which the optimal parameters are set to run the AER convolution on parallel processors. These cases consist of running the convolution on an image divided into different numbers of parts, applying a Sobel filter for edge detection to each part, using the AER-TOOL simulator. Execution times are compared for all cases and the optimal configuration of the system is discussed. In general, the CRS obtains better performance when the image is divided than when it processes the whole image. Ministerio de Ciencia e Innovación TEC2009-10639-C04-0
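
    As an illustrative sketch only (the paper uses the AER-TOOL simulator on the CRS cluster; the dense Sobel kernel, strip tiling, and process pool below are stand-ins), the divide-and-convolve idea amounts to splitting the image into parts and filtering each part in its own worker:

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        # Horizontal Sobel kernel for edge detection.
        SOBEL_X = np.array([[-1, 0, 1],
                            [-2, 0, 2],
                            [-1, 0, 1]], dtype=np.float32)

        def sobel_tile(tile):
            """Apply the Sobel filter to one image tile (plain dense
            convolution here, standing in for the AER spike convolution)."""
            h, w = tile.shape
            out = np.zeros((h, w), dtype=np.float32)
            padded = np.pad(tile, 1, mode="edge")
            for y in range(h):
                for x in range(w):
                    out[y, x] = np.sum(padded[y:y + 3, x:x + 3] * SOBEL_X)
            return out

        def parallel_sobel(image, n_parts=4):
            """Split the image into horizontal strips, filter each strip in a
            separate process, and stitch the results back together."""
            strips = np.array_split(image, n_parts, axis=0)
            with ProcessPoolExecutor(max_workers=n_parts) as pool:
                results = list(pool.map(sobel_tile, strips))
            return np.vstack(results)

        if __name__ == "__main__":
            img = np.random.rand(256, 256).astype(np.float32)
            edges = parallel_sobel(img, n_parts=4)

    A real implementation would overlap adjacent strips by one pixel row so the 3x3 kernel sees correct neighbours across the cuts, and would weigh the per-part speed-up against the communication cost between cluster nodes.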

    Redundant neural vision systems: competing for collision recognition roles

    Ability to detect collisions is vital for future robots that interact with humans in complex visual environments. Lobula giant movement detectors (LGMD) and directional selective neurons (DSNs) are two types of identified neurons found in the visual pathways of insects such as locusts. Recent modelling studies showed that the LGMD or grouped DSNs could each be tuned for collision recognition. In both biological and artificial vision systems, however, it is not clear which one should play the collision recognition role and how the two types of specialized visual neurons could function together. In this modelling study, we compared the competence of the LGMD and the DSNs, and also investigated the cooperation of the two neural vision systems for collision recognition via artificial evolution. We implemented three types of collision recognition neural subsystems in each individual agent: the LGMD, the DSNs, and a hybrid system that combines the LGMD and DSN subsystems. A switch gene determines which of the three redundant neural subsystems plays the collision recognition role. We found that, in both robotics and driving environments, the LGMD was able to build up its ability for collision recognition quickly and robustly, thereby reducing the chance for other types of neural networks to play the same role. The results suggest that the LGMD neural network could be the ideal model to be realized in hardware for collision recognition.
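
    A hedged sketch of the switch-gene idea (the subsystem models, the fitness function, and the evolutionary loop below are simplified stand-ins, not the authors' implementation): each agent carries three redundant collision-recognition subsystems and a gene that selects which one actually drives its response.

        import random

        class Agent:
            """Toy agent carrying three redundant collision-recognition
            subsystems and a switch gene selecting which one is used."""
            SUBSYSTEMS = ("LGMD", "DSNs", "hybrid")

            def __init__(self, switch_gene=None, params=None):
                self.switch_gene = (switch_gene if switch_gene is not None
                                    else random.choice(self.SUBSYSTEMS))
                # Placeholder parameter for the selected neural subsystem.
                self.params = params if params is not None else random.random()

            def detects_collision(self, stimulus):
                # Stand-in for running the selected neural network.
                return stimulus * self.params > 0.5

        def evolve(population, fitness_fn, generations=50, mutation_rate=0.1):
            """Minimal evolutionary loop: keep the fitter half of the
            population and refill it with mutated copies; mutation can flip
            the switch gene or perturb the subsystem parameter."""
            for _ in range(generations):
                population.sort(key=fitness_fn, reverse=True)
                survivors = population[: len(population) // 2]
                children = []
                for parent in survivors:
                    gene, params = parent.switch_gene, parent.params
                    if random.random() < mutation_rate:
                        gene = random.choice(Agent.SUBSYSTEMS)
                    if random.random() < mutation_rate:
                        params = min(1.0, max(0.0, params + random.gauss(0, 0.1)))
                    children.append(Agent(gene, params))
                population = survivors + children
            return population

    If the LGMD subsystem reaches good collision recognition fastest, agents whose switch gene selects it survive more often and the gene spreads through the population, which is the competition for the recognition role that the paper describes.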

    Visual Spike-based Convolution Processing with a Cellular Automata Architecture

    This paper presents a first approach for implementations that fuse Address-Event-Representation (AER) processing with Cellular Automata using FPGAs and AER tools. This new strategy applies spike-based convolution filters inspired by Cellular Automata to AER vision processing. Spike-based systems are neuro-inspired circuit implementations traditionally used for sensory systems or sensor signal processing. AER is a neuromorphic communication protocol for transferring asynchronous events between VLSI spike-based chips. These neuro-inspired implementations allow developing complex, multilayer, multichip neuromorphic systems and have been used to design sensor chips, such as retinas and cochleas, processing chips, e.g. filters, and learning chips. Furthermore, Cellular Automata is a bio-inspired processing model for problem solving. This approach divides the processing into synchronous cells that change their states at the same time in order to reach the solution. Ministerio de Educación y Ciencia TEC2006-11730-C03-02, Ministerio de Ciencia e Innovación TEC2009-10639-C04-02, Junta de Andalucía P06-TIC-0141
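
    As an informal illustration of how spike-based (event-driven) convolution differs from dense frame-based convolution (the threshold-and-reset neuron model below is an assumption, not the AER-CA hardware described above): each incoming spike at an address adds the kernel to a neighbourhood of membrane potentials, and any cell whose potential crosses a threshold emits an output spike and resets.

        import numpy as np

        def spike_convolution(events, kernel, shape, threshold=4.0):
            """Event-driven convolution: each input spike (t, x, y) adds the
            kernel around that address to an array of membrane potentials;
            cells whose potential crosses the threshold fire and reset."""
            potentials = np.zeros(shape, dtype=np.float32)
            kh, kw = kernel.shape
            oy, ox = kh // 2, kw // 2
            out_spikes = []
            for t, x, y in events:                      # events sorted by time
                y0, y1 = max(0, y - oy), min(shape[0], y + oy + 1)
                x0, x1 = max(0, x - ox), min(shape[1], x + ox + 1)
                ky0, kx0 = y0 - (y - oy), x0 - (x - ox)
                potentials[y0:y1, x0:x1] += kernel[ky0:ky0 + (y1 - y0),
                                                   kx0:kx0 + (x1 - x0)]
                for fy, fx in np.argwhere(potentials >= threshold):
                    out_spikes.append((t, int(fx), int(fy)))
                    potentials[fy, fx] = 0.0            # reset after firing
            return out_spikes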

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: Instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
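
    A hedged Python sketch of the event-generation principle described here (a software emulation from frames; a real event-camera pixel is an asynchronous analog circuit, and the threshold value is an assumption): a pixel emits an event whenever its log-brightness changes by more than a contrast threshold, and each event carries time, location, and polarity.

        import numpy as np

        def frames_to_events(frames, timestamps, contrast_threshold=0.15):
            """Emulate an event camera from a sequence of intensity frames.
            A pixel emits an event (t, x, y, polarity) whenever its
            log-brightness has changed by more than the contrast threshold
            since its last event."""
            log_ref = np.log(frames[0].astype(np.float32) + 1e-3)
            events = []
            for frame, t in zip(frames[1:], timestamps[1:]):
                log_now = np.log(frame.astype(np.float32) + 1e-3)
                diff = log_now - log_ref
                ys, xs = np.nonzero(np.abs(diff) >= contrast_threshold)
                for y, x in zip(ys, xs):
                    polarity = 1 if diff[y, x] > 0 else -1
                    events.append((t, int(x), int(y), polarity))
                    # Advance the per-pixel reference by one threshold step,
                    # as the pixel's change detector would.
                    log_ref[y, x] += polarity * contrast_threshold
            return events

    Note that this frame-based emulation quantizes event timestamps to the frame times, whereas a real sensor timestamps each event with microsecond resolution.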