
    Embedding Multi-Task Address-Event-Representation Computation

    Address-Event-Representation (AER) is a communication protocol intended to transfer neuronal spikes between bio-inspired chips. Several AER tools exist to help develop and test AER-based systems, which may consist of a hierarchical structure of several chips that transmit spikes among themselves in real time while performing some processing. Although these tools reach very high bandwidth at the AER communication level, they require a personal computer for the higher-level processing of the event information. We propose the use of an embedded platform based on a multi-task operating system to support both AER communication and processing without requiring a laptop or desktop computer. In this paper, we present and study the performance of an embedded multi-task AER tool, connecting and programming it to process Address-Event information from a spiking generator. Ministerio de Ciencia e Innovación TEC2006-11730-C03-0
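
    As a minimal sketch (not taken from the paper), AER encodes each spike as the address of the emitting neuron, so an embedded tool mainly needs a receive-and-process loop over those addresses. The bus object and its poll() method below are hypothetical placeholders for whatever interface the platform driver exposes.

        import time
        from collections import Counter

        def read_aer_event(bus):
            """Poll the AER interface for one event; return (address, arrival_time) or None.
            'bus' and its poll() method are hypothetical stand-ins for a platform driver."""
            raw = bus.poll()                      # non-blocking read of one event word
            if raw is None:
                return None
            address = raw & 0xFFFF                # assumed 16-bit neuron/pixel address field
            return address, time.monotonic()

        def accumulate_rates(bus, duration_s=1.0):
            """Count spikes per address over a fixed window: a simple on-board processing task."""
            counts = Counter()
            t_end = time.monotonic() + duration_s
            while time.monotonic() < t_end:
                ev = read_aer_event(bus)
                if ev is not None:
                    counts[ev[0]] += 1
            return counts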

    Real-time Tracking Based on Neuromorphic Vision

    Real-time tracking is an important problem in computer vision, for which most methods are based on conventional cameras. Neuromorphic vision is a concept defined by incorporating neuromorphic vision sensors, such as silicon retinas, into a vision processing system. With the development of silicon technology, asynchronous event-based silicon retinas that mimic neuro-biological architectures have been developed in recent years. In this work, we combine a computer-vision tracking algorithm with the information encoding mechanism of event-based sensors, which is inspired by the neural rate coding mechanism. Real-time tracking of a single object at a high speed of 100 time bins per second is successfully realized. Our method demonstrates that computer vision methods can be used for neuromorphic vision processing, and that fast real-time tracking can be realized with neuromorphic vision sensors compared to conventional cameras.
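
    A minimal sketch of the rate-coding idea described above (not the authors' implementation): events are binned into 10 ms frames (100 time bins per second) and the tracked position is taken as the event-count centroid of each frame. The event format and sensor size are assumptions.

        import numpy as np

        def track_by_event_binning(events, sensor_shape=(128, 128), bin_ms=10.0):
            """Bin events (t_us, x, y, polarity) into 10 ms frames and track a single
            object as the event-count centroid of each frame. Illustrative only."""
            events = np.asarray(events, dtype=np.int64)
            if events.size == 0:
                return []
            bin_idx = (events[:, 0] - events[:, 0].min()) // int(bin_ms * 1000)
            positions = []
            for b in range(int(bin_idx.max()) + 1):
                sel = events[bin_idx == b]
                if len(sel) == 0:
                    continue
                frame = np.zeros(sensor_shape)
                np.add.at(frame, (sel[:, 2], sel[:, 1]), 1)   # accumulate counts at (y, x)
                ys, xs = np.nonzero(frame)
                w = frame[ys, xs]
                positions.append((np.average(xs, weights=w), np.average(ys, weights=w)))
            return positions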

    Real time multiple objects tracking based on a bioinspired processing cascade architecture

    This paper presents a cascade architecture for bio-inspired information processing. We use AER (Address Event Representation) for transmitting and processing visual information provided by an asynchronous temporal contrast silicon retina. Using this architecture, we also present a multiple-object tracking algorithm; this algorithm is described in VHDL and implemented in an FPGA (Spartan II), which is part of the USB-AER platform developed by some of the authors. Junta de Andalucía P06-TIC-02298, Ministerio de Ciencia e Innovación TEC2009-10639-C04-02, Junta de Andalucía P06-TIC-0141
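
    The paper's tracker is implemented in VHDL on the FPGA; as a software sketch only, the per-event logic of a multiple-object cluster tracker can be expressed as: assign each incoming event to the nearest existing tracker within a radius, nudging that tracker toward the event, or spawn a new tracker. The parameter names and values below are illustrative assumptions, not the paper's design.

        def update_trackers(trackers, x, y, radius=10, alpha=0.1, max_trackers=4):
            """Per-event multi-object tracking: assign the event (x, y) to the nearest
            tracker within 'radius', nudging it toward the event; otherwise spawn a new one."""
            for t in trackers:
                if abs(t['x'] - x) <= radius and abs(t['y'] - y) <= radius:
                    t['x'] += alpha * (x - t['x'])
                    t['y'] += alpha * (y - t['y'])
                    t['hits'] += 1
                    return trackers
            if len(trackers) < max_trackers:
                trackers.append({'x': float(x), 'y': float(y), 'hits': 1})
            return trackers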

    EV-Planner: Energy-Efficient Robot Navigation via Event-Based Physics-Guided Neuromorphic Planner

    Vision-based object tracking is an essential precursor to performing autonomous aerial navigation in order to avoid obstacles. Biologically inspired neuromorphic event cameras are emerging as a powerful alternative to frame-based cameras, due to their ability to asynchronously detect varying intensities (even in poor lighting conditions), their high dynamic range, and their robustness to motion blur. Spiking neural networks (SNNs) have gained traction for processing events asynchronously in an energy-efficient manner. On the other hand, physics-based artificial intelligence (AI) has gained prominence recently, as it enables embedding system knowledge via physical modeling inside traditional analog neural networks (ANNs). In this letter, we present an event-based physics-guided neuromorphic planner (EV-Planner) to perform obstacle avoidance using neuromorphic event cameras and physics-based AI. We consider the task of autonomous drone navigation where the mission is to detect moving gates and fly through them while avoiding a collision. We use event cameras to perform object detection using a shallow spiking neural network in an unsupervised fashion. Utilizing the physical equations of the brushless DC motors present in the drone rotors, we train a lightweight energy-aware physics-guided neural network with depth inputs. This predicts the optimal flight time responsible for generating near-minimum energy paths. We spawn the drone in the Gazebo simulator and implement a sensor-fused vision-to-planning neuro-symbolic framework using Robot Operating System (ROS). Simulation results for safe collision-free flight trajectories are presented with performance analysis and potential future research directions.
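
    As an illustrative sketch only (the letter's physics-guided network, built from the brushless DC motor equations and depth inputs, is not reproduced here), the role of the energy-aware planner can be caricatured as picking a flight time that trades hover energy against the cost of flying faster. All constants below are assumptions.

        import numpy as np

        def flight_energy(T, distance, hover_power_w=150.0, k_aggr=2.0):
            """Toy energy model: constant hover power integrated over flight time T, plus a
            penalty growing with the squared average speed needed to cover 'distance' in T.
            Illustrative constants, not the paper's motor model."""
            avg_speed = distance / T
            return hover_power_w * T + k_aggr * avg_speed ** 2 * T

        def near_minimum_energy_time(distance, t_grid=np.linspace(0.5, 10.0, 200)):
            """Grid-search the flight time that minimizes the toy energy model."""
            energies = [flight_energy(t, distance) for t in t_grid]
            return float(t_grid[int(np.argmin(energies))])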

    The Morphological Computation Principles as a New Paradigm for Robotic Design

    A theory, by definition, is a generalization of observations of some phenomenon, and a principle is a law or rule that should be followed as a guideline. Their formalization is a creative process that follows specific, well-attested steps. The following sections reproduce this logical flow by presenting the principle of Morphological Computation as a timeline: first, observations of this phenomenon in Nature are reported in relation to some recent theories; afterward, the principle is linked with current applications in artificial systems; and finally, further applications, challenges and objectives project this principle into future scenarios.

    An Event-Based Neurobiological Recognition System with Orientation Detector for Objects in Multiple Orientations

    A new multiple-orientation, event-based neurobiological recognition system that integrates recognition and tracking functions is proposed in this paper, for use with asynchronous address-event representation (AER) image sensors. The system has been enriched to recognize objects in multiple orientations using training samples that move in only a single orientation. It extracts multi-scale and multi-orientation line features inspired by models of the primate visual cortex. An orientation detector based on a modified Gaussian blob tracking algorithm is introduced for object tracking and orientation detection. The orientation detector and the feature extraction block work simultaneously, without any increase in categorization time. An address lookup table (address LUT) is also presented to adjust the feature maps by address mapping and reordering, and the adjusted maps are categorized in the trained spiking neural network. The recognition system is evaluated on the MNIST dataset, which has played an important role in the development of computer vision, and accuracy is increased owing to the use of both ON and OFF events. AER data acquired by a DVS, such as moving digits, poker cards, and vehicles, are also tested on the system. The experimental results show that the proposed system can realize event-based multi-orientation recognition. The work presented in this paper makes a number of contributions to event-based vision processing for multi-orientation object recognition: it adds a new tracking-recognition architecture to a feedforward categorization system and an address-reordering approach to classify multi-orientation objects using event-based data, and it provides a new way to recognize objects in multiple orientations with training samples in only a single orientation.
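
    A minimal sketch of the address-remapping idea (the paper's exact LUT construction and address format are not given here): build a lookup table that maps each pixel address to its position after rotating by the detected orientation, so events can be reordered back to the trained orientation before categorization.

        import numpy as np

        def build_rotation_lut(width, height, theta_deg):
            """Map each pixel address (x, y) to its address after rotating by -theta_deg
            about the image centre; out-of-range targets are marked with -1. Illustrative only."""
            theta = np.deg2rad(-theta_deg)
            cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
            lut = np.full((height, width, 2), -1, dtype=int)
            for y in range(height):
                for x in range(width):
                    xr = int(round(cx + (x - cx) * np.cos(theta) - (y - cy) * np.sin(theta)))
                    yr = int(round(cy + (x - cx) * np.sin(theta) + (y - cy) * np.cos(theta)))
                    if 0 <= xr < width and 0 <= yr < height:
                        lut[y, x] = (xr, yr)
            return lut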

    Development of a Large-Scale Integrated Neurocognitive Architecture Part 1: Conceptual Framework

    The idea of creating a general purpose machine intelligence that captures many of the features of human cognition goes back at least to the earliest days of artificial intelligence and neural computation. In spite of more than a half-century of research on this issue, there is currently no existing approach to machine intelligence that comes close to providing a powerful, general-purpose human-level intelligence. However, substantial progress made during recent years in neural computation, high performance computing, neuroscience and cognitive science suggests that a renewed effort to produce a general purpose and adaptive machine intelligence is timely, likely to yield qualitatively more powerful approaches to machine intelligence than those currently existing, and certain to lead to substantial progress in cognitive science, AI and neural computation. In this report, we outline a conceptual framework for the long-term development of a large-scale machine intelligence that is based on the modular organization, dynamics and plasticity of the human brain. Some basic design principles are presented along with a review of some of the relevant existing knowledge about the neurobiological basis of cognition. Three intermediate-scale prototypes for parts of a larger system are successfully implemented, providing support for the effectiveness of several of the principles in our framework. We conclude that a human-competitive neuromorphic system for machine intelligence is a viable long-term goal, but that for the short term, substantial integration with more standard symbolic methods as well as substantial research will be needed to make this goal achievable.

    SpikeSEG: Spiking segmentation via STDP saliency mapping

    Taking inspiration from the structure and behaviour of the human visual system, and using the Transposed Convolution and Saliency Mapping methods of Convolutional Neural Networks (CNN), a spiking event-based image segmentation algorithm, SpikeSEG, is proposed. The approach makes use of both spike-based imaging and spike-based processing, where the images are either standard images converted to spiking images or are generated directly from a neuromorphic event-driven sensor, and are then processed using a spiking fully convolutional neural network. The spiking segmentation method uses the spike activations through time within the network to trace any outputs from the saliency maps back to the exact pixel location. This not only gives exact pixel locations for spiking segmentation, but does so with low latency and computational overhead. SpikeSEG is the first spiking event-based segmentation network and, over three experimental tests, achieves promising results with 96% accuracy overall and a 74% mean intersection over union for the segmentation, all within an event-by-event-based framework.
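
    A simplified sketch of activation trace-back (SpikeSEG's transposed-convolution saliency mapping is more involved): walk an output spike back through a stack of layers, keeping at each stage only the receptive-field positions that actually spiked. The layer description and data structures here are assumptions for illustration.

        def trace_output_spike(layers, out_pos):
            """Trace an output spike at 'out_pos' back to contributing input pixels through a
            stack of layers, each described as (stride, kernel, active_set), where active_set
            holds the positions that spiked in that layer's input. Illustrative only."""
            frontier = {out_pos}
            for stride, kernel, active in reversed(layers):
                prev = set()
                for (y, x) in frontier:
                    # receptive field of a unit at (y, x) in the layer's input (no padding assumed)
                    for dy in range(kernel):
                        for dx in range(kernel):
                            p = (y * stride + dy, x * stride + dx)
                            if p in active:          # keep only positions that actually spiked
                                prev.add(p)
                frontier = prev
            return frontier   # input pixel locations responsible for the output spike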