
    Digital design for neuromorphic bio-inspired vision processing.

    Artificial Intelligence (AI) is an exciting technology that has flourished in this century. One of its goals is to give computers the ability to learn. Currently, machine intelligence surpasses human intelligence in specific domains. Besides some conventional machine learning algorithms, Artificial Neural Networks (ANNs) are arguably the most exciting technology used to bring this intelligence to the computer world. Due to their advanced performance, an increasing number of applications that need this kind of intelligence use ANNs. Neuromorphic engineers are trying to introduce bio-inspired hardware for the efficient implementation of neural networks. This hardware should be able to simulate a vast number of neurons in real time, with complex synaptic connectivity, while consuming little power. The work done in this thesis is hardware oriented, so it is necessary for the reader to have a good understanding of the hardware used for its developments. In this chapter, we provide a brief overview of the hardware platforms used in this thesis. Afterward, we briefly explain the contributions of this thesis to the bio-inspired processing research line.

    Fast Pipeline 128x128 Pixel Spiking Convolution Core for Event-Driven Vision Processing in FPGAs

    This paper describes a digital implementation of a parallel and pipelined spiking convolutional neural network (S-ConvNet) core for processing spikes in an event-driven system. Event-driven vision systems typically use a bio-inspired spiking device as sensor, such as the popular Dynamic Vision Sensor (DVS). DVS cameras generate spikes in response to changes in light intensity. In this paper we present a 2D event-driven convolution processing core with 128×128 pixels. S-ConvNet is an event-driven processing method for extracting features from an input event flow. Spiking systems are, in general, highly parallel, so S-ConvNet processors can benefit from the parallelism offered by Field Programmable Gate Arrays (FPGAs) to accelerate operation. Using a 3-stage pipeline and a parallel structure, the state of a 128-neuron row is updated in just 12 ns, improving on previously reported approaches.
    Funding: EU grant 604102 HBP (the Human Brain Project); EU grant 644096 ECOMODE; Spanish Ministry of Economy and Competitiveness / European Regional Development Fund TEC2012-37868-C04-02/01 (BIOSENSE); Junta de Andalucía (Spain) TIC-6091 (NANO-NEURO); EU CHIST-ERA grant PNEUMA (PRI-PIMCHI-2011-0768).
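
    As an illustration of the event-driven convolution scheme described above, the following Python sketch processes one input spike by projecting a kernel onto a neuron-state array and emitting output spikes on threshold crossings. It is a software model only; the kernel, threshold, and reset behavior are illustrative assumptions, not the paper's RTL design.

    import numpy as np

    # Event-driven 2D convolution: each input spike at (x, y) projects the
    # kernel onto the neighborhood of the neuron-state array; any neuron whose
    # state crosses threshold emits an output spike and resets.
    SIZE = 128                                      # 128x128 pixel array, as in the paper
    state = np.zeros((SIZE, SIZE), dtype=np.int32)
    kernel = np.array([[1, 2, 1],
                       [2, 4, 2],
                       [1, 2, 1]], dtype=np.int32)  # example 3x3 kernel (illustrative)
    K = kernel.shape[0] // 2
    THRESHOLD = 64                                  # illustrative firing threshold

    def process_event(x, y):
        """Update neuron states around one incoming spike; return output spikes."""
        out = []
        for dy in range(-K, K + 1):
            for dx in range(-K, K + 1):
                ux, uy = x + dx, y + dy
                if 0 <= ux < SIZE and 0 <= uy < SIZE:
                    state[uy, ux] += kernel[dy + K, dx + K]
                    if state[uy, ux] >= THRESHOLD:
                        out.append((ux, uy))        # emit an output event
                        state[uy, ux] = 0           # reset after firing
        return out

    In the FPGA core, the update of a full row of neurons is what the parallel pipelined hardware performs at once, which is how a 128-neuron row is updated in 12 ns.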

    Live demonstration: Hardware implementation of convolutional STDP for on-line visual feature learning

    We present a live demonstration of hardware that can learn visual features on-line and in real time during the presentation of objects. Input spikes come from a bio-inspired silicon retina, or Dynamic Vision Sensor (DVS), and are processed in a Spiking Convolutional Neural Network (SCNN) equipped with a Spike Timing Dependent Plasticity (STDP) learning rule implemented on an FPGA.

    Hardware Implementation of Convolutional STDP for On-line Visual Feature Learning

    We present a highly hardware-friendly STDP (Spike Timing Dependent Plasticity) learning rule for training spiking convolutional cores in unsupervised mode and fully connected classifiers in supervised mode. Examples are given for a 2-layer spiking neural system which learns features in real time from visual scenes obtained with spiking DVS (Dynamic Vision Sensor) cameras.
    Funding: EU H2020 grant 644096 (ECOMODE); EU H2020 grant 687299 (NEURAM3); Ministry of Economy and Competitiveness (Spain) / European Regional Development Fund TEC2012-37868-C04-01 (BIOSENSE); Junta de Andalucía (Spain) TIC-6091 (NANO-NEURO).
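
    The following is a minimal Python sketch of a hardware-friendly additive STDP update of the kind described above: on each postsynaptic spike, synapses whose presynaptic input fired within a recent window are potentiated by a fixed step, all others are depressed, and weights are clipped to fixed-point bounds. The constants and window rule are illustrative assumptions; the paper's exact rule may differ.

    import numpy as np

    A_PLUS, A_MINUS = 1, 1          # fixed-point weight increments (illustrative)
    W_MIN, W_MAX = 0, 127           # weight bounds for fixed-point hardware
    WINDOW = 5.0                    # STDP time window in ms (illustrative)

    def stdp_on_post_spike(weights, last_pre_spike_times, t_post):
        """Apply additive STDP to one neuron's weight vector on a post spike."""
        recent = (t_post - last_pre_spike_times) <= WINDOW   # inputs that just fired
        weights[recent] = np.minimum(weights[recent] + A_PLUS, W_MAX)
        weights[~recent] = np.maximum(weights[~recent] - A_MINUS, W_MIN)
        return weights

    Avoiding per-synapse traces and multiplications is what makes additive rules of this shape attractive for FPGA implementation.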

    Active Perception with Dynamic Vision Sensors. Minimum Saccades with Optimum Recognition

    Vision processing with Dynamic Vision Sensors (DVS) is becoming increasingly popular. This type of bio-inspired vision sensor does not record static scenes; DVS pixel activity relies on changes in light intensity. In this paper, we introduce a platform for object recognition with a DVS in which the sensor is installed on a moving pan-tilt unit in closed loop with a recognition neural network. This neural network is trained to recognize objects observed by a DVS while the pan-tilt unit moves to emulate micro-saccades. We show that performing more saccades in different directions yields more information about the object, and therefore more accurate recognition. However, in high-performance, low-latency platforms, each additional saccade adds latency and power consumption. Here we show that the number of saccades can be reduced, while keeping the same recognition accuracy, by performing intelligent saccadic movements in a closed action-perception smart loop (sketched below). We propose an algorithm for smart saccadic-movement decisions that reduces the number of necessary saccades to half, on average, for a predefined accuracy on the N-MNIST dataset. Additionally, we show that replacing this control algorithm with an Artificial Neural Network that learns to control the saccades also halves the average number of saccades needed for N-MNIST recognition.
    Funding: EU H2020 grant 644096 (ECOMODE); EU H2020 grant 687299 (NEURAM3); Ministry of Economy and Competitiveness (Spain) / European Regional Development Fund TEC2015-63884-C2-1-P (COGNET).
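
    A minimal sketch of the closed action-perception loop: saccades are issued one at a time, classifier evidence is accumulated, and the loop stops as soon as confidence is high enough, so easy inputs need fewer saccades. The confidence-threshold stopping rule and the helper names move_pan_tilt and classify_events are hypothetical, not the paper's algorithm.

    import numpy as np

    CONFIDENCE = 0.95               # illustrative stopping threshold
    DIRECTIONS = ["up", "down", "left", "right"]

    def recognize_with_min_saccades(move_pan_tilt, classify_events, max_saccades=8):
        """Issue saccades until the accumulated class evidence is confident enough."""
        evidence = 0.0
        for n in range(1, max_saccades + 1):
            direction = DIRECTIONS[(n - 1) % len(DIRECTIONS)]
            events = move_pan_tilt(direction)              # emulate one micro-saccade
            evidence = evidence + classify_events(events)  # per-class network scores
            probs = np.exp(evidence - np.max(evidence))    # softmax over evidence
            probs /= probs.sum()
            if probs.max() >= CONFIDENCE:                  # stop as early as possible
                break
        return int(np.argmax(probs)), n                    # label and saccades used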

    Empirical study on the efficiency of Spiking Neural Networks with axonal delays, and algorithm-hardware benchmarking

    The role of axonal synaptic delays in the efficacy and performance of artificial neural networks has been largely unexplored. In step-based analog-valued neural network models (ANNs), the concept is almost absent. In their spiking, neuroscience-inspired counterparts, there is hardly a systematic account of their effects on model performance in terms of accuracy and number of synaptic operations. This paper proposes a methodology for accounting for axonal delays in the training loop of deep Spiking Neural Networks (SNNs), intending to efficiently solve machine learning tasks on data with rich temporal dependencies. We then conduct an empirical study of the effects of axonal delays on model performance during inference for the Adding task, a benchmark for sequential regression, and for the Spiking Heidelberg Digits dataset (SHD), commonly used for evaluating event-driven models. Quantitative results on the SHD show that SNNs incorporating axonal delays instead of explicit recurrent synapses achieve state-of-the-art test accuracy of over 90% while needing fewer than half the trainable synapses. Additionally, we estimate the memory (in total parameters) and energy consumption required to accommodate such delay-trained models on a modern neuromorphic accelerator. These estimations are based on the number of synaptic operations and the reference GF-22nm FDX CMOS technology. As a result, we demonstrate that a reduced parameterization which incorporates axonal delays leads to approximately 90% energy and memory reduction in digital hardware implementations, for similar performance on the aforementioned task.
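
    A minimal sketch of per-axon axonal delays in a discrete-time SNN layer, using a circular buffer so that a spike on input i reaches the postsynaptic sum delays[i] steps later. In the paper the delays are trained; here they are fixed, and all sizes and values are illustrative.

    import numpy as np

    N_IN, N_OUT, MAX_DELAY = 4, 2, 8
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(N_IN, N_OUT)).astype(np.float32)
    delays = rng.integers(0, MAX_DELAY, size=N_IN)   # per-axon delay in time steps

    buffer = np.zeros((MAX_DELAY, N_OUT), dtype=np.float32)  # circular delay buffer
    t = 0

    def step(in_spikes):
        """Advance one time step; return the delayed postsynaptic input."""
        global t
        for i in np.flatnonzero(in_spikes):          # each input axon spiking now
            slot = (t + delays[i]) % MAX_DELAY       # slot where it will be delivered
            buffer[slot] += weights[i]
        out = buffer[t % MAX_DELAY].copy()           # contributions arriving this step
        buffer[t % MAX_DELAY] = 0.0
        t += 1
        return out

    A delay buffer like this spreads each input over time without adding trainable recurrent synapses, which is the trade-off the paper quantifies.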

    Live Demonstration: Multiplexing AER Asynchronous Channels over LVDS Links with Flow-Control and Clock-Correction for Scalable Neuromorphic Systems

    In this live demonstration we exploit the use of a serial link for fast asynchronous communication in massively parallel processing platforms connected to a DVS, for real-time implementation of bio-inspired vision processing on spiking neural networks.

    Passive localization and detection of quadcopter UAVs by using Dynamic Vision Sensor

    We present a new passive and low-power localization method for quadcopter UAVs (Unmanned Aerial Vehicles) using dynamic vision sensors. The method works by detecting the rotation speed of the propellers, which is normally much higher than the speed of movement of other objects in the background. Dynamic vision sensors are fast and power efficient. We present the algorithm along with implementation results.
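
    One plausible reading of the detection principle, sketched below under stated assumptions: fast-spinning propellers produce far higher per-pixel DVS event rates than slower background motion, so thresholding event counts accumulated over a short window localizes candidate propeller regions. The resolution, threshold, and centroid output are illustrative; the paper's actual algorithm may differ.

    import numpy as np

    SIZE = 128                      # sensor resolution (illustrative)
    RATE_THRESHOLD = 50             # events per window per pixel (illustrative)

    def detect_propellers(events):
        """events: iterable of (x, y, timestamp) DVS events from one short window."""
        counts = np.zeros((SIZE, SIZE), dtype=np.int32)
        for x, y, _t in events:
            counts[y, x] += 1                         # per-pixel event-rate proxy
        ys, xs = np.nonzero(counts > RATE_THRESHOLD)  # propeller-like activity
        if len(xs) == 0:
            return None                               # no candidate region
        return float(xs.mean()), float(ys.mean())     # centroid of the detection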

    Asynchronous spiking neurons, the natural key to exploit temporal sparsity

    Inference of Deep Neural Networks for stream-signal (video/audio) processing in edge devices is still challenging. Unlike most state-of-the-art inference engines, which are efficient for static signals, our brain is optimized for real-time dynamic signal processing. We believe one important feature of the brain, asynchronous stateful processing, is the key to its excellence in this domain. In this work, we show how asynchronous processing with stateful neurons allows exploiting the sparsity present in natural signals. This paper explains three different types of sparsity and proposes an inference algorithm that exploits all of them in the execution of already-trained networks. Our experiments in three different applications (handwritten digit recognition, autonomous steering, and hand-gesture recognition) show that this model of inference reduces the number of required operations for sparse input data by one to two orders of magnitude. Additionally, due to its fully asynchronous processing, this type of inference can run on fully distributed and scalable neuromorphic hardware platforms.
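
    A minimal sketch of asynchronous, stateful ("delta") inference for one layer of an already-trained network: incoming events carry only changes, the layer keeps persistent state, and it emits an output event for a unit only when that unit's activation has changed by more than a threshold, so static inputs trigger almost no operations. The threshold, ReLU activation, and this exact formulation are assumptions for illustration.

    import numpy as np

    THETA = 0.05                    # illustrative send threshold

    class DeltaLayer:
        def __init__(self, weights):
            self.w = weights                          # (n_in, n_out)
            self.accum = np.zeros(weights.shape[1])   # persistent pre-activation state
            self.sent = np.zeros(weights.shape[1])    # last transmitted activations

        def on_event(self, idx, delta):
            """Consume one input delta event; yield output delta events."""
            self.accum += delta * self.w[idx]         # one weight row, no full matmul
            act = np.maximum(self.accum, 0.0)         # ReLU activation
            for j in np.flatnonzero(np.abs(act - self.sent) > THETA):
                yield j, act[j] - self.sent[j]        # propagate only the change
                self.sent[j] = act[j]

    Because work is done per event rather than per frame, the cost of inference scales with how much the input changes, which is where the one-to-two-orders-of-magnitude savings on sparse data come from.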

    Performance Comparison of Time-Step-Driven versus Event-Driven Neural State Update Approaches in SpiNNaker

    The SpiNNaker chip is a multi-core processor optimized for neuromorphic applications. Many SpiNNaker chips are assembled to make a highly parallel million-core platform. This system can be used to simulate a large number of neurons in real time. SpiNNaker uses general-purpose ARM processors, which gives a high degree of flexibility to implement different methods for processing spikes. Various libraries and packages are provided to translate a high-level description of Spiking Neural Networks (SNNs) into the low-level machine language used by the ARM processors. In this paper, we introduce and compare three different methods to implement this intermediate layer of abstraction. We examine the advantages of each method against various criteria, which can be useful for professional users choosing between them. All the code used in this paper is available for academic purposes.
    Funding: EU H2020 grant 644096 (ECOMODE); EU H2020 grant 687299 (NEURAM3); Ministry of Economy and Competitiveness (Spain) / European Regional Development Fund TEC2015-63884-C2-1-P (COGNET).
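
    A minimal sketch contrasting the two state-update strategies compared in the paper, for a leaky integrate-and-fire neuron: the time-step-driven update applies decay to every neuron on every tick, while the event-driven update brings a neuron's state up to date only when a spike arrives. Constants are illustrative, and the real SpiNNaker implementations run on ARM cores rather than in Python.

    DECAY, THRESHOLD = 0.9, 1.0     # illustrative LIF constants

    def timestep_update(v, input_current):
        """Time-step-driven: decay is applied on every tick for every neuron."""
        v = v * DECAY + input_current
        fired = v >= THRESHOLD
        return (0.0 if fired else v), fired

    def event_update(v, last_t, t, weight):
        """Event-driven: state is brought up to date only when a spike arrives."""
        v = v * (DECAY ** (t - last_t)) + weight      # catch up on elapsed decay
        fired = v >= THRESHOLD
        return (0.0 if fired else v), fired, t        # t becomes the new last_t

    The trade-off is clear from the sketch: the time-step scheme does work proportional to the number of neurons per tick, while the event-driven scheme does work proportional to the number of incoming spikes.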