14 research outputs found

    An AER handshake-less modular infrastructure PCB with x8 2.5Gbps LVDS serial links

    Spike-based emulation of brain processing is taking off, as demonstrated by several EU and worldwide projects such as SpiNNaker, BrainScaleS, FACETS, and NeuroGrid. The larger the brain emulation on silicon, the higher the communication performance the hosting platforms must provide. The bottleneck of these implementations often lies not in the performance inside a chip or a board, but in the communication between boards. This paper describes a novel modular Address-Event-Representation (AER) FPGA-based (Spartan-6) infrastructure PCB (the AER-Node board) with 2.5 Gbps LVDS high-speed serial links over SATA cables that offers a peak performance of 62.5 Meps (mega events per second) for 32-bit events in board-to-board communication. The board is backward compatible with parallel AER devices, supporting up to x2 28-bit parallel data with asynchronous handshake, and allows modular expansion through several daughter boards. The paper focuses on describing the LVDS serial interface in detail and presenting its performance.
    Funding: Ministerio de Ciencia e Innovación TEC2009-10639-C04-02/01; Ministerio de Economía y Competitividad TEC2012-37868-C04-02/01; Junta de Andalucía TIC-6091; Ministerio de Economía y Competitividad PRI-PIMCHI-2011-076
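The quoted 62.5 Meps figure follows directly from the line rate if one assumes 8b/10b line coding, which is common for this class of serializer but is not stated in the abstract. A minimal sanity-check sketch:

```python
# Sanity check of the AER-Node link's peak event rate, assuming (not stated
# in the abstract) an 8b/10b-encoded serial line: 8 payload bits per 10 line bits.
line_rate_bps = 2.5e9                  # raw LVDS line rate
payload_bps = line_rate_bps * 8 / 10   # 8b/10b coding overhead
event_bits = 32                        # one 32-bit address event
peak_eps = payload_bps / event_bits

print(f"peak event rate: {peak_eps / 1e6:.1f} Meps")  # prints "peak event rate: 62.5 Meps"
```

The match with the abstract's 62.5 Meps figure suggests the link's only overhead at peak is the line coding itself.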

    Live demonstration: Hardware implementation of convolutional STDP for on-line visual feature learning

    We present a live demonstration of hardware that learns visual features on-line and in real time during the presentation of objects. Input spikes come from a bio-inspired silicon retina, or Dynamic Vision Sensor (DVS), and are processed in a Spiking Convolutional Neural Network (SCNN) equipped with a Spike Timing Dependent Plasticity (STDP) learning rule implemented on an FPGA.

    Hardware Implementation of Convolutional STDP for On-line Visual Feature Learning

    We present a highly hardware-friendly STDP (Spike Timing Dependent Plasticity) learning rule for training spiking convolutional cores in unsupervised mode and fully connected classifiers in supervised mode. Examples are given for a 2-layer spiking neural system that learns features in real time from visual scenes obtained with spiking DVS (Dynamic Vision Sensor) cameras.
    Funding: EU H2020 grant 644096 “ECOMODE”; EU H2020 grant 687299 “NEURAM3”; Ministry of Economy and Competitivity (Spain) / European Regional Development Fund TEC2012-37868-C04-01 (BIOSENSE); Junta de Andalucía (España) TIC-6091 (NANONEURO)
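The abstract does not give the exact hardware rule; for orientation, a minimal pair-based STDP update can be sketched as follows, with time constants and amplitudes chosen purely for illustration:

```python
import math

# Minimal pair-based STDP sketch: a pre-spike shortly before a post-spike
# strengthens the synapse; the reverse order weakens it. Amplitudes and the
# time constant below are illustrative, not taken from the paper.
A_PLUS, A_MINUS = 0.05, 0.06   # potentiation / depression amplitudes
TAU = 20.0                     # decay time constant in ms

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post: potentiate
        return A_PLUS * math.exp(-dt / TAU)
    elif dt < 0:  # post before pre: depress
        return -A_MINUS * math.exp(dt / TAU)
    return 0.0
```

For example, a pre-spike 5 ms before a post-spike yields a positive update; the reversed order yields a negative one. Hardware-friendly variants typically replace the exponentials with counters or fixed step sizes.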

    Deep Spiking Neural Network model for time-variant signals classification: a real-time speech recognition approach

    Speech recognition has become an important task for improving the human-machine interface. Given the limitations of current automatic speech recognition systems, such as non-real-time cloud-based solutions and their power demands, recent interest in neural networks and bio-inspired systems has motivated new techniques. Among them, combining spiking neural networks with neuromorphic auditory sensors offers an alternative way to carry out human-like speech processing. In this approach, a spiking convolutional neural network model was implemented in which the connection weights were calculated by training a convolutional neural network with specific activation functions on firing-rate-based static images built from the spiking information of a neuromorphic cochlea. The system was trained and tested with a large dataset containing “left” and “right” speech commands, achieving 89.90% accuracy. A novel spiking neural network model is proposed to adapt the network trained on static images to a non-static processing approach, making it possible to classify audio signals and time series in real time.
    Funding: Ministerio de Economía y Competitividad TEC2016-77785-
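The firing-rate-based static images described above can be pictured as a 2D histogram of cochlea spikes over frequency channels and time bins. A sketch, where the channel count, window length, bin count, and spike format are all assumptions, not taken from the paper:

```python
import numpy as np

# Illustrative sketch: bin cochlea spike events into a (channels x time-bins)
# firing-rate image that a conventional CNN can be trained on.
# Channel count, window length and spike format are assumptions.
def rate_image(spikes, n_channels=64, duration_ms=1000.0, n_bins=50):
    """spikes: iterable of (timestamp_ms, channel) events."""
    img = np.zeros((n_channels, n_bins))
    bin_ms = duration_ms / n_bins
    for t, ch in spikes:
        b = min(int(t / bin_ms), n_bins - 1)  # clamp the final timestamp
        img[ch, b] += 1
    return img / bin_ms  # spikes per ms, i.e., per-bin firing rate
```

Each row then traces the activity of one frequency band over time, so the image plays the role of a spike-derived spectrogram.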

    Digital design for neuromorphic bio-inspired vision processing

    Artificial Intelligence (AI) is an exciting technology that has flourished in this century. One of its goals is to give computers the ability to learn. Currently, machine intelligence surpasses human intelligence in specific domains. Beyond conventional machine learning algorithms, Artificial Neural Networks (ANNs) are arguably the most exciting technology used to bring this intelligence to the computer world. Due to their performance, an increasing number of applications that need some kind of intelligence use ANNs. Neuromorphic engineers are introducing bio-inspired hardware for the efficient implementation of neural networks. This hardware should be able to simulate a vast number of neurons in real time, with complex synaptic connectivity, while consuming little power. The work done in this thesis is hardware oriented, so it is necessary for the reader to have a good understanding of the hardware used for its developments. In this chapter, we provide a brief overview of the hardware platforms used in this thesis. Afterward, we briefly explain the contributions of this thesis to the bio-inspired processing research line.

    Interfacing PDM MEMS Microphones with PFM Spiking Systems: Application for Neuromorphic Auditory Sensors

    Neuromorphic computation processes sensor output in the spiking domain, which in many cases imposes constraints when converting information into spikes, losing temporal accuracy, for example. This paper presents a spike-based system that adapts audio information from low-power pulse-density modulation (PDM) microelectromechanical-systems microphones into rate-coded spike frequencies. These spikes can be used directly by the neuromorphic auditory sensor (NAS) for frequency decomposition into different bands, avoiding analog or digital conversion to spike streams. This improves the time response of the NAS, allowing its use in more time-restrictive applications. The adaptation was implemented in VHDL as an interface for PDM microphones, converting their pulses into temporally distributed spikes following a pulse-frequency modulation scheme with an accurate inter-spike interval, known as the PDM-to-spikes interface (PSI). We introduce a new architecture of spike-based band-pass filter to reject DC components and distribute spikes in time. This was tested in two scenarios: first as a stand-alone circuit for characterization, and then integrated with a NAS for verification. The PSI achieves a total harmonic distortion of −46.18 dB and a signal-to-noise ratio of 63.47 dB, demands less than 1% of the resources of a Spartan-6 FPGA, and consumes around 7 mW.
    Funding: Agencia Estatal de Investigación PID2019-105556GB-C33/AEI/10.13039/501100011033 (MINDROB); Ministerio de Ciencia, Innovación y Universidades PCI2019-111841-2 (CHIST-ERA SMALL)
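The core idea of a PDM-to-PFM conversion can be modeled in a few lines: accumulate the density of ones in the PDM bitstream and emit a spike each time the accumulator crosses a threshold. This is only a software sketch of the principle; the actual PSI is a VHDL design with a spike-based band-pass filter, and the threshold value here is an assumption.

```python
# Illustrative model of turning a PDM bitstream into rate-coded spikes:
# a denser bitstream (louder signal) yields proportionally more spikes,
# i.e., pulse-frequency modulation. Threshold is an assumed parameter.
def pdm_to_spikes(pdm_bits, threshold=8):
    spikes = []  # indices of PDM bits at which a spike is emitted
    acc = 0
    for i, bit in enumerate(pdm_bits):
        acc += bit
        if acc >= threshold:
            spikes.append(i)
            acc -= threshold  # keep the residue for accurate inter-spike intervals
    return spikes
```

Keeping the residue in the accumulator (rather than resetting it to zero) is what preserves an accurate inter-spike interval over time.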

    ED-Scorbot: A Robotic test-bed Framework for FPGA-based Neuromorphic systems

    Neuromorphic engineering is a growing and promising discipline. Neuro-inspiration and brain understanding applied to engineering problems are boosting new architectures, solutions, and products. The biological brain and neural systems process information at relatively low speeds through small components, called neurons, and it is impressive how they connect to each other to construct complex architectures that solve, in a quasi-instantaneous way and with very low power, visual and audio processing tasks such as object detection and tracking, target approximation, and grasping. Neuromorphic systems are very promising for a new era in the development of sensors, processors, robots, and software systems that mimic these biological systems. The event-driven Scorbot (ED-Scorbot) is a robotic arm plus a set of FPGA/microcontroller boards and a library of FPGA logic, joined in a completely event-based (spike-based) framework from the sensors to the actuators. It is located at the University of Seville and can be used remotely. Spike-based commands, through neuro-inspired motor controllers, can be sent to the robot after visual object detection and tracking for grasping or manipulation, after complex audio-visual sensory fusion, or after performing a learning task. Thanks to the cascaded FPGA architecture through the Address-Event-Representation (AER) bus, supported by specialized boards, resources for implementing algorithms are not limited.
    Funding: Ministerio de Economía y Competitividad TEC2012-37868-C04-02; Junta de Andalucía P12-TIC-130

    A Configurable Event-Driven Convolutional Node with Rate Saturation Mechanism for Modular ConvNet Systems Implementation

    Convolutional Neural Networks (ConvNets) are a particular type of neural network often used for applications like image recognition, video analysis, and natural language processing. They are inspired by the human brain, following a specific organization of the connectivity pattern between layers of neurons known as the receptive field. These networks have traditionally been implemented in software, but they become more computationally expensive as they scale up, limiting real-time processing of high-speed stimuli. Hardware implementations, on the other hand, are difficult to reuse across applications due to their reduced flexibility. In this paper, we propose a fully configurable event-driven convolutional node with a rate saturation mechanism that can be used to implement arbitrary ConvNets on FPGAs. The node includes a convolutional processing unit and a routing element, which make it possible to build large 2D arrays in which any multilayer structure can be implemented. The rate saturation mechanism emulates the refractory behavior of biological neurons, guaranteeing a minimum separation in time between consecutive events. A 4-layer ConvNet with 22 convolutional nodes, trained for poker card symbol recognition, has been implemented on a Spartan-6 FPGA. The network has been tested with a stimulus in which 40 poker cards were observed by a Dynamic Vision Sensor (DVS) in 1 s. Different slow-down factors were applied to characterize the behavior of the system for high-speed processing. For slow stimulus play-back, a 96% recognition rate is obtained with a power consumption of 0.85 mW. At maximum play-back speed, a traffic control mechanism downsamples the input stimulus, obtaining a recognition rate above 63% when less than 20% of the input events are processed, demonstrating the robustness of the network.
    Funding: European Union 644096, 687299; Gobierno de España TEC2016-77785-P, TEC2015-63884-C2-1-P; Junta de Andalucía TIC-6091, TICP120
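The rate saturation idea can be modeled as a per-neuron refractory filter: once a neuron emits an event, further events from it are suppressed until a refractory time has elapsed, which caps its output rate at the reciprocal of that time. A sketch of this behavior, where the event format and the refractory value are assumptions, not the paper's design:

```python
# Illustrative model of a rate saturation mechanism: a neuron that has just
# emitted an event is blocked for t_refr_us, capping its output rate at
# 1 / t_refr_us events per microsecond. Event format and the refractory
# value are assumptions for the sketch.
def rate_saturate(events, t_refr_us=100.0):
    """events: time-ordered list of (timestamp_us, neuron_id).
    Returns only the events that respect the refractory period."""
    last_out = {}  # neuron_id -> timestamp of its last emitted event
    kept = []
    for t, nid in events:
        if nid not in last_out or t - last_out[nid] >= t_refr_us:
            kept.append((t, nid))
            last_out[nid] = t
    return kept
```

Because the filter is per neuron, a burst from one hot pixel is throttled without affecting the rest of the array, which matches the biological refractory analogy the abstract draws.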

    Closed-loop sound source localization in neuromorphic systems

    Sound source localization (SSL) is used in applications such as industrial noise control, speech detection in mobile phones, and speech enhancement in hearing aids. The newest video-conferencing setups use SSL: the position of a speaker is detected from differences in the audio waves received by a microphone array, and after detection the camera focuses on the speaker's location. The human brain is also able to detect the location of a speaker from auditory signals. It uses, among other cues, the difference in amplitude and arrival time of the sound wave at the two ears, called the interaural level and time difference. However, the substrate and computational primitives of our brain differ from those of classical digital computing. Due to its low power consumption of around 20 W and its real-time performance, the human brain has become a great source of inspiration for emerging technologies. One of these technologies is neuromorphic hardware, which implements the fundamental principles of brain computing identified to date, using complementary metal-oxide-semiconductor technologies and new devices. In this work we propose the first neuromorphic closed-loop robotic system that uses the interaural time difference for SSL in real time. Our system can successfully locate sound sources such as human speech. In a closed-loop experiment, the robotic platform turned immediately toward the sound source with a turning velocity linearly proportional to the angle difference between the sound source and the binaural microphones. After this initial turn, the robotic platform remained pointed at the sound source. Even though the system uses very few of the available hardware resources, consumes around 1 W, and was tuned only by hand, with no learning at all, it already reaches performance comparable to other neuromorphic approaches. The SSL system presented in this article brings us one step closer to neuromorphic event-based systems for robotics and embodied computing.
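The two steps of the closed loop described above can be sketched with the standard far-field ITD model and a proportional controller. The microphone spacing, gain, and speed of sound below are assumptions for the sketch, not values from the article:

```python
import math

# Step 1: estimate the azimuth of a sound source from the interaural time
# difference (ITD) via the far-field model itd = d * sin(theta) / c.
# Step 2: turn with a velocity linearly proportional to the angle error,
# as the abstract describes. All constants are assumed for illustration.
SOUND_SPEED = 343.0   # m/s, speed of sound in air
MIC_DISTANCE = 0.15   # m between the two microphones (assumed)
K_TURN = 2.0          # proportional gain (assumed)

def itd_to_azimuth_deg(itd_s):
    """Azimuth in degrees; the sine argument is clamped for large ITDs."""
    s = max(-1.0, min(1.0, itd_s * SOUND_SPEED / MIC_DISTANCE))
    return math.degrees(math.asin(s))

def turn_velocity(angle_error_deg):
    """Turning velocity linearly proportional to the remaining angle error."""
    return K_TURN * angle_error_deg
```

As the platform turns toward the source, the ITD and hence the angle error shrink, so the commanded velocity decays to zero and the robot settles facing the source, matching the closed-loop behavior reported in the abstract.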