
    Efficient DMA transfers management on embedded Linux PSoC for Deep-Learning gestures recognition: Using Dynamic Vision Sensor and NullHop one-layer CNN accelerator to play RoShamBo

    This demonstration shows a Dynamic Vision Sensor able to capture visual motion at a speed equivalent to a high-speed camera (20k fps). The collected visual information is presented as a normalized histogram to a CNN hardware accelerator, called NullHop, that is able to process a pre-trained CNN to play RoShamBo against a human. The CNN designed for this purpose consists of 5 convolutional layers and a fully connected layer. The latency for processing one histogram is 8 ms. NullHop is deployed on the FPGA fabric of a PSoC from Xilinx, the Zynq 7100, which integrates a dual-core ARM computer and a Kintex-7 FPGA with 444K logic cells in the same chip. The ARM computer runs Linux, and a dedicated C++ controller runs the whole demo. The controller runs in user space in order to extract the maximum throughput thanks to an efficient use of the AXI-Stream interface, based on DMA transfers. The short delay needed to process one visual histogram allows us to average several consecutive classification outputs, providing the best estimation of the symbol that the user presents to the visual sensor. This output is then mapped to present the winning symbol within the 60 ms latency that the brain considers acceptable before suspecting a trick.
    Ministerio de Economía y Competitividad TEC2016-77785-
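The averaging of consecutive classification outputs described above can be sketched as follows. This is only a minimal illustration, not the demo's actual controller code; the `OutputAverager` class, window size, and score vectors are all assumptions.

```python
from collections import deque

import numpy as np


class OutputAverager:
    """Average the last N per-histogram score vectors and report the winner."""

    def __init__(self, window: int = 5):
        self.history = deque(maxlen=window)  # sliding window of score vectors

    def update(self, scores) -> int:
        """Add one score vector; return the class with the highest mean score."""
        self.history.append(np.asarray(scores, dtype=float))
        mean_scores = np.mean(self.history, axis=0)
        return int(np.argmax(mean_scores))


# Hypothetical noisy score vectors for classes (rock, paper, scissors, other):
avg = OutputAverager(window=3)
avg.update([0.1, 0.7, 0.1, 0.1])
avg.update([0.2, 0.5, 0.2, 0.1])
winner = avg.update([0.4, 0.3, 0.2, 0.1])  # averaged scores still favor class 1
```

Averaging over a few 8 ms classifications still fits comfortably inside a 60 ms response budget, which is the design point the abstract describes.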

    Spike Events Processing for Vision Systems

    In this paper we briefly summarize the fundamental properties of spike event processing applied to artificial vision systems. This sensing and processing technology is capable of very high-speed throughput, because it does not rely on sensing and processing sequences of frames, and because it allows for complex hierarchically structured cortical-like layers for sophisticated processing. The paper includes a few examples that have demonstrated the potential of this technology for high-speed vision processing, such as a multilayer event processing network of 5 sequential cortical-like layers, and a recognition system capable of discriminating propellers of different shapes rotating at 5000 revolutions per second (300,000 revolutions per minute).
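The address-event idea behind this processing can be illustrated with a toy encoder/decoder. This is a minimal sketch assuming a flat `y * width + x` address word, which is only one of several possible AER address mappings; the function names are invented for illustration.

```python
from typing import List, Tuple


def encode_aer(events: List[Tuple[int, int, int]], width: int = 128) -> List[Tuple[int, int]]:
    """Encode (x, y, timestamp_us) pixel spikes as (address, timestamp_us) words.

    Each spike travels on the bus as the address of the pixel that fired,
    so bandwidth is spent only on pixels carrying information.
    """
    return [(y * width + x, t) for (x, y, t) in events]


def decode_aer(words: List[Tuple[int, int]], width: int = 128) -> List[Tuple[int, int, int]]:
    """Recover (x, y, timestamp_us) spikes from (address, timestamp_us) words."""
    return [(addr % width, addr // width, t) for (addr, t) in words]


spikes = [(3, 0, 10), (127, 127, 12)]
words = encode_aer(spikes)
assert decode_aer(words) == spikes  # the mapping is lossless
```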

    On Real-Time AER 2-D Convolutions Hardware for Neuromorphic Spike-Based Cortical Processing

    In this paper, a chip that performs real-time image convolutions with programmable kernels of arbitrary shape is presented. The chip is a first experimental prototype of reduced size to validate the implemented circuits and system-level techniques. The convolution processing is based on the address-event-representation (AER) technique, a spike-based biologically inspired image and video representation technique that favors communication bandwidth for pixels with more information. As a first test prototype, a pixel array of 16x16 has been implemented with a programmable kernel size of up to 16x16. The chip has been fabricated in a standard 0.35-μm complementary metal-oxide-semiconductor (CMOS) process. The technique also allows larger images to be processed by assembling 2-D arrays of such chips. Pixel operation exploits low-power mixed analog-digital circuit techniques. Because of the low currents involved (down to nanoamperes or even picoamperes), a significant amount of pixel area is devoted to mismatch calibration. The rest of the chip uses digital circuit techniques, both synchronous and asynchronous. The fabricated chip has been thoroughly tested, both at the pixel level and at the system level. Specific computer interfaces have been developed for generating AER streams from conventional computers and feeding them as inputs to the convolution chip, and for grabbing AER streams coming out of the convolution chip and storing and analyzing them on computers. Extensive experimental results are provided. At the end of this paper, we provide discussions and results on scaling up the approach for larger pixel arrays and multilayer cortical AER systems.
    Commission of the European Communities IST-2001-34124 (CAVIAR). Commission of the European Communities 216777 (NABAB). Ministerio de Educación y Ciencia TIC-2000-0406-P4. Ministerio de Educación y Ciencia TIC-2003-08164-C03-01. Ministerio de Educación y Ciencia TEC2006-11730-C03-01. Junta de Andalucía TIC-141
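The event-driven convolution scheme can be sketched in software. This is a simplified model of the chip's behavior, assuming a per-pixel accumulate-and-fire rule; the `event_convolution` function, threshold, and array sizes are invented for illustration, not taken from the chip's design.

```python
import numpy as np


def event_convolution(events, kernel, shape=(16, 16), threshold=4.0):
    """Event-driven 2-D convolution in the AER style.

    Each input event at (x, y) splats the kernel around that pixel; whenever a
    pixel's accumulated state crosses the threshold, it emits an output event
    and subtracts the threshold, so output event rate tracks the correlation
    between the input event stream and the kernel.
    """
    state = np.zeros(shape)
    kh, kw = kernel.shape
    out_events = []
    for (x, y) in events:
        for dy in range(kh):
            for dx in range(kw):
                px, py = x + dx - kw // 2, y + dy - kh // 2
                if 0 <= px < shape[1] and 0 <= py < shape[0]:
                    state[py, px] += kernel[dy, dx]
                    if state[py, px] >= threshold:
                        out_events.append((px, py))
                        state[py, px] -= threshold
    return out_events
```

Because computation happens per incoming event rather than per frame, quiet regions of the image cost nothing, which is the bandwidth advantage the abstract attributes to AER.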

    Real-time motor rotation frequency detection with event-based visual and spike-based auditory AER sensory integration for FPGA

    Multisensory integration is commonly used in various robotic areas to collect more environmental information using different and complementary types of sensors. Neuromorphic engineers mimic the behavior of biological systems to improve system performance in solving engineering problems with low power consumption. This work presents a neuromorphic sensory integration scenario for measuring the rotation frequency of a motor using an AER DVS128 retina chip (Dynamic Vision Sensor) and a stereo auditory system on an FPGA, all completely event-based. Both sensors transmit information with the Address-Event-Representation (AER). The integration system uses a new AER monitor hardware interface, based on a Spartan-6 FPGA, that allows two operational modes: real-time (up to 5 Mevps through USB2.0) and data logger mode (up to 20 Mevps, with 33.5 Mev stored in onboard DDR RAM). The sensory integration reduces the prediction error of the rotation speed of the motor, since audio processing offers a concrete range of rpm, while the DVS can be much more accurate.
    Ministerio de Economía y Competitividad TEC2012-37868-C04-02/0
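The way a coarse audio-derived rpm range disambiguates the finer DVS estimate can be sketched as follows. `fuse_rotation_estimate` and its inputs are hypothetical illustrations, not the paper's FPGA implementation.

```python
def fuse_rotation_estimate(dvs_rpm_candidates, audio_rpm_range):
    """Pick the DVS-derived rpm candidate that falls inside the audio band.

    The event-based retina can yield ambiguous candidates (e.g. harmonics of
    the true rotation frequency); the audio stage gives a coarse but
    unambiguous rpm band that selects among them. Returns None if no
    candidate is consistent with the audio band.
    """
    lo, hi = audio_rpm_range
    in_band = [rpm for rpm in dvs_rpm_candidates if lo <= rpm <= hi]
    if not in_band:
        return None
    # Prefer the candidate nearest the center of the audio band.
    return min(in_band, key=lambda rpm: abs(rpm - (lo + hi) / 2))


# Hypothetical harmonics from the DVS; audio narrows the search to 2500-3500 rpm:
assert fuse_rotation_estimate([1500, 3000, 6000], (2500, 3500)) == 3000
```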

    Live Demonstration: Real-time motor rotation frequency detection by spike-based visual and auditory AER sensory integration for FPGA

    Multisensory integration is commonly used in various robotic areas to collect much more information from an environment using different and complementary types of sensors. This demonstration presents a scenario where the motor rotation frequency is obtained using an AER DVS128 retina chip (Dynamic Vision Sensor) and a frequency-decomposing auditory system on an FPGA that mimics a biological cochlea. Both are spike-based sensors with Address-Event-Representation (AER) outputs. A new AER monitor hardware interface, based on a Spartan-6 FPGA, allows two operational modes: real-time (up to 5 Mevps through USB2.0) and off-line mode (up to 20 Mevps, with 33.5 Mev stored in DDR RAM). In the sensory integration, the bio-inspired cochlea bounds the rotation speed to a concrete rpm range, which is then refined by the silicon retina.
    Ministerio de Economía y Competitividad TEC2012-37868-C04-02/0

    LIPSFUS: A neuromorphic dataset for audio-visual sensory fusion of lip reading

    This paper presents a sensory fusion neuromorphic dataset collected with precise temporal synchronization using a set of Address-Event-Representation (AER) sensors and tools. The target application is lip reading of several keywords for different machine learning applications, such as digits, robotic commands, and auxiliary rich phonetic short words. The dataset is enlarged with a spiking version of an audio-visual lip reading dataset collected with frame-based cameras. LIPSFUS is publicly available and has been validated with a deep learning architecture for audio and visual classification. It is intended for sensory fusion architectures based on both artificial and spiking neural network algorithms.
    Comment: Submitted to ISCAS2023, 4 pages, plus references, github link provided

    AER Building Blocks for Multi-Layer Multi-Chip Neuromorphic Vision Systems

    A 5-layer neuromorphic vision processor whose components communicate spike events asynchronously using the address-event-representation (AER) is demonstrated. The system includes a retina chip, two convolution chips, a 2D winner-take-all chip, a delay line chip, a learning classifier chip, and a set of PCBs for computer interfacing and address space remapping. The components use a mixture of analog and digital computation and will learn to classify trajectories of a moving object. A complete experimental setup and measurement results are shown.
    Unión Europea IST-2001-34124 (CAVIAR). Ministerio de Ciencia y Tecnología TIC-2003-08164-C0

    Intragastric Endoscopic Assisted Single Incision Surgery for Gastric Leiomyoma of the Esophagogastric Junction

    Single-port laparoscopic surgery is becoming an alternative to conventional laparoscopic surgery as a new approach in which all the conventional ports are gathered into just one multichannel port through a single incision. Applying this technical development, we have developed a new technique based on an intragastric approach using a single-port device assisted by endoscopy (I-EASI: intragastric endoscopic assisted single incision surgery) in order to remove benign gastric lesions and GIST tumors located in the posterior wall of the stomach or close to the esophagogastric or gastroduodenal junction. We present a patient with a submucosal gastric tumor located near the esophagogastric junction removed with this new approach.

    A Smart Electric Wheelchair Using UPnP

    People with disabilities in general, and wheelchair users in particular, are among the groups that may benefit most from Ambient Intelligence (AmI) systems, which can enhance their autonomy and quality of life. However, current wheelchairs are usually not equipped with devices capable of accessing services in AmI environments. In this paper, we describe how an electric wheelchair is equipped with a UPnP-based module that allows its integration into AmI systems.
    Ministerio de Ciencia y Tecnología TIC2001-1868-C03-0
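UPnP devices announce and discover services with SSDP over UDP multicast. Below is a minimal sketch of building an M-SEARCH discovery datagram and parsing a response; the helper names and the example `LOCATION` URL are illustrative assumptions (real discovery would send the datagram to 239.255.255.250:1900 over a UDP socket and listen for unicast replies).

```python
def build_msearch(search_target: str = "ssdp:all", mx: int = 2) -> bytes:
    """Build an SSDP M-SEARCH datagram for UPnP device discovery."""
    lines = [
        "M-SEARCH * HTTP/1.1",
        "HOST: 239.255.255.250:1900",   # SSDP multicast address and port
        'MAN: "ssdp:discover"',
        f"MX: {mx}",                     # max seconds a device may wait to reply
        f"ST: {search_target}",          # search target (device/service type)
        "", "",
    ]
    return "\r\n".join(lines).encode("ascii")


def parse_ssdp_response(data: bytes) -> dict:
    """Extract the headers of an SSDP response into an upper-cased dict."""
    headers = {}
    for line in data.decode("ascii", errors="replace").split("\r\n")[1:]:
        if ":" in line:
            key, _, value = line.partition(":")
            headers[key.strip().upper()] = value.strip()
    return headers
```

A responding device advertises a `LOCATION` header pointing at its XML device description, which is where a control point such as the wheelchair module would learn the available services.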

    Neuromorphic control of the BioRob robotic arm at CITEC, Bielefeld University

    Compared to classic robotics, biological nervous systems respond to stimuli quickly and efficiently when driving body movement, whereas classic robotic systems usually require much higher computational capacity. One key difference is the way information is coded and transmitted: biological systems use spikes. Neurons, the basic elements of biological nervous systems, communicate in an event-driven way through small current pulses (spikes) produced when ions are exchanged between the dendrites and axons of different neurons. Arranged in networks, neurons not only process sensory information in the spike domain but also actuate the muscles in a spiking way. This paper presents the application of a motor control model based on spike processing, including motor actuation in the spike domain. A closed-loop, spike-based PID controller has been developed for FPGA and embedded in the skeleton of a bioinspired robot, the BioRob X5 at CITEC, Bielefeld University, during a "Salvador de Madariaga" research visit in the July-September 2018 term, for use in the development of bioinspired models. The robot, over 1 m long, allows joint position control through spiking signals while drawing less than 1 A with all degrees of freedom operating at the same time.
    Ministerio de Educación y Ciencia (España)/FEDER. Proyecto COFNET TEC2016-77785-
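The spike-based PID idea can be sketched in software. This is a minimal discrete-time model, not the paper's FPGA design; the `SpikePID` class, the gains, and the sigma-delta spike generation are assumptions chosen for illustration.

```python
class SpikePID:
    """Minimal discrete sketch of a spike-based PID control loop.

    The error between target and measured joint angle drives a standard PID
    law; the analog command is then converted to a signed spike train by
    sigma-delta modulation, so the motor driver only ever sees +1/-1/0
    events, as in spike-domain actuation.
    """

    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0
        self.residue = 0.0  # sigma-delta accumulator

    def step(self, target: float, measured: float) -> int:
        error = target - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Sigma-delta stage: emit a +1/-1 spike only when the accumulated
        # command crosses one spike quantum; otherwise stay silent.
        self.residue += u * self.dt
        if self.residue >= 1.0:
            self.residue -= 1.0
            return +1
        if self.residue <= -1.0:
            self.residue += 1.0
            return -1
        return 0
```

Run per control tick, large positive errors produce dense +1 spike trains and small errors produce sparse ones, which is the rate-coded behavior a spike-domain motor stage expects.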