66 research outputs found

    Interfacing PDM sensors with PFM spiking systems: application for Neuromorphic Auditory Sensors

    In this paper we present a sub-system to convert audio information from low-power MEMS microphones with pulse density modulation (PDM) output into rate-coded spike streams. These spikes represent the input signal of a Neuromorphic Auditory Sensor (NAS), which is implemented with Spike Signal Processing (SSP) building blocks. For this conversion, we have designed an HDL component for FPGA able to interface with PDM microphones and convert their pulses into temporally distributed spikes following a pulse frequency modulation (PFM) scheme with an accurate, configurable inter-spike interval. The new FPGA component has been tested in two scenarios: first as a stand-alone circuit for its characterization, and then integrated with a full NAS design to verify its behavior. This PDM interface demands less than 1% of the resources of a Spartan-6 FPGA and has a power consumption below 5 mW.
    Ministerio de Economía y Competitividad TEC2016-77785-
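    As a rough illustration of the conversion idea (not the authors' HDL component), the Python sketch below integrates a PDM bitstream and emits a spike whenever the accumulated pulse density crosses a threshold, subject to a configurable minimum inter-spike interval; the threshold, the interval and the toy one-bit quantizer are all assumptions made for the example.

        import numpy as np

        def pdm_to_spikes(pdm_bits, threshold=16, min_isi=4):
            """Convert a PDM bitstream (one 0/1 value per clock) into a spike train.

            A counter accumulates PDM ones; when it reaches `threshold` a spike is
            emitted and the counter is cleared, provided at least `min_isi` clock
            cycles have elapsed since the previous spike (the configurable
            inter-spike interval). All parameter values are illustrative.
            """
            spikes = np.zeros(len(pdm_bits), dtype=np.uint8)
            acc = 0
            last_spike = -min_isi
            for t, bit in enumerate(pdm_bits):
                acc += bit
                if acc >= threshold and (t - last_spike) >= min_isi:
                    spikes[t] = 1
                    acc = 0
                    last_spike = t
            return spikes

        # Example: 10 ms of a 1 kHz tone turned into a crude PDM stream at 3.072 MHz.
        # The dithered comparator is only a toy stand-in for a sigma-delta modulator.
        fs = 3_072_000
        t = np.arange(fs // 100) / fs
        audio = 0.5 * np.sin(2 * np.pi * 1000 * t)
        pdm = (audio + np.random.uniform(-1, 1, audio.size) > 0).astype(np.uint8)
        spike_train = pdm_to_spikes(pdm)
        print("spike rate:", spike_train.sum() / t[-1], "spikes/s")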

    Live Demonstration: Neuromorphic Row-by-Row Multi-convolution FPGA Processor-SpiNNaker architecture for Dynamic-Vision Feature Extraction

    This demonstration presents a spiking neural network architecture for vision recognition that combines an FPGA spiking convolution processor, based on leaky integrate-and-fire (LIF) neurons, with a SpiNNaker board. The network has been trained with the Poker-DVS dataset in order to classify the four different card symbols. The spiking convolution processor extracts features from images in the form of spikes, computed by one layer of 64 convolutions. These features are sent to an OKAERtool board that converts from AER to the 2-of-7 protocol so that they can be classified by a spiking neural network deployed on a SpiNNaker platform.
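    Address-Event Representation (AER) is the spike transport used between the boards. The sketch below shows a hypothetical AER word layout (x, y and polarity packed into one integer); it only illustrates the encoding concept and is not the OKAERtool or SpiNNaker 2-of-7 link format.

        # Hypothetical AER word layout (not the OKAERtool or SpiNNaker format):
        # bits 0-7 = x address, bits 8-15 = y address, bit 16 = event polarity.
        def decode_aer(word: int) -> tuple[int, int, int]:
            """Unpack one address-event word into (x, y, polarity)."""
            x = word & 0xFF
            y = (word >> 8) & 0xFF
            polarity = (word >> 16) & 0x1
            return x, y, polarity

        def encode_aer(x: int, y: int, polarity: int) -> int:
            """Pack (x, y, polarity) into one address-event word."""
            return (polarity << 16) | (y << 8) | x

        assert decode_aer(encode_aer(17, 42, 1)) == (17, 42, 1)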

    Event-based Row-by-Row Multi-convolution engine for Dynamic-Vision Feature Extraction on FPGA

    Neural network algorithms are commonly used to recognize patterns from different data sources such as audio or vision. In image recognition, Convolutional Neural Networks are one of the most effective techniques due to the high accuracy they achieve. This kind of algorithm requires billions of addition and multiplication operations over all pixels of an image. However, it is possible to reduce the number of operations by using computer vision techniques other than frame-based ones, e.g. neuromorphic frame-free techniques. There exist many neuromorphic vision sensors that detect pixels that have changed their luminosity. In this study, an event-based convolution engine for FPGA is presented. This engine models an array of leaky integrate-and-fire neurons. It is able to apply different kernel sizes, from 1x1 to 7x7, which are computed row by row, with a maximum of 64 different convolution kernels. The design presented is able to process 64 feature maps of 7x7 with a latency of 8.98 µs.
    Ministerio de Economía y Competitividad TEC2016-77785-
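    A minimal software sketch of event-driven convolution with leaky integrate-and-fire neurons is shown below; it is a plain numpy model written for illustration, not the row-by-row FPGA engine, and the kernel values, threshold and leak rate are assumptions.

        import numpy as np

        class EventLIFConv:
            """Toy event-driven convolution with leaky integrate-and-fire neurons.

            Each input event adds the kernel into the membrane-potential map around
            the event address; membrane potentials decay linearly with elapsed time
            and neurons that cross the threshold emit an output event and reset.
            """

            def __init__(self, width, height, kernel, threshold=1.0, leak=0.01):
                self.v = np.zeros((height, width))          # membrane potentials
                self.kernel = np.asarray(kernel, dtype=float)
                self.threshold = threshold
                self.leak = leak                            # decay per time unit
                self.last_t = 0.0

            def process(self, x, y, t):
                # Leak all neurons for the time elapsed since the previous event.
                self.v = np.maximum(self.v - self.leak * (t - self.last_t), 0.0)
                self.last_t = t

                # Add the kernel centred on the event address (clipped at borders).
                k = self.kernel.shape[0] // 2
                y0, y1 = max(y - k, 0), min(y + k + 1, self.v.shape[0])
                x0, x1 = max(x - k, 0), min(x + k + 1, self.v.shape[1])
                ky0, kx0 = y0 - (y - k), x0 - (x - k)
                self.v[y0:y1, x0:x1] += self.kernel[ky0:ky0 + (y1 - y0), kx0:kx0 + (x1 - x0)]

                # Fire and reset the neurons that crossed the threshold.
                fired = np.argwhere(self.v >= self.threshold)
                self.v[self.v >= self.threshold] = 0.0
                return [(int(ex), int(ey), t) for ey, ex in fired]

        # 3x3 kernel over a 128x128 address space (all values are illustrative).
        conv = EventLIFConv(128, 128, kernel=np.full((3, 3), 0.6))
        out_events = conv.process(x=64, y=64, t=0.0) + conv.process(x=64, y=64, t=0.1)
        print(out_events)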

    Accuracy Improvement of Neural Networks Through Self-Organizing-Maps over Training Datasets

    Although it is not a novel topic, pattern recognition has become very popular and relevant in recent years. Different classification systems, like neural networks, support vector machines or even complex statistical methods, have been used for this purpose. Several works have used these systems to classify animal behavior, mainly in an offline way. Their main problem is usually the data pre-processing step, because the better the input data are, the higher the accuracy of the classification system may be. In previous papers by the authors, an embedded implementation of a neural network was deployed on a portable device that was placed on animals. This approach allows the classification to be done online and in real time. This is one of the aims of the research project MINERVA, which is focused on monitoring wildlife in Doñana National Park using low-power devices. Many difficulties were faced when the quality of the pre-processing methods needed to be evaluated. In this work, a novel pre-processing evaluation system based on self-organizing maps (SOM) is presented to measure the quality of the neural network training dataset. The paper focuses on a classification study of three different horse gaits. Preliminary results show that a better SOM output map matches an improvement in the embedded ANN classification hit rate.
    Junta de Andalucía P12-TIC-1300; Ministerio de Economía y Competitividad TEC2016-77785-
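    The sketch below shows one way such a SOM-based quality measure could look: a small numpy-only self-organizing map is trained on feature vectors and the mean quantization error is used as a proxy for how cleanly the training dataset is represented. Grid size, learning schedule and the toy gait features are assumptions, not the paper's setup.

        import numpy as np

        def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
            """Train a small self-organizing map (numpy-only sketch).

            `data` is an (n_samples, n_features) array; returns the
            (gx, gy, n_features) weight grid. Sizes and decay schedules are illustrative.
            """
            rng = np.random.default_rng(seed)
            gx, gy = grid
            w = rng.uniform(data.min(), data.max(), size=(gx, gy, data.shape[1]))
            coords = np.stack(np.meshgrid(np.arange(gx), np.arange(gy), indexing="ij"), axis=-1)
            n_steps, step = epochs * len(data), 0
            for _ in range(epochs):
                for x in rng.permutation(data):
                    lr = lr0 * (1 - step / n_steps)
                    sigma = sigma0 * (1 - step / n_steps) + 1e-3
                    # Best-matching unit and Gaussian neighbourhood update.
                    bmu = np.unravel_index(np.argmin(((w - x) ** 2).sum(-1)), (gx, gy))
                    dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
                    h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
                    w += lr * h * (x - w)
                    step += 1
            return w

        def quantization_error(data, w):
            """Mean distance from each sample to its best-matching unit:
            a lower value suggests the dataset is represented more cleanly."""
            flat = w.reshape(-1, w.shape[-1])
            d = np.sqrt(((data[:, None, :] - flat[None]) ** 2).sum(-1))
            return d.min(axis=1).mean()

        # Toy accelerometer-like features for three gaits (illustrative data only).
        features = np.vstack([np.random.normal(m, 0.3, size=(100, 6)) for m in (0.0, 1.0, 2.0)])
        som = train_som(features)
        print("quantization error:", quantization_error(features, som))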

    Live Demonstration: neuromorphic robotics, from audio to locomotion through spiking CPG on SpiNNaker.

    This live demonstration presents an audio-guided neuromorphic robot: from a Neuromorphic Auditory Sensor (NAS) to locomotion using Spiking Central Pattern Generators (sCPGs). Several gaits are generated by sCPGs implemented on a SpiNNaker board. The output of these sCPGs is sent in real time to a Field Programmable Gate Array (FPGA) board using an AER-to-SpiNN interface. The control of the hexapod robot's joints is performed by the FPGA board. The robot's behavior can be changed in real time by means of the NAS. The audio information is sent to the SpiNNaker board, which classifies it using a Spiking Neural Network (SNN). Thus, the input sound will activate a specific gait pattern, which will eventually modify the behavior of the robot.
    Ministerio de Economía y Competitividad TEC2016-77785-
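    For readers unfamiliar with CPGs, the sketch below illustrates the idea with plain phase oscillators that generate a tripod gait for six legs; it is a numeric toy, not the spiking CPG network running on SpiNNaker, and the frequency and phase offsets are assumed values.

        import numpy as np

        def cpg_gait(phase_offsets, freq=1.0, dt=0.01, duration=2.0):
            """Toy central pattern generator: one phase oscillator per leg.

            `phase_offsets` fixes the relative phase of each of the six legs
            (e.g. alternating 0 / pi for a tripod gait). The sine of each phase
            can drive a joint command (positive half cycle = swing).
            """
            t = np.arange(0.0, duration, dt)
            phases = 2 * np.pi * freq * t[:, None] + np.asarray(phase_offsets)[None, :]
            return np.sin(phases)

        # Tripod gait: legs 0, 2, 4 in phase; legs 1, 3, 5 in anti-phase.
        tripod = [0, np.pi, 0, np.pi, 0, np.pi]
        commands = cpg_gait(tripod)
        print(commands.shape)  # (time steps, 6 legs)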

    Semi-wildlife gait patterns classification using Statistical Methods and Artificial Neural Networks

    Several studies have focused on classifying behavioral patterns in wildlife and captive species to monitor their activities and thus understand the interactions of animals and control their welfare, for biological research or commercial purposes. The use of pattern recognition techniques, statistical methods and Overall Dynamic Body Acceleration (ODBA) is well known for animal behavior recognition tasks. The reconfigurability and scalability of these methods are not trivial, since a new study has to be done when changing any of the configuration parameters. In recent years, the use of Artificial Neural Networks (ANN) has increased for this purpose, due to the fact that they can be easily adapted when new animals or patterns are required. In this context, a comparative study is presented between a theoretical approach, where statistical and spectral analyses were performed, and an embedded implementation of an ANN on a smart collar device placed on semi-wild animals. This system is part of a project whose main aim is to monitor wildlife in real time using a wireless sensor network infrastructure. Different classifiers were tested and compared for three different horse gaits. Experimental results in a real-time scenario achieved an accuracy of up to 90.7%, proving the efficiency of the embedded ANN implementation.
    Junta de Andalucía P12-TIC-1300; Ministerio de Economía y Competitividad TEC2016-77785-
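    As a small example of the ODBA feature mentioned above, the sketch below estimates the static (gravity) component with a running mean per axis and sums the absolute dynamic components; the sampling rate, window length and toy recording are assumptions made for illustration.

        import numpy as np

        def odba(accel, fs=50, window_s=2.0):
            """Overall Dynamic Body Acceleration over a sliding window.

            `accel` is an (n, 3) array of raw accelerometer samples. The static
            (gravity) component is estimated with a running mean per axis and the
            ODBA value is the sum of the absolute dynamic components.
            """
            win = int(fs * window_s)
            kernel = np.ones(win) / win
            static = np.stack([np.convolve(accel[:, i], kernel, mode="same") for i in range(3)], axis=1)
            dynamic = np.abs(accel - static)
            return dynamic.sum(axis=1)

        # Toy 10 s recording at 50 Hz: gravity on z plus small periodic movement.
        t = np.arange(0, 10, 1 / 50)
        accel = np.column_stack([0.1 * np.sin(2 * np.pi * 2 * t),
                                 0.1 * np.cos(2 * np.pi * 2 * t),
                                 1.0 + 0.05 * np.random.randn(t.size)])
        print(odba(accel)[:5])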

    Multilayer Spiking Neural Network for Audio Samples Classification Using SpiNNaker

    Audio classification has always been an interesting subject of research in the neuromorphic engineering field. Tools like Nengo or Brian, and hardware platforms like the SpiNNaker board, are rapidly increasing in popularity in the neuromorphic community due to the ease of modelling spiking neural networks with them. In this manuscript, a multilayer spiking neural network for audio sample classification using SpiNNaker is presented. The network consists of different leaky integrate-and-fire neuron layers. The connections between them are trained using novel firing-rate-based algorithms and tested using sets of pure tones with frequencies that range from 130.813 to 1396.91 Hz. The hit rate percentage values are obtained after adding a random noise signal to the original pure tone signal. The results show very good classification performance (above 85% hit rate) for each class when the signal-to-noise ratio is above 3 decibels, validating the robustness of the network configuration and the training step.
    Ministerio de Economía y Competitividad TEC2012-37868-C04-02; Junta de Andalucía P12-TIC-130
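    The signal-to-noise condition can be made concrete with a short sketch: given a pure tone, white noise is scaled so that the resulting SNR equals a target value in decibels (SNR_dB = 10 * log10(P_signal / P_noise)). The sampling rate and tone duration below are assumptions; only the 130.813 Hz frequency comes from the abstract.

        import numpy as np

        def add_noise_at_snr(signal, snr_db, rng=None):
            """Add white noise to `signal` so that the resulting SNR is `snr_db`."""
            rng = rng or np.random.default_rng()
            p_signal = np.mean(signal ** 2)
            p_noise = p_signal / (10 ** (snr_db / 10))
            noise = rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)
            return signal + noise

        fs = 16_000
        t = np.arange(0, 0.5, 1 / fs)
        tone = np.sin(2 * np.pi * 130.813 * t)      # lowest tone in the tested range
        noisy = add_noise_at_snr(tone, snr_db=3.0)
        measured = 10 * np.log10(np.mean(tone ** 2) / np.mean((noisy - tone) ** 2))
        print(f"measured SNR: {measured:.2f} dB")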

    Deep Spiking Neural Network model for time-variant signals classification: a real-time speech recognition approach

    Speech recognition has become an important task to improve the human-machine interface. Taking into account the limitations of current automatic speech recognition systems, like non-real-time cloud-based solutions or power demand, recent interest in neural networks and bio-inspired systems has motivated the implementation of new techniques. Among them, a combination of spiking neural networks and neuromorphic auditory sensors offers an alternative for carrying out human-like speech processing tasks. In this approach, a spiking convolutional neural network model was implemented, in which the connection weights were calculated by training a convolutional neural network with specific activation functions, using firing-rate-based static images built from the spiking information obtained from a neuromorphic cochlea. The system was trained and tested with a large dataset that contains "left" and "right" speech commands, achieving 89.90% accuracy. A novel spiking neural network model has been proposed to adapt the network that has been trained with static images to a non-static processing approach, making it possible to classify audio signals and time series in real time.
    Ministerio de Economía y Competitividad TEC2016-77785-
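    The rate-coded static images mentioned above can be illustrated with a short sketch that bins cochlea spike events into a (channels x time bins) count matrix suitable for training a conventional CNN; the channel count, bin count and random toy events are assumptions, not the dataset used in the paper.

        import numpy as np

        def spikes_to_rate_image(events, n_channels, duration, n_bins=64):
            """Build a firing-rate 'image' from cochlea spike events.

            `events` is a list of (timestamp, channel) pairs; the output is an
            (n_channels, n_bins) array of spike counts per channel and time bin,
            i.e. a static rate-coded representation a conventional CNN can learn from.
            """
            image = np.zeros((n_channels, n_bins))
            for ts, ch in events:
                b = min(int(ts / duration * n_bins), n_bins - 1)
                image[ch, b] += 1
            return image

        # Toy spike stream: 64 cochlea channels over 1 s of audio.
        rng = np.random.default_rng(0)
        events = [(rng.uniform(0, 1.0), rng.integers(0, 64)) for _ in range(5000)]
        print(spikes_to_rate_image(events, n_channels=64, duration=1.0).shape)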

    Embedded neural network for real-time animal behavior classification

    Recent biological studies have focused on understanding animal interactions and welfare. To help biologists obtain information on animals' behavior, resources like wireless sensor networks are needed. Moreover, large amounts of obtained data have to be processed offline in order to classify different behaviors. There are recent research projects focused on designing monitoring systems capable of measuring some animals' parameters in order to recognize and monitor their gaits or behaviors. However, network unreliability and high power consumption have limited their applicability. In this work, we present an animal behavior recognition, classification and monitoring system based on a wireless sensor network and a smart collar device, provided with inertial sensors and an embedded multi-layer perceptron-based feed-forward neural network, to classify the different gaits or behaviors based on the collected information. In similar works, classification mechanisms are implemented in a server (or base station). The main novelty of this work is the full implementation of a reconfigurable neural network embedded into the animal's collar, which allows real-time behavior classification and enables its local storage in SD memory. Moreover, this approach reduces the amount of data transmitted to the base station (and its periodicity), achieving a significant improvement in battery life. The system has been simulated and tested in a real scenario for three different horse gaits, using different heuristics and sensors to improve the accuracy of behavior recognition, achieving a maximum of 81%.
    Junta de Andalucía P12-TIC-130
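    A feed-forward pass of such an embedded multi-layer perceptron is simple enough to sketch; the layer sizes, random weights and feature values below are placeholders, not the collar's trained network.

        import numpy as np

        def mlp_forward(x, weights, biases):
            """Feed-forward pass of a small multi-layer perceptron.

            `weights`/`biases` hold one matrix/vector per layer; every layer here
            uses a sigmoid activation. The sizes (6 inputs, 8 hidden units, 3 gait
            classes) are illustrative only.
            """
            a = np.asarray(x, dtype=float)
            for w, b in zip(weights, biases):
                a = 1.0 / (1.0 + np.exp(-(a @ w + b)))   # sigmoid activation
            return a

        rng = np.random.default_rng(1)
        weights = [rng.normal(size=(6, 8)), rng.normal(size=(8, 3))]
        biases = [np.zeros(8), np.zeros(3)]
        features = [0.2, 0.1, 0.9, 0.05, 0.3, 0.4]       # e.g. accelerometer statistics
        gait = int(np.argmax(mlp_forward(features, weights, biases)))
        print("predicted gait class:", gait)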

    Low-Power Embedded System for Gait Classification Using Neural Networks

    Abnormal foot postures can be measured during gait by plantar pressures in both dynamic and static conditions. These detections may prevent possible injuries to the lower limbs like fractures, ankle sprains or plantar fasciitis. This information can be obtained by an embedded instrumented insole with pressure sensors and a low-power microcontroller. However, these sensors are placed at sparse locations inside the insole, so it is not easy to manually correlate their values with the gait type; that is why a machine learning system is needed. In this work, we analyse the feasibility of integrating a machine learning classifier inside a low-power embedded system in order to obtain information from the user's gait in real time and prevent future injuries. Moreover, we analyse the execution times, the power consumption and the model effectiveness. The machine learning classifier is trained using an acquired dataset of 3000+ steps from 6 different users. Results prove that this system provides an accuracy of over 99%, and the power consumption tests show a battery autonomy of over 25 days.
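    The battery-autonomy figure follows from a simple capacity-over-current estimate, sketched below with placeholder numbers (the measured cell capacity and average current draw are not given in this abstract).

        def battery_autonomy_days(capacity_mah, avg_current_ma):
            """Rough battery-life estimate: capacity divided by average current draw.

            The figures used in the call below are illustrative placeholders: a
            1000 mAh cell and a 1.6 mA average draw give roughly 26 days.
            """
            return capacity_mah / avg_current_ma / 24.0

        print(f"{battery_autonomy_days(1000, 1.6):.1f} days")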