
    Deep Neural Networks for the Recognition and Classification of Heart Murmurs Using Neuromorphic Auditory Sensors

    Auscultation is one of the most widely used techniques for detecting cardiovascular diseases, which are among the main causes of death worldwide. Heart murmurs are the most common abnormal finding when a patient visits the physician for auscultation. These heart sounds can either be innocent, which are harmless, or abnormal, which may be a sign of a more serious heart condition. However, the accuracy of primary care physicians and expert cardiologists when auscultating is not high enough to avoid most type-I errors (healthy patients sent for an echocardiogram) and type-II errors (pathological patients sent home without medication or treatment). In this paper, the authors present a novel convolutional neural network-based tool for distinguishing healthy people from pathological patients using a neuromorphic auditory sensor for FPGA that is able to decompose the audio into frequency bands in real time. For this purpose, different networks have been trained with the heart murmur information contained in heart sound recordings obtained from nine heart sound databases sourced from multiple research groups. These samples are segmented and preprocessed using the neuromorphic auditory sensor to decompose their audio information into frequency bands, after which sonogram images of identical size are generated. These images have been used to train and test different convolutional neural network architectures. The best results have been obtained with a modified version of the AlexNet model, achieving 97% accuracy (specificity: 95.12%, sensitivity: 93.20%, PhysioNet/CinC Challenge 2016 score: 0.9416). This tool could aid cardiologists and primary care physicians in the auscultation process, improving decision making and reducing type-I and type-II errors. Ministerio de Economía y Competitividad TEC2016-77785-
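
    The abstract describes training CNNs on fixed-size sonogram images; below is a minimal PyTorch sketch of a small AlexNet-style classifier for such images. The layer sizes, the 64x64 single-channel input, and the training hyperparameters are illustrative assumptions, not the authors' exact modified AlexNet.

        import torch
        import torch.nn as nn

        class SonogramCNN(nn.Module):
            """Small AlexNet-style classifier for 64x64 single-channel sonograms."""
            def __init__(self, num_classes=2):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 32, kernel_size=5, padding=2),
                    nn.ReLU(inplace=True),
                    nn.MaxPool2d(2),                      # 64x64 -> 32x32
                    nn.Conv2d(32, 64, kernel_size=3, padding=1),
                    nn.ReLU(inplace=True),
                    nn.MaxPool2d(2),                      # 32x32 -> 16x16
                    nn.Conv2d(64, 64, kernel_size=3, padding=1),
                    nn.ReLU(inplace=True),
                    nn.MaxPool2d(2),                      # 16x16 -> 8x8
                )
                self.classifier = nn.Sequential(
                    nn.Flatten(),
                    nn.Linear(64 * 8 * 8, 256),
                    nn.ReLU(inplace=True),
                    nn.Dropout(0.5),
                    nn.Linear(256, num_classes),          # healthy vs. pathological
                )

            def forward(self, x):
                return self.classifier(self.features(x))

        # One training step on a dummy batch standing in for real sonogram images.
        model = SonogramCNN()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
        criterion = nn.CrossEntropyLoss()
        images = torch.randn(8, 1, 64, 64)
        labels = torch.randint(0, 2, (8,))
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()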

    NAVIS: Neuromorphic Auditory VISualizer Tool

    This software provides a set of utilities for performing the first post-processing layer on the information produced by neuromorphic auditory sensors (NAS). The NAS used implements a cascaded filter architecture on FPGA, imitating the behavior of the basilar membrane and inner hair cells and working with the sound information decomposed into its frequency components as spike streams. The well-known neuromorphic hardware interface Address-Event-Representation (AER) is used to propagate the auditory information out of the NAS, emulating the auditory vestibular nerve. Using the information packetized into aedat files, which are generated through the jAER software plus an AER-to-USB computer interface, NAVIS implements a set of graphs that allow the auditory information to be represented as cochleograms, histograms, sonograms, etc. It can also split the auditory information into different sets depending on the activity level of the spike streams. The main contribution of this software tool is that it allows complex audio post-processing treatments and representations, which is a novelty for spike-based systems in the neuromorphic community, and it will help neuromorphic engineers build training sets for spiking neural networks (SNNs). Ministerio de Economía y Competitividad TEC2012-37868-C04-0
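
    As an illustration of the kind of first post-processing layer described (per-channel spike streams turned into histograms and sonograms), here is a minimal Python/NumPy sketch. The aedat decoding is omitted and replaced by a dummy event stream; the channel count, bin width, and array layout are assumptions, not NAVIS's actual implementation.

        import numpy as np

        NUM_CHANNELS = 64          # assumed number of NAS cochlea channels
        TIME_BIN_US = 10_000       # 10 ms bins, an arbitrary choice

        # Dummy event stream standing in for the decoded aedat content:
        # (timestamp in microseconds, channel address) pairs.
        rng = np.random.default_rng(0)
        timestamps = np.sort(rng.integers(0, 1_000_000, size=5000))
        addresses = rng.integers(0, NUM_CHANNELS, size=5000)

        # Histogram: total spike count per channel (activity per frequency band).
        histogram = np.bincount(addresses, minlength=NUM_CHANNELS)

        # Sonogram: spike count per (channel, time bin) matrix.
        bins = timestamps // TIME_BIN_US
        sonogram = np.zeros((NUM_CHANNELS, bins.max() + 1), dtype=np.int32)
        np.add.at(sonogram, (addresses, bins), 1)

        print(histogram.shape, sonogram.shape)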

    Stereo Matching in Address-Event-Representation (AER) Bio-Inspired Binocular Systems in a Field-Programmable Gate Array (FPGA)

    In stereo-vision processing, the image-matching step is essential for the results, although it involves a very high computational cost. Moreover, the more information is processed, the more time the matching algorithm spends, and the more inefficient it becomes. Spike-based processing is a relatively new approach that implements processing methods by manipulating spikes one by one at the time they are transmitted, as the human brain does. The mammalian nervous system can solve much more complex problems, such as visual recognition, by manipulating neuron spikes. The spike-based philosophy for visual information processing based on the neuro-inspired Address-Event-Representation (AER) is currently achieving very high performance. The aim of this work was to study the viability of a matching mechanism in stereo-vision systems using AER codification and its implementation in a field-programmable gate array (FPGA). Some studies have previously been done in AER systems with data monitored on a computer; however, this kind of mechanism has not been implemented directly on hardware. To this end, an epipolar geometry basis applied to AER systems was studied and implemented, together with other restrictions, in order to achieve good results in a real-time scenario. The results and conclusions are shown, and the viability of the implementation is proven. Ministerio de Economía y Competitividad TEC2016-77785-
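
    As a software illustration of the matching idea (not the FPGA design), the Python sketch below restricts candidate matches between two rectified AER retinas to the same row, i.e. the epipolar constraint, plus a short temporal coincidence window; the window length, disparity range, and time-based scoring are assumptions.

        from collections import deque

        TIME_WINDOW_US = 1_000     # assumed coincidence window
        MAX_DISPARITY = 32         # assumed disparity search range

        # recent_left[row] holds recent left-retina events as (timestamp_us, x) pairs.
        recent_left = {}

        def on_left_event(t_us, x, y):
            recent_left.setdefault(y, deque()).append((t_us, x))

        def on_right_event(t_us, x, y):
            """Return an estimated disparity for this right event, or None."""
            candidates = recent_left.get(y, deque())
            # Drop left events that fall outside the coincidence window.
            while candidates and t_us - candidates[0][0] > TIME_WINDOW_US:
                candidates.popleft()
            best = None
            for t_left, x_left in candidates:
                d = x_left - x                      # disparity along the epipolar line
                if 0 <= d <= MAX_DISPARITY:
                    score = abs(t_us - t_left)      # prefer the closest match in time
                    if best is None or score < best[0]:
                        best = (score, d)
            return None if best is None else best[1]

        # Example: a left event followed shortly by a right event on the same row.
        on_left_event(100, x=40, y=10)
        print(on_right_event(150, x=30, y=10))      # disparity 10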

    System based on inertial sensors for behavioral monitoring of wildlife

    A sensor network is an integration of multiple sensors into a system that collects information about different environmental variables. Monitoring systems allow us to determine the current state of a subject, to know its behavior and, sometimes, to predict what is going to happen. This work presents a monitoring system for semi-wild animals that captures their actions using an IMU (inertial measurement unit) and a sensor fusion algorithm. Based on an ARM Cortex-M4 microcontroller, the system sends the data from the different sensor axes over ZigBee in two operating modes: RAW (logging all information onto an SD card) or RT (real-time operation). The sensor fusion algorithm improves precision and reduces noise interference. Junta de Andalucía P12-TIC-130
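
    The abstract does not name the fusion algorithm used on the device; as one common possibility, here is a minimal complementary-filter sketch in Python that fuses gyroscope and accelerometer readings into a pitch estimate, the kind of computation that could run on the Cortex-M4 before the data is sent. The filter coefficient and sample rate are assumptions.

        import math

        ALPHA = 0.98               # assumed trust placed in the gyro integration
        DT = 0.01                  # assumed 100 Hz sample period (seconds)

        def fuse_pitch(pitch_prev, gyro_y_dps, acc_x_g, acc_z_g):
            """One filter update: the gyro tracks fast motion, the accelerometer corrects drift."""
            gyro_pitch = pitch_prev + gyro_y_dps * DT               # integrate angular rate
            acc_pitch = math.degrees(math.atan2(acc_x_g, acc_z_g))  # tilt from gravity
            return ALPHA * gyro_pitch + (1.0 - ALPHA) * acc_pitch

        pitch = 0.0
        for gyro_y, acc_x, acc_z in [(5.0, 0.05, 0.99), (4.0, 0.08, 0.99)]:
            pitch = fuse_pitch(pitch, gyro_y, acc_x, acc_z)
        print(round(pitch, 3))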

    A Sensor Fusion Horse Gait Classification by a Spiking Neural Network on SpiNNaker

    The study and monitoring of the behavior of wildlife has always been a subject of great interest. Although many systems can track animal positions using GPS, behavior classification is not a common task. For this work, a multi-sensory wearable device has been designed and implemented to be used in the Doñana National Park in order to control and monitor wild and semi-wild animals. The data obtained with these sensors are processed using a Spiking Neural Network (SNN) with Address-Event-Representation (AER) coding and classified into a set of fixed activity behaviors. This work presents the full infrastructure deployed in Doñana to collect the data, the wearable device, the SNN implementation on SpiNNaker, and the classification results. Ministerio de Economía y Competitividad TEC2012-37868-C04-02, Junta de Andalucía P12-TIC-130
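
    As an illustration of how sensor data might be turned into AER-style spike input for an SNN classifier, the Python sketch below rate-codes windowed, normalized sensor features into Poisson spike trains; the feature choice, maximum rate, and window length are assumptions, not the paper's exact encoding.

        import numpy as np

        rng = np.random.default_rng(1)
        MAX_RATE_HZ = 200.0        # assumed peak input firing rate
        WINDOW_S = 0.5             # assumed analysis window
        DT_S = 0.001               # 1 ms simulation step

        def rate_code(features):
            """features: values normalized to [0, 1]; returns a (steps, n_inputs) spike raster."""
            rates = np.clip(features, 0.0, 1.0) * MAX_RATE_HZ      # Hz per input neuron
            steps = int(WINDOW_S / DT_S)
            p_spike = rates * DT_S                                 # per-step spike probability
            return rng.random((steps, len(features))) < p_spike

        # Example: mean absolute acceleration per axis in one window, already normalized.
        window_features = np.array([0.1, 0.7, 0.3])
        spikes = rate_code(window_features)
        print(spikes.shape, spikes.sum(axis=0))   # spike counts roughly track the rates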

    Multilayer Spiking Neural Network for Audio Samples Classification Using SpiNNaker

    Audio classification has always been an interesting subject of research in the neuromorphic engineering field. Tools like Nengo or Brian, and hardware platforms like the SpiNNaker board, are rapidly increasing in popularity in the neuromorphic community due to the ease of modelling spiking neural networks with them. In this manuscript, a multilayer spiking neural network for audio sample classification using SpiNNaker is presented. The network consists of different leaky integrate-and-fire neuron layers. The connections between them are trained using novel firing-rate-based algorithms and tested using sets of pure tones with frequencies that range from 130.813 to 1396.91 Hz. The hit-rate percentage values are obtained after adding a random noise signal to the original pure tone signal. The results show very good classification performance (above 85% hit rate) for each class when the signal-to-noise ratio is above 3 decibels, validating the robustness of the network configuration and the training step. Ministerio de Economía y Competitividad TEC2012-37868-C04-02, Junta de Andalucía P12-TIC-130
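
    The test stimuli described are pure tones with added noise at a given signal-to-noise ratio; the Python sketch below generates such a stimulus. The sample rate and duration are assumptions.

        import numpy as np

        def tone_with_noise(freq_hz, snr_db, fs=16_000, duration_s=1.0, seed=0):
            """Pure tone plus white noise scaled to the requested SNR in dB."""
            rng = np.random.default_rng(seed)
            t = np.arange(int(fs * duration_s)) / fs
            tone = np.sin(2 * np.pi * freq_hz * t)
            noise = rng.standard_normal(t.size)
            # Scale the noise so that 10*log10(P_signal / P_noise) == snr_db.
            p_signal = np.mean(tone ** 2)
            p_noise = np.mean(noise ** 2)
            noise *= np.sqrt(p_signal / (p_noise * 10 ** (snr_db / 10)))
            return tone + noise

        # One of the tested tone frequencies (C3, 130.813 Hz) at 3 dB SNR.
        x = tone_with_noise(130.813, snr_db=3)
        print(x.shape)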

    Machine-learning-aided warm-start of constraint generation methods for online mixed-integer optimization

    Mixed Integer Linear Programs (MILP) are well known to be NP-hard problems in general. Even though pure optimization-based methods, such as constraint generation, are guaranteed to provide an optimal solution if enough time is given, their use in online applications is still a great challenge due to their usually excessive time requirements. To alleviate their computational burden, some machine learning techniques have been proposed in the literature, using the information provided by previously solved MILP instances. Unfortunately, these techniques report a non-negligible percentage of infeasible or suboptimal instances. By linking mathematical optimization and machine learning, this paper proposes a novel approach that speeds up the traditional constraint generation method, preserving feasibility and optimality guarantees. In particular, we first identify offline the so-called invariant constraint set of past MILP instances. We then train (also offline) a machine learning method to learn an invariant constraint set as a function of the problem parameters of each instance. Next, we predict online an invariant constraint set of the new, unseen MILP application and use it to initialize the constraint generation method. This warm-started strategy significantly reduces the number of iterations needed to reach optimality, and therefore the computational burden of solving each MILP problem online. Very importantly, the proposed methodology inherits the feasibility and optimality guarantees of the traditional constraint generation method. The computational performance of the proposed approach is quantified through synthetic and real-life MILP applications.
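
    As an illustration of the warm-started constraint generation loop, the Python sketch below works on a toy LP via scipy.optimize.linprog (the paper targets MILPs) and mocks the ML prediction: a predicted invariant constraint set seeds the working constraint subset, and violated constraints are added until none remain.

        import numpy as np
        from scipy.optimize import linprog

        # Toy data: maximize a linear objective subject to many "<=" constraints.
        rng = np.random.default_rng(0)
        n, m = 5, 200
        c = -rng.random(n)                    # linprog minimizes, so negate to maximize
        A = rng.random((m, n))
        b = 1.0 + rng.random(m)
        bounds = [(0, 10)] * n                # keeps every relaxation bounded

        def constraint_generation(initial_set):
            """Solve over a working constraint subset, adding the most violated
            constraint until none is violated; return the solution and iteration count."""
            active = set(int(i) for i in initial_set) or {0}
            iterations = 0
            while True:
                iterations += 1
                idx = sorted(active)
                res = linprog(c, A_ub=A[idx], b_ub=b[idx], bounds=bounds, method="highs")
                violations = A @ res.x - b
                worst = int(np.argmax(violations))
                if violations[worst] <= 1e-9:
                    return res, iterations
                active.add(worst)

        # Cold start vs. a (mocked) ML-predicted invariant constraint set. Here the
        # "prediction" is read off a previously solved full problem purely for
        # illustration; in the paper it comes from a model trained on past instances.
        cold, it_cold = constraint_generation([0])
        full = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
        predicted = np.where(A @ full.x - b > -1e-6)[0]    # (near-)active constraints
        warm, it_warm = constraint_generation(predicted)
        print(it_cold, it_warm, np.isclose(cold.fun, warm.fun))

    Because every constraint of the original problem is checked before termination, the warm start changes only the number of iterations, not feasibility or optimality, which mirrors the guarantee claimed in the abstract.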

    Warm-starting constraint generation for mixed-integer optimization: A Machine Learning approach

    Mixed Integer Linear Programs (MILP) are well known to be NP-hard (non-deterministic polynomial-time hard) problems in general. Even though pure optimization-based methods, such as constraint generation, are guaranteed to provide an optimal solution if enough time is given, their use in online applications remains a great challenge due to their usually excessive time requirements. To alleviate their computational burden, some machine learning (ML) techniques have been proposed in the literature, using the information provided by previously solved MILP instances. Unfortunately, these techniques report a non-negligible percentage of infeasible or suboptimal instances. By linking mathematical optimization and machine learning, this paper proposes a novel approach that speeds up the traditional constraint generation method, preserving feasibility and optimality guarantees. In particular, we first identify offline the so-called invariant constraint set of past MILP instances. We then train (also offline) a machine learning method to learn an invariant constraint set as a function of the problem parameters of each instance. Next, we predict online an invariant constraint set of the new, unseen MILP application and use it to initialize the constraint generation method. This warm-started strategy significantly reduces the number of iterations needed to reach optimality, and therefore the computational burden of solving each MILP problem online. Very importantly, all the feasibility and optimality theoretical guarantees of the traditional constraint generation method are inherited by our proposed methodology. The computational performance of the proposed approach is quantified through synthetic and real-life MILP applications. This work was supported in part by the Spanish Ministry of Science and Innovation through project PID2020-115460GB-I00; in part by the European Research Council (ERC) under the EU Horizon 2020 research and innovation program (grant agreement No. 755705); in part by the Junta de Andalucía (JA), the Universidad de Málaga (UMA), and the European Regional Development Fund (FEDER) through the research projects P20_00153 and UMA2018-FEDERJA-001; and in part by the Research Program for Young Talented Researchers of the University of Málaga under Project B1-2020-15. The authors thankfully acknowledge the computer resources, technical expertise, and assistance provided by the SCBI (Supercomputing and Bioinformatics) center of the University of Málaga. Funding for open access charge: Universidad de Málaga / CBUA
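
    As an illustration of the offline learning step described here (the learner actually used in the paper is not reproduced), the Python sketch below stores the parameters and invariant constraint sets of past instances and, for a new instance, predicts a set as the union of the sets of its k nearest past instances; all data are mock values.

        import numpy as np

        rng = np.random.default_rng(0)
        n_past, n_params, n_constraints, k = 50, 4, 200, 3

        # Mock history: parameter vectors of past instances and boolean membership
        # masks marking which constraints belonged to each invariant constraint set.
        past_params = rng.random((n_past, n_params))
        past_sets = rng.random((n_past, n_constraints)) < 0.05

        def predict_invariant_set(new_params):
            """Union of the invariant sets of the k nearest past instances
            (a deliberately conservative prediction)."""
            dists = np.linalg.norm(past_params - new_params, axis=1)
            nearest = np.argsort(dists)[:k]
            mask = past_sets[nearest].any(axis=0)
            return np.where(mask)[0]          # constraint indices for the warm start

        print(predict_invariant_set(rng.random(n_params))[:10])

    The returned indices would play the role of the predicted invariant constraint set used to initialize constraint generation, as in the sketch shown after the previous abstract.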

    Event-based Row-by-Row Multi-convolution engine for Dynamic-Vision Feature Extraction on FPGA

    Get PDF
    Neural network algorithms are commonly used to recognize patterns from different data sources such as audio or vision. In image recognition, Convolutional Neural Networks are one of the most effective techniques due to the high accuracy they achieve. This kind of algorithm requires billions of addition and multiplication operations over all the pixels of an image. However, it is possible to reduce the number of operations using other computer vision techniques rather than frame-based ones, e.g. neuromorphic frame-free techniques. There exist many neuromorphic vision sensors that detect pixels that have changed their luminosity. In this study, an event-based convolution engine for FPGA is presented. This engine models an array of leaky integrate-and-fire neurons. It is able to apply different kernel sizes, from 1x1 to 7x7, which are computed row by row, with a maximum of 64 different convolution kernels. The design presented is able to process 64 feature maps of 7x7 with a latency of 8.98 µs. Ministerio de Economía y Competitividad TEC2016-77785-
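
    As a software illustration of the event-driven convolution idea (not the FPGA engine itself), the Python sketch below stamps a 7x7 kernel onto a membrane-potential map around each incoming event address and emits output events where the leaky integrate-and-fire threshold is crossed; the leak model, threshold, kernel values, and array size are assumptions.

        import numpy as np

        HEIGHT, WIDTH = 128, 128
        THRESHOLD = 1.0
        LEAK = 0.999                          # multiplicative leak applied per event
        kernel = np.full((7, 7), 1.0 / 49.0)  # one of up to 64 kernels; 7x7 here
        membrane = np.zeros((HEIGHT, WIDTH))

        def on_event(x, y):
            """Process one input event; return the output events it causes to fire."""
            global membrane
            membrane *= LEAK
            kh, kw = kernel.shape
            y0, x0 = y - kh // 2, x - kw // 2
            ys = slice(max(y0, 0), min(y0 + kh, HEIGHT))
            xs = slice(max(x0, 0), min(x0 + kw, WIDTH))
            membrane[ys, xs] += kernel[ys.start - y0:ys.stop - y0,
                                       xs.start - x0:xs.stop - x0]
            fired = membrane >= THRESHOLD
            out_events = [(int(px), int(py)) for py, px in np.argwhere(fired)]
            membrane[fired] = 0.0             # reset the neurons that fired
            return out_events

        # Feed a burst of events at one address until the surrounding neurons fire.
        outputs = []
        for _ in range(60):
            outputs += on_event(64, 64)
        print(len(outputs), outputs[:3])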