681 research outputs found

    An EMG Gesture Recognition System with Flexible High-Density Sensors and Brain-Inspired High-Dimensional Classifier

    EMG-based gesture recognition shows promise for human-machine interaction, but such systems are often afflicted by signal and electrode variability, which degrades performance over time. We present an end-to-end system that combats this variability using a large-area, high-density sensor array and a robust classification algorithm. EMG electrodes are fabricated on a flexible substrate and interfaced to a custom wireless device for 64-channel signal acquisition and streaming. We use brain-inspired high-dimensional (HD) computing to process EMG features in one-shot learning. The HD algorithm is tolerant to noise and electrode misplacement and can quickly learn from few gestures without gradient descent or back-propagation. We achieve an average classification accuracy of 96.64% for five gestures, with only 7% degradation when training and testing across different days. Our system maintains this accuracy when trained with only three trials of gestures; it also demonstrates accuracy comparable with the state of the art when trained with a single trial.
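    A minimal sketch of how such a one-shot HD classifier can be trained and queried (the EMG feature encoder is omitted, and the dimensionality, gesture labels, and plain-Python style are illustrative assumptions, not the paper's implementation):

```python
import random

D = 10_000  # hypervector dimensionality, as in HD computing practice

def random_hv():
    """Random binary hypervector; random HVs are quasi-orthogonal."""
    return [random.getrandbits(1) for _ in range(D)]

def bundle(hvs):
    """Element-wise majority vote; use an odd number of inputs to avoid ties."""
    half = len(hvs) / 2
    return [1 if sum(bits) > half else 0 for bits in zip(*hvs)]

def hamming(a, b):
    """Normalized Hamming distance between two hypervectors."""
    return sum(x != y for x, y in zip(a, b)) / D

def train(encoded_trials_by_gesture):
    """One-shot/few-shot training: each class prototype is simply the
    bundle of its encoded trials -- no gradient descent involved."""
    return {g: bundle(trials) for g, trials in encoded_trials_by_gesture.items()}

def classify(prototypes, query_hv):
    """Predict the gesture whose prototype is nearest in Hamming distance."""
    return min(prototypes, key=lambda g: hamming(prototypes[g], query_hv))
```

    Because prototypes are built in a single bundling pass, adding a new gesture only requires encoding a handful of trials, which is what makes fast re-training from few examples feasible.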

    PULP-HD: Accelerating Brain-Inspired High-Dimensional Computing on a Parallel Ultra-Low Power Platform

    Computing with high-dimensional (HD) vectors, also referred to as hypervectors, is a brain-inspired alternative to computing with scalars. Key properties of HD computing include a well-defined set of arithmetic operations on hypervectors, generality, scalability, robustness, fast learning, and ubiquitous parallel operations. HD computing manipulates and compares large patterns (binary hypervectors with 10,000 dimensions), which makes its efficient realization on minimalistic ultra-low-power platforms challenging. This paper describes the acceleration of HD computing and the optimization of its memory accesses and operations on a silicon prototype of the PULPv3 4-core platform (1.5 mm^2, 2 mW), surpassing the state-of-the-art classification accuracy (92.4% on average) with a simultaneous 3.7× end-to-end speed-up and 2× energy saving compared to single-core execution. We further explore the scalability of our accelerator by increasing the number of inputs and the classification window on a new generation of the PULP architecture, which features bit-manipulation instruction extensions and a larger number of cores (8). Together, these enable a near-ideal speed-up of 18.4× compared to the single-core PULPv3.
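    The well-defined arithmetic the abstract refers to can be illustrated with three operations on binary hypervectors; this is a generic HD-computing sketch (the dimension and operator choices follow common practice, not the PULP implementation):

```python
import random

D = 10_000  # hypervector dimensionality

def random_hv():
    """Draw a random binary hypervector; two random HVs are quasi-orthogonal,
    i.e. their normalized Hamming distance is close to 0.5."""
    return [random.getrandbits(1) for _ in range(D)]

def bind(a, b):
    """Bind two hypervectors with element-wise XOR (e.g. key with value);
    the result is dissimilar to both inputs, and binding is its own inverse."""
    return [x ^ y for x, y in zip(a, b)]

def bundle(hvs):
    """Bundle hypervectors by element-wise majority vote; the result stays
    similar to each input (use an odd count to avoid ties)."""
    half = len(hvs) / 2
    return [1 if sum(bits) > half else 0 for bits in zip(*hvs)]

def hamming(a, b):
    """Normalized Hamming distance, the usual HD similarity measure."""
    return sum(x != y for x, y in zip(a, b)) / D
```

    On a parallel platform these element-wise loops split naturally across cores, which is why HD computing maps well onto multi-core ultra-low-power architectures.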

    An Investigation of Single-Core and Multi-Core Computing Methods for Biosignal Processing

    This paper presents single-core and multi-core processor designs for health surveillance applications that involve highly parallel processing of slow biosignal events. The single-core design consists of an instruction memory (IM), a data memory (DM), and a processor core (PC). In contrast, the multi-core architecture is made up of several PCs, a separate IM for each core, a shared DM, and an interconnection crossbar connecting the cores and the DM. The power vs. performance trade-offs between the two designs are evaluated for a multi-lead ECG signal-conditioning application that takes advantage of near-threshold computing. According to the findings, the multi-core system uses 10.4% more power for low processing demands (681 kOps/s) and 66% less power for high processing needs (50.1 MOps/s).

    Embedding Temporal Convolutional Networks for Energy-efficient PPG-based Heart Rate Monitoring

    Photoplethysmography (PPG) sensors allow for non-invasive and comfortable heart rate (HR) monitoring, suitable for compact wrist-worn devices. Unfortunately, motion artifacts (MAs) severely impact monitoring accuracy, causing high variability in the skin-to-sensor interface. Several data fusion techniques have been introduced to cope with this problem, based on combining PPG signals with inertial sensor data. To date, both commercial and research solutions are either computationally efficient but not very robust, or strongly dependent on hand-tuned parameters, which leads to poor generalization performance. In this work, we tackle these limitations by proposing a computationally lightweight yet robust deep learning-based approach for PPG-based HR estimation. Specifically, we derive a diverse set of Temporal Convolutional Networks (TCNs) for HR estimation, leveraging Neural Architecture Search. Moreover, we also introduce ActPPG, an adaptive algorithm that selects among multiple HR estimators depending on the amount of MAs, to improve energy efficiency. We validate our approaches on two benchmark datasets, achieving as low as 3.84 beats per minute of Mean Absolute Error on PPG-Dalia, which outperforms the previous state of the art. Moreover, we deploy our models on a low-power commercial microcontroller (STM32L4), obtaining a rich set of Pareto-optimal solutions in the complexity vs. accuracy space.
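    The core operation of a TCN is the dilated causal convolution; the sketch below (plain Python with illustrative kernel sizes, unrelated to the NAS-derived models in the paper) shows how stacking layers with growing dilation enlarges the temporal receptive field over a PPG window:

```python
def causal_dilated_conv1d(x, weights, dilation):
    """One dilated causal convolution, the building block of a TCN.
    The output at time t depends only on x[t], x[t - dilation],
    x[t - 2*dilation], ...; samples before the signal start count as zero."""
    out = []
    for t in range(len(x)):
        acc = 0.0
        for i, w in enumerate(weights):
            j = t - i * dilation
            if j >= 0:
                acc += w * x[j]
        out.append(acc)
    return out

def receptive_field(kernel_size, dilations):
    """Total look-back (in samples) of a stack of dilated conv layers."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)
```

    Doubling the dilation per layer makes the receptive field grow exponentially with depth while the parameter count grows only linearly, which is what keeps such models light enough for a microcontroller.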

    First observation of quantum interference in the process phi -> KS KL -> pi+ pi- pi+ pi-: a test of quantum mechanics and CPT symmetry

    We present the first observation of quantum interference in the process phi -> KS KL -> pi+ pi- pi+ pi-. This analysis is based on data collected with the KLOE detector at the e^+e^- collider DAFNE in 2001--2002, for an integrated luminosity of about 380 pb^-1. Fits to the distribution of Delta t, the difference between the two kaon decay times, allow tests of the validity of quantum mechanics and CPT symmetry. No deviations from the expectations of quantum mechanics and CPT symmetry have been observed. New or improved limits on various decoherence and CPT violation parameters have been obtained.
    Comment: submitted to Physics Letters B; one number changed, old: gamma = (1.1 +2.9 -2.4) x 10^-21 GeV, new: gamma = (1.3 +2.8 -2.4) x 10^-21 GeV; corrected a typo.
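    For reference, the quantum-mechanics-plus-decoherence model fitted to such Delta t distributions has the standard form below (the sketch uses a single generic decoherence parameter zeta and unnormalized units; the parameter values in the test are placeholders, not KLOE results):

```python
import math

def interference_intensity(dt, zeta, gamma_S, gamma_L, delta_m):
    """Unnormalized intensity of phi -> KS KL -> pi+pi- pi+pi- as a function
    of the decay-time difference Delta t. zeta = 0 reproduces pure quantum
    mechanics; zeta = 1 removes the interference term entirely."""
    t = abs(dt)
    return (math.exp(-gamma_L * t) + math.exp(-gamma_S * t)
            - 2.0 * (1.0 - zeta)
            * math.exp(-(gamma_S + gamma_L) * t / 2.0)
            * math.cos(delta_m * t))
```

    For zeta = 0 the two decay amplitudes interfere fully destructively at Delta t = 0 (the intensity vanishes there), which is the characteristic quantum-mechanical signature that the fits test against decoherence.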

    In risaia by Marchesa Colombi and the Poetics of Beauty

    One of the author’s best works, In risaia (1878), illustrates a crucial phase in Marchesa Colombi’s poetics of ‘beauty’, which aims at reaching a harmonic correspondence between the protagonist’s inner and outer worlds. The hard work in the ricefields, which results in Nanna’s baldness and loss of beauty, and the cruel encounter with love have such a strong impact on her that she withdraws from reality into an imaginative sphere of her own, thus losing her grip on reality and becoming envious and malicious. It is only by accepting her new look and painful experience, and by being aware of her egotism, that Nanna finally overcomes her self-pitying attitude and the false role that she had imposed upon herself. Having conquered her own negativity, the protagonist is also able to see the goodness in reality, thus achieving a degree of harmony that is unknown to other characters in Marchesa Colombi’s works.
    ISSA vol 13 (2) 200

    Compressed sensing based seizure detection for an ultra low power multi-core architecture

    Extracting information from brain signals in advanced Brain Machine Interfaces (BMI) often requires computationally demanding processing. The complexity of the algorithms traditionally employed to process multi-channel neural data, such as Principal Component Analysis (PCA), increases dramatically when scaling up the number of channels and requires more power-hungry computational platforms. This could hinder the development of low-cost and low-power interfaces usable in wearable or implantable real-time systems. This work proposes a new algorithm for the detection of epileptic seizures based on compressively sensed EEG information, and its optimization on a low-power multi-core SoC for near-sensor data analytics: Mr. Wolf. With respect to traditional algorithms based on PCA, the proposed approach reduces the computational complexity by 4.4× on an ARM Cortex-M4-based MCU. Implementing this algorithm on the Mr. Wolf platform allows a seizure to be detected with 1 ms of latency after acquiring the EEG data for 1 s, within an energy budget of 18.4 μJ. A comparison with the same algorithm on a commercial MCU shows an improvement of 6.9× in performance and up to 18.4× in energy efficiency.
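    The compressive-sensing front end such a detector builds on can be sketched as a random projection; the matrix type and dimensions below are illustrative, and the actual seizure-detection algorithm on Mr. Wolf is not reproduced here:

```python
import random

def bernoulli_sensing_matrix(m, n, seed=0):
    """Random +/-1 sensing matrix Phi with m << n: a common choice because
    it satisfies the restricted-isometry property with high probability."""
    rng = random.Random(seed)
    return [[rng.choice((-1.0, 1.0)) for _ in range(n)] for _ in range(m)]

def compress(phi, x):
    """y = Phi @ x: a window of n EEG samples becomes m measurements, and
    downstream detection works on y directly instead of reconstructing x."""
    return [sum(p * v for p, v in zip(row, x)) for row in phi]
```

    Working on the m-dimensional measurements rather than the full n-sample window is what cuts the per-window arithmetic relative to PCA-style processing of the raw channels.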

    Efficient Personalized Learning for Wearable Health Applications using HyperDimensional Computing

    Health monitoring applications increasingly rely on machine learning techniques to learn end-user physiological and behavioral patterns in everyday settings. Considering the significant role of wearable devices in monitoring human body parameters, on-device learning can be used to build personalized models for behavioral and physiological patterns while providing data privacy for users at the same time. However, resource constraints prevent most of these wearable devices from performing online learning. Addressing this issue requires rethinking machine learning models from the algorithmic perspective so that they are suitable to run on wearable devices. Hyperdimensional computing (HDC) offers a well-suited on-device learning solution for resource-constrained devices and provides support for privacy-preserving personalization. Our HDC-based method offers flexibility, high efficiency, resilience, and performance while enabling on-device personalization and privacy protection. We evaluate the efficacy of our approach using three case studies and show that our system improves the energy efficiency of training by up to 45.8× compared with state-of-the-art Deep Neural Network (DNN) algorithms while offering comparable accuracy.

    Optimized Biosignals Processing Algorithms for New Designs of Human Machine Interfaces on Parallel Ultra-Low Power Architectures

    The aim of this dissertation is to explore Human Machine Interfaces (HMIs) in a variety of biomedical scenarios. The research addresses typical challenges in wearable and implantable devices for diagnostic, monitoring, and prosthetic purposes, suggesting a methodology for tailoring such applications to cutting-edge embedded architectures. The main challenge is the enhancement of high-level applications, also introducing Machine Learning (ML) algorithms, using parallel programming and specialized hardware to improve performance. Most of these algorithms are computationally intensive, which poses significant challenges for deployment on embedded devices with tight limits on memory size, maximum operating frequency, and battery duration. The proposed solutions take advantage of a Parallel Ultra-Low Power (PULP) architecture, tailoring the elaboration to specific target architectures and heavily optimizing execution by exploiting software and hardware resources. The thesis starts by describing a methodology that can be considered a guideline for efficiently implementing algorithms on embedded architectures. This is followed by several case studies in the biomedical field: first, the analysis of a hand gesture recognition application based on the Hyperdimensional Computing algorithm, which allows fast on-chip re-training, together with a comparison against the state-of-the-art Support Vector Machine (SVM); then a Brain-Computer Interface (BCI) that detects the brain's response to a visual stimulus. Furthermore, a seizure detection application is also presented, exploring different solutions for the dimensionality reduction of the input signals. The last part is dedicated to an exploration of typical modules for the development of optimized ECG-based applications.

    An Energy-Efficient IoT node for HMI applications based on an ultra-low power Multicore Processor

    Developing wearable sensing technologies and unobtrusive devices is paving the way for compelling applications on the next generation of smart IoT nodes for Human Machine Interaction (HMI). In this paper we present a smart sensor node for IoT and HMI based on a programmable Parallel Ultra-Low-Power (PULP) platform. We tested the system on a hand gesture recognition application, a preferred way of interaction in HMI design. A wearable armband with 8 EMG sensors is controlled by our IoT node, which runs a machine learning algorithm in real time and recognizes up to 11 gestures within a power envelope of 11.84 mW. As a result, the proposed approach is capable of 35 hours of continuous operation and 1000 hours in standby. The resulting platform effectively minimizes the power required to run the software application and thus leaves more of the power budget for a high-quality analog front-end (AFE).
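    A quick back-of-the-envelope check of the quoted lifetimes (the 3.7 V, 110 mAh cell is an assumption for illustration; the text does not specify the battery):

```python
# Assumed small LiPo cell: 3.7 V nominal, 110 mAh capacity.
battery_mwh = 3.7 * 110          # ~407 mWh of stored energy

active_mw = 11.84                # power envelope quoted for gesture recognition
active_hours = battery_mwh / active_mw
print(f"continuous operation: {active_hours:.0f} h")   # ~34 h, close to the quoted 35 h

standby_hours = 1000             # quoted standby lifetime
standby_mw = battery_mwh / standby_hours
print(f"implied standby power: {standby_mw:.2f} mW")   # well under 1 mW
```

    Under this assumed cell, the quoted 35 h of continuous operation and 1000 h of standby are mutually consistent: standby would have to draw roughly 0.4 mW, about 30× less than the active power envelope.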