464 research outputs found

    Using Low-Power, Low-Cost IoT Processors in Clinical Biosignal Research: An In-depth Feasibility Check

    Get PDF
    Research on biosignal (ExG) analysis is usually performed with expensive systems that require a connection to external computers for data processing. Consumer-grade, low-cost wearable systems for bio-potential monitoring and embedded processing have been presented recently, but they are not considered suitable for medical-grade analyses. This work presents a detailed quantitative comparative analysis of a recently presented fully wearable, low-power, and low-cost platform (BioWolf) for ExG acquisition and embedded processing against two research-grade acquisition systems, namely the ANTNeuro (EEG) and the Noraxon DTS (EMG). Our preliminary results demonstrate that BioWolf offers competitive performance in terms of electrical properties and classification accuracy. This paper also highlights distinctive features of BioWolf, such as real-time embedded processing, improved wearability, and energy efficiency, which allow devising new types of experiments and usage scenarios for medical-grade biosignal processing in research and future clinical studies.

    PULP-HD: Accelerating Brain-Inspired High-Dimensional Computing on a Parallel Ultra-Low Power Platform

    Full text link
    Computing with high-dimensional (HD) vectors, also referred to as hypervectors, is a brain-inspired alternative to computing with scalars. Key properties of HD computing include a well-defined set of arithmetic operations on hypervectors, generality, scalability, robustness, fast learning, and ubiquitous parallel operations. HD computing is about manipulating and comparing large patterns (binary hypervectors with 10,000 dimensions), which makes its efficient realization on minimalistic ultra-low-power platforms challenging. This paper describes the acceleration of HD computing, with optimized memory accesses and operations, on a silicon prototype of the PULPv3 4-core platform (1.5 mm², 2 mW), surpassing the state-of-the-art classification accuracy (92.4% on average) with a simultaneous 3.7× end-to-end speed-up and 2× energy saving compared to its single-core execution. We further explore the scalability of our accelerator by increasing the number of inputs and the classification window on a new generation of the PULP architecture featuring bit-manipulation instruction extensions and a larger number of cores (8). Together, these enable a near-ideal speed-up of 18.4× compared to the single-core PULPv3.
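
    To make the hypervector operations mentioned above concrete, here is a minimal NumPy sketch of binary HD computing primitives (binding, bundling, and similarity). It is an illustrative model only, not the PULP-optimized implementation; the dimensionality is taken from the abstract, everything else is an assumption.

    ```python
    # Illustrative binary hyperdimensional (HD) computing primitives.
    # D follows the abstract (10,000-dimensional binary hypervectors);
    # the toy "classes" below are hypothetical, not from the paper.
    import numpy as np

    D = 10_000  # hypervector dimensionality

    def random_hv(rng):
        """Random dense binary hypervector."""
        return rng.integers(0, 2, size=D, dtype=np.uint8)

    def bind(a, b):
        """Binding: element-wise XOR, associates two hypervectors."""
        return np.bitwise_xor(a, b)

    def bundle(hvs):
        """Bundling: bit-wise majority vote over a list of hypervectors."""
        return (np.sum(hvs, axis=0) * 2 >= len(hvs)).astype(np.uint8)

    def hamming_sim(a, b):
        """Normalized similarity: 1 - Hamming distance / D."""
        return 1.0 - np.count_nonzero(a != b) / D

    rng = np.random.default_rng(0)
    feat = [random_hv(rng) for _ in range(4)]
    class_a = bundle([bind(f, random_hv(rng)) for f in feat])
    class_b = bundle([bind(f, random_hv(rng)) for f in feat])
    query = class_a  # a query identical to class A should match it best
    print(hamming_sim(query, class_a), hamming_sim(query, class_b))
    ```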

    An Energy-Efficient IoT node for HMI applications based on an ultra-low power Multicore Processor

    Get PDF
    Developments in wearable sensing technologies and unobtrusive devices are paving the way for compelling applications in the next generation of smart IoT nodes for Human Machine Interaction (HMI). In this paper we present a smart sensor node for IoT and HMI based on a programmable Parallel Ultra-Low-Power (PULP) platform. We tested the system on a hand gesture recognition application, a preferred way of interaction in HMI design. A wearable armband with 8 EMG sensors is controlled by our IoT node, which runs a machine learning algorithm in real time and recognizes up to 11 gestures within a power envelope of 11.84 mW. As a result, the proposed approach is capable of 35 hours of continuous operation and 1000 hours in standby. The resulting platform effectively minimizes the power required to run the software application and thus leaves a larger power budget for a high-quality analog front-end (AFE).
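
    As a rough sketch of the kind of real-time windowed gesture classification such a node could run on 8 EMG channels: the sampling rate, window length, feature choice, and the pre-trained classifier `clf` below are assumptions for illustration, not details taken from the paper.

    ```python
    # Hypothetical sliding-window gesture classification over an 8-channel EMG stream.
    import numpy as np

    FS = 1000    # assumed EMG sampling rate (Hz)
    WIN = 150    # assumed window length in samples (150 ms)
    N_CH = 8     # armband channels, as in the abstract

    def features(window):
        """Mean absolute value and waveform length per channel (common EMG features)."""
        mav = np.mean(np.abs(window), axis=0)
        wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
        return np.concatenate([mav, wl])

    def classify_stream(stream, clf):
        """Slide non-overlapping windows over a (n_samples, N_CH) stream and predict a gesture per window."""
        for start in range(0, len(stream) - WIN + 1, WIN):
            x = features(stream[start:start + WIN])    # shape: (2 * N_CH,)
            yield clf.predict(x.reshape(1, -1))[0]     # one of up to 11 gestures
    ```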

    Advanced Interfaces for HMI in Hand Gesture Recognition

    Get PDF
    The present thesis investigates techniques and technologies for high-quality Human Machine Interfaces (HMI) in biomedical applications. Starting from a literature review and considering the market state of the art (SoA) in this field, the thesis explores advanced sensor interfaces, wearable computing, and machine learning techniques for embedded resource-constrained systems. The research starts from the design and implementation of a real-time control system for a multi-finger hand prosthesis based on pattern recognition algorithms. This system is capable of controlling an artificial hand through a natural gesture interface, addressing the challenges related to the trade-off between responsiveness, accuracy, and light computation. Furthermore, the thesis addresses the challenges related to the design of a scalable and versatile system for gesture recognition, with the integration of a novel sensor interface for wearable medical and consumer applications.

    Hand Gesture Classification Using EMG Signal

    Get PDF
    The art of gesture recognition involves the identification and classification of gestures. A gesture is any reproducible action or sequence of actions. There are many techniques and algorithms to recognize gestures. In this project, gestures are recognized using biological signals generated by the human body. Several biological signals can be used for gesture recognition, among them the electroencephalogram (EEG), electrocardiogram (ECG), and electromyogram (EMG). EMG signals are generally used because they have good signal strength (on the order of mV); we therefore use EMG, as the acquisition of EMG signals is easier and less complex compared to the other signals mentioned above. Five different gestures are considered. Six features, namely root mean square, mean, standard deviation, variance, maximum, and minimum values, are extracted from the EMG signals. The classifier used in this study is an SVM, giving a classification accuracy of 96.8%.
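
    A minimal sketch of the six features named in the abstract feeding an SVM classifier, using scikit-learn. The data loading, windowing, and SVM hyperparameters are assumptions (random data stands in for real EMG windows), so the printed accuracy is not meaningful.

    ```python
    # Hypothetical pipeline: six per-window EMG features -> SVM classifier.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    def six_features(window):
        """Features listed in the abstract: RMS, mean, std, variance, max, min."""
        return np.array([
            np.sqrt(np.mean(window ** 2)),  # root mean square
            np.mean(window),                # mean
            np.std(window),                 # standard deviation
            np.var(window),                 # variance
            np.max(window),                 # maximum
            np.min(window),                 # minimum
        ])

    # Placeholder data: 500 windows of 256 samples with 5 gesture labels (not real EMG).
    rng = np.random.default_rng(0)
    X_windows = rng.standard_normal((500, 256))
    y = rng.integers(0, 5, size=500)

    X = np.vstack([six_features(w) for w in X_windows])
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = SVC(kernel="rbf").fit(X_train, y_train)
    print("accuracy:", clf.score(X_test, y_test))
    ```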

    Integration of Augmented Reality and Neuromuscular Control Systems for Remote Vehicle Operations

    Get PDF
    Traditional remotely operated vehicles (ROVs) require extensive setup and unnatural control systems. By integrating wearable devices as a control system, operators gain the mobility and situational awareness to execute additional tasks. An analysis is conducted to understand whether wearable devices connected through the Internet of Things (IoT) allow for a more natural control system. A gesture recognition armband is worn around the operator’s forearm, reading surface electromyography (sEMG) signals produced by their muscles to recognize hand gestures. An Augmented Reality (AR) headset overlays supplemental information on a heads-up display (HUD). IoT enables each component of the system to transmit and receive data over a network. The AR headset serves as the central processing unit, processing sEMG signals and transmitting the respective commands to an ROV. The ROV acts on the received commands and transmits data describing its actions and environment to be displayed. A library of electrical signals corresponding to the hand gestures defined in US Army Publication TC3-21.60 is developed as a control set of commands. Signal processing and machine learning methods are implemented to reduce cross-talk and interference in weak sEMG signals for accurate gesture recognition. Results provide insight into the effectiveness of neuromuscular control compared to human-to-human instruction, and into how wearable control systems can increase operator situational awareness.

    Deep learning approach to control of prosthetic hands with electromyography signals

    Full text link
    Natural muscles provide mobility in response to nerve impulses. Electromyography (EMG) measures the electrical activity of muscles in response to a nerve's stimulation. In the past few decades, EMG signals have been used extensively in the identification of user intention to potentially control assistive devices such as smart wheelchairs, exoskeletons, and prosthetic devices. In the design of conventional assistive devices, developers optimize multiple subsystems independently. Feature extraction and feature description are essential subsystems of this approach, and researchers have therefore proposed various hand-crafted features to interpret EMG signals. However, the performance of conventional assistive devices is still unsatisfactory. In this paper, we propose a deep learning approach to control prosthetic hands with raw EMG signals. We use a novel deep convolutional neural network to eschew the feature-engineering step. Removing feature extraction and feature description is an important step toward the paradigm of end-to-end optimization. Fine-tuning and personalization are additional advantages of our approach. The proposed approach is implemented in Python with the TensorFlow deep learning library and runs in real time on the general-purpose graphics processing unit of an NVIDIA Jetson TX2 developer kit. Our results demonstrate the ability of our system to predict finger positions from raw EMG signals. We anticipate our EMG-based control system to be a starting point for the design of more sophisticated prosthetic hands; for example, a pressure measurement unit can be added to transfer the perception of the environment to the user. Furthermore, our system can be modified for other prosthetic devices.
    Comment: Conference. Houston, Texas, USA. September, 201
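
    In the spirit of the end-to-end approach described above, here is a minimal Keras sketch of a 1D CNN mapping raw multi-channel EMG windows directly to finger positions (regression). The window length, channel count, and layer sizes are assumptions for illustration; this is not the architecture from the paper.

    ```python
    # Hypothetical end-to-end 1D CNN: raw EMG window -> finger positions.
    import tensorflow as tf

    N_SAMPLES, N_CH, N_FINGERS = 200, 8, 5   # assumed window shape and output size

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(N_SAMPLES, N_CH)),
        tf.keras.layers.Conv1D(32, 9, activation="relu"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(64, 5, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(N_FINGERS),     # predicted position per finger
    ])
    model.compile(optimizer="adam", loss="mse")
    model.summary()
    ```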

    Raspberry Pi based Modular System for Multichannel Event-Driven Functional Electrical Stimulation Control

    Get PDF
    This paper describes the implementation and testing of modular software for multichannel control of Functional Electrical Stimulation (FES). Moving towards an embedded scenario, the core of the system is a Raspberry Pi, whose different models (with different computing power) best suit two different system use-cases: user-supervised and stand-alone. Given the need for real-time and reliable FES applications, software processing timings were analyzed for multiple configurations, along with hardware resource utilization. Among the results, the simultaneous use of eight channels was functionally achieved (0% lost packets) while system timing failures (excessive processing latency) were minimized. Further investigations included stressing the system with more constraining acquisition parameters, which eventually limited the number of usable channels (only for the stand-alone use-case).
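
    A rough sketch of the kind of per-cycle timing and packet-loss accounting described above: each control update must finish within its period or it counts as a timing failure, and missing input packets count as lost. The update rate, channel count, and the `get_packet` / `update_stimulation` callbacks are assumptions, not the paper's actual software interfaces.

    ```python
    # Hypothetical FES control loop with timing-failure and lost-packet counters.
    import time

    PERIOD_S = 0.025          # assumed 40 Hz stimulation update rate
    N_CHANNELS = 8            # channels, as in the abstract

    def control_loop(get_packet, update_stimulation, n_cycles=1000):
        lost_packets = timing_failures = 0
        for _ in range(n_cycles):
            t0 = time.perf_counter()
            packet = get_packet()                 # latest input data, or None if missing
            if packet is None:
                lost_packets += 1
            else:
                update_stimulation(packet, N_CHANNELS)
            elapsed = time.perf_counter() - t0
            if elapsed > PERIOD_S:
                timing_failures += 1              # excessive processing latency
            else:
                time.sleep(PERIOD_S - elapsed)
        return lost_packets, timing_failures
    ```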

    A Wearable Ultra-Low-Power sEMG-Triggered Ultrasound System for Long-Term Muscle Activity Monitoring

    Full text link
    Surface electromyography (sEMG) is a well-established approach to monitoring muscular activity on wearable and resource-constrained devices. However, when measuring deeper muscles, its low signal-to-noise ratio (SNR), high signal attenuation, and crosstalk degrade sensing performance. Ultrasound (US) complements sEMG effectively with its higher SNR at high penetration depths. In fact, combining US and sEMG improves the accuracy of muscle dynamics assessment compared to using only one modality. However, the power envelope of US hardware is considerably higher than that of sEMG, inflating energy consumption and reducing battery life. This work proposes a wearable solution that integrates both modalities and uses an EMG-driven wake-up approach to achieve the ultra-low power consumption needed for wearable long-term monitoring. We integrate two wearable state-of-the-art (SoA) US and ExG biosignal acquisition devices to acquire time-synchronized measurements of the short head of the biceps. To minimize power consumption, the US probe is kept in a sleep state when there is no muscle activity. sEMG data are processed on the probe (filtering, envelope extraction, and thresholding) to identify muscle activity and generate a trigger to wake up the US counterpart. The US acquisition starts before muscle fascicle displacement thanks to a triggering time faster than the electromechanical delay (30-100 ms) between the neuromuscular junction stimulation and the muscle contraction. Assuming a muscle contraction of 200 ms at a contraction rate of 1 Hz, the proposed approach enables more than 59% energy saving (with a full-system average power consumption of 12.2 mW) compared to operating both sEMG and US continuously.
    Comment: 4 pages, 5 figures, 1 table, 2023 IEEE International Ultrasonics Symposium
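
    A minimal sketch of the sEMG-side processing chain named in the abstract (band-pass filtering, envelope extraction, thresholding) that would raise the wake-up trigger for the ultrasound probe. The sampling rate, filter cutoffs, and threshold value are assumptions, not the parameters used in the paper.

    ```python
    # Hypothetical sEMG activity detector driving the ultrasound wake-up trigger.
    import numpy as np
    from scipy.signal import butter, sosfilt

    FS = 2000                       # assumed sEMG sampling rate (Hz)
    sos_bp = butter(4, [20, 450], btype="bandpass", fs=FS, output="sos")
    sos_env = butter(2, 5, btype="lowpass", fs=FS, output="sos")
    THRESHOLD = 0.05                # assumed envelope threshold (normalized units)

    def us_wakeup_trigger(semg):
        """Return True when the sEMG envelope crosses the activity threshold."""
        filtered = sosfilt(sos_bp, semg)                 # band-pass filtering
        envelope = sosfilt(sos_env, np.abs(filtered))    # rectification + low-pass envelope
        return bool(np.any(envelope > THRESHOLD))        # wake the US probe
    ```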