
    Real-time EMG based pattern recognition control for hand prostheses: a review on existing methods, challenges and future implementation

    Upper limb amputation significantly restricts amputees from performing their daily activities. The myoelectric prosthesis, driven by signals from residual stump muscles, aims to restore the function of the lost limb seamlessly. Unfortunately, acquiring and using such myosignals is cumbersome and complicated, and once acquired, the signals typically require substantial computational power to be turned into a user control signal. The transition to a practical prosthesis is still challenged by the fact that each amputee differs in mobility, muscle contraction force, limb positional variation and electrode placement. A solution that can adapt or otherwise tailor itself to each individual is therefore required for maximum utility across amputees. Modified machine learning schemes for pattern recognition have the potential to significantly reduce the factors (user movement and muscle contraction) that degrade traditional electromyography (EMG) pattern recognition methods. Although recent intelligent pattern recognition techniques can discriminate multiple degrees of freedom with high accuracy, their effectiveness has rarely been demonstrated in real-world (amputee) applications. This review examines upper limb prosthesis (ULP) developments in the healthcare sector from a technical control perspective, with particular focus on real-world applications and the use of pattern recognition control with amputees. We first review the overall structure of pattern recognition schemes for myoelectric prosthetic control and then discuss their real-time use on amputee upper limbs. Finally, we conclude with a discussion of existing challenges and recommendations for future research.
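    As a point of reference for the pattern recognition structure reviewed above, the following is a minimal sketch of a conventional windowed EMG pipeline (time-domain features followed by a linear classifier). The window length, overlap, feature set and LDA classifier are common baseline choices used for illustration only, not the specific configuration examined in the review.

```python
# Illustrative sketch of a conventional EMG pattern-recognition pipeline:
# windowing -> time-domain features -> classifier. All parameter values
# below are typical baseline assumptions, not the review's configuration.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def td_features(window):
    """Hudgins-style time-domain features for one analysis window
    of shape (n_samples, n_channels)."""
    mav = np.mean(np.abs(window), axis=0)                  # mean absolute value
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)   # waveform length
    sign = np.signbit(window).astype(int)
    zc = np.sum(np.abs(np.diff(sign, axis=0)), axis=0)     # zero crossings
    return np.concatenate([mav, wl, zc])

def segment(emg, fs, win_ms=200, step_ms=50):
    """Slide overlapping analysis windows over a continuous recording."""
    win, step = int(fs * win_ms / 1000), int(fs * step_ms / 1000)
    starts = range(0, emg.shape[0] - win + 1, step)
    return np.stack([td_features(emg[s:s + win]) for s in starts])

# Typical use (training data and per-window labels are assumed to exist):
# X = segment(emg_train, fs=1000)                 # one feature vector per window
# clf = LinearDiscriminantAnalysis().fit(X, window_labels)
# gesture = clf.predict(segment(emg_new, fs=1000))
```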

    On the Utility of Representation Learning Algorithms for Myoelectric Interfacing

    Electrical activity produced by muscles during voluntary movement is a reflection of the firing patterns of relevant motor neurons and, by extension, the latent motor intent driving the movement. Once transduced via electromyography (EMG) and converted into digital form, this activity can be processed to provide an estimate of the original motor intent and is as such a feasible basis for non-invasive efferent neural interfacing. EMG-based motor intent decoding has so far received the most attention in the field of upper-limb prosthetics, where alternative means of interfacing are scarce and the utility of better control is apparent. Whereas myoelectric prostheses have been available since the 1960s, available EMG control interfaces still lag behind the mechanical capabilities of the artificial limbs they are intended to steer, a gap at least partially due to limitations in current methods for translating EMG into appropriate motion commands. As the relationship between EMG signals and concurrent effector kinematics is highly non-linear and apparently stochastic, finding ways to accurately extract and combine relevant information from across electrode sites is still an active area of inquiry. This dissertation comprises an introduction and eight papers that explore issues afflicting the status quo of myoelectric decoding and possible solutions, all related through their use of learning algorithms and deep Artificial Neural Network (ANN) models. Paper I presents a Convolutional Neural Network (CNN) for multi-label movement decoding of high-density surface EMG (HD-sEMG) signals. Inspired by the successful use of CNNs in Paper I and the work of others, Paper II presents a method for automatic design of CNN architectures for use in myocontrol. Paper III introduces an ANN architecture with an accompanying training framework from which simultaneous and proportional control emerges. Paper IV introduces a dataset of HD-sEMG signals for use with learning algorithms. Paper V applies a Recurrent Neural Network (RNN) model to decode finger forces from intramuscular EMG. Paper VI introduces a Transformer model for myoelectric interfacing that does not need additional training data to function with previously unseen users. Paper VII compares the performance of a Long Short-Term Memory (LSTM) network to that of classical pattern recognition algorithms. Lastly, Paper VIII describes a framework for synthesizing EMG from multi-articulate gestures intended to reduce training burden.
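    To make the decoding setting of Paper I concrete, below is a minimal, hedged sketch of a CNN that maps HD-sEMG windows to multiple movement labels. The electrode-grid size, layer widths and number of labels are illustrative assumptions and do not reproduce the architectures of the papers summarised above.

```python
# Minimal sketch of a CNN for multi-label movement decoding from HD-sEMG
# windows. Grid size, layer sizes, and the number of movement labels are
# illustrative assumptions, not the dissertation's architecture.
import torch
import torch.nn as nn

class HDsEMGNet(nn.Module):
    def __init__(self, n_labels=5):
        super().__init__()
        # Input: (batch, 1, rows, cols), e.g. an 8x8 electrode grid of
        # per-channel amplitudes computed over one analysis window.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_labels)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return self.head(z)      # logits; one sigmoid per movement label

# Multi-label training uses a per-label binary loss:
# loss = nn.BCEWithLogitsLoss()(model(batch_windows), batch_labels.float())
```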

    Towards electrodeless EMG linear envelope signal recording for myo-activated prostheses control

    After amputation, the residual muscles of the limb may continue to function normally, enabling the electromyogram (EMG) signals recorded from them to be used to drive a replacement limb. Such replacement limbs are called myoelectric prostheses, and prostheses that use EMG have long been the first choice for both clinicians and engineers. Unfortunately, due to the many drawbacks of EMG (e.g. skin preparation, electromagnetic interference, high sample rate), researchers have sought suitable alternatives. This work proposes a dry-contact, low-cost sensor based on a force-sensitive resistor (FSR) as a valid alternative which, instead of detecting electrical events, detects the mechanical events of muscle contraction. The FSR is placed on the skin through a hard, circular base to sense muscle contraction and acquire the signal. Signal conditioning (a voltage output proportional to force) is implemented to reduce the output (resistance) drift caused by FSR creep at the edges and to maintain FSR sensitivity over a wide input force range. The acquired FSR signal can be used directly to replace the EMG linear envelope (an important control signal in prosthetics applications). To find the best FSR position(s) to replace a single EMG lead, EMG and FSR outputs were recorded simultaneously. Three FSRs were placed directly over the EMG electrodes in the middle of the targeted muscle, and individual sensors (FSR1, FSR2 and FSR3) as well as combinations (e.g. FSR1+FSR2, FSR2-FSR3) were evaluated. The experiment was performed on a small sample of five volunteer subjects. The results show a high correlation (up to 0.94) between the FSR output and the EMG linear envelope. Consequently, using the best FSR sensor position demonstrates the ability of the electrodeless FSR linear envelope (FSR-LE) to proportionally control a prosthesis (a 3-D claw). Furthermore, the FSR can be used to develop a universal programmable muscle signal sensor suitable for controlling myo-activated prostheses.
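    The comparison above uses the EMG linear envelope as the reference control signal. The sketch below shows one standard way to compute it (full-wave rectification followed by low-pass filtering) and to correlate it with an FSR output; the 2 Hz cutoff and 4th-order Butterworth filter are typical choices, not necessarily those used in the study.

```python
# Sketch of the EMG linear envelope (rectification + low-pass filtering) and
# its Pearson correlation with an FSR output. Filter parameters are typical
# assumptions; both signals are assumed to share the same time base.
import numpy as np
from scipy.signal import butter, filtfilt

def linear_envelope(emg, fs, cutoff_hz=2.0, order=4):
    rectified = np.abs(emg - np.mean(emg))          # remove offset, full-wave rectify
    b, a = butter(order, cutoff_hz / (fs / 2), btype="low")
    return filtfilt(b, a, rectified)                # zero-phase low-pass

def correlation(fsr, envelope):
    return np.corrcoef(fsr, envelope)[0, 1]         # Pearson r (up to 0.94 in the study)

# env = linear_envelope(emg_signal, fs=1000)
# r = correlation(fsr_signal, env)
```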

    The Feasibility of Wearable Sensors for the Automation of Distal Upper Extremity Ergonomic Assessment Tools

    Work-related distal upper limb musculoskeletal disorders are costly conditions that many companies and researchers spend significant resources on preventing. Ergonomic assessments evaluate the risk of developing a work-related musculoskeletal disorder (WMSD) by quantifying variables such as the force, repetition, and posture (among others) that a task requires. Accurate and objective measurements of force and posture are challenging due to equipment and location constraints. Wearable sensors like the Delsys Trigno Quattro combine inertial measurement units (IMUs) and surface electromyography (sEMG) to address these collection difficulties. The purpose of this work was to evaluate the joint angle estimation of IMUs and the relationship between sEMG and overall task intensity throughout a controlled wrist motion. Using a 3 degrees-of-freedom wrist manipulandum, the feasibility of a small, lightweight wearable was evaluated for collecting accurate wrist flexion and extension angles and for using sEMG to quantify task intensity. The task was a repeated 95° arc in flexion/extension with six combinations of wrist torque and grip requirements. The mean wrist angle difference (across the range of motion) of 1.70° between the WristBot and the IMU was not significant (p = 0.057), but significant differences did exist at points throughout the range of motion. The largest difference between the IMU and the WristBot was 10.7° at 40° extension; this discrepancy is smaller than the typical visual-inspection joint angle estimation error of 15.6° reported for ergonomists. All sEMG metrics (flexor muscle root mean square (RMS), extensor muscle RMS, mean RMS, integrated sEMG (iEMG), and physiological cross-sectional-area-weighted RMS) and ratings of perceived exertion (RPE) showed significant regression results against task intensity. Task intensity explained more of the variance in RPE than in the best sEMG metric (iEMG), with R² values of 0.35 and 0.21, respectively. Wearable sensors can be used in occupational settings to increase the accuracy of postural assessments; additional research on the relationship between sEMG and task intensity is required before sEMG can be used effectively in ergonomics. There is potential for sEMG to be a powerful tool; however, the dynamic nature of the task and the combined exertion (grip and flexion/extension) make task intensity difficult to quantify.
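    For clarity on the sEMG amplitude metrics named above, the following is a small sketch of windowed RMS and integrated EMG (iEMG) computations. It assumes the signal is already band-pass filtered and zero-mean; the sampling rate and window handling are illustrative.

```python
# Sketch of the windowed sEMG amplitude metrics named in the abstract
# (RMS and integrated EMG). Preprocessing assumptions are illustrative.
import numpy as np

def emg_rms(window):
    """Root mean square amplitude of one analysis window."""
    return np.sqrt(np.mean(np.square(window)))

def emg_iemg(window, fs):
    """Integrated EMG: area under the rectified signal (amplitude x time)."""
    return np.trapz(np.abs(window), dx=1.0 / fs)

# flexor_rms = emg_rms(flexor_window)
# extensor_rms = emg_rms(extensor_window)
# mean_rms = np.mean([flexor_rms, extensor_rms])
# iemg = emg_iemg(flexor_window, fs=2000)
```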

    From wearable towards epidermal computing: soft wearable devices for rich interaction on the skin

    Human skin provides a large, always available, and easy-to-access real estate for interaction. Recent advances in new materials, electronics, and human-computer interaction have led to the emergence of electronic devices that reside directly on the user's skin. These conformal devices, referred to as Epidermal Devices, have mechanical properties compatible with human skin: they are very thin, often thinner than human hair; they elastically deform when the body is moving, and stretch with the user's skin. Firstly, this thesis provides a conceptual understanding of Epidermal Devices in the HCI literature. We compare and contrast them with other technical approaches that enable novel on-skin interactions. Then, through a multi-disciplinary analysis of Epidermal Devices, we identify the design goals and challenges that need to be addressed for advancing this emerging research area in HCI. Following this, our fundamental empirical research investigated how epidermal devices of different rigidity levels affect passive and active tactile perception. Generally, a correlation was found between device rigidity and tactile sensitivity thresholds as well as roughness discrimination ability. Based on these findings, we derive design recommendations for realizing epidermal devices. Secondly, this thesis contributes novel Epidermal Devices that enable rich on-body interaction. SkinMarks contributes to the fabrication and design of novel Epidermal Devices that are highly skin-conformal and enable touch, squeeze, and bend sensing with co-located visual output. These devices can be deployed on highly challenging body locations, enabling novel interaction techniques and expanding the design space of on-body interaction. Multi-Touch Skin enables high-resolution multi-touch input on the body. We present the first non-rectangular and high-resolution multi-touch sensor overlays for use on skin and introduce a design tool that generates such sensors in custom shapes and sizes. Empirical results from two technical evaluations confirm that the sensor achieves a high signal-to-noise ratio on the body under various grounding conditions and has a high spatial accuracy even when subjected to strong deformations. Thirdly, because Epidermal Devices are in contact with the skin, they offer opportunities for sensing rich physiological signals from the body. To leverage this unique property, this thesis presents rapid fabrication and computational design techniques for realizing Multi-Modal Epidermal Devices that can measure multiple physiological signals from the human body. Devices fabricated through these techniques can measure ECG (Electrocardiogram), EMG (Electromyogram), and EDA (Electro-Dermal Activity). We also contribute a computational design and optimization method based on underlying human anatomical models to create optimized device designs that provide an optimal trade-off between physiological signal acquisition capability and device size. The graphical tool allows designers to easily specify design preferences and to visually analyze the generated designs in real time, enabling designer-in-the-loop optimization. Experimental results show high quantitative agreement between the predictions of the optimizer and experimentally collected physiological data. Finally, taking a multi-disciplinary perspective, we outline the roadmap for future research in this area by highlighting the next important steps, opportunities, and challenges.
    Taken together, this thesis contributes towards a holistic understanding of Epidermal Devices: it provides an empirical and conceptual understanding as well as technical insights through contributions in DIY (Do-It-Yourself), rapid fabrication, and computational design techniques.

    Hand Gestures Recognition for Human-Machine Interfaces: A Low-Power Bio-Inspired Armband

    Hand gesture recognition has recently grown in popularity as a Human-Machine Interface (HMI) in the biomedical field. It can be performed with many different non-invasive techniques, e.g., surface ElectroMyoGraphy (sEMG) or PhotoPlethysmoGraphy (PPG). In the last few years, the interest shown by both academia and industry has led to a continuous stream of commercial and custom wearable devices addressing different challenges in many application fields, from tele-rehabilitation to sign language recognition. In this work, we propose a novel 7-channel sEMG armband that can be employed as an HMI for both serious-gaming control and rehabilitation support. In particular, we designed the prototype around the capability of the device to compute the Average Threshold Crossing (ATC) parameter, evaluated by counting how many times the sEMG signal crosses a threshold during a fixed time window (i.e., 130 ms), directly on the wearable device. Exploiting the event-driven character of the ATC, the armband accomplishes on-board prediction of common hand gestures while requiring less power than state-of-the-art devices. In an acquisition campaign involving 26 participants, we obtained an average classifier accuracy of 91.9% when recognizing, in real time, 8 active hand gestures plus the idle state. Furthermore, with 2.92 mA of current absorption during active functioning and a prediction latency of 1.34 ms, the prototype confirmed our expectations and is an appealing solution for long-term (up to 60 h) medical and consumer applications.
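    The ATC parameter is defined above as the number of threshold crossings of the sEMG signal within a fixed 130 ms window. A minimal sketch of that computation follows; the threshold value is illustrative, since in practice it is calibrated per channel and per user.

```python
# Minimal sketch of the Average Threshold Crossing (ATC) parameter described
# in the abstract: count how many times each sEMG channel crosses a fixed
# threshold within a 130 ms window. The threshold value is an assumption.
import numpy as np

def atc(semg, fs, threshold, window_ms=130):
    """Return the threshold-crossing count for each window (and channel)."""
    win = int(fs * window_ms / 1000)
    n_windows = semg.shape[0] // win
    counts = []
    for w in range(n_windows):
        seg = semg[w * win:(w + 1) * win]
        above = seg > threshold
        # a "crossing" is a transition from below- to above-threshold
        counts.append(np.sum(~above[:-1] & above[1:], axis=0))
    return np.array(counts)

# tc = atc(armband_data, fs=1000, threshold=0.05)   # shape: (n_windows, n_channels)
```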

    A non-invasive human-machine interfacing framework for investigating dexterous control of hand muscles

    The recent rapid development of virtual reality and robotic assistive devices makes it possible to augment the capabilities of able-bodied individuals as well as to restore the motor functions lost by neurologically impaired or amputee individuals. To control these devices, movement intentions can be captured from the biological structures involved in motor planning and execution, such as the central nervous system (CNS), the peripheral nervous system (in particular the spinal motor neurons) and the musculoskeletal system. Human-machine interfaces (HMIs) thus enable the transfer of neural information from the neuromuscular system to machines. To avoid the risks of surgical operations or tissue damage in implementing such HMIs, a non-invasive approach is proposed in this thesis. Over the last five decades, surface electromyography (sEMG) has been extensively explored as a non-invasive source of neural information. EMG signals consist of the mixed electrical activity of several recruited motor units, the fundamental components of muscle contraction. High-density sEMG (HD-sEMG), combined with blind source separation methods, has made it possible to identify the discharge patterns of many of these active motor units. From these decomposed discharge patterns, the net common synaptic input (CSI) to the corresponding spinal motor neurons has been quantified by cross-correlation in the time and frequency domains or by principal component analysis (PCA) on one or a few muscles. It has been hypothesised that this CSI results from the contribution of spinal descending commands sent by supra-spinal structures and afferences integrated by spinal interneurons. Another motor strategy implying the integration of descending commands at the spinal level concerns the coordination of many muscles to control a large number of articular joints. This neurophysiological mechanism has been investigated by measuring a single EMG amplitude per muscle, thus without HD-sEMG and decomposition. In this case, the aim was to understand how the CNS can control a large set of muscles actuating a vast set of combinations of degrees of freedom in a modular way. Time-invariant patterns of muscle coordination, i.e. muscle synergies, were thus found in animals and humans from the EMG amplitudes of many muscles, modulated by time-varying commands and combined to fulfil complex movements. In this thesis, for the first time, we present a non-invasive framework for human-machine interfaces based on both the spinal motor neuron recruitment strategy and muscle synergistic control, unifying the understanding of these two motor control strategies and producing control signals correlated with biomechanical quantities. This implies recording from many muscles and using HD-sEMG for each muscle. We investigated 14 muscles of the hand, 6 extrinsic and 8 intrinsic. The first two studies (Chapters 2 and 3, respectively) present the framework for CSI quantification by PCA and the extraction of the synergistic organisation of the spinal motor neurons innervating the 14 investigated muscles. For the latter analysis, in Chapter 3, we propose the existence of what we name motor neuron synergies, extracted with non-negative matrix factorisation (NMF) from the identified motor neurons. In these first two studies, we considered 7 subjects and 7 grip types, each involving the four fingers differently in opposition with the thumb.
    In the first study, we found that the variance explained by the CSI among all motor neuron spike trains was (53.0 ± 10.9) % and that its cross-correlation with force was 0.67 ± 0.10, remarkably high with respect to previous findings. In the second study, 4 motor neuron synergies were identified and associated with the actuation of one finger in opposition with the thumb, with even higher correlations with force (over 0.8) for each synergy during the actuation of the corresponding finger. In Chapter 4, we extended the set of analysed movements to a vast repertoire of gestures and repeated the analysis of Chapter 3, finding a different synergistic organisation during the execution of tens of tasks. We separated the contributions of extrinsic and intrinsic muscles and found that the intrinsic muscles better enable single-finger spatial discrimination, while no difference was found between the two muscle groups in the regression of joint angles. Finally, in Chapter 5 we applied the techniques of the previous chapters to cases of impairment due to amputation and stroke: one case of pre- and post-rehabilitation sessions of a trans-humeral amputee, one post-stroke trans-radial amputee, and three cases of acute stroke, i.e. less than one month after the stroke event. We present future perspectives (Chapter 6) aimed at designing and implementing a platform for both rehabilitation monitoring and myoelectric control. This thesis thus provides a bridge between two extensively studied motor control mechanisms, i.e. motor neuron recruitment and muscle synergies, and proposes the framework as suitable for rehabilitation monitoring and for the control of assistive devices.
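    The synergy extraction step described above (Chapters 3 and 4) applies non-negative matrix factorisation to the discharge patterns of the identified motor neurons. Below is a hedged sketch of that step using smoothed firing rates; the smoothing window and data layout are assumptions, and only the choice of 4 synergies follows the abstract.

```python
# Sketch of motor neuron synergy extraction via non-negative matrix
# factorisation (NMF) applied to smoothed firing rates. The smoothing kernel
# and input layout are assumptions; n_synergies=4 follows the abstract.
import numpy as np
from sklearn.decomposition import NMF

def smooth_rates(spike_trains, fs, win_ms=400):
    """Convert binary spike trains (n_samples, n_motor_neurons) to firing rates."""
    win = int(fs * win_ms / 1000)
    kernel = np.hanning(win)
    kernel /= kernel.sum()
    return np.apply_along_axis(
        lambda s: np.convolve(s, kernel, mode="same") * fs, 0, spike_trains)

def motor_neuron_synergies(rates, n_synergies=4):
    """Factorise rates ~= activations @ synergies (both non-negative)."""
    model = NMF(n_components=n_synergies, init="nndsvda", max_iter=500)
    activations = model.fit_transform(rates)   # (n_samples, n_synergies)
    synergies = model.components_              # (n_synergies, n_motor_neurons)
    return activations, synergies

# acts, syns = motor_neuron_synergies(smooth_rates(decomposed_spikes, fs=2048))
```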

    Sensitive and Makeable Computational Materials for the Creation of Smart Everyday Objects

    The vision of computational materials is to create smart everyday objects using materials that have sensing and computational capabilities embedded into them. However, today's development of computational materials is limited because their interfaces (i.e. sensors) are unable to support wide ranges of human interactions or to withstand the fabrication methods of everyday objects (e.g. cutting and assembling). These barriers hinder citizens from creating smart everyday objects with computational materials on a large scale. To overcome these barriers, this dissertation presents approaches to make computational materials 1) sensitive to a wide variety of user interactions, including explicit interactions (e.g. user inputs) and implicit interactions (e.g. user contexts), and 2) makeable under a wide range of fabrication operations, such as cutting and assembling. I exemplify these approaches through five research projects on two common materials, textile and wood. For each project, I explore how a material interface can be made to sense user inputs or activities, and how it can be optimized to balance sensitivity and fabrication complexity. I discuss the sensing algorithms and machine learning models that interpret the sensor data as high-level abstractions and interactions. I show practical applications of the developed computational materials and present evaluation studies that validate their performance and robustness. At the end of this dissertation, I summarize the contributions of my thesis and discuss future directions for the vision of computational materials.