
    Man-Machine Interface System for Neuromuscular Training and Evaluation Based on EMG and MMG Signals

    This paper presents the UVa-NTS (University of Valladolid Neuromuscular Training System), a multifunction, portable neuromuscular training system. The UVa-NTS is designed to analyze the voluntary control of patients with severe neuromotor handicaps, their interactive response, and their adaptation to neuromuscular interface systems such as neural prostheses or domotic applications. It is therefore an excellent tool for evaluating residual muscle capabilities in the handicapped. The UVa-NTS is composed of a custom signal-conditioning front-end and a computer. The front-end electronics are described thoroughly, as are the overall features of the custom software implementation. The software system comprises a set of graphical training tools and a processing core. The UVa-NTS works with two classes of neuromuscular signals: the classic myoelectric signals (MES) and, as a novelty, the myomechanic signals (MMS). To evaluate the performance of the processing core, a complete analysis was carried out to characterize its efficiency and to check that it fulfils the real-time constraints. Tests were performed with both healthy subjects and selected impaired subjects. Adaptation was achieved rapidly by applying a predefined protocol for the UVa-NTS set of training tools. Fine voluntary control was demonstrated with the myoelectric signals, and the UVa-NTS also provided satisfactory voluntary control with the myomechanic signals.

    Mechanomyographic Parameter Extraction Methods: An Appraisal for Clinical Applications

    The research conducted over the last three decades has collectively demonstrated that skeletal muscle performance can alternatively be assessed through mechanomyographic signal (MMG) parameters. Indices of muscle performance, including but not limited to force, power, work, and endurance, and the related physiological processes underlying muscle activity during contraction, have been evaluated in the light of the signal's features. Because MMG is a non-stationary signal that reflects several distinctive patterns of muscle action, the evidence compiled from the literature supports its reliability for analyzing muscles under both voluntary and stimulus-evoked contractions. An appraisal of standard practice, including the measurement theories behind the methods used to extract parameters from the signal, is vital to applying the signal in experimental and clinical settings, especially where electromyograms are contraindicated or of limited use. As we highlight the underpinning technical guidelines and the domains where each method is well suited, the limitations of the methods are also presented to position the state of the art in MMG parameter extraction, thus providing a theoretical framework for improving current practice and widening the opportunity for new insights and discoveries. Since this signal modality has not been widely deployed, partly because of the limited information extractable from the signal compared with other classical techniques for assessing muscle performance, this survey is particularly relevant to the projected future of MMG applications in musculoskeletal assessment and in the real-time detection of muscle activity.
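Two of the most widely reported MMG parameters, RMS amplitude and mean power frequency, can be sketched in a few lines. The sampling rate, epoch length, and synthetic test signal below are illustrative assumptions, not values from the survey:

```python
import numpy as np

def mmg_rms(x):
    """Root-mean-square amplitude of one MMG epoch."""
    return float(np.sqrt(np.mean(np.square(x))))

def mmg_mpf(x, fs):
    """Mean power frequency: power-weighted average of the spectrum."""
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    p = np.abs(np.fft.rfft(x)) ** 2
    return float(np.sum(f * p) / np.sum(p))

# Synthetic 25 Hz "MMG-like" tone sampled at 1 kHz (illustration only)
fs = 1000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 25 * t)
print(round(mmg_rms(x), 3))   # 0.707 for a unit-amplitude sine
print(round(mmg_mpf(x, fs)))  # 25
```

Real pipelines window the signal into short epochs and track these two numbers over time; rising RMS with falling MPF is a typical signature of increasing effort or fatigue.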

    A Review of Non-Invasive Techniques to Detect and Predict Localised Muscle Fatigue

    Muscle fatigue is an established area of research, and various types of muscle fatigue have been investigated in order to fully understand the condition. This paper gives an overview of the non-invasive techniques available for automated fatigue detection, such as mechanomyography, electromyography, near-infrared spectroscopy, and ultrasound, for both isometric and non-isometric contractions. Various signal-analysis methods are compared by illustrating their applicability in real-time settings. This paper will be of interest to researchers who wish to select the most appropriate methodology for research on muscle fatigue detection or prediction, or for the development of devices that can be used in, e.g., sports scenarios to improve performance or prevent injury. To date, research on localised muscle fatigue has focused mainly on the clinical side. Very little work has addressed detecting or predicting fatigue with an autonomous system, although recent research on automating localised muscle fatigue detection/prediction shows promising results.
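One classic, real-time-friendly fatigue index from this literature is spectral compression: the median frequency of the signal declines across successive epochs as fatigue develops. A minimal sketch, with assumed epoch length and purely synthetic signals:

```python
import numpy as np

def median_frequency(x, fs):
    """Frequency that splits the power spectrum into two equal halves."""
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    p = np.abs(np.fft.rfft(x)) ** 2
    cum = np.cumsum(p)
    return float(f[np.searchsorted(cum, cum[-1] / 2.0)])

def fatigue_slope(epochs, fs):
    """Linear trend of median frequency across consecutive epochs; a
    negative slope is the classic sign of developing local fatigue."""
    mdf = [median_frequency(e, fs) for e in epochs]
    slope, _ = np.polyfit(np.arange(len(mdf)), mdf, 1)
    return float(slope)

# Four 1 s epochs whose dominant frequency drifts 80 -> 50 Hz
fs = 1000
epochs = [np.sin(2 * np.pi * f0 * np.arange(fs) / fs)
          for f0 in (80, 70, 60, 50)]
print(fatigue_slope(epochs, fs))  # approx. -10 Hz per epoch
```

Thresholding such a slope is one simple way an autonomous system could flag developing fatigue online.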

    Novel Muscle Monitoring by Radiomyography (RMG) and Application to Hand Gesture Recognition

    Conventional electromyography (EMG) measures the continuous neural activity during muscle contraction but lacks explicit quantification of the actual contraction. Mechanomyography (MMG) and accelerometers only measure body-surface motion, while ultrasound, CT scans and MRI are restricted to in-clinic snapshots. Here we propose radiomyography (RMG), a novel technique for continuous muscle actuation sensing that can be wearable and touchless, capturing both superficial and deep muscle groups. We verified RMG experimentally with a forearm wearable sensor for detailed hand gesture recognition. We first converted the radio sensing outputs to a time-frequency spectrogram and then employed a vision transformer (ViT) deep-learning network as the classification model, which can recognize 23 gestures with an average accuracy of up to 99% on 8 subjects. By transfer learning, high adaptivity to user differences and sensor variation was achieved, at an average accuracy of up to 97%. We further applied RMG to monitor eye and leg muscles and achieved high accuracy for eye-movement and body-posture tracking. RMG can be used with synchronous EMG to derive stimulation-actuation waveforms for many future applications in kinesiology, physiotherapy, rehabilitation, and human-machine interfaces.
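The spectrogram front-end described above (sensor stream to time-frequency image to classifier) can be sketched as a short-time Fourier transform. The window and hop sizes and the synthetic tone are illustrative assumptions, and the ViT classifier itself is out of scope here:

```python
import numpy as np

def stft_spectrogram(x, win=256, hop=128):
    """Magnitude spectrogram (time-frequency image) of a 1-D signal,
    the kind of 2-D input typically fed to an image classifier."""
    window = np.hanning(win)
    frames = [x[i:i + win] * window
              for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T  # (freq, time)

fs = 2000
t = np.arange(2 * fs) / fs
x = np.sin(2 * np.pi * 100 * t)   # stand-in for one radio-sensing channel
S = stft_spectrogram(x)
print(S.shape)                    # (129, 30): 129 freq bins x 30 frames
```

Each such image (one per sliding window of the sensor stream) would then be resized and passed to the classification network.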

    Inter- and Intra-Individual Differences in EMG and MMG during Maximal, Bilateral, Dynamic Leg Extensions

    The purpose of this study was to compare the composite, inter-individual, and intra-individual differences in the patterns of responses for electromyographic (EMG) and mechanomyographic (MMG) amplitude (AMP) and mean power frequency (MPF) during fatiguing, maximal, bilateral, isokinetic leg extension muscle actions. Thirteen recreationally active men (age = 21.7 ± 2.6 years; body mass = 79.8 ± 11.5 kg; height = 174.2 ± 12.7 cm) performed maximal, bilateral leg extensions at 180°/s until the torque values dropped to 50% of peak torque for two consecutive repetitions. The EMG and MMG signals from the vastus lateralis (VL) muscles of both limbs were recorded. Four 2 (leg) × 19 (time) repeated-measures ANOVAs were conducted to examine mean differences for EMG AMP, EMG MPF, MMG AMP, and MMG MPF between limbs, and polynomial regression analyses were performed to identify the patterns of the neuromuscular responses. The results indicated no significant differences between limbs for EMG AMP (p = 0.44), EMG MPF (p = 0.33), MMG AMP (p = 0.89), or MMG MPF (p = 0.52). Polynomial regression analyses demonstrated substantial inter-individual variability. Inferences regarding the patterns of neuromuscular responses to fatiguing, bilateral muscle actions should therefore be made on a subject-by-subject basis.
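The polynomial-regression step (fitting linear through cubic models to each subject's response and labeling the pattern) can be sketched as below. The R²-gain selection rule, the point count, and the synthetic response are illustrative assumptions, a stand-in for the study's actual significance testing:

```python
import numpy as np

def pattern_degree(time, y, max_degree=3, min_gain=0.01):
    """Fit 1st- to 3rd-degree polynomials and keep the lowest degree
    beyond which R^2 stops improving meaningfully (a simple stand-in
    for formal significance testing of the polynomial terms)."""
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    r2 = []
    for d in range(1, max_degree + 1):
        resid = y - np.polyval(np.polyfit(time, y, d), time)
        r2.append(1.0 - np.sum(resid ** 2) / ss_tot)
    degree = 1
    for d in range(1, max_degree):
        if r2[d] - r2[d - 1] > min_gain:
            degree = d + 1
    return degree

t = np.linspace(0, 1, 19)         # 19 time points, matching the design
y = 100 - 30 * t + 12 * t ** 2    # a quadratic-looking torque decline
print(pattern_degree(t, y))       # 2 -> "quadratic" pattern
```

Running this per subject and per signal (EMG/MMG AMP/MPF) is how pattern labels such as "linear decrease" or "quadratic" are assigned on a subject-by-subject basis.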

    Gaussian process autoregression for simultaneous proportional multi-modal prosthetic control with natural hand kinematics

    Matching the dexterity, versatility, and robustness of the human hand is still an unachieved goal in bionics, robotics, and neural engineering. A major limitation for hand prosthetics lies in the challenge of reliably decoding user intention from muscle signals when controlling complex robotic hands. Most commercially available prosthetic hands use muscle-related signals to decode a finite number of predefined motions, and some offer proportional control of open/close movements of the whole hand. Here, in contrast, we aim to offer users flexible control of the individual joints of their artificial hand. We propose a novel framework for decoding neural information that enables a user to independently control 11 joints of the hand in a continuous manner, much as we control our natural hands. Toward this end, we instructed six able-bodied subjects to perform everyday object-manipulation tasks combining both dynamic, free movements (e.g., grasping) and isometric force tasks (e.g., squeezing). We recorded the electromyographic and mechanomyographic activities of five extrinsic hand muscles in the forearm while simultaneously monitoring 11 joints of the hand and fingers using a sensorized data glove. Instead of learning just a direct mapping from current muscle activity to intended hand movement, we formulated a novel autoregressive approach that combines the context of previous hand movements with instantaneous muscle activity to predict future hand movements. Specifically, we evaluated a linear vector autoregressive moving-average model with exogenous inputs and a novel Gaussian process (GP) autoregressive framework to learn the continuous mapping from hand joint dynamics and muscle activity to intended hand movement. Our GP approach achieves high levels of performance (RMSE of 8°/s and ρ = 0.79).
Crucially, we use a small set of sensors to control a larger set of independently actuated degrees of freedom of a hand. This novel undersensored control is enabled by a nonlinear autoregressive continuous mapping between muscle activity and joint angles: the system evaluates muscle signals in the context of previous natural hand movements, which resolves ambiguities in situations where muscle signals alone cannot determine the correct action. GP autoregression is a particularly powerful approach because it not only makes a prediction based on context but also represents the uncertainty associated with its predictions, enabling the novel notion of risk-based control in neuroprosthetics. Our results suggest that GP autoregressive approaches with exogenous inputs lend themselves to natural, intuitive, and continuous control in neurotechnology, with a particular focus on prosthetic restoration of natural limb function, where high dexterity is required for complex movements.
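The autoregressive structure with exogenous inputs described above (previous joint samples as context, current muscle activity as the exogenous input) can be sketched with an off-the-shelf GP regressor. The lag, kernel, helper name `make_ar_dataset`, and synthetic data are all illustrative assumptions, not the authors' model:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def make_ar_dataset(joint, muscle, lag=3):
    """Stack the previous `lag` joint samples (context) with the
    current muscle features (exogenous input) to predict the next
    joint sample."""
    X, y = [], []
    for t in range(lag, len(joint)):
        X.append(np.r_[joint[t - lag:t], muscle[t]])
        y.append(joint[t])
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
n = 200
muscle = rng.normal(size=(n, 2))        # two toy muscle channels
joint = np.zeros(n)
for t in range(1, n):                   # synthetic joint dynamics
    joint[t] = 0.8 * joint[t - 1] + 0.5 * muscle[t, 0]

X, y = make_ar_dataset(joint, muscle)
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X[:150], y[:150])
mean, std = gp.predict(X[150:], return_std=True)  # prediction + uncertainty
print(mean.shape, std.shape)            # (47,) (47,)
```

The per-sample `std` is what makes risk-based control possible: a controller can slow down or fall back to a safe pose whenever the predictive uncertainty is high.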

    A Piezoresistive Sensor to Measure Muscle Contraction and Mechanomyography

    Measurement of muscle contraction is mainly achieved through electromyography (EMG) and is an area of interest for many biomedical applications, including prosthesis control and human-machine interfaces. However, EMG has some drawbacks, and there are alternative methods for measuring muscle activity, such as monitoring the mechanical variations that occur during contraction. In this study, a new, simple, non-invasive sensor based on a force-sensitive resistor (FSR), able to measure muscle contraction, is presented. The sensor, applied to the skin through a rigid dome, senses the mechanical force exerted by the underlying contracting muscles. Although FSR creep causes output drift, it was found that appropriate conditioning, which fixes the voltage across the FSR, reduces the drift and provides a voltage output proportional to force. In addition to the larger contraction signal, the sensor was able to detect the mechanomyogram (MMG), i.e., the small vibrations that occur during muscle contraction. The frequency response of the FSR sensor was found to be wide enough to measure the MMG correctly. Simultaneous recordings from the flexor carpi ulnaris showed a high correlation (Pearson's r > 0.9) between the FSR output and the EMG linear envelope. Preliminary validation tests on healthy subjects showed the ability of the FSR sensor, used in place of EMG, to proportionally control a hand prosthesis with comparable performance.
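Why fixing the voltage across the FSR helps can be shown with a two-line comparison: an FSR's conductance (1/R) is roughly proportional to applied force, so an inverting-amplifier read-out that holds the FSR at a fixed voltage gives an output proportional to conductance, unlike a plain voltage divider. The component values below are illustrative assumptions, not those of the paper's front-end:

```python
def divider_output(r_fsr, r_ref=10e3, v_cc=5.0):
    """Voltage-divider read-out: output is NOT linear in conductance,
    and the voltage across the FSR changes with force."""
    return v_cc * r_ref / (r_fsr + r_ref)

def transimpedance_output(r_fsr, r_fb=10e3, v_ref=1.0):
    """Inverting-amplifier read-out: v_ref is held across the FSR, so
    the output is v_ref * r_fb / r_fsr, i.e. proportional to
    conductance and hence (roughly) to applied force."""
    return v_ref * r_fb / r_fsr

# FSR conductance doubles -> transimpedance output doubles exactly
print(transimpedance_output(20e3), transimpedance_output(10e3))  # 0.5 1.0
# ...whereas the divider output does not double (2.5 -> 3.33)
print(divider_output(20e3), round(divider_output(10e3), 2))
```

Holding the FSR voltage constant also mitigates the creep-induced drift the abstract mentions, since the operating point of the resistor no longer shifts with the signal.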

    Fused mechanomyography and inertial measurement for human-robot interface

    Human-Machine Interfaces (HMIs) are the technology through which we interact with the ever-increasing quantity of smart devices surrounding us. The fundamental goal of an HMI is to facilitate robot control by uniting a human operator, as the supervisor, with a machine, as the task executor. Sensors, actuators, and onboard intelligence have not reached the point where robotic manipulators can function with complete autonomy, so some form of HMI is still necessary in unstructured environments. These include environments where direct human action is undesirable or infeasible, and situations where a robot must assist and/or interface with people. Contemporary literature has introduced concepts such as body-worn mechanical devices, instrumented gloves, inertial or electromagnetic motion-tracking sensors on the arms, head, or legs, electroencephalographic (EEG) brain-activity sensors, electromyographic (EMG) muscular-activity sensors, and camera-based (vision) interfaces to recognize hand gestures and/or track arm motions for assessment of operator intent and generation of robotic control signals. While these developments offer a wealth of future potential, their utility has been largely restricted to laboratory demonstrations in controlled environments owing to issues such as lack of portability and robustness and an inability to extract operator intent for both arm and hand motion. Wearable physiological sensors hold particular promise for capturing human intent and commands. EMG-based gesture recognition systems in particular have received significant attention in the recent literature. As wearable, pervasive devices, they offer benefits over camera-based or physical input systems in that they neither inhibit the user physically nor constrain the user to a location where the sensors are deployed. Despite these benefits, EMG alone has yet to demonstrate the capacity to recognize both gross movement (e.g. arm motion) and finer grasping (e.g. hand movement).
As such, many researchers have proposed fusing muscle activity (EMG) and motion tracking (e.g. inertial measurement) to combine arm motion and grasp intent as HMI input for manipulator control. However, such work has arguably reached a plateau, since EMG suffers from interference from environmental factors that cause signal degradation over time, demands an electrical connection with the skin, and has not demonstrated the capacity to function outside controlled environments for long periods of time. This thesis proposes a new form of gesture-based interface utilising a novel combination of inertial measurement units (IMUs) and mechanomyography (MMG) sensors. The modular system permits numerous configurations of IMUs to derive body kinematics in real time and uses this to convert arm movements into control signals. Additionally, bands containing six mechanomyography sensors were used to observe the muscular contractions in the forearm generated by specific hand motions. This combination of continuous and discrete control signals allows a large variety of smart devices to be controlled. Several pattern recognition methods were implemented to provide accurate decoding of the mechanomyographic information, including Linear Discriminant Analysis and Support Vector Machines. Based on these techniques, accuracies of 94.5% and 94.6%, respectively, were achieved for 12-gesture classification. In real-time tests, an accuracy of 95.6% was achieved in 5-gesture classification. It has previously been noted that MMG sensors are susceptible to motion-induced interference; this thesis established that arm pose also changes the measured signal, and it introduces a new method of fusing IMU and MMG data to provide classification that is robust to both sources of interference. Additionally, an improvement to orientation estimation and a new orientation-estimation algorithm are proposed.
These improvements to the robustness of the system provide the first solution able to reliably track both motion and muscle activity for extended periods for HMI use outside a clinical environment. Applications in robot teleoperation, in both real-world and virtual environments, were explored. With multiple degrees of freedom, robot teleoperation provides an ideal test platform for HMI devices, since it requires a combination of continuous and discrete control signals. The field of prosthetics also represents a unique challenge for HMI applications. Ideally, the sensor suite should be capable of detecting the muscular activity in the residual limb that naturally indicates intent to perform a specific hand pose, and of triggering that pose in the prosthetic device. Dynamic environmental conditions within a socket, such as skin impedance, have delayed the translation of gesture-control systems into prosthetic devices; mechanomyography sensors, however, are unaffected by such issues. There is huge potential for a system like this to be utilised as a controller as ubiquitous computing systems become more prevalent and the desire for a simple, universal interface increases. Such systems have the potential to significantly impact the quality of life of prosthetic users and others.
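The feature-then-classify pipeline described in the thesis (features from six forearm MMG channels fed to LDA and SVM classifiers) can be sketched as follows. The per-channel RMS feature, the gain patterns, and the synthetic windows are illustrative assumptions, not the thesis's actual dataset or feature set:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def window_features(win):
    """Per-channel RMS of one MMG window (a common feature choice)."""
    return np.sqrt(np.mean(win ** 2, axis=0))

# Toy data: 3 "gestures", 6 MMG channels, 40 windows per gesture;
# each gesture excites the channels with a different gain pattern.
X, y = [], []
for g in range(3):
    gain = 0.2 + np.roll(np.linspace(1.0, 0.2, 6), 2 * g)
    for _ in range(40):
        win = rng.normal(size=(100, 6)) * gain   # 100 samples x 6 channels
        X.append(window_features(win))
        y.append(g)
X, y = np.array(X), np.array(y)
idx = rng.permutation(len(y))                    # shuffle before splitting
X, y = X[idx], y[idx]

lda = LinearDiscriminantAnalysis().fit(X[:90], y[:90])
svm = SVC(kernel="rbf").fit(X[:90], y[:90])
acc_lda = lda.score(X[90:], y[90:])
acc_svm = svm.score(X[90:], y[90:])
print(acc_lda, acc_svm)   # both near 1.0 on this well-separated toy data
```

In the fused system, the IMU stream supplies the continuous kinematic channel while a classifier like either of these turns MMG windows into the discrete gesture channel.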

    Biosignal‐based human–machine interfaces for assistance and rehabilitation: a survey

    By definition, a Human–Machine Interface (HMI) enables a person to interact with a device. Starting from elementary equipment, the recent development of novel techniques and unobtrusive devices for biosignal monitoring has paved the way for a new class of HMIs, which take such biosignals as inputs to control various applications. The current survey reviews the large literature of the last two decades on biosignal‐based HMIs for assistance and rehabilitation, to outline the state of the art and to identify emerging technologies and potential future research trends. PubMed and other databases were surveyed using specific keywords. The retrieved studies were screened at three levels (title, abstract, full text), and eventually 144 journal papers and 37 conference papers were included. Four macrocategories were considered to classify the different biosignals used for HMI control: biopotential, muscle mechanical motion, body motion, and their combinations (hybrid systems). The HMIs were also classified by target application into six categories: prosthetic control, robotic control, virtual reality control, gesture recognition, communication, and smart environment control. An ever-growing number of publications has been observed over the last few years. Most of the studies (about 67%) pertain to the assistive field, while 20% relate to rehabilitation and 13% to both assistance and rehabilitation. A moderate increase can be observed in studies focusing on robotic control, prosthetic control, and gesture recognition in the last decade, whereas studies on the other targets experienced only a small increase. Biopotentials are no longer the leading control signals, and the use of muscle mechanical motion signals has risen considerably, especially in prosthetic control. Hybrid technologies are promising, as they could lead to higher performance.
However, they also increase HMIs' complexity, so their usefulness should be carefully evaluated for the specific application.