247 research outputs found

    A quantitative taxonomy of human hand grasps

    Background: Proper modeling of human grasping and hand movements is fundamental to robotics, prosthetics, physiology and rehabilitation. The taxonomies of hand grasps proposed in the scientific literature so far are based on qualitative analyses of the movements and are therefore usually not quantitatively justified. Methods: This paper presents, to the best of our knowledge, the first quantitative taxonomy of hand grasps based on biomedical measurements. The taxonomy is built on electromyography and kinematic data recorded from 40 healthy subjects performing 20 unique hand grasps. For each subject, a set of hierarchical trees is computed for several signal features. The trees are then combined, first into modality-specific (i.e. muscular and kinematic) taxonomies of hand grasps and then into a general quantitative taxonomy of hand movements. The modality-specific taxonomies provide similar results despite describing different parameters of hand movements, one muscular and the other kinematic. Results: The general taxonomy merges the kinematic and muscular descriptions into a comprehensive hierarchical structure. The results clarify what has been proposed in the literature so far and partially confirm the qualitative parameters used to create previous taxonomies of hand grasps. According to the results, hand movements can be divided into five categories defined by overall grasp shape, finger positioning and muscular activation. Part of the results appears qualitatively in accordance with previous descriptions of kinematic hand grasping synergies. Conclusions: The taxonomy of hand grasps proposed in this paper clarifies, with quantitative measurements, what has previously been proposed in the field on a qualitative basis, and thus has a potential impact on several scientific fields.
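    The clustering step the abstract describes (hierarchical trees over per-grasp features, later merged into a taxonomy) can be illustrated with a minimal sketch. All data below are synthetic placeholders, not the study's EMG/kinematic measurements, and the two-prototype setup is an assumption made purely so the tree has an obvious split.

```python
# Minimal sketch: hierarchical clustering over per-grasp feature vectors.
# The feature values are synthetic, not from the study.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# 20 grasps x 10 features, generated around two artificial "grasp shape"
# prototypes so the hierarchy has a clear two-way split.
proto_a = rng.normal(0.0, 1.0, 10)
proto_b = rng.normal(3.0, 1.0, 10)
grasps = np.vstack(
    [proto_a + 0.1 * rng.normal(size=10) for _ in range(10)]
    + [proto_b + 0.1 * rng.normal(size=10) for _ in range(10)]
)

# Average-linkage agglomerative clustering builds the hierarchical tree;
# cutting it at two clusters recovers the two prototype families.
Z = linkage(grasps, method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
```

    Cutting the same tree at a different level would yield finer-grained movement categories, which is the sense in which a dendrogram doubles as a taxonomy.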

    A Review of EMG Techniques for Detection of Gait Disorders

    Electromyography (EMG) is a commonly used technique to record myoelectric signals, i.e., motor neuron signals that originate from the central nervous system (CNS) and synergistically activate groups of muscles, resulting in movement. EMG patterns underlying movement, recorded using surface or needle electrodes, can be used to detect movement and gait abnormalities. In this review article, we examine EMG signal processing techniques that have been applied to diagnosing gait disorders, ranging from traditional statistical tests to complex machine learning algorithms. We particularly emphasize those techniques that are promising for clinical applications. This study is pertinent to both the medical and engineering research communities and is potentially helpful in advancing diagnostics and designing rehabilitation devices.
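    As a concrete example of the low-level processing such pipelines start from, here is a simplified linear-envelope extraction (an illustrative sketch; real gait pipelines typically band-pass filter and normalize before enveloping):

```python
import numpy as np

def emg_envelope(emg, win=50):
    """Full-wave rectify, then smooth with a moving average.
    Simplified linear envelope; the window length is an assumed value."""
    rect = np.abs(emg - np.mean(emg))      # remove DC offset, rectify
    kernel = np.ones(win) / win
    return np.convolve(rect, kernel, mode="same")

# Synthetic trace: baseline noise with a high-amplitude burst in the middle,
# standing in for a muscle activation during a gait cycle.
rng = np.random.default_rng(1)
emg = rng.normal(0.0, 0.05, 1000)
emg[400:600] += rng.normal(0.0, 1.0, 200)
env = emg_envelope(emg)
```

    The resulting envelope is the kind of smoothed activation profile that both the statistical tests and the machine learning methods surveyed here take as input.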

    Interpreting Deep Learning Features for Myoelectric Control: A Comparison with Handcrafted Features

    The research in myoelectric control systems primarily focuses on extracting discriminative representations from the electromyographic (EMG) signal by designing handcrafted features. Recently, deep learning techniques have been applied to the challenging task of EMG-based gesture recognition. The adoption of these techniques slowly shifts the focus from feature engineering to feature learning. However, the black-box nature of deep learning makes it hard to understand the type of information learned by the network and how it relates to handcrafted features. Additionally, due to the high variability in EMG recordings between participants, deep features tend to generalize poorly across subjects using standard training methods. Consequently, this work introduces a new multi-domain learning algorithm, named ADANN, which significantly enhances (p=0.00004) inter-subject classification accuracy by an average of 19.40% compared to standard training. Using ADANN-generated features, the main contribution of this work is to provide the first topological data analysis of EMG-based gesture recognition for the characterisation of the information encoded within a deep network, using handcrafted features as landmarks. This analysis reveals that handcrafted features and the learned features (in the earlier layers) both try to discriminate between all gestures, but do not encode the same information to do so. Furthermore, convolutional network visualization techniques reveal that learned features tend to ignore the most activated channel during gesture contraction, which is in stark contrast with the prevalence of handcrafted features designed to capture amplitude information. Overall, this work paves the way for hybrid feature sets by providing a clear guideline of the complementary information encoded within learned and handcrafted features.
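    The amplitude-oriented handcrafted features used as landmarks in such analyses are typified by the classic Hudgins time-domain set. A minimal implementation (the dead-band threshold is an assumed value, not one from this work):

```python
import numpy as np

def hudgins_features(x, eps=0.01):
    """Classic handcrafted time-domain EMG features (Hudgins set):
    mean absolute value, waveform length, zero crossings, and
    slope sign changes, with a small dead-band threshold eps."""
    mav = np.mean(np.abs(x))                         # amplitude information
    wl = np.sum(np.abs(np.diff(x)))                  # waveform length
    zc = np.sum((x[:-1] * x[1:] < 0)
                & (np.abs(x[:-1] - x[1:]) > eps))    # zero crossings
    d = np.diff(x)
    ssc = np.sum((d[:-1] * d[1:] < 0)
                 & ((np.abs(d[:-1]) > eps)
                    | (np.abs(d[1:]) > eps)))        # slope sign changes
    return np.array([mav, wl, zc, ssc])
```

    The first two features capture exactly the amplitude information that, per the visualization results above, the learned features tend to de-emphasize.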

    Online Muscle Activation Onset Detection Using Likelihood of Conditional Heteroskedasticity of Electromyography Signals

    Surface electromyography (sEMG) signals are crucial in developing human-machine interfaces, as they contain rich information about human neuromuscular activities. Objective: The real-time, accurate detection of muscle activation onset (MAO) is significant for EMG-triggered control strategies in embedded applications like prostheses and exoskeletons. Methods: This paper investigates sEMG signals using the generalized autoregressive conditional heteroskedasticity (GARCH) model, focusing on variance. A novel feature, the likelihood of conditional heteroskedasticity (LCH) extracted from the maximum likelihood estimation of GARCH parameters, is proposed. This feature effectively distinguishes signal from noise based on heteroskedasticity, allowing for the detection of MAO through the LCH feature and a basic threshold classifier. For online calculation, the model parameter estimation is simplified, enabling direct calculation of the LCH value using fixed parameters. Results: The proposed method was validated on two open-source datasets and demonstrated superior performance over existing methods. The mean absolute error of onset detection, compared with visual detection results, is approximately 65 ms under online conditions, showcasing high accuracy, universality, and noise insensitivity. Conclusion: The results indicate that the proposed method using the LCH feature from the GARCH model is highly effective for real-time detection of muscle activation onset in sEMG signals. Significance: This novel approach shows great potential for real-world applications, reflecting its superior performance in accuracy, universality, and insensitivity to noise.
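    The online variant described (fixed GARCH parameters plus a threshold classifier) can be loosely sketched as follows. The recursion coefficients and threshold below are assumptions chosen for illustration, not the paper's estimated values, and the decision statistic here is the conditional variance itself rather than the paper's likelihood-based LCH feature:

```python
import numpy as np

def garch_variance(x, omega=1e-4, alpha=0.2, beta=0.7):
    # GARCH(1,1) conditional-variance recursion with fixed (assumed)
    # parameters, mirroring the idea of skipping per-window parameter
    # estimation in the online setting.
    sig2 = np.empty(len(x))
    sig2[0] = np.var(x[:100])  # initialize from a baseline noise segment
    for t in range(1, len(x)):
        sig2[t] = omega + alpha * x[t - 1] ** 2 + beta * sig2[t - 1]
    return sig2

def detect_onset(x, k=8.0):
    # Flag onset where the conditional variance exceeds k times the
    # resting baseline; returns -1 if no onset is found.
    sig2 = garch_variance(x)
    base = np.median(sig2[:100])
    above = sig2 > k * base
    return int(np.argmax(above)) if above.any() else -1

# Synthetic sEMG-like trace: rest noise, then a contraction at sample 500.
rng = np.random.default_rng(2)
x = rng.normal(0.0, 0.05, 1000)
x[500:] = rng.normal(0.0, 1.0, 500)
onset = detect_onset(x)
```

    Because the recursion is a single multiply-accumulate per sample, this style of detector is cheap enough for the embedded prosthesis/exoskeleton controllers the paper targets.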

    Simultaneous prediction of wrist/hand motion via wearable ultrasound sensing


    STUDY OF HAND GESTURE RECOGNITION AND CLASSIFICATION

    To recognize different hand gestures and achieve efficient classification to understand static and dynamic hand movements used for communication. Static and dynamic hand movements are first captured using gesture recognition devices including the Kinect device, hand movement sensors, connecting electrodes, and accelerometers. These gestures are processed using hand gesture recognition algorithms such as multivariate fuzzy decision trees, hidden Markov models (HMM), dynamic time warping, latent regression forests, support vector machines, and surface electromyography. Hand movements made by single and double hands are captured by gesture capture devices under proper illumination conditions. The captured gestures are processed for occlusions and close finger interactions to identify the correct gesture, classify it, and ignore intermittent gestures. Real-time hand gesture recognition needs robust algorithms such as HMM to detect only the intended gesture. Classified gestures are then compared for effectiveness against trained and tested standard datasets such as sign language alphabets and the KTH dataset. Hand gesture recognition plays a very important role in applications such as sign language recognition, robotics, television control, rehabilitation, and music orchestration.
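    Of the algorithms listed, dynamic time warping is the simplest to show concretely: it aligns two gesture trajectories that unfold at different speeds. A textbook O(nm) implementation for 1-D trajectories (multi-dimensional gestures would use a vector norm for the local cost):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D trajectories,
    via the classic cumulative-cost dynamic program."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of: insertion, deletion, or match.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

    A slowed-down copy of a gesture (repeated samples) still matches its template at zero cost, which is why DTW is robust to variation in execution speed.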

    Vector Autoregressive Hierarchical Hidden Markov Models for Extracting Finger Movements Using Multichannel Surface EMG Signals

    We present a novel computational technique intended for the robust and adaptable control of a multifunctional prosthetic hand using multichannel surface electromyography. The initial processing of the input data was oriented towards extracting relevant time-domain features of the EMG signal. Following the feature calculation, a piecewise modeling of the multidimensional EMG feature dynamics using vector autoregressive models was performed. The next step included the implementation of hierarchical hidden semi-Markov models to capture transitions between piecewise segments of movements and between different movements. Lastly, inversion of the model using an approximate Bayesian inference scheme served as the classifier. The effectiveness of the novel algorithms was assessed against methods commonly used for real-time classification of EMG in prosthesis control applications. The obtained results show that using hidden semi-Markov models as the top layer, instead of hidden Markov models, ranks top in all relevant metrics among the tested combinations. The choice of the presented methodology for the control of a prosthetic hand is also supported by the equal or lower computational complexity required compared to other algorithms, which enables implementation on low-power microcontrollers, and by the ability to adapt to user preferences for executing individual movements during activities of daily living.
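    The piecewise VAR modeling at the core of this pipeline reduces, per segment, to an ordinary least-squares fit. A minimal first-order sketch (the paper's feature dimensionality and model order are not reproduced here; the transition matrix and trajectory are synthetic):

```python
import numpy as np

def fit_var1(X):
    # Least-squares fit of a first-order VAR:  x_t = A x_{t-1} + e_t.
    # Rows of X are time steps, columns are (EMG feature) dimensions.
    past, future = X[:-1], X[1:]
    A_T, *_ = np.linalg.lstsq(past, future, rcond=None)
    return A_T.T

# Noiseless synthetic trajectory from a known transition matrix,
# so the fit should recover it exactly (up to rounding).
A_true = np.array([[0.9, 0.1],
                   [0.0, 0.8]])
X = np.empty((30, 2))
X[0] = [1.0, 0.5]
for t in range(1, 30):
    X[t] = A_true @ X[t - 1]

A_est = fit_var1(X)
```

    In the full method, one such matrix is fitted per movement segment, and the hidden semi-Markov layer then models which segment (and which movement) is active over time.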

    Protective Behavior Detection in Chronic Pain Rehabilitation: From Data Preprocessing to Learning Model

    Chronic pain (CP) rehabilitation extends beyond physiotherapist-directed clinical sessions and primarily takes place in people's everyday lives. Self-directed rehabilitation is difficult because patients need to deal with both their pain and the mental barriers that pain imposes on routine functional activities. Physiotherapists adjust patients' exercise plans and advice in clinical sessions based on the amount of protective behavior (i.e., a sign of anxiety about movement) displayed by the patient. The goal of such adjustments is to help patients overcome their fears and maintain physical functioning. Unfortunately, physiotherapists' support is absent during the self-directed rehabilitation (also called self-management) that people conduct in their daily lives. To be effective, technology for chronic-pain self-management should be able to detect protective behavior in order to facilitate personalized support. To that end, this thesis addresses the key challenges of ubiquitous automatic protective behavior detection (PBD). Our investigation takes advantage of an available dataset (EmoPain) containing movement and muscle activity data of healthy people and people with CP engaged in typical everyday activities. First, we examine data augmentation methods and segmentation parameters using various vanilla neural networks in order to enable activity-independent PBD within pre-segmented activity instances. Second, by incorporating temporal and bodily attention mechanisms, we improve PBD performance and support the theoretical/clinical understanding that the attention of a person with CP shifts between body parts perceived as risky during feared movements. Third, we use human activity recognition (HAR) to improve continuous PBD across data of various activity types. The approaches above are validated against a ground truth established by majority voting among expert annotators. Unfortunately, such majority-voted ground truth causes information loss, whereas learning directly from all annotators is vulnerable to noise from disagreements. As the final study, we improve learning from multiple annotators by leveraging agreement information for regularization.
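    One simple way to retain disagreement information instead of collapsing it by majority vote is to train against the annotators' empirical vote distribution. This is an illustrative sketch of that general idea, not the thesis's agreement-based regularization scheme:

```python
import numpy as np

def soft_labels(votes, n_classes):
    """Per-frame soft labels from multiple annotators: the empirical
    vote distribution. Full agreement yields a one-hot target, while
    disagreement yields a flatter target, so ambiguous frames give a
    softer training signal than hard majority voting would."""
    counts = np.zeros((len(votes), n_classes))
    for i, frame_votes in enumerate(votes):
        for v in frame_votes:
            counts[i, v] += 1
        counts[i] /= counts[i].sum()
    return counts

# Two frames, three annotators, binary protective/not-protective labels:
# unanimous agreement on frame 0, a 2-vs-1 split on frame 1.
S = soft_labels([[1, 1, 1], [0, 1, 1]], n_classes=2)
```

    Training a classifier with cross-entropy against such targets down-weights contested frames automatically, which is the kind of information a majority-voted ground truth discards.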