
    Fused mechanomyography and inertial measurement for human-robot interface

    Human-Machine Interfaces (HMI) are the technology through which we interact with the ever-increasing quantity of smart devices surrounding us. The fundamental goal of an HMI is to facilitate robot control by uniting a human operator, as the supervisor, with a machine, as the task executor. Sensors, actuators, and onboard intelligence have not reached the point where robotic manipulators can function with complete autonomy, and therefore some form of HMI is still necessary in unstructured environments. These may include environments where direct human action is undesirable or infeasible, and situations where a robot must assist and/or interface with people. Contemporary literature has introduced concepts such as body-worn mechanical devices, instrumented gloves, inertial or electromagnetic motion-tracking sensors on the arms, head, or legs, electroencephalographic (EEG) brain-activity sensors, electromyographic (EMG) muscular-activity sensors, and camera-based (vision) interfaces to recognise hand gestures and/or track arm motions for assessment of operator intent and generation of robotic control signals. While these developments offer a wealth of future potential, their utility has been largely restricted to laboratory demonstrations in controlled environments due to issues such as a lack of portability and robustness, and an inability to extract operator intent for both arm and hand motion. Wearable physiological sensors hold particular promise for the capture of human intent/command. EMG-based gesture recognition systems in particular have received significant attention in recent literature. As wearable pervasive devices, they offer benefits over camera or physical input systems in that they neither inhibit the user physically nor constrain the user to a location where the sensors are deployed. Despite these benefits, EMG alone has yet to demonstrate the capacity to recognise both gross movement (e.g. arm motion) and finer grasping (e.g. hand movement). As such, many researchers have proposed fusing muscle activity (EMG) and motion tracking (e.g. inertial measurement) to combine arm motion and grasp intent as HMI input for manipulator control. However, such work has arguably reached a plateau, since EMG suffers from interference from environmental factors which cause signal degradation over time, demands an electrical connection with the skin, and has not demonstrated the capacity to function outside controlled environments for long periods of time. This thesis proposes a new form of gesture-based interface utilising a novel combination of inertial measurement units (IMUs) and mechanomyography (MMG) sensors. The modular system permits numerous configurations of IMUs to derive body kinematics in real time and uses this to convert arm movements into control signals. Additionally, bands containing six mechanomyography sensors were used to observe muscular contractions in the forearm which are generated by specific hand motions. This combination of continuous and discrete control signals allows a large variety of smart devices to be controlled. Several methods of pattern recognition were implemented to provide accurate decoding of the mechanomyographic information, including Linear Discriminant Analysis and Support Vector Machines. Based on these techniques, accuracies of 94.5% and 94.6% respectively were achieved for 12-gesture classification. In real-time tests, an accuracy of 95.6% was achieved in 5-gesture classification.
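    As a concrete illustration of the pattern-recognition step described above, the sketch below trains and cross-validates LDA and SVM classifiers on time-domain features extracted from windowed multi-channel MMG recordings. The feature set, window shapes, and placeholder data are assumptions for illustration, not the thesis's implementation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def mmg_features(window: np.ndarray) -> np.ndarray:
    """Time-domain features per channel (window: samples x channels)."""
    mav = np.mean(np.abs(window), axis=0)                  # mean absolute value
    rms = np.sqrt(np.mean(window ** 2, axis=0))            # root mean square
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)   # waveform length
    return np.concatenate([mav, rms, wl])

# Placeholder data standing in for segmented 6-channel MMG recordings
# labelled with one of 12 gestures (shapes are illustrative).
rng = np.random.default_rng(0)
windows = rng.standard_normal((240, 200, 6))
labels = rng.integers(0, 12, size=240)
X = np.stack([mmg_features(w) for w in windows])

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="rbf", C=10.0))]:
    acc = cross_val_score(clf, X, labels, cv=5).mean()
    print(f"{name} cross-validated accuracy: {acc:.3f}")
```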
    It has previously been noted that MMG sensors are susceptible to motion-induced interference. This thesis additionally establishes that arm pose changes the measured signal. A new method of fusing IMU and MMG data is introduced to provide classification that is robust to both of these sources of interference. Additionally, an improvement to orientation estimation and a new orientation estimation algorithm are proposed. These improvements to the robustness of the system provide the first solution able to reliably track both motion and muscle activity for extended periods of time for HMI outside a clinical environment. Applications in robot teleoperation in both real-world and virtual environments were explored. With multiple degrees of freedom, robot teleoperation provides an ideal test platform for HMI devices, since it requires a combination of continuous and discrete control signals. The field of prosthetics also represents a unique challenge for HMI applications. In an ideal situation, the sensor suite should be capable of detecting the muscular activity in the residual limb which is naturally indicative of intent to perform a specific hand pose, and of triggering this pose in the prosthetic device. Dynamic environmental conditions within a socket, such as skin impedance, have delayed the translation of gesture control systems into prosthetic devices; mechanomyography sensors, however, are unaffected by such issues. There is huge potential for a system like this to be utilised as a controller as ubiquitous computing systems become more prevalent and the desire for a simple, universal interface increases. Such systems have the potential to impact significantly on the quality of life of prosthetic users and others.
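    One plausible reading of the IMU/MMG fusion idea, sketched below, is feature-level fusion: an arm-pose term and a motion-intensity term derived from the IMU are appended to the MMG feature vector, so that a classifier trained across poses and movement conditions can account for pose- and motion-induced signal changes. The function and its inputs are hypothetical; the thesis's actual fusion method may differ.

```python
import numpy as np

def fused_features(mmg_feats: np.ndarray,
                   orientation_quat: np.ndarray,
                   gyro_window: np.ndarray) -> np.ndarray:
    """Append IMU-derived context to the MMG feature vector:
    - orientation_quat: arm-pose quaternion (4,), letting the classifier
      learn pose-dependent changes in the MMG signal;
    - a scalar motion-energy term from the gyroscope window (samples x 3),
      making motion-induced interference an observable input."""
    motion_energy = np.sqrt(np.mean(np.sum(gyro_window ** 2, axis=1)))
    return np.concatenate([mmg_feats, orientation_quat, [motion_energy]])
```

    Trained on data collected across arm poses and movement conditions, a classifier over such fused features can learn to discount the interference rather than misclassify it.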

    Machine learning-based dexterous control of hand prostheses

    Upper-limb myoelectric prostheses are controlled by muscle activity information recorded on the skin surface using electromyography (EMG). Intuitive prosthetic control can be achieved by deploying statistical and machine learning (ML) tools to decipher the user’s movement intent from EMG signals. This thesis proposes various means of advancing the capabilities of non-invasive, ML-based control of myoelectric hand prostheses. Two main directions are explored, namely classification-based hand grip selection and proportional finger position control using regression methods. Several practical aspects are considered with the aim of maximising the clinical impact of the proposed methodologies, which are evaluated with offline analyses as well as real-time experiments involving both able-bodied and transradial amputee participants. It has been generally accepted that the EMG signal may not always be a reliable source of control information for prostheses, mainly due to its stochastic and non-stationary properties. One particular issue associated with the use of surface EMG signals for upper-extremity myoelectric control is the limb position effect, that is, the lack of decoding generalisation under novel arm postures. To address this challenge, it is proposed to make concurrent use of EMG sensors and inertial measurement units (IMUs). It is demonstrated that this can lead to a significant improvement in both classification accuracy (CA) and real-time prosthetic control performance. Additionally, the relationship between surface EMG and inertial measurements is investigated, and it is found that these modalities are partially related because they reflect different manifestations of the same underlying phenomenon, namely muscular activity. In the field of upper-limb myoelectric control, the linear discriminant analysis (LDA) classifier has arguably been the most popular choice for movement intent decoding. This is mainly attributable to its ease of implementation, low computational requirements, and acceptable decoding performance. Nevertheless, this particular method makes a strong fundamental assumption, namely that data observations from different classes share a common covariance structure. Although this assumption may often be violated in practice, the performance of the method has been found comparable to that of more sophisticated algorithms. In this thesis, it is proposed to remove this assumption by making use of general class-conditional Gaussian models and appropriate regularisation to avoid overfitting. By performing an exhaustive analysis on benchmark datasets, it is demonstrated that the proposed approach based on regularised discriminant analysis (RDA) can offer an impressive increase in decoding accuracy. By combining RDA classification with a novel confidence-based rejection policy intended to minimise the rate of unintended hand motions, it is shown to be feasible to attain robust myoelectric grip control of a prosthetic hand using a single pair of surface EMG-IMU sensors. Most present-day commercial prosthetic hands offer the mechanical abilities to support individual digit control; however, classification-based methods can only produce pre-defined grip patterns, a feature which results in prosthesis under-actuation. Although classification-based grip control can provide a great advantage over conventional strategies, it is far from being intuitive and natural to the user.
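    Below is a minimal sketch of class-conditional Gaussian classification with covariance regularisation plus a confidence-based rejection rule, in the spirit of the RDA-with-rejection scheme described above. scikit-learn's QuadraticDiscriminantAnalysis with reg_param shrinks each class covariance toward the identity, which is one form of regularised discriminant analysis, not necessarily the thesis's exact formulation; the rejection threshold is likewise illustrative.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

def rda_with_rejection(X_train, y_train, X_test, reg_param=0.1, threshold=0.8):
    """Class-conditional Gaussian classifier with shrinkage of each class
    covariance toward the identity (reg_param), plus rejection of
    low-confidence predictions. Integer class labels are assumed; -1
    denotes 'no action', i.e. the prosthesis holds its current state."""
    clf = QuadraticDiscriminantAnalysis(reg_param=reg_param)
    clf.fit(X_train, y_train)
    proba = clf.predict_proba(X_test)
    decisions = clf.classes_[np.argmax(proba, axis=1)]
    decisions[np.max(proba, axis=1) < threshold] = -1  # reject uncertain windows
    return decisions
```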
    A potential way of approaching the level of dexterity enjoyed by the human hand is via continuous and individual control of multiple joints. To this end, an exhaustive analysis is performed on the feasibility of reconstructing multidimensional hand joint angles from surface EMG signals. A supervised method based on the eigenvalue formulation of multiple linear regression (MLR) is then proposed to simultaneously reduce the dimensionality of the input and output variables, and its performance is compared to that of the typically used unsupervised methods, which may produce suboptimal results in this context. Finally, an experimental paradigm is designed to evaluate the efficacy of the proposed finger position control scheme during real-time prosthesis use. This thesis provides insight into the capacity of deploying a range of computational methods for non-invasive myoelectric control. It contributes towards developing intuitive interfaces for the dexterous control of multi-articulated prosthetic hands by transradial amputees.
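    The supervised dimensionality-reduction idea described above can be illustrated with the textbook reduced-rank formulation of multiple linear regression, sketched below: fit ordinary least squares, then constrain the coefficients to the top output directions obtained from a singular value decomposition of the fitted values. The thesis's exact eigenvalue formulation may differ in details such as weighting.

```python
import numpy as np

def reduced_rank_regression(X: np.ndarray, Y: np.ndarray, rank: int) -> np.ndarray:
    """X: (n, p) EMG features; Y: (n, q) hand joint angles; rank <= min(p, q).
    Returns a (p, q) coefficient matrix constrained to the given rank."""
    B_ols = np.linalg.pinv(X) @ Y                    # unconstrained least squares
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    V = Vt[:rank].T                                  # top output directions
    return B_ols @ V @ V.T                           # project onto rank-r subspace
```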

    Technology for monitoring everyday prosthesis use: a systematic review

    BACKGROUND Understanding how prostheses are used in everyday life is central to the design, provision and evaluation of prosthetic devices and associated services. This paper reviews the scientific literature on methodologies and technologies that have been used to assess the daily use of both upper- and lower-limb prostheses. It discusses the types of studies that have been undertaken, the technologies used to monitor physical activity, the benefits of monitoring daily living and the barriers to long-term monitoring. METHODS A systematic literature search was conducted in PubMed, Web of Science, Scopus, CINAHL and EMBASE for studies that monitored the activity of prosthesis users during daily living. RESULTS 60 lower-limb studies and 9 upper-limb studies were identified for inclusion in the review. The first studies in the lower-limb field date from the 1990s and the number has increased steadily since the early 2000s. In contrast, studies in the upper-limb field have only begun to emerge over the past few years. The early lower-limb studies focused on the development or validation of actimeters, algorithms and/or scores for activity classification. However, most of the recent lower-limb studies used activity monitoring to compare prosthetic components. The lower-limb studies mainly used step counts as their only measure of activity, focusing on the amount of activity rather than the type and quality of movements. In comparison, the small number of upper-limb studies were fairly evenly spread between the development of algorithms, comparison of everyday activity to clinical scores, and comparison of different prosthesis-user populations. Most upper-limb papers reported the degree of symmetry in activity levels between the arm with the prosthesis and the intact arm. CONCLUSIONS Activity monitoring technology, used in conjunction with clinical scores and user feedback, offers significant insights into how prostheses are used and whether they meet the user’s requirements. However, the cost, limited battery life and lack of availability in many countries mean that using sensors to understand the daily use of prostheses and the types of activity being performed has not yet become feasible standard clinical practice. This review provides recommendations for the research and clinical communities to advance this area for the benefit of prosthesis users.
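    As an illustration of the symmetry measure many upper-limb monitoring studies report, the sketch below counts active epochs from accelerometer magnitudes on each side and forms their ratio. The epoch length, threshold, and counting rule are assumptions for illustration, not drawn from any specific reviewed study.

```python
import numpy as np

def activity_counts(accel: np.ndarray, fs: int, threshold: float = 0.05) -> int:
    """Count one-second epochs whose mean (gravity-removed) acceleration
    magnitude, in g, exceeds a movement threshold. accel: (samples, 3)."""
    mag = np.linalg.norm(accel, axis=1)
    n_epochs = len(mag) // fs
    epochs = mag[: n_epochs * fs].reshape(n_epochs, fs)
    return int(np.sum(epochs.mean(axis=1) > threshold))

def symmetry_ratio(prosthesis_accel, intact_accel, fs):
    """Ratio of active epochs on the prosthesis side to the intact side."""
    return activity_counts(prosthesis_accel, fs) / max(1, activity_counts(intact_accel, fs))
```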

    Multi-modal EMG-based hand gesture classification for the control of a robotic prosthetic hand

    Upper-limb myoelectric prosthesis control utilises electromyography (EMG) signals as input and applies statistical and machine learning techniques to intuitively identify the user’s intended grasp. Surface EMG signals recorded with electrodes attached to the user’s skin have been successfully used for prosthesis control in controlled lab conditions for decades. However, due to the stochastic and non-stationary nature of the EMG signal, clinical use of pattern-recognition myoelectric control in everyday-life conditions is limited. This thesis performs an extensive literature review presenting the main causes of the drift of EMG signals over time, ways of detecting such drifts, and possible techniques to counteract their effects in the application of upper-limb prostheses. Three approaches are investigated to provide more robust classification performance under conditions of EMG signal drift: improving the classifier, incorporating extra sensory modalities, and utilising transfer learning techniques to improve between-subjects classification performance. Linear Discriminant Analysis (LDA) is the baseline algorithm in myoelectric grasp classification applications, providing good performance with low computational requirements. However, it assumes Gaussian distributions with a covariance shared between classes, and its performance relies on hand-engineered features. Deep Neural Networks (DNNs) have the advantage of learning the features while training the classifier. In this thesis, two deep learning models have been successfully implemented for the grasp classification of EMG signals, achieving better performance than the baseline LDA algorithm. Moreover, deep neural networks provide a convenient basis for transferring learnt knowledge and improving the adaptation capabilities of the classifier. An adaptation approach is suggested and tested on the inter-subject classification task, demonstrating better performance when utilising pre-trained neural networks. Finally, research has suggested that adding extra sensory modalities alongside EMG, such as Inertial Measurement Unit (IMU) data, improves the classification performance of a classifier in comparison to utilising only EMG data for training. In this thesis, ways of incorporating different sensory modalities have been suggested, both for the LDA classifier and the DNNs, demonstrating the benefit of a multi-modal grasp classifier.
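    A minimal sketch of the transfer-learning idea described above: start from a network pre-trained on data from other subjects, freeze its feature layers, and retrain only the classification head on a small amount of data from a new subject. The architecture, layer sizes, and training loop are illustrative assumptions rather than the models implemented in the thesis.

```python
import torch
import torch.nn as nn

class GraspNet(nn.Module):
    """Small feed-forward grasp classifier over fused EMG(+IMU) features."""
    def __init__(self, n_features: int, n_grasps: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU())
        self.head = nn.Linear(64, n_grasps)

    def forward(self, x):
        return self.head(self.features(x))

def adapt_to_new_subject(pretrained: GraspNet,
                         X_new: torch.Tensor,
                         y_new: torch.Tensor,
                         epochs: int = 50) -> GraspNet:
    for p in pretrained.features.parameters():
        p.requires_grad = False                 # keep the shared representation
    optimiser = torch.optim.Adam(pretrained.head.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        optimiser.zero_grad()
        loss = loss_fn(pretrained(X_new), y_new)
        loss.backward()
        optimiser.step()
    return pretrained
```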

    Longitudinal tracking of physiological state with electromyographic signals.

    Electrophysiological measurements have been used in recent history to classify instantaneous physiological configurations, e.g., hand gestures. This work investigates the feasibility of working with changes in physiological configurations over time (i.e., longitudinally) using a variety of algorithms from the machine learning domain. We demonstrate a high degree of classification accuracy for a binary classification problem derived from electromyography measurements taken before and after a 35-day bedrest. The problem difficulty is increased with a more dynamic experiment testing for changes in astronaut sensorimotor performance, in which electromyography and force plate measurements are taken before, during, and after a jump from a small platform. A LASSO regularization is performed to observe changes in the relationship between electromyography features and force plate outcomes. SVM classifiers are employed to correctly identify the times at which these experiments were performed, which is important as these indicate a trajectory of adaptation.
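    The LASSO step described above can be sketched as an L1-penalised regression of a force-plate outcome on EMG features; comparing which features retain non-zero weights before and after bedrest is one way to observe changes in the EMG-force relationship. Variable names and the penalty weight are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_emg_to_force_model(emg_features: np.ndarray,
                              force_outcome: np.ndarray,
                              alpha: float = 0.05):
    """L1-penalised regression of a force-plate outcome on EMG features.
    Returns the fitted model and the indices of retained features."""
    model = Lasso(alpha=alpha).fit(emg_features, force_outcome)
    selected = np.flatnonzero(model.coef_)  # non-zero weights survive the penalty
    return model, selected
```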

    Usability of Upper Limb Electromyogram Features as Muscle Fatigue Indicators for Better Adaptation of Human-Robot Interactions

    Human-robot interaction (HRI) is the process of humans and robots working together to accomplish a goal, with the objective of making the interaction beneficial to humans. Closed-loop control and adaptability to individuals are some of the important acceptance criteria for human-robot interaction systems. While designing an HRI interaction scheme, it is important to understand the users of the system and evaluate the capabilities of humans and robots. An acceptable HRI solution is expected to be adaptable, detecting and responding to changes in the environment and its users. Hence, an adaptive robotic interaction will require better sensing of human performance parameters. Human performance is influenced by the state of muscular and mental fatigue during active interactions. Researchers in the field of human-robot interaction have been trying to improve the adaptability of the environment according to the physical state of the human participants. Existing human-robot interactions and robot-assisted training are designed without sufficiently considering the implications of fatigue for the users. Given this, identifying whether a better outcome can be achieved during robot-assisted training by adapting to individual muscular status, i.e. with respect to fatigue, is a novel area of research. This has potential applications in scenarios such as rehabilitation robotics. Since robots have the potential to deliver a large number of repetitions, they can be used to train stroke patients to improve their muscular disabilities through repetitive training exercises. The objective of this research is to explore a solution for a longer and less fatiguing robot-assisted interaction, which can adapt based on the muscular state of participants using fatigue indicators derived from electromyogram (EMG) measurements. In the initial part of this research, fatigue indicators from the upper-limb muscles of healthy participants were identified by analysing the electromyogram signals from the muscles as well as the kinematic data collected by the robot. The tasks were defined to have point-to-point upper-limb movements, which involved dynamic muscle contractions, while interacting with the HapticMaster robot. The study revealed quantitatively which muscles were involved in the exercise and which muscles were more fatigued. The results also indicated the potential of EMG and kinematic parameters to be used as fatigue indicators. A correlation analysis between EMG features and kinematic parameters revealed that the correlation coefficient was impacted by muscle fatigue. As an extension of this study, the EMG collected at the beginning of the task was also used to predict the type of point-to-point movement using a supervised machine learning algorithm based on Support Vector Machines. The results showed that movement intention could be detected with reasonably good accuracy within the initial milliseconds of the task. The final part of the research implemented a fatigue-adaptive algorithm based on the identified EMG features. An experiment was conducted with thirty healthy participants to test the effectiveness of this adaptive algorithm. The participants interacted with the HapticMaster robot following a progressive muscle strength training protocol similar to a standard sports science protocol for muscle strengthening.
    The robotic assistance was altered according to the muscular state of participants, thus offering varying difficulty levels based on their state of fatigue or relaxation while performing the tasks. The results showed that the fatigue-based robotic adaptation resulted in a prolonged training interaction involving many repetitions of the task. This study showed that, using fatigue indicators, it is possible to alter the level of challenge and thus increase the interaction time. In summary, the research undertaken during this PhD has successfully enhanced the adaptability of human-robot interaction. Apart from its potential use for muscle strength training in healthy individuals, the work presented in this thesis is applicable to a wide range of human-machine interaction research such as rehabilitation robotics. This has a potential application in the robot-assisted upper-limb rehabilitation training of stroke patients.
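    The abstract does not name the specific EMG features used as fatigue indicators; a common choice in the surface-EMG literature is the downward drift of spectral median frequency during sustained contractions. The sketch below computes that indicator, purely as an assumed example of a fatigue feature that could drive the robotic adaptation.

```python
import numpy as np
from scipy.signal import welch

def median_frequency(emg_window: np.ndarray, fs: float) -> float:
    """Frequency below which half of the EMG signal power lies; a sustained
    downward drift across windows is a common marker of muscle fatigue."""
    freqs, psd = welch(emg_window, fs=fs, nperseg=min(256, len(emg_window)))
    cumulative = np.cumsum(psd)
    return float(freqs[np.searchsorted(cumulative, cumulative[-1] / 2.0)])
```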

    Putting artificial intelligence into wearable human-machine interfaces – towards a generic, self-improving controller

    The standard approach to creating a machine-learning-based controller is to provide users with a number of gestures that they need to make; record multiple instances of each gesture using specific sensors; extract the relevant sensor data and pass it through a supervised learning algorithm until the algorithm can successfully identify the gestures; and map each gesture to a control signal that performs a desired outcome. This approach is both inflexible and time consuming. The primary contribution of this research was to investigate a new approach to putting artificial intelligence into wearable human-machine interfaces by creating a Generic, Self-Improving Controller. It was shown to learn two user-defined static gestures with an accuracy of 100% in less than 10 samples per gesture; three in less than 20 samples per gesture; and four in less than 35 samples per gesture. Pre-defined dynamic gestures were more difficult to learn. It learnt two with an accuracy of 90% in less than 6,000 samples per gesture, and four with an accuracy of 70% after 50,000 samples per gesture. The research has resulted in a number of additional contributions:
    • The creation of a source-independent hardware data capture, processing, fusion and storage tool for standardising the capture and storage of historical copies of data captured from multiple different sensors.
    • An improved Attitude and Heading Reference System (AHRS) algorithm for calculating orientation quaternions that is five orders of magnitude more precise.
    • The reformulation of the regularised TD learning algorithm; the reformulation of the TD learning algorithm applied to the artificial neural network back-propagation algorithm; and the combination of the two into a new, regularised TD learning algorithm applied to the artificial neural network back-propagation algorithm.
    • The creation of a Generic, Self-Improving Predictor that can use different learning algorithms and a Flexible Artificial Neural Network.
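    As a rough illustration of the regularised TD learning contribution named above, the sketch below applies one TD(0) update with an L2 penalty to a linear value function. The thesis's reformulation, and its combination with the neural-network back-propagation algorithm, is not reproduced here; the step size and penalty weight are illustrative.

```python
import numpy as np

def td0_update(w: np.ndarray, phi_s: np.ndarray, phi_s_next: np.ndarray,
               reward: float, alpha: float = 0.01, gamma: float = 0.95,
               lam: float = 1e-3) -> np.ndarray:
    """One TD(0) step on linear value-function weights w, with an L2
    (weight-decay) regularisation term controlled by lam."""
    td_error = reward + gamma * (w @ phi_s_next) - (w @ phi_s)
    return w + alpha * (td_error * phi_s - lam * w)
```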