
    Fused mechanomyography and inertial measurement for human-robot interface

    Human-Machine Interfaces (HMI) are the technology through which we interact with the ever-increasing quantity of smart devices surrounding us. The fundamental goal of an HMI is to facilitate robot control by uniting a human operator as the supervisor with a machine as the task executor. Sensors, actuators, and onboard intelligence have not reached the point where robotic manipulators can function with complete autonomy, and therefore some form of HMI is still necessary in unstructured environments. These may include environments where direct human action is undesirable or infeasible, and situations where a robot must assist and/or interface with people. Contemporary literature has introduced concepts such as body-worn mechanical devices, instrumented gloves, inertial or electromagnetic motion tracking sensors on the arms, head, or legs, electroencephalographic (EEG) brain activity sensors, electromyographic (EMG) muscular activity sensors, and camera-based (vision) interfaces to recognize hand gestures and/or track arm motions for assessment of operator intent and generation of robotic control signals. While these developments offer a wealth of future potential, their utility has been largely restricted to laboratory demonstrations in controlled environments due to issues such as a lack of portability and robustness, and an inability to extract operator intent for both arm and hand motion. Wearable physiological sensors hold particular promise for capture of human intent/command. EMG-based gesture recognition systems in particular have received significant attention in recent literature. As wearable pervasive devices, they offer benefits over camera or physical input systems in that they neither inhibit the user physically nor constrain the user to a location where the sensors are deployed. Despite these benefits, EMG alone has yet to demonstrate the capacity to recognize both gross movement (e.g. arm motion) and finer grasping (e.g. hand movement).
As such, many researchers have proposed fusing muscle activity (EMG) and motion tracking (e.g. inertial measurement) to combine arm motion and grasp intent as HMI input for manipulator control. However, such work has arguably reached a plateau, since EMG suffers from interference from environmental factors which cause signal degradation over time, demands an electrical connection with the skin, and has not demonstrated the capacity to function outside controlled environments for long periods of time. This thesis proposes a new form of gesture-based interface utilising a novel combination of inertial measurement units (IMUs) and mechanomyography sensors (MMGs). The modular system permits numerous configurations of IMUs to derive body kinematics in real time, and uses this to convert arm movements into control signals. Additionally, bands containing six mechanomyography sensors were used to observe muscular contractions in the forearm generated by specific hand motions. This combination of continuous and discrete control signals allows a large variety of smart devices to be controlled. Several methods of pattern recognition were implemented to provide accurate decoding of the mechanomyographic information, including Linear Discriminant Analysis and Support Vector Machines. Based on these techniques, accuracies of 94.5% and 94.6% respectively were achieved for 12-gesture classification. In real-time tests, accuracies of 95.6% were achieved in 5-gesture classification. It has previously been noted that MMG sensors are susceptible to motion-induced interference; this thesis established that arm pose also changes the measured signal. It therefore introduces a new method of fusing IMU and MMG data to provide a classification that is robust to both of these sources of interference. Additionally, an improvement to orientation estimation and a new orientation estimation algorithm are proposed.
These improvements to the robustness of the system provide the first solution able to reliably track both motion and muscle activity for extended periods of time for HMI outside a clinical environment. Applications in robot teleoperation in both real-world and virtual environments were explored. With multiple degrees of freedom, robot teleoperation provides an ideal test platform for HMI devices, since it requires a combination of continuous and discrete control signals. The field of prosthetics also represents a unique challenge for HMI applications. In an ideal situation, the sensor suite should be capable of detecting the muscular activity in the residual limb which is naturally indicative of intent to perform a specific hand pose, and trigger this pose in the prosthetic device. Dynamic environmental conditions within a socket, such as skin impedance, have delayed the translation of gesture control systems into prosthetic devices; mechanomyography sensors, however, are unaffected by such issues. There is huge potential for a system like this to be utilised as a controller as ubiquitous computing systems become more prevalent, and as the desire for a simple, universal interface increases. Such systems have the potential to impact significantly on the quality of life of prosthetic users and others.
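The abstract mentions an improved orientation estimation algorithm without describing it. As a hedged illustration only (not the thesis's actual method), a minimal single-axis complementary filter, a common baseline for IMU orientation estimation, blends integrated gyroscope rate with accelerometer-derived tilt; the function name, signal model, and `alpha` blend factor below are all illustrative assumptions:

```python
import numpy as np

def complementary_filter(gyro, accel, dt, alpha=0.98):
    """Estimate pitch (rad) by blending integrated gyro rate with
    accelerometer-derived tilt. `gyro` holds angular rates about one
    axis (rad/s); `accel` holds (ax, az) gravity components per sample."""
    theta = 0.0
    estimates = []
    for w, (ax, az) in zip(gyro, accel):
        accel_theta = np.arctan2(ax, az)                 # tilt seen by gravity
        theta = alpha * (theta + w * dt) + (1 - alpha) * accel_theta
        estimates.append(theta)
    return np.array(estimates)

# Toy case: a static sensor tilted 0.1 rad. The gyro reads ~0 while the
# accelerometer sees the gravity vector; the estimate converges to 0.1 rad.
n = 500
gyro = np.zeros(n)
accel = [(np.sin(0.1), np.cos(0.1))] * n
est = complementary_filter(gyro, accel, dt=0.01)
```

The high `alpha` trusts the low-noise gyro over short horizons while the accelerometer term slowly corrects the drift that pure integration would accumulate.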

    AN INVESTIGATION OF ELECTROMYOGRAPHIC (EMG) CONTROL OF DEXTROUS HAND PROSTHESES FOR TRANSRADIAL AMPUTEES

    There are many amputees around the world who have lost a limb through conflict, disease or an accident. Upper-limb prostheses controlled using surface electromyography (sEMG) offer a solution to help amputees; however, their functionality is limited by the small number of movements they can perform and their slow reaction times. Pattern recognition (PR)-based EMG control has been proposed to improve the functional performance of prostheses. It is a very promising approach, offering intuitive control, fast reaction times and the ability to control a large number of degrees of freedom (DOF). However, prostheses controlled with PR systems are not available for everyday use by amputees, because there are many major challenges and practical problems that need to be addressed before clinical implementation is possible. These include the lack of individual finger control, an impractically large number of EMG electrodes, and the lack of deployment protocols for EMG electrode site selection and movement optimisation. Moreover, the inability of PR systems to handle multiple forces is a further practical problem that needs to be addressed. The main aim of this project is to investigate the research challenges mentioned above via non-invasive EMG signal acquisition, and to propose practical solutions to help amputees. In a series of experiments, the PR systems presented here were tested with EMG signals acquired from seven transradial amputees, which is unique to this project; previous studies have been conducted using non-amputees. In this work, the challenges described are addressed and a new protocol is proposed that delivers a fast clinical deployment of multi-functional upper-limb prostheses controlled by PR systems.
Controlling finger movement is a step towards the restoration of lost human capabilities, and is psychologically as well as physically important. A central thread running through this work is the assertion that no two amputees are the same, each suffering different injuries and retaining differing nerve and muscle structures. This work is very much about individualised healthcare, and aims to provide the best possible solution for each affected individual on a case-by-case basis. Therefore, the approach has been to optimise the solution (in terms of function and reliability) for each individual, as opposed to developing a generic solution whose performance is optimised against a test population. This work is unique in that it contributes to improving the quality of life of each individual amputee by optimising function and reliability. The four main contributions of the thesis are as follows: 1- Individual finger control was achieved with high accuracy for a large number of finger movements, using six optimally placed sEMG channels. This was validated on EMG signals from ten non-amputee and six amputee subjects. Thumb movements were classified successfully with high accuracy for the first time. The outcome of this investigation will help to add more movements to the prosthesis, and to reduce hardware and computational complexity. 2- A new subject-specific protocol for sEMG site selection and reliable movement subset optimisation, based on the amputee’s needs, has been proposed and validated on seven amputees. This protocol will help clinicians to perform an efficient and fast deployment of prostheses, by finding the optimal number and locations of EMG channels. It will also find a reliable subset of movements that can be achieved with high performance.
3- The relationship between the force of contraction and the statistics of EMG signals has been investigated, utilising an experimental design where visual feedback from a Myoelectric Control Interface (MCI) helped the participants to produce the correct level of force. Kurtosis values were found to decrease monotonically as the contraction level increased, indicating that kurtosis can be used to distinguish different forces of contraction. 4- The real practical problem of the degradation of classification performance as a result of the variation of force levels during daily use of the prosthesis has been investigated, and solved by proposing a training approach and the use of a robust, spectrum-based feature extraction method. The recommendations of this investigation improve the practical robustness of prostheses controlled with PR systems and progress a step further towards clinical implementation and improving the quality of life of amputees. The project showed that PR systems achieved reliable performance for a large number of amputees, taking into account real-life issues such as individual finger control for high dexterity, the effect of force level variation, and optimisation of the movements and EMG channels for each individual amputee. The findings of this thesis showed that PR systems need to be appropriately tuned before use, for example by training with multiple forces to reduce the effect of force variation and improve practical robustness, and by finding the optimal EMG channels for each amputee to improve the PR system's performance. The outcome of this research enables the implementation of PR systems in real prostheses that can be used by amputees.
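Contribution 3 reports that EMG kurtosis decreases monotonically as contraction level rises. As a hedged sketch only (not the thesis's protocol), the statistic itself is simple to compute; here the low-force signal is modelled as super-Gaussian (Laplacian-like) and the high-force signal as near-Gaussian, a common simplifying assumption about surface EMG:

```python
import numpy as np

def excess_kurtosis(x):
    """Sample excess kurtosis: E[(x - mu)^4] / sigma^4 - 3.
    Zero for a Gaussian; positive for peakier (super-Gaussian) signals."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()
    return np.mean((x - mu) ** 4) / sigma ** 4 - 3.0

rng = np.random.default_rng(0)
low_force = rng.laplace(0.0, 1.0, 20000)   # super-Gaussian: excess kurtosis near 3
high_force = rng.normal(0.0, 1.0, 20000)   # Gaussian: excess kurtosis near 0

# Under this model, kurtosis decreases with contraction level,
# consistent with the trend reported in the abstract.
```

With real EMG, the statistic would be computed over short analysis windows at each visually cued force level rather than on synthetic draws.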

    Biosignal-based human–machine interfaces for assistance and rehabilitation: a survey

    As a definition, a Human–Machine Interface (HMI) enables a person to interact with a device. Starting from elementary equipment, the recent development of novel techniques and unobtrusive devices for biosignal monitoring has paved the way for a new class of HMIs, which take such biosignals as inputs to control various applications. The current survey reviews the large literature of the last two decades on biosignal-based HMIs for assistance and rehabilitation, to outline the state of the art and identify emerging technologies and potential future research trends. PubMed and other databases were surveyed using specific keywords. The retrieved studies were further screened at three levels (title, abstract, full text); eventually, 144 journal papers and 37 conference papers were included. Four macrocategories were considered to classify the different biosignals used for HMI control: biopotential, muscle mechanical motion, body motion, and their combinations (hybrid systems). The HMIs were also classified according to their target application by considering six categories: prosthetic control, robotic control, virtual reality control, gesture recognition, communication, and smart environment control. An ever-growing number of publications has been observed over recent years. Most of the studies (about 67%) pertain to the assistive field, while 20% relate to rehabilitation and 13% to assistance and rehabilitation. A moderate increase can be observed in studies focusing on robotic control, prosthetic control, and gesture recognition in the last decade. In contrast, studies on the other targets experienced only a small increase. Biopotentials are no longer the leading control signals, and the use of muscle mechanical motion signals has experienced a considerable rise, especially in prosthetic control. Hybrid technologies are promising, as they could lead to higher performance.
However, they also increase HMIs’ complexity, so their usefulness should be carefully evaluated for the specific application.

    Data and Sensor Fusion Using FMG, sEMG and IMU Sensors for Upper Limb Prosthesis Control

    Whether someone is born with a missing limb or an amputation occurs later in life, living with this disability can be extremely challenging. The robotic prosthetic devices available today are capable of giving users more functionality, but the methods available to control these prostheses restrict their use to simple actions, and are part of the reason why users often reject prosthetic technologies. Using multiple myography modalities has been a promising approach to address these control limitations; however, only two myography modalities have been rigorously tested so far, and while the results have shown improvements, they have not been robust enough for out-of-lab use. In this work, a novel multi-modal device that allows data to be collected from three myography modalities was created. Force myography (FMG), surface electromyography (sEMG), and inertial measurement unit (IMU) sensors were integrated into a wearable armband and used to collect signal data while subjects performed gestures important for the activities of daily living. An established machine learning algorithm was used to decipher the signals and predict the gesture being held, which could be used to control a prosthetic device. Using all three modalities provided statistically significant improvements over most other modality combinations, as it provided the most accurate and consistent classification results. This work provides justification for using three sensing modalities, and future work is suggested to explore this modality combination to decipher more complex actions and tasks with more sophisticated pattern recognition algorithms.
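The abstract does not detail the feature set fed to the classifier. As an illustrative sketch only, a common approach in multi-modal myography work is to compute simple per-channel features in each analysis window and concatenate them across modalities into one feature vector before classification; the window length, channel counts, and feature choices below are assumptions, not the paper's configuration:

```python
import numpy as np

def window_features(semg, fmg, imu):
    """Concatenate simple per-channel features from one analysis window.
    Inputs have shape (samples, channels): mean absolute value (MAV)
    for sEMG, mean force level for FMG, mean per-axis reading for IMU."""
    f_semg = np.mean(np.abs(semg), axis=0)   # MAV per sEMG channel
    f_fmg = np.mean(fmg, axis=0)             # mean pressure per FMG channel
    f_imu = np.mean(imu, axis=0)             # mean per IMU axis
    return np.concatenate([f_semg, f_fmg, f_imu])

# One 200-sample window from an assumed 8-channel sEMG band,
# 8-channel FMG band, and 6-axis IMU -> a 22-dimensional feature vector.
rng = np.random.default_rng(1)
feat = window_features(rng.normal(size=(200, 8)),
                       rng.normal(size=(200, 8)),
                       rng.normal(size=(200, 6)))
```

Stacking such vectors over many labelled windows yields the training matrix for any standard gesture classifier.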

    Current state of digital signal processing in myoelectric interfaces and related applications

    This review discusses the critical issues and recommended practices from the perspective of myoelectric interfaces. The major benefits and challenges of myoelectric interfaces are evaluated. The article aims to fill gaps left by previous reviews and identify avenues for future research. Recommendations are given, for example, for electrode placement, sampling rate, segmentation, and classifiers. Four groups of applications where myoelectric interfaces have been adopted are identified: assistive technology, rehabilitation technology, input devices, and silent speech interfaces. The state-of-the-art applications in each of these groups are presented.

    The Design and Realisation of a 3D-Printed Myoelectric Prosthetic Arm for Toddlers Utilising Soft Grippers

    A prosthetic device aims to improve an amputee’s ability to perform activities of daily living by mimicking the function of a biological arm. The use of a prosthesis has also been shown to minimise some of the issues facing amputees, such as poor posture and musculoskeletal pain. Active, myoelectric-controlled prosthetic arms have primarily focused on adults, despite evidence showing the benefits of early adoption in reducing rejection rates and aiding proper motor neural development. This work presents SIMPA, a low-cost 3D-printed prosthetic arm with a soft-gripper-based end device. The arm has been designed using CAD and 3D scanning, and manufactured predominantly using 3D-printing techniques. This serves the aim of reducing cost and lead time, both crucial aspects of prosthetic manufacturing, particularly given the rapid growth rates of young children. A voluntary-opening control system utilising an armband-based surface electromyography (sEMG) interface has been developed concurrently. This simple control system acts as a base for more advanced control structures as the child develops. Grasp tests have resulted in an average effectiveness of 87%, with objects in excess of 400 g being securely grasped. Force tests have shown that the arm performs in line with current adult prosthetic devices. The results highlight the effectiveness of soft grippers as an end device in prosthetics, as well as the viability of toddler-scale 3D-printed myoelectric devices.

    Robust Electromyography Based Control of Multifunctional Prostheses of The Upper Extremity

    Multifunctional, highly dexterous and complex mechatronic hand prostheses are emerging and currently entering the market. However, fully exploiting the capabilities of these mechatronic devices, and making all available functions reliably and intuitively controllable by the users, remains a considerable challenge. The robustness of the scientific methods proposed to overcome this barrier is a crucial factor for their future commercial success. Therefore, this thesis addresses the matter of robust, multifunctional and dexterous control of upper-limb prostheses, aspiring to significant advancements in the field. To this end, several investigations grouped in four studies were conducted, all focused on understanding the mechanisms that influence the robustness of myoelectric control and resolving their deteriorating effects. For the first study, a thorough literature review of the field was conducted, revealing that many non-stationarities which could be expected to affect the reliability of surface EMG pattern recognition myoprosthesis control had been identified and studied previously. However, one significant factor had not been addressed to a sufficient extent: the effect of long-term usage and day-to-day testing. Therefore, a dedicated study was designed and carried out to address the previously unanswered question of how reliable surface electromyography pattern recognition is across days. Eleven subjects, both able-bodied and amputee, participated in this study over the course of 5 days, and a pattern recognition system was tested without daily retraining. As the main result of this study, it was revealed that the time between training and testing a classifier was indeed a very relevant factor influencing the classification accuracy. More estimation errors were observed as more time elapsed between classifier training and testing.
With the insights obtained from the first study, the need to compensate for signal non-stationarities was identified. Hence, in a second study, building on the data obtained from the first investigation, a self-correction mechanism was elaborated. The goal of this approach was to increase the system's robustness towards non-stationarities such as those identified in the first study. The system was capable of detecting and correcting its own mistakes, yielding a better estimation of movements than the uncorrected classification or other, previously proposed strategies for error removal. In the third part of this thesis, the previously investigated ideas of error suppression for increased robustness of a classification-based system were extended to regression-based movement estimation. While the method tested in the second study was not directly applicable to regression, the same underlying idea was used to develop a novel proportional estimator. It was validated in online tests, with the control of physical prostheses by able-bodied and transradial amputee subjects. The proposed method, based on common spatial patterns, outperformed two state-of-the-art control methods, demonstrating the benefit of increased robustness in movement estimation during applied tasks. The results showed the superior performance of robust movement estimation in real-life investigations, which would hardly have been observable in offline or abstract cursor control tests, underlining the importance of tests with physical prostheses. In the last part of this work, the limitation of the previously explored system to sequential movements was addressed, and a methodology for enhancing the system with simultaneous and proportional control was developed. As a result of these efforts, a system robust, natural and fluent in its movements was conceived.
Again, online control tests of physical prostheses were performed by able-bodied and amputee subjects, and the novel system proved to outperform the sequential controller of the third study of this thesis, yielding the best control technique tested. An extensive set of tests was conducted with both able-bodied and amputee subjects, in scenarios close to clinical routine. Custom prosthetic sockets were manufactured for all subjects, allowing for experimental control of multifunction prostheses with advanced machine-learning-based algorithms in real-life scenarios. The tests involved grasping and manipulating objects in ways often encountered in everyday living. Similar investigations had not been conducted before. One of the main conclusions of this thesis was that the suppression of wrong prosthetic motions is a key factor for robust prosthesis control, and that simultaneous wrist control is a beneficial asset, especially for experienced users. As a result of all the investigations performed, clinically relevant conclusions were drawn from these tests, maximizing the impact of the developed systems on potential future commercialization of the newly conceived control methods. This was emphasized by the close collaboration with Otto Bock as an industrial partner of the AMYO project and hence of this work.

    Distributed Sensing and Stimulation Systems Towards Sense of Touch Restoration in Prosthetics

    Modern prostheses aim at restoring the functional and aesthetic characteristics of the lost limb. To foster prosthesis embodiment and functionality, it is necessary to restore both volitional control and sensory feedback. Contemporary feedback interfaces presented in research use few sensors and stimulation units to feed back at most two discrete variables (e.g. grasping force and aperture), whereas the human sense of touch relies on a distributed network of mechanoreceptors providing high-fidelity spatial information. To provide this type of feedback in prosthetics, it is necessary to sense tactile information from artificial skin placed on the prosthesis and transmit tactile feedback above the amputation in order to map the interaction between the prosthesis and the environment. This thesis proposes the integration of distributed sensing systems (e-skin) to acquire tactile sensation, and non-invasive multichannel electrotactile feedback and virtual reality to deliver high-bandwidth information to the user. Its core focus is the development and testing of a closed-loop sensory feedback human-machine interface, based on the latest distributed sensing and stimulation techniques, for restoring the sense of touch in prosthetics. To this end, the thesis comprises two introductory chapters that describe the state of the art in the field, the objectives, the methodology used and the contributions, as well as three studies distributed over the stimulation system level and the sensing system level. The first study presents the development of a closed-loop compensatory tracking system to evaluate the usability and effectiveness of electrotactile sensory feedback in enabling real-time closed-loop control in prosthetics. It examines and compares the subjects' adaptive performance and tolerance to random latencies while performing a dynamic control task (i.e.
position control) and simultaneously receiving either visual or electrotactile feedback communicating the momentary tracking error. Moreover, it reports the minimum time delay needed for an abrupt impairment of users' performance. The experimental results showed that electrotactile feedback performance is less prone to degradation with longer delays, whereas visual feedback performance drops faster as the time delay increases. This is a good indication of the effectiveness of electrotactile feedback in enabling closed-loop control in prosthetics, since some delays are inevitable. The second study describes the development of a novel, non-invasive, compact multichannel interface for electrotactile feedback, containing a 24-pad electrode matrix with a fully programmable stimulation unit, and investigates the ability of able-bodied human subjects to localize the electrotactile stimulus delivered through the electrode matrix. Furthermore, it introduces a novel dual-parameter modulation (interleaved frequency and intensity) and compares it to conventional stimulation (same frequency for all pads). In addition, and for the first time, electrotactile stimulation is compared to mechanical stimulation. Moreover, it describes the integration of a virtual prosthesis with the developed system, mapping tactile data collected in real time and feeding it back simultaneously to the user, in order to achieve a better user experience and object manipulation. The experimental results demonstrated that the proposed interleaved coding substantially improved spatial localization compared to same-frequency stimulation. Furthermore, they showed that same-frequency stimulation was equivalent to mechanical stimulation, whereas performance with dual-parameter modulation was significantly better.
The third study presents the realization of a novel, flexible, screen-printed e-skin based on P(VDF-TrFE) piezoelectric polymers, designed to cover the fingertips and the palm of a prosthetic hand (particularly the Michelangelo hand by Ottobock) and an assistive sensorized glove for stroke patients. Moreover, it develops a new validation methodology to examine the sensors' behavior under mechanical loading. The characterization results showed agreement between the expected (modeled) electrical response of each sensor and the measured mechanical (normal) force at the skin surface, which in turn proved that the combination of fabrication and assembly processes was successful. This paves the way to defining a practical, simplified and reproducible characterization protocol for e-skin patches. In conclusion, by adopting innovative methodologies in sensing and stimulation systems, this thesis advances the overall development of closed-loop sensory feedback human-machine interfaces used for the restoration of the sense of touch in prosthetics. Moreover, this research could lead to high-bandwidth, high-fidelity transmission of tactile information for modern dexterous prostheses, which could improve the end-user experience and facilitate their acceptance in daily life.

    Machine learning-based dexterous control of hand prostheses

    Upper-limb myoelectric prostheses are controlled by muscle activity information recorded on the skin surface using electromyography (EMG). Intuitive prosthetic control can be achieved by deploying statistical and machine learning (ML) tools to decipher the user’s movement intent from EMG signals. This thesis proposes various means of advancing the capabilities of non-invasive, ML-based control of myoelectric hand prostheses. Two main directions are explored, namely classification-based hand grip selection and proportional finger position control using regression methods. Several practical aspects are considered with the aim of maximising the clinical impact of the proposed methodologies, which are evaluated with offline analyses as well as real-time experiments involving both able-bodied and transradial amputee participants. It has been generally accepted that the EMG signal may not always be a reliable source of control information for prostheses, mainly due to its stochastic and non-stationary properties. One particular issue associated with the use of surface EMG signals for upper-extremity myoelectric control is the limb position effect, which relates to the lack of decoding generalisation under novel arm postures. To address this challenge, it is proposed to make concurrent use of EMG sensors and inertial measurement units (IMUs). It is demonstrated that this can lead to a significant improvement in both classification accuracy (CA) and real-time prosthetic control performance. Additionally, the relationship between surface EMG and inertial measurements is investigated, and it is found that these modalities are partially related, as they reflect different manifestations of the same underlying phenomenon, that is, muscular activity. In the field of upper-limb myoelectric control, the linear discriminant analysis (LDA) classifier has arguably been the most popular choice for movement intent decoding.
This is mainly attributable to its ease of implementation, low computational requirements, and acceptable decoding performance. Nevertheless, this particular method makes a strong fundamental assumption, that is, data observations from different classes share a common covariance structure. Although this assumption may often be violated in practice, it has been found that the performance of the method is comparable to that of more sophisticated algorithms. In this thesis, it is proposed to remove this assumption by making use of general class-conditional Gaussian models and appropriate regularisation to avoid overfitting issues. By performing an exhaustive analysis on benchmark datasets, it is demonstrated that the proposed approach based on regularised discriminant analysis (RDA) can offer an impressive increase in decoding accuracy. By combining the use of RDA classification with a novel confidence-based rejection policy that intends to minimise the rate of unintended hand motions, it is shown that it is feasible to attain robust myoelectric grip control of a prosthetic hand by making use of a single pair of surface EMG-IMU sensors. Most present-day commercial prosthetic hands offer the mechanical abilities to support individual digit control; however, classification-based methods can only produce pre-defined grip patterns, a feature which results in prosthesis under-actuation. Although classification-based grip control can provide a great advantage over conventional strategies, it is far from being intuitive and natural to the user. A potential way of approaching the level of dexterity enjoyed by the human hand is via continuous and individual control of multiple joints. To this end, an exhaustive analysis is performed on the feasibility of reconstructing multidimensional hand joint angles from surface EMG signals. 
A supervised method based on the eigenvalue formulation of multiple linear regression (MLR) is then proposed to simultaneously reduce the dimensionality of input and output variables, and its performance is compared to that of typically used unsupervised methods, which may produce suboptimal results in this context. An experimental paradigm is finally designed to evaluate the efficacy of the proposed finger position control scheme during real-time prosthesis use. This thesis provides insight into the capacity of deploying a range of computational methods for non-invasive myoelectric control. It contributes towards developing intuitive interfaces for dexterous control of multi-articulated prosthetic hands by transradial amputees.
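The RDA classifier and confidence-based rejection policy above are described only at a high level. As a hedged sketch under simplifying assumptions (equal class priors, a fixed diagonal shrinkage term in place of the thesis's tuned regularisation, and an arbitrary 0.9 threshold), a class-conditional Gaussian classifier that rejects low-confidence predictions, the idea used to suppress unintended hand motions, might look like this; all names and parameters are illustrative:

```python
import numpy as np

def fit_gaussian_classes(X, y):
    """Fit a mean and regularised covariance per class (equal priors assumed)."""
    models = {}
    for c in np.unique(y):
        Xc = X[y == c]
        mu = Xc.mean(axis=0)
        cov = np.cov(Xc, rowvar=False) + 1e-3 * np.eye(X.shape[1])  # shrinkage
        models[c] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    return models

def predict_with_rejection(x, models, threshold=0.9):
    """Return the predicted class, or None when max posterior < threshold."""
    logps = {}
    for c, (mu, prec, logdet) in models.items():
        d = x - mu
        logps[c] = -0.5 * (d @ prec @ d + logdet)   # Gaussian log-likelihood
    m = max(logps.values())                          # softmax over classes
    exps = {c: np.exp(v - m) for c, v in logps.items()}
    z = sum(exps.values())
    post = {c: v / z for c, v in exps.items()}
    best = max(post, key=post.get)
    return best if post[best] >= threshold else None

# Two well-separated synthetic "gesture" classes in feature space.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(6, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
models = fit_gaussian_classes(X, y)

confident = predict_with_rejection(np.array([0.0, 0.0]), models)  # near class 0
ambiguous = predict_with_rejection(np.array([3.0, 3.0]), models)  # between classes
```

A rejected (None) decision would leave the prosthesis in its current state rather than risk an unintended motion, which is the safety rationale behind rejection policies.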