Data and Sensor Fusion Using FMG, sEMG and IMU Sensors for Upper Limb Prosthesis Control

Abstract

Whether someone is born with a missing limb or an amputation occurs later in life, living with this disability can be extremely challenging. The robotic prosthetic devices available today are capable of giving users more functionality, but the methods available to control these prostheses restrict their use to simple actions and are part of the reason why users often reject prosthetic technologies. Using multiple myography modalities has been a promising approach to address these control limitations; however, only two myography modalities have been rigorously tested so far, and while the results have shown improvements, they have not been robust enough for out-of-lab use. In this work, a novel multi-modal device that allows data to be collected from three sensing modalities was created. Force myography (FMG), surface electromyography (sEMG), and inertial measurement unit (IMU) sensors were integrated into a wearable armband and used to collect signal data while subjects performed gestures important for activities of daily living. An established machine learning algorithm was used to decipher the signals and predict the user's intended gesture, which could be used to control a prosthetic device. Using all three modalities provided statistically significant improvements over most other modality combinations, yielding the most accurate and consistent classification results. This work provides justification for using three sensing modalities, and future work is suggested to explore this modality combination for deciphering more complex actions and tasks with more sophisticated pattern recognition algorithms.
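
The abstract describes feature-level fusion of FMG, sEMG, and IMU signals followed by gesture classification, but does not name the algorithm or feature set. The sketch below illustrates one common way such a pipeline could look: per-window features from each modality are concatenated and classified with linear discriminant analysis. The channel counts, window counts, synthetic data, and choice of LDA are illustrative assumptions, not details from the work itself.

```python
# Minimal sketch of multi-modal gesture classification, assuming windowed
# features from each sensing modality are concatenated (feature-level fusion)
# and passed to an LDA classifier. All sizes and the classifier choice are
# hypothetical; the source does not specify them.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_windows, n_gestures = 600, 6           # hypothetical dataset size
fmg  = rng.normal(size=(n_windows, 8))   # e.g., 8 FMG pressure channels
semg = rng.normal(size=(n_windows, 8))   # e.g., 8 sEMG RMS features
imu  = rng.normal(size=(n_windows, 6))   # e.g., 3-axis accel + gyro means
labels = rng.integers(0, n_gestures, size=n_windows)

# Sensor fusion at the feature level: stack per-window features from all
# three modalities into a single feature vector per window.
X = np.hstack([fmg, semg, imu])

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, labels, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.3f}")
```

With real armband recordings, dropping the `imu` (or `fmg`/`semg`) columns from the `np.hstack` call gives the two-modality baselines that the fused three-modality feature set is compared against.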
