    Robust Signal Processing Techniques for Wearable Inertial Measurement Unit (IMU) Sensors

    Activity and gesture recognition using wearable motion sensors, also known as inertial measurement units (IMUs), provides important context for many ubiquitous sensing applications, including healthcare monitoring, human-computer interfaces, and context-aware smart homes and offices. Such systems are gaining popularity due to their minimal cost and their ability to provide sensing functionality at any time and place. However, several factors can affect system performance, such as sensor location and orientation displacement, activity and gesture inconsistency, movement speed variation, and the lack of fine motion information. This research focuses on developing signal processing solutions that ensure system robustness with respect to these factors. First, for existing systems already designed to work with a certain sensor orientation/location, this research proposes opportunistic calibration algorithms that leverage camera information from the environment to ensure the system performs correctly despite location or orientation displacement of the sensors. The calibration algorithms require no extra effort from the users: calibration is done seamlessly when users are present in front of an environmental camera and perform arbitrary movements. Second, an orientation- and speed-independent approach is proposed and studied by exploring a novel orientation-independent feature set and by intelligently selecting only the relevant and consistent portions of various activities and gestures. Third, to address the challenge that an IMU cannot capture the tiny motions important to some applications, a sensor fusion framework is proposed that fuses a complementary sensor modality to enhance system performance and robustness. For example, American Sign Language has a large vocabulary of signs, and a recognition system based solely on IMU sensors would not perform well. To demonstrate the feasibility of sensor fusion techniques, a robust real-time American Sign Language recognition approach is developed using wrist-worn IMU and surface electromyography (sEMG) sensors.
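The fusion approach described above can be sketched as feature-level fusion: compute an amplitude feature from each modality, concatenate them into one vector, and classify the joint vector. The following is a minimal illustrative sketch, not the thesis implementation; the window values, centroids, and sign labels are invented for the example.

```python
import math

def rms(window):
    """Root mean square of a signal window (an sEMG amplitude feature)."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def fused_features(emg_window, acc_window):
    """Feature-level fusion: concatenate an sEMG amplitude feature with
    IMU accelerometer statistics into a single feature vector."""
    acc_mean = sum(acc_window) / len(acc_window)
    acc_var = sum((x - acc_mean) ** 2 for x in acc_window) / len(acc_window)
    return [rms(emg_window), acc_mean, acc_var]

def nearest_centroid(feature, centroids):
    """Assign the fused feature vector to the closest class centroid."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(feature, centroids[label]))

# Hypothetical per-class centroids for two signs (invented numbers).
centroids = {
    "sign_A": [0.8, 0.1, 0.02],
    "sign_B": [0.2, 0.9, 0.30],
}
f = fused_features([0.7, -0.9, 0.8, -0.6], [0.1, 0.12, 0.08, 0.1])
print(f, nearest_centroid(f, centroids))
```

A real recognizer would use many windows per sign and a trained classifier, but the fusion step itself is exactly this concatenation of per-modality features.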

    The Feasibility of Wearable Sensors for the Automation of Distal Upper Extremity Ergonomic Assessment Tools

    Work-related distal upper limb musculoskeletal disorders are costly conditions that many companies and researchers spend significant resources on preventing. Ergonomic assessments evaluate the risk of developing a work-related musculoskeletal disorder (WMSD) by quantifying variables such as the force, repetition, and posture (among others) that a task requires. Accurate and objective measurements of force and posture are challenging due to equipment and location constraints. Wearable sensors like the Delsys Trigno Quattro combine inertial measurement units (IMUs) and surface electromyography (sEMG) to address these collection difficulties. The purpose of this work was to evaluate the joint angle estimation of IMUs and the relationship between sEMG and overall task intensity throughout a controlled wrist motion. Using a 3-degrees-of-freedom wrist manipulandum, the feasibility of a small, lightweight wearable was evaluated for collecting accurate wrist flexion and extension angles and for using sEMG to quantify task intensity. The task was a repeated 95° arc in flexion/extension with six combinations of wrist torques and grip requirements. The mean wrist angle difference (throughout the range of motion) between the WristBot and the IMU of 1.70° was not significant (p = 0.057), but significant differences existed throughout the range of motion. The largest difference between the IMU and the WristBot was 10.7° at 40° extension; this discrepancy is smaller than ergonomists' typical visual-inspection joint angle estimate errors of 15.6°. All sEMG metrics (flexor muscle root mean square (RMS), extensor muscle RMS, mean RMS, integrated sEMG (iEMG), and physiological cross-sectional area weighted RMS) and ratings of perceived exertion (RPE) had significant regression results with task intensity. Variance in RPE was better explained by task intensity than by the best sEMG metric (iEMG), with R2 values of 0.35 and 0.21, respectively.
Wearable sensors can be used in occupational settings to increase the accuracy of postural assessments; additional research on the relationship between sEMG and task intensity is required before it can be used effectively in ergonomics. sEMG has the potential to be a powerful tool; however, the dynamic nature of the task and the combined exertion (grip and flexion/extension) make it difficult to quantify task intensity.
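Two of the sEMG metrics named above, RMS and integrated EMG (iEMG), are straightforward to compute from a sampled window. A minimal sketch follows; the sample values are invented, and a real pipeline would first band-pass filter and rectify the raw signal.

```python
import math

def emg_rms(samples):
    """Root mean square amplitude of an sEMG window."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def iemg(samples, dt):
    """Integrated EMG: area under the rectified signal,
    approximated as a rectangle sum with sample spacing dt (seconds)."""
    return sum(abs(x) for x in samples) * dt

# Invented window: a few raw sEMG samples (arbitrary units) at 2 kHz.
window = [0.03, -0.05, 0.08, -0.02, 0.06, -0.07]
print(emg_rms(window), iemg(window, dt=1 / 2000))
```

RMS tracks instantaneous contraction amplitude, while iEMG accumulates total activity over the window, which is one reason the two metrics can relate differently to task intensity.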

    A quantitative taxonomy of human hand grasps

    Background: A proper modeling of human grasping and of hand movements is fundamental for robotics, prosthetics, physiology, and rehabilitation. The taxonomies of hand grasps proposed in the scientific literature so far are based on qualitative analyses of the movements and are thus usually not quantitatively justified. Methods: This paper presents, to the best of our knowledge, the first quantitative taxonomy of hand grasps based on biomedical data measurements. The taxonomy is based on electromyography and kinematic data recorded from 40 healthy subjects performing 20 unique hand grasps. For each subject, a set of hierarchical trees is computed for several signal features. Afterwards, the trees are combined, first into modality-specific (i.e., muscular and kinematic) taxonomies of hand grasps and then into a general quantitative taxonomy of hand movements. The modality-specific taxonomies provide similar results despite describing different parameters of hand movements, one being muscular and the other kinematic. Results: The general taxonomy merges the kinematic and muscular descriptions into a comprehensive hierarchical structure. The obtained results clarify what has been proposed in the literature so far, and they partially confirm the qualitative parameters used to create previous taxonomies of hand grasps. According to the results, hand movements can be divided into five movement categories defined based on the overall grasp shape, finger positioning, and muscular activation. Part of the results appears qualitatively in accordance with previous results describing kinematic hand grasping synergies. Conclusions: The taxonomy of hand grasps proposed in this paper clarifies, with quantitative measurements, what has been proposed in the field on a qualitative basis, and thus has a potential impact on several scientific fields.
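The hierarchical trees described in the Methods can be approximated with standard agglomerative clustering: each grasp starts as its own cluster, and the two closest clusters are merged repeatedly, the sequence of merges being the hierarchy. A minimal single-linkage sketch follows; the 2-D feature vectors are invented stand-ins for the paper's per-grasp EMG/kinematic features.

```python
import math

def single_linkage(c1, c2, points):
    """Single-linkage distance: the smallest pairwise distance
    between members of the two clusters."""
    return min(math.dist(points[i], points[j]) for i in c1 for j in c2)

def hierarchical_tree(points):
    """Agglomerative clustering: returns the merge order, i.e. the
    hierarchy of grasp groupings from leaves up to a single root."""
    clusters = [frozenset([i]) for i in range(len(points))]
    merges = []
    while len(clusters) > 1:
        # Find the pair of clusters with the smallest linkage distance.
        best = min(
            ((a, b) for i, a in enumerate(clusters) for b in clusters[i + 1:]),
            key=lambda pair: single_linkage(pair[0], pair[1], points),
        )
        clusters = [c for c in clusters if c not in best] + [best[0] | best[1]]
        merges.append((sorted(best[0]), sorted(best[1])))
    return merges

# Three hypothetical grasps in a 2-D feature space: two similar, one distinct.
grasps = [(0.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
print(hierarchical_tree(grasps))
```

The paper combines several such trees (per subject and per feature) into modality-specific and then general taxonomies; this sketch shows only the basic tree-building step.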

    Biosignal-based human-machine interfaces for assistance and rehabilitation: a survey

    By definition, a Human-Machine Interface (HMI) enables a person to interact with a device. Starting from elementary equipment, the recent development of novel techniques and unobtrusive devices for biosignal monitoring has paved the way for a new class of HMIs, which take such biosignals as inputs to control various applications. The current survey reviews the large literature of the last two decades on biosignal-based HMIs for assistance and rehabilitation, to outline the state of the art and identify emerging technologies and potential future research trends. PubMed and other databases were surveyed using specific keywords. The retrieved studies were further screened at three levels (title, abstract, full text), and eventually 144 journal papers and 37 conference papers were included. Four macrocategories were considered to classify the different biosignals used for HMI control: biopotentials, muscle mechanical motion, body motion, and their combinations (hybrid systems). The HMIs were also classified according to their target application into six categories: prosthetic control, robotic control, virtual reality control, gesture recognition, communication, and smart environment control. An ever-growing number of publications has been observed over the last years. Most of the studies (about 67%) pertain to the assistive field, while 20% relate to rehabilitation and 13% to both assistance and rehabilitation. A moderate increase can be observed in studies focusing on robotic control, prosthetic control, and gesture recognition in the last decade, whereas studies on the other targets experienced only a small increase. Biopotentials are no longer the leading control signals, and the use of muscle mechanical motion signals has risen considerably, especially in prosthetic control. Hybrid technologies are promising, as they could lead to higher performance. However, they also increase HMIs' complexity, so their usefulness should be carefully evaluated for each specific application.

    Review on EMG Acquisition and Classification Techniques: Towards Zero Retraining in the Influence of User and Arm Position Independence

    The surface electromyogram (EMG) is widely studied and applied in machine control. Recent methods of classifying hand gestures have reported classification rates of over 95%. However, the majority of these studies were performed on a single user, focusing solely on gesture classification. Such studies are restrictive in a practical sense: they address just gestures, multi-user compatibility, or rotation independence alone. The variations in EMG signals due to these conditions present a challenge to the practical application of EMG devices, often requiring repetitious training per application. To the best of our knowledge, there has been little comprehensive review of work on EMG classification under the combined influence of user independence, rotation, and hand exchange. Therefore, in this paper we present a review of works related to the practical issues of EMG, with a focus on electrode placement and on recent acquisition and computing techniques to reduce training. First, we provide an overview of existing electrode placement schemes. Second, we compare the techniques and results of single-subject against multi-subject, multi-position settings. In conclusion, the study of EMG classification in this direction is relatively new; however, the results are encouraging and strongly indicate that EMG classification across a broad range of people, with tolerance towards arm orientation, is possible and can pave the way for more flexible EMG devices.

    First validation of a novel assessgame quantifying selective voluntary motor control in children with upper motor neuron lesions

    Julia Balzer - ORCID 0000-0001-7139-229X (https://orcid.org/0000-0001-7139-229X)
    The question of whether novel rehabilitation interventions can exploit restorative rather than compensatory mechanisms has gained momentum in recent years. Assessments measuring selective voluntary motor control could answer this question. However, while current clinical assessments are ordinal-scaled, which could affect their sensitivity, lab-based assessments are costly and time-consuming. We propose a novel, interval-scaled, computer-based assessment game using low-cost accelerometers to evaluate selective voluntary motor control. Participants steer an avatar owl on a star-studded path by moving the targeted joint of the upper or lower extremities. We calculate a target joint accuracy metric, and an outcome score for the frequency and amplitude of involuntary movements of adjacent and contralateral joints as well as the trunk. We detail the methods and, as a first proof of concept, relate the results of selected children with upper motor neuron lesions (n = 48) to reference groups of neurologically intact children (n = 62) and adults (n = 64). Linear mixed models indicated that the cumulative therapist score, rating the degree of selectivity, was a good predictor of the involuntary movements outcome score. This highlights the validity of this assessgame approach to quantify selective voluntary motor control and warrants a more thorough exploration to quantify changes induced by restorative interventions. This work was supported by the Swiss National Science Foundation (Grant numbers 32003B_156646 and 32003B_179471). https://doi.org/10.1038/s41598-019-56495-8

    The "Federica" hand: a simple, very efficient prosthesis

    Hand prostheses partially restore hand appearance and functionality. Not everyone can afford expensive prostheses, and many low-cost prostheses have been proposed. In particular, 3D printers have provided great opportunities by simplifying the manufacturing process and reducing costs. Generally, active prostheses use multiple motors for finger movement and are controlled by electromyographic (EMG) signals. The "Federica" hand is a single-motor prosthesis, equipped with an adaptive grasp and controlled by a force-myographic signal. The "Federica" hand is 3D printed and has an anthropomorphic morphology with five fingers, each consisting of three phalanges. The movement generated by a single servomotor is transmitted to the fingers by inextensible tendons that form a closed chain; practically, no springs are used for passive hand opening. A differential mechanical system simultaneously distributes the motor force in predefined portions to each finger, regardless of the fingers' actual positions. Proportional control of hand closure is achieved by measuring the contraction of residual limb muscles by means of a force sensor, replacing the EMG. The electrical current of the servomotor is monitored to provide the user with sensory feedback on the grip force, through a small vibration motor. A simple Arduino board was adopted as the processing unit. The differential mechanism guarantees an efficient transfer of mechanical energy from the motor to the fingers and a secure grasp of any object, regardless of its shape and deformability. The force sensor, being extremely thin, can be easily embedded into the prosthesis socket and positioned over both muscles and tendons; it offers some advantages over EMG, as it does not require any electrical contact or signal processing to extract information about muscle contraction intensity.
The grip speed is high enough to allow the user to grab objects on the fly: from the muscle trigger to complete hand closure, "Federica" takes about half a second. The cost of the device is about 100 US$. Preliminary tests carried out on a patient with transcarpal amputation showed high performance in controlling the prosthesis after a very rapid training session. The "Federica" hand turned out to be a lightweight, low-cost, and extremely efficient prosthesis. The project is intended to be open source: all the information needed to produce the prosthesis (e.g., CAD files, circuit schematics, software) can be downloaded from a public repository, thus allowing anyone to use the "Federica" hand and customize or improve it.
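The control loop described above maps the force-myographic reading proportionally to servo position, and the monitored motor current to vibration feedback. A minimal sketch of those two mappings follows (in Python for readability; the sensor ranges, angles, and limits are illustrative, not the published firmware values):

```python
def servo_command(force_raw, rest_level, max_level,
                  open_angle=0.0, closed_angle=180.0):
    """Proportional control of hand closure: map a force-sensor reading
    to a servo angle. Readings at or below the muscle resting level keep
    the hand open; readings at or above max_level fully close it."""
    fraction = (force_raw - rest_level) / (max_level - rest_level)
    fraction = max(0.0, min(1.0, fraction))  # clamp to [0, 1]
    return open_angle + fraction * (closed_angle - open_angle)

def vibration_duty(motor_current, max_current):
    """Grip-force feedback: scale the servomotor current into a 0-1
    duty cycle for the small vibration motor."""
    return max(0.0, min(1.0, motor_current / max_current))

# Illustrative readings: half-contraction force, moderate grip current.
print(servo_command(550, rest_level=100, max_level=1000))  # mid-closure
print(vibration_duty(0.4, max_current=1.0))
```

The clamping step matters in practice: sensor noise below the resting level or above saturation should not drive the servo past its mechanical end stops.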

    Motion Intention Estimation using sEMG-ACC Sensor Fusion

    Musculoskeletal injuries can severely impact the ability to produce and control body motion. In order to regain function, rehabilitation is often required. Wearable smart devices are currently under development to provide therapy and assistance for people with impaired arm function. Electromyography (EMG) signals are used as input to pattern recognition systems to determine intended movements. However, there is a gap between the accuracy of pattern recognition systems in constrained laboratory settings and their usability when detecting dynamic unconstrained movements. Motion factors such as limb position, interaction force, and velocity are known to have a negative impact on pattern recognition. A possible solution lies in using data from other sensors, such as accelerometers (ACC), along with the EMG signals in the training and use of classifiers, in order to improve classification accuracy. The objectives of this study were to quantify the impact of motion factors on ACC signals, and to use these ACC signals along with EMG signals to classify categories of motion factors. To address these objectives, a dataset containing EMG and ACC signals recorded while individuals performed unconstrained arm motions was studied. Analyses of the EMG and accelerometer signals, and of their use in training classification models to predict characteristics of intended motion, were completed. The results quantify how accelerometer features change with variations in arm position, interaction forces, and motion velocities. The results also show that combining EMG and ACC data increased the accuracy of motion intention detection. Velocity could be classified as stationary or moving with less than 10% error using a decision tree ensemble classifier.
Future work should expand on motion factors and EMG-ACC sensor fusion to identify interactions between a person and the environment, in order to guide the tuning of control models for wearable mechatronic devices during dynamic movements.
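As an illustration of the velocity result above, the stationary-versus-moving distinction can already be approximated with a single accelerometer-variance feature and one threshold, i.e. a decision stump, the building block of the decision tree ensemble used in the study. The window values and threshold below are invented for the example:

```python
def acc_variance(acc_window):
    """Variance of an accelerometer window: near zero when the limb is
    still, large when it is moving."""
    mean = sum(acc_window) / len(acc_window)
    return sum((x - mean) ** 2 for x in acc_window) / len(acc_window)

def velocity_class(acc_window, threshold=0.05):
    """Single-feature decision stump: label a window 'moving' when the
    accelerometer variance exceeds the threshold, else 'stationary'."""
    return "moving" if acc_variance(acc_window) > threshold else "stationary"

# Invented windows: a resting limb vs. an oscillating one.
print(velocity_class([0.01] * 8))
print(velocity_class([0.0, 1.0, -1.0, 0.5, -0.5, 1.0, -1.0, 0.0]))
```

An ensemble, as in the study, would combine many such stumps over EMG and ACC features; this sketch only shows why the ACC variance feature alone already separates the two velocity categories well.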