
    A wireless sEMG-based body-machine interface for assistive technology devices

    Assistive technology (AT) tools and appliances are increasingly developed and used worldwide to improve the autonomy of people living with disabilities and to ease their interaction with their environment. This paper describes an intuitive, wireless surface electromyography (sEMG) based body-machine interface for AT tools. Spinal cord injuries at the C5-C8 levels affect patients' control of the arms, forearms, hands, and fingers, so classical AT control interfaces (keypads, joysticks, etc.) are often difficult or impossible to use. The proposed system reads the AT user's residual functional capacities through their sEMG activity and converts them into appropriate commands using a threshold-based control algorithm. It has proven to be a suitable control alternative for assistive devices and has been tested with JACO, an articulated assistive arm whose purpose is to help people living with upper-body disabilities in their daily life activities. The wireless prototype, whose architecture is based on a 3-channel sEMG measurement system and a 915-MHz wireless transceiver built around a low-power microcontroller, uses low-cost off-the-shelf commercial components. The embedded controller is compared with JACO's regular joystick-based interface, using combinations of forearm, pectoral, masseter, and trapezius muscles. The measured index-of-performance values are 0.88, 0.51, and 0.41 bits/s, respectively, with correlation coefficients to Fitts' model of 0.75, 0.85, and 0.67. These results demonstrate that the proposed controller offers an attractive alternative to conventional interfaces, such as joysticks, for upper-body disabled people using ATs such as JACO.
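    The two quantitative ideas in this abstract, a threshold-based mapping from sEMG envelopes to commands and Fitts'-law throughput in bits/s, can be sketched as follows. This is a minimal illustration assuming per-channel calibrated thresholds and the standard Fitts formulation ID = log2(2D/W); the function names and threshold values are illustrative, not the paper's implementation.

```python
import math

def semg_to_command(envelopes, thresholds, commands):
    """Threshold-based control sketch: emit the command of the first
    sEMG channel whose smoothed envelope exceeds its calibrated
    threshold, or None when no channel is active."""
    for env, thr, cmd in zip(envelopes, thresholds, commands):
        if env > thr:
            return cmd
    return None

def fitts_index_of_performance(distance, width, movement_time):
    """Fitts'-law throughput in bits/s: ID = log2(2D/W), IP = ID / MT."""
    index_of_difficulty = math.log2(2 * distance / width)
    return index_of_difficulty / movement_time
```

    With a target 0.2 m away and 0.05 m wide reached in 3 s, for example, ID = log2(8) = 3 bits and IP = 1 bit/s, the same kind of figure as the 0.88-0.41 bits/s reported above.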

    Biosignal‐based human–machine interfaces for assistance and rehabilitation : a survey

    By definition, a Human–Machine Interface (HMI) enables a person to interact with a device. Starting from elementary equipment, the recent development of novel techniques and unobtrusive devices for biosignal monitoring paved the way for a new class of HMIs, which take such biosignals as inputs to control various applications. This survey reviews the large literature of the last two decades on biosignal‐based HMIs for assistance and rehabilitation, to outline the state of the art and identify emerging technologies and potential future research trends. PubMed and other databases were surveyed using specific keywords. The retrieved studies were further screened at three levels (title, abstract, full text), and eventually 144 journal papers and 37 conference papers were included. Four macrocategories were considered to classify the different biosignals used for HMI control: biopotential, muscle mechanical motion, body motion, and their combinations (hybrid systems). The HMIs were also classified according to their target application in six categories: prosthetic control, robotic control, virtual reality control, gesture recognition, communication, and smart environment control. An ever‐growing number of publications has been observed over the last years. Most of the studies (about 67%) pertain to the assistive field, while 20% relate to rehabilitation and 13% to both assistance and rehabilitation. A moderate increase can be observed in studies focusing on robotic control, prosthetic control, and gesture recognition in the last decade, whereas studies on the other targets experienced only a small increase. Biopotentials are no longer the leading control signals, and the use of muscle mechanical motion signals has risen considerably, especially in prosthetic control. Hybrid technologies are promising, as they could lead to higher performance. However, they also increase HMIs' complexity, so their usefulness should be carefully evaluated for the specific application.

    Fall Prediction and Prevention Systems: Recent Trends, Challenges, and Future Research Directions.

    Fall prediction is a multifaceted problem that involves complex interactions between physiological, behavioral, and environmental factors. Existing fall detection and prediction systems mainly focus on physiological factors such as gait, vision, and cognition, and do not address the multifactorial nature of falls. In addition, these systems lack efficient user interfaces and feedback for preventing future falls. Recent advances in the internet of things (IoT) and mobile technologies offer ample opportunities for integrating contextual information about patient behavior and environment along with physiological health data for predicting falls. This article reviews the state of the art in fall detection and prediction systems, and describes the challenges, limitations, and future directions in the design and implementation of effective fall prediction and prevention systems.

    JNER at 15 years: analysis of the state of neuroengineering and rehabilitation.

    On JNER's 15th anniversary, this editorial analyzes the state of the field of neuroengineering and rehabilitation. I first discuss some ways that the nature of neurorehabilitation research has evolved in the past 15 years, based on my perspective as editor-in-chief of JNER and a researcher in the field. I highlight an increasing reliance on advanced technologies, improved rigor and openness of research, and three related new paradigms - wearable devices, the Cybathlon competition, and human augmentation studies - indicators that neurorehabilitation is squarely in the age of wearability. Then, I briefly speculate on how the field might make progress going forward, highlighting the need for new models of training and learning driven by big data, better personalization and targeting, and an increase in the quantity and quality of usability and uptake studies to improve translation.

    iMOVE: Development of a hybrid control interface based on sEMG and movement signals for an assistive robotic manipulator

    For many people with upper limb disabilities, simple activities of daily living such as drinking, opening a door, or pushing an elevator button require the assistance of a caregiver, which reduces the individual's independence. Assistive robotic systems controlled via a human-robot interface could enable these people to perform such tasks autonomously again and thereby increase their independence and quality of life. Moreover, such an interface could encourage rehabilitation of motor functions, because the individual would need to use their remaining body movements and muscle activity to provide control signals. This project aims at developing a novel hybrid control interface that combines remaining movements and muscle activity of the upper body to control the position and impedance of a robotic manipulator. This thesis presents a Cartesian position control system for the KINOVA Gen3 robotic arm, which implements a proportional-derivative control law based on the Jacobian transpose method and therefore does not require inverse kinematics. A second controller is proposed to change the robot's rigidity in real time based on measurements of muscle activity (sEMG), allowing the user to modulate the robot's impedance while performing a task. The thesis also presents a body-machine interface that maps motions of the upper body (head and shoulders) to the space of robot control signals, using the principal component analysis algorithm for dimensionality reduction. The results demonstrate that, by combining the three methods above, the user can control the robot's position with head and shoulder movements while also adapting the robot's impedance through muscle activation. In future work, the performance of this system will be tested with patients with severe movement impairments.
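    The Jacobian-transpose PD control law mentioned in this abstract can be sketched compactly: a Cartesian restoring force f = Kp(x_des - x) - Kd*x_dot is mapped to joint torques as tau = J^T f, avoiding inverse kinematics entirely. The sketch below uses a planar 2-link arm with assumed link lengths and gains for illustration; it is not the thesis implementation for the Gen3 arm.

```python
import numpy as np

def forward_kinematics(q, l1=0.4, l2=0.3):
    """End-effector position of a planar 2-link arm."""
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def jacobian_2link(q, l1=0.4, l2=0.3):
    """Geometric Jacobian mapping joint velocities to Cartesian velocity."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def jt_pd_torque(q, dq, x_des, kp=100.0, kd=10.0):
    """Jacobian-transpose PD law: tau = J^T (Kp (x_des - x) - Kd x_dot).
    Lowering kp is one simple way to make the arm more compliant, the
    kind of real-time impedance modulation the sEMG channel drives."""
    J = jacobian_2link(q)
    x = forward_kinematics(q)
    x_dot = J @ dq
    f = kp * (x_des - x) - kd * x_dot
    return J.T @ f
```

    At the rest configuration q = [0, 0] with the target at the current end-effector position, the commanded torque is zero, as expected for a pure error-driven law.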

    Study and development of sensorimotor interfaces for robotic human augmentation

    This thesis presents my research contribution to robotics and haptics in the context of human augmentation. In particular, this document is concerned with bodily or sensorimotor augmentation, that is, the augmentation of humans by supernumerary robotic limbs (SRL). The field of sensorimotor augmentation is new in robotics, and thanks to its combination with neuroscience, great leaps forward have already been made in the past 10 years. All of the research work I produced during my Ph.D. focused on the development and study of a fundamental technology for human augmentation by robotics: the sensorimotor interface. This new concept denotes a wearable device with two main purposes: first, to extract the input generated by the movement of the user's body, and second, to provide the user's somatosensory system with haptic feedback. The thesis starts with an exploratory study of integration between robotic and haptic devices, intending to combine state-of-the-art devices. This allowed us to realize that we still need to understand how to improve the interface that will allow us to feel agency when using an augmentative robot. At this point, the path of this thesis forks into two alternative ways to improve the interaction between the human and the robot. The first path tackles two aspects concerning the haptic feedback of sensorimotor interfaces: the choice of its positioning and the effectiveness of discrete haptic feedback. In the second, we attempted to lighten a supernumerary finger, focusing on agility of use and the lightness of the device. One of the main findings of this thesis is that haptic feedback is considered helpful by stroke patients, but this does not mitigate the fact that the cumbersomeness of the devices is a deterrent to their use. Preliminary results presented here show that both paths we chose to improve sensorimotor augmentation worked: the presence of haptic feedback improves the performance of sensorimotor interfaces, co-positioning the haptic feedback with the input taken from the human body can improve the effectiveness of these interfaces, and creating a lightweight version of an SRL is a viable solution for recovering the grasping function.

    Intramuscular EMG-driven Musculoskeletal Modelling: Towards Implanted Muscle Interfacing in Spinal Cord Injury Patients

    Objective: Surface EMG-driven modelling has been proposed as a means to control assistive devices by estimating joint torques. Implanted EMG sensors have several advantages over wearable sensors but provide more localized information on muscle activity, which may affect torque estimates. Here, we tested and compared the use of surface and intramuscular EMG measurements for the estimation of required assistive joint torques using EMG-driven modelling. Methods: Four healthy subjects and three incomplete spinal cord injury (SCI) patients performed walking trials at varying speeds. Motion capture marker trajectories, surface and intramuscular EMG, and ground reaction forces were measured concurrently. Subject-specific musculoskeletal models were developed for all subjects, and inverse dynamics analysis was performed for all individual trials. EMG-driven-modelling joint torque estimates were obtained from surface and intramuscular EMG. Results: The correlation between the experimental and predicted joint torques was similar when using intramuscular or surface EMG as input to the EMG-driven modelling estimator, in both healthy individuals and patients. Conclusion: We have provided the first comparison of non-invasive and implanted EMG sensors as input signals for torque estimates in healthy individuals and SCI patients. Significance: Implanted EMG sensors have the potential to serve as a reliable input for assistive exoskeleton joint torque actuation.
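    The evaluation pipeline in this abstract reduces to two steps that can be sketched simply: form a smoothed EMG envelope as the drive signal, and score an EMG-driven torque estimate against the inverse-dynamics reference with Pearson correlation (the similarity metric the study reports). The filter constant and array shapes below are illustrative assumptions, not the paper's calibrated musculoskeletal model.

```python
import numpy as np

def emg_envelope(emg, alpha=0.1):
    """Rectify raw EMG and smooth it with a first-order low-pass
    filter to obtain a muscle-activity envelope."""
    env = np.zeros_like(emg, dtype=float)
    for i, sample in enumerate(np.abs(emg)):
        env[i] = env[i - 1] + alpha * (sample - env[i - 1]) if i else alpha * sample
    return env

def torque_correlation(tau_est, tau_ref):
    """Pearson correlation between EMG-driven torque estimates and
    the inverse-dynamics reference torque."""
    return np.corrcoef(tau_est, tau_ref)[0, 1]
```

    Running the same correlation on surface-derived and intramuscular-derived estimates is what supports the paper's conclusion that the two inputs perform similarly.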
