    Biosignal‐based human–machine interfaces for assistance and rehabilitation: a survey

    By definition, a Human–Machine Interface (HMI) enables a person to interact with a device. Starting from elementary equipment, the recent development of novel techniques and unobtrusive devices for biosignal monitoring has paved the way for a new class of HMIs, which take such biosignals as inputs to control various applications. This survey reviews the extensive literature of the last two decades on biosignal‐based HMIs for assistance and rehabilitation, in order to outline the state of the art and identify emerging technologies and potential future research trends. PubMed and other databases were surveyed using specific keywords. The retrieved studies were screened at three levels (title, abstract, full text), and eventually 144 journal papers and 37 conference papers were included. Four macrocategories were used to classify the different biosignals employed for HMI control: biopotentials, muscle mechanical motion, body motion, and their combinations (hybrid systems). The HMIs were also classified according to their target application across six categories: prosthetic control, robotic control, virtual reality control, gesture recognition, communication, and smart environment control. An ever‐growing number of publications has been observed in recent years. Most of the studies (about 67%) pertain to the assistive field, while 20% relate to rehabilitation and 13% to both assistance and rehabilitation. A moderate increase can be observed in studies focusing on robotic control, prosthetic control, and gesture recognition over the last decade, whereas studies on the other targets experienced only a small increase. Biopotentials are no longer the leading control signals, and the use of muscle mechanical motion signals has risen considerably, especially in prosthetic control. Hybrid technologies are promising, as they could lead to higher performance; however, they also increase HMI complexity, so their usefulness should be carefully evaluated for each specific application.

    Hybrid brain/neural interface and autonomous vision-guided whole-arm exoskeleton control to perform activities of daily living (ADLs)

    Background: The aging of the population and the progressive increase of life expectancy in developed countries are leading to a high incidence of age-related cerebrovascular diseases, which affect people's motor and cognitive capabilities and might result in the loss of arm and hand functions. Such conditions have a detrimental impact on people's quality of life. Assistive robots have been developed to help people with motor or cognitive disabilities perform activities of daily living (ADLs) independently. Most of the robotic systems for assisting with ADLs proposed in the state of the art are external manipulators and exoskeletal devices. The main objective of this study is to compare the performance of a hybrid EEG/EOG interface to perform ADLs when the user is controlling an exoskeleton rather than an external manipulator.

    Methods: Ten impaired participants (5 males and 5 females, mean age 52 +/- 16 years) were instructed to use both systems to perform a drinking task and a pouring task comprising multiple subtasks. For each device, two modes of operation were studied: synchronous mode (the user received a visual cue indicating the subtask to be performed at each time) and asynchronous mode (the user started and finished each subtask independently). Fluent control was assumed when the time for successful initializations remained below 3 s, and reliable control when it remained below 5 s. The NASA-TLX questionnaire was used to evaluate the task workload. For the trials involving the exoskeleton, a custom Likert-scale questionnaire was used to evaluate the user's experience in terms of perceived comfort, safety, and reliability.

    Results: All participants were able to control both systems fluently and reliably. However, the results suggest better performance of the exoskeleton over the external manipulator (75% of successful initializations remained below 3 s for the exoskeleton and below 5 s for the external manipulator).

    Conclusions: Although the results of our study in terms of fluency and reliability of EEG control suggest better performance of the exoskeleton over the external manipulator, such results cannot be considered conclusive, due to the heterogeneity of the population under test and the relatively limited number of participants.

    This study was funded by the European Commission under the project AIDE (G.A. no. 645322), by the Spanish Ministry of Science and Innovation through the projects PID2019-108310RB-I00 and PLEC2022-009424, and by the Ministry of Universities and European Union ("financed by European Union-Next Generation EU") through a Margarita Salas grant for the training of young doctors.

    Catalán, J.M.; Trigili, E.; Nann, M.; Blanco-Ivorra, A.; Lauretti, C.; Cordella, F.; Ivorra, E., ... (2023). Hybrid brain/neural interface and autonomous vision-guided whole-arm exoskeleton control to perform activities of daily living (ADLs). Journal of NeuroEngineering and Rehabilitation, 20(1):1-16. https://doi.org/10.1186/s12984-023-01185-w

    A Modular Mobile Robotic Platform to Assist People with Different Degrees of Disability

    Robots that support elderly people in living independently and assist disabled people in carrying out activities of daily living have demonstrated good results. There are essentially two approaches: one is based on mobile robot assistants, such as Care-O-bot, PR2, and Tiago, among others; the other uses an external robotic arm or a robotic exoskeleton, fixed or mounted on a wheelchair. In this paper, a modular mobile robotic platform to assist moderately and severely impaired people, based on an upper limb robotic exoskeleton mounted on a robotized wheelchair, is presented. This mobile robotic platform can be customized to each user's needs by exploiting its modularity. Finally, experimental results in a simulated home environment with a living room and a kitchen area, designed to simulate the user's interaction with different elements of a home, are presented. In this experiment, a subject suffering from multiple sclerosis performed different activities of daily living (ADLs) using the platform in front of a group of clinicians composed of nurses, doctors, and occupational therapists. Afterwards, the subject and the clinicians answered a usability questionnaire. The results were quite good, but two key factors emerged that need to be improved: the complexity and the cumbersome aspect of the platform.

    This work was supported by the AIDE project through Grant Agreement No. 645322 of the European Commission, by the Conselleria d’Educacio, Cultura i Esport of Generalitat Valenciana, by the European Social Fund (Investing in your future) through the grant ACIF 2018/214, and by the Promoción de empleo joven e implantación de garantía juvenil en I+D+I 2018 through the grant PEJ2018-002670-A.

    Upper limb soft robotic wearable devices: a systematic review

    Introduction: Soft robotic wearable devices, referred to as exosuits, can be a valid alternative to rigid exoskeletons for daily upper limb support. Indeed, their inherent flexibility improves comfort, usability, and portability while not constraining the user's natural degrees of freedom. This review is meant to guide the reader in understanding the current approaches across all design and production steps that might be exploited when developing an upper limb robotic exosuit.

    Methods: The literature search for such devices was conducted in PubMed, Scopus, and Web of Science. The investigated features are the intended scenario, type of actuation, supported degrees of freedom, low-level control, high-level control with a focus on intention detection, technology readiness level, and type of experiments conducted to evaluate the device.

    Results: A total of 105 articles were collected, describing 69 different devices. Devices were grouped according to their actuation type. More than 80% of the devices are meant for rehabilitation, assistance, or both. The most exploited actuation types are pneumatic actuators (52%) and DC motors with cable transmission (29%). Most devices actuate one (56%) or two (28%) degrees of freedom, and the most targeted joints are the elbow and the shoulder. Intention detection strategies are implemented in 33% of the suits and include the use of switches and buttons, IMUs, stretch and bending sensors, and EMG and EEG measurements. Most devices (75%) score a technology readiness level of 4 or 5.

    Conclusion: Although few devices can be considered ready to reach the market, exosuits show very high potential for the assistance of daily activities. Clinical trials exploiting shared evaluation metrics are needed to assess the effectiveness of upper limb exosuits on target users.

    A review on design of upper limb exoskeletons


    Physical human-robot collaboration: Robotic systems, learning methods, collaborative strategies, sensors, and actuators

    This article presents a state-of-the-art survey of the robotic systems, sensors, actuators, and collaborative strategies for physical human-robot collaboration (pHRC). The article starts with an overview of robotic systems with cutting-edge technologies (sensors and actuators) suitable for pHRC operations and of the intelligent assist devices employed in pHRC. Sensors, which are among the essential components for establishing communication between a human and a robotic system, are then surveyed; the sensor supplies the signal needed to drive the robotic actuators. The survey reveals that the design of the new generation of collaborative robots and other intelligent robotic systems has paved the way for sophisticated learning techniques and control algorithms to be deployed in pHRC, and it identifies the components that need to be considered for effective pHRC. Finally, the major advances are discussed, and research directions and future challenges are presented.

    Single Lead EMG signal to Control an Upper Limb Exoskeleton Using Embedded Machine Learning on Raspberry Pi

    Stroke can cause partial or complete paralysis of the human limbs. Delaying rehabilitation in post-stroke patients can cause muscle atrophy and limb stiffness, so post-stroke patients require an upper limb exoskeleton device for the rehabilitation process. Several previous studies used more than one electrode lead to control the exoskeleton; using many electrode leads increases complexity in terms of both hardware and software. Therefore, this research aims to develop single-lead EMG pattern recognition to control an upper limb exoskeleton. The main contribution of this research is that the robotic upper limb exoskeleton device can be controlled using a single-lead EMG. EMG signals were recorded at the biceps with a sampling frequency of 2000 Hz. A Raspberry Pi 3B+ was used to embed the data acquisition, feature extraction, classification, and motor control, using a multithreaded algorithm. The exoskeleton arm frame was made using 3D printing technology and is driven by a high-torque servo motor. The control process extracts EMG features (mean absolute value, root mean square, variance), which are then used to train machine learning classifiers: decision tree (DT), linear regression (LR), polynomial regression (PR), and random forest (RF). The results show that the decision tree and random forest classifiers produce the highest accuracy, at 96.36 ± 0.54% and 95.67 ± 0.76%, respectively. Combining the EMG features shows no significant difference in accuracy (p-value > 0.05). A single-lead EMG electrode can control the upper limb exoskeleton robot well.
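
    A minimal, hypothetical sketch (not the authors' code) of the pipeline this abstract names: the three time-domain features are computed per EMG window and used to train the two best-performing classifiers with scikit-learn. The window length (0.2 s at the reported 2000 Hz sampling rate) and the synthetic data standing in for recorded EMG are assumptions for illustration.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import RandomForestClassifier

    def emg_features(window):
        """MAV, RMS, and variance of one single-lead EMG window."""
        mav = np.mean(np.abs(window))        # mean absolute value
        rms = np.sqrt(np.mean(window ** 2))  # root mean square
        return np.array([mav, rms, np.var(window)])

    # Placeholder data: 200 windows of 400 samples (0.2 s at 2000 Hz),
    # with made-up motion labels standing in for recorded gestures.
    rng = np.random.default_rng(0)
    windows = rng.standard_normal((200, 400))
    labels = rng.integers(0, 2, size=200)

    X = np.array([emg_features(w) for w in windows])
    for clf in (DecisionTreeClassifier(), RandomForestClassifier()):
        clf.fit(X, labels)
        print(type(clf).__name__, "training accuracy:", clf.score(X, labels))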

    Gait Cycle-Inspired Learning Strategy for Continuous Prediction of Knee Joint Trajectory from sEMG

    Predicting lower limb motion intent is vital for controlling exoskeleton robots and prosthetic limbs. Surface electromyography (sEMG) has attracted increasing attention in recent years because it enables ahead-of-time prediction of motion intentions, before the actual movement. However, estimating the human joint trajectory remains challenging due to inter- and intra-subject variations: the former are related to physiological differences (such as height and weight) and individuals' preferred walking patterns, while the latter are mainly caused by irregular and gait-irrelevant muscle activity. This paper proposes a model integrating two gait cycle-inspired learning strategies to mitigate these challenges in predicting the human knee joint trajectory. The first strategy is to decouple knee joint angles into motion patterns and amplitudes; the former exhibit low variability while the latter show high variability among individuals. By learning through separate network entities, the model manages to capture both the common and the personalized gait features. In the second strategy, muscle principal activation masks are extracted from gait cycles in a prolonged walk. These masks are used to filter out components unrelated to walking from the raw sEMG and provide auxiliary guidance to capture more gait-related features. Experimental results indicate that the model predicts knee angles with an average root mean square error (RMSE) of 3.03 (0.49) degrees, 50 ms ahead of time. To our knowledge, this is the best performance reported in the relevant literature, reducing RMSE by at least 9.5%.
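
    The first strategy lends itself to a short illustration. The sketch below is one reading of "decoupling" a knee-angle gait cycle into a subject-specific amplitude and a normalized motion pattern; the fixed resampling length and the min-max decomposition are assumptions, not the paper's exact method.

    import numpy as np

    def decouple_cycle(angles, n_points=100):
        """Split one gait cycle into amplitude/offset and a normalized pattern."""
        # Resample to a fixed length so cycles of different durations align.
        x_old = np.linspace(0.0, 1.0, len(angles))
        x_new = np.linspace(0.0, 1.0, n_points)
        resampled = np.interp(x_new, x_old, angles)
        offset = resampled.min()
        amplitude = resampled.max() - offset        # high inter-subject variability
        pattern = (resampled - offset) / amplitude  # low inter-subject variability
        return amplitude, offset, pattern

    # Synthetic knee-angle cycle (degrees), only to exercise the function.
    t = np.linspace(0.0, 2.0 * np.pi, 120)
    amp, off, pat = decouple_cycle(30.0 + 25.0 * np.sin(t))
    print(f"amplitude = {amp:.1f} deg, offset = {off:.1f} deg")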

    A Brain-Controlled Exoskeleton with Cascaded Event-Related Desynchronization Classifiers

    This paper describes a brain-machine interface for the online control of a powered lower-limb exoskeleton based on electroencephalogram (EEG) signals recorded over the user’s sensorimotor cortical areas. We train a binary decoder that can distinguish two different mental states and apply it in a cascaded manner to efficiently control the exoskeleton in three different directions: walk front, turn left, and turn right. This is realized by first classifying the user’s intention to walk front or to change direction; if the user decides to change direction, a subsequent classification is performed to decide between turning left or right. The user’s mental command is conditionally executed, taking into account the possibility of obstacle collision. All five subjects were able to successfully complete the 3-way navigation task using brain signals while mounted in the exoskeleton. We observed on average a 10.2% decrease in overall task completion time compared to the baseline protocol.
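
    The cascade described here is simple enough to state in code. The sketch below is a hypothetical illustration, not the authors' implementation: two binary decoders are wired into the three-way command logic, with a placeholder obstacle check gating execution. The decoder callables and the feature vector are stand-ins.

    import numpy as np

    def cascaded_command(eeg_features, walk_vs_turn, left_vs_right, path_is_clear):
        """Map one EEG feature vector to an exoskeleton command.

        walk_vs_turn and left_vs_right are binary decoders returning 0 or 1;
        path_is_clear models the conditional execution on obstacle collision.
        """
        if walk_vs_turn(eeg_features) == 0:      # first stage: walk or turn?
            command = "walk_front"
        elif left_vs_right(eeg_features) == 0:   # second stage, only if turning
            command = "turn_left"
        else:
            command = "turn_right"
        return command if path_is_clear(command) else "stop"

    # Toy usage with dummy decoders: user intends to turn, decoder picks left.
    print(cascaded_command(np.zeros(8), lambda f: 1, lambda f: 0, lambda c: True))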