    Hybrid brain/neural interface and autonomous vision-guided whole-arm exoskeleton control to perform activities of daily living (ADLs)

    Background: The aging of the population and the progressive increase of life expectancy in developed countries are leading to a high incidence of age-related cerebrovascular diseases, which affect people's motor and cognitive capabilities and may result in the loss of arm and hand functions. Such conditions have a detrimental impact on people's quality of life. Assistive robots have been developed to help people with motor or cognitive disabilities perform activities of daily living (ADLs) independently. Most of the robotic systems proposed in the state of the art for assisting with ADLs are external manipulators and exoskeletal devices. The main objective of this study is to compare the performance of a hybrid EEG/EOG interface for performing ADLs when the user controls an exoskeleton rather than an external manipulator.

    Methods: Ten impaired participants (5 males and 5 females, mean age 52 +/- 16 years) were instructed to use both systems to perform a drinking task and a pouring task comprising multiple sub-tasks. For each device, two modes of operation were studied: synchronous mode (the user received a visual cue indicating the sub-task to be performed at each time) and asynchronous mode (the user started and finished each of the sub-tasks independently). Control was considered fluent when the time for successful initializations remained below 3 s, and reliable when it remained below 5 s. The NASA-TLX questionnaire was used to evaluate the task workload. For the trials involving the exoskeleton, a custom Likert-scale questionnaire was used to evaluate the user's experience in terms of perceived comfort, safety, and reliability.

    Results: All participants were able to control both systems fluently and reliably. However, the results suggest better performance of the exoskeleton over the external manipulator (75% of successful initializations remained below 3 s for the exoskeleton and below 5 s for the external manipulator).

    Conclusions: Although the results of our study in terms of fluency and reliability of EEG control suggest better performance of the exoskeleton over the external manipulator, these results cannot be considered conclusive, due to the heterogeneity of the population under test and the relatively limited number of participants.

    This study was funded by the European Commission under the project AIDE (G.A. no: 645322), by the Spanish Ministry of Science and Innovation through the projects PID2019-108310RB-I00 and PLEC2022-009424, and by the Ministry of Universities and the European Union, "financed by European Union-Next Generation EU", through a Margarita Salas grant for the training of young doctors.

    CatalĂĄn, JM.; Trigili, E.; Nann, M.; Blanco-Ivorra, A.; Lauretti, C.; Cordella, F.; Ivorra, E.... (2023). Hybrid brain/neural interface and autonomous vision-guided whole-arm exoskeleton control to perform activities of daily living (ADLs). Journal of NeuroEngineering and Rehabilitation. 20(1):1-16. https://doi.org/10.1186/s12984-023-01185-w
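
    The fluency and reliability criteria above reduce to simple threshold counts over the measured initialization times. Below is a minimal Python sketch of that computation; the init_times_s values are made-up illustrative data, not the study's measurements.

        # Hypothetical example data: seconds until successful sub-task
        # initialization (NOT the study's measurements).
        init_times_s = [1.8, 2.4, 4.1, 2.9, 3.6, 2.2, 4.8, 2.7]

        # Fluent control: initialization below 3 s; reliable: below 5 s.
        fluent = sum(t < 3.0 for t in init_times_s) / len(init_times_s)
        reliable = sum(t < 5.0 for t in init_times_s) / len(init_times_s)
        print(f"fluent: {fluent:.0%}, reliable: {reliable:.0%}")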

    Movement Onset Detection and Target Estimation for Robot-Aided Arm Training

    This paper presents a motion intention estimation algorithm based on recordings of joint torques, joint positions, electromyography, eye tracking, and contextual information. It is intended to support virtual-reality-based robotic arm rehabilitation training. The algorithm first detects the onset of a reaching motion using joint torques and electromyography. It then predicts the motion target using a combination of eye tracking and context, and activates robotic assistance toward the target. The algorithm was first validated offline with 12 healthy subjects, then in a real-time robot control setting with 3 healthy subjects. In offline cross-validation, onset was detected using torques and electromyography 116 ms prior to detectable changes in joint positions. Furthermore, it was possible to successfully predict a majority of motion targets, with accuracy increasing over the course of the motion. Results were slightly worse in online validation, but nonetheless show great potential for real-time use with stroke patients.
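
    The abstract does not spell out the detector, but a common way to realize torque/EMG onset detection is to flag the first sample where either signal exceeds a rest-calibrated threshold for several consecutive samples. A minimal Python sketch under that assumption (function name and parameters are illustrative, not the paper's method):

        import numpy as np

        def detect_onset(torque, emg, torque_thresh, emg_thresh, n_consec=5):
            """Return the first index where |torque| or |EMG| exceeds its
            rest-calibrated threshold for n_consec consecutive samples
            (both inputs are 1-D NumPy arrays); None if no onset is found."""
            active = (np.abs(torque) > torque_thresh) | (np.abs(emg) > emg_thresh)
            run = 0
            for i, is_active in enumerate(active):
                run = run + 1 if is_active else 0
                if run >= n_consec:
                    return i - n_consec + 1
            return None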

    Pupil Position by an Improved Technique of YOLO Network for Eye Tracking Application

    Eye gaze following is the real-time collection of information about a person's eye movements and the direction of their gaze. Eye gaze trackers are devices that measure the locations of the pupils to detect and track changes in the direction of the user's gaze. There are numerous applications for analyzing eye movements, from psychological studies to human-computer interaction-based systems and interactive robotics controls. Real-time eye gaze monitoring requires an accurate and reliable iris center localization technique. In this study, deep learning is used to construct a pupil tracking approach for wearable eye trackers. This pupil tracking method uses the deep-learning You Only Look Once (YOLO) model to accurately estimate and anticipate the pupil's central location under conditions of bright, natural light (visible to the naked eye). Testing pupil tracking performance with the upgraded YOLOv7 yields an accuracy rate of 98.50% and a precision rate close to 96.34% using PyTorch.
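
    A detector of this kind typically returns a bounding box around the pupil, from which the pupil center is taken as the box midpoint. A minimal sketch of that post-processing step, assuming detections arrive as (x1, y1, x2, y2, confidence) tuples from some hypothetical YOLO inference call:

        def pupil_center(detections, min_conf=0.5):
            """Pick the most confident pupil box and return its center in
            pixels; detections are (x1, y1, x2, y2, confidence) tuples."""
            boxes = [d for d in detections if d[4] >= min_conf]
            if not boxes:
                return None  # no sufficiently confident pupil detection
            x1, y1, x2, y2, _ = max(boxes, key=lambda d: d[4])
            return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

        # Hypothetical detector output for one eye-camera frame:
        print(pupil_center([(310, 220, 330, 242, 0.91), (20, 30, 40, 50, 0.4)]))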

    Co-adaptive control strategies in assistive Brain-Machine Interfaces

    A large number of people with severe motor disabilities cannot access any of the available control inputs of current assistive products, which typically rely on residual motor functions. These patients are therefore unable to fully benefit from existing assistive technologies, including communication interfaces and assistive robotics. In this context, electroencephalography-based Brain-Machine Interfaces (BMIs) offer a potential non-invasive solution to exploit a non-muscular channel for communication and control of assistive robotic devices, such as a wheelchair, a telepresence robot, or a neuroprosthesis. Still, non-invasive BMIs currently suffer from limitations, such as lack of precision, robustness, and comfort, which prevent their practical implementation in assistive technologies. The goal of this PhD research is to produce scientific and technical developments that advance the state of the art of assistive interfaces and service robotics based on BMI paradigms. Two main research paths toward the design of effective control strategies were considered in this project. The first is the design of hybrid systems, based on combining the BMI with gaze control, which is a long-lasting motor function in many paralyzed patients. Such an approach increases the degrees of freedom available for control. The second approach consists in the inclusion of adaptive techniques in the BMI design. This transforms robotic tools and devices into active assistants able to co-evolve with the user and learn new rules of behavior to solve tasks, rather than passively executing external commands. Following these strategies, the contributions of this work can be categorized based on the type of mental signal exploited for control. These include: 1) the use of active signals for the development and implementation of hybrid eye-tracking and BMI control policies, for both communication and control of robotic systems; 2) the exploitation of passive mental processes to increase the adaptability of an autonomous controller to the user's intention and psychophysiological state, in a reinforcement learning framework; 3) the integration of active and passive brain control signals, to achieve adaptation within the BMI architecture at the level of feature extraction and classification.
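
    The reinforcement-learning idea in the second contribution can be pictured as a toy value-learning loop: the assistant tries candidate actions and reinforces those that the decoded mental response rewards. Everything below is an illustrative assumption; the reward is simulated, not a decoded EEG signal.

        import numpy as np

        rng = np.random.default_rng(0)
        q = np.zeros(3)          # value estimate per candidate action
        alpha, eps = 0.1, 0.1    # learning rate, exploration rate

        for _ in range(200):
            # Epsilon-greedy choice among candidate assistive actions.
            a = rng.integers(3) if rng.random() < eps else int(np.argmax(q))
            # Simulated reward standing in for a decoded passive response
            # (e.g., absence of an error-related potential).
            reward = 1.0 if a == 2 else 0.0
            q[a] += alpha * (reward - q[a])   # incremental value update

        print(q)  # the action matching the user's intent should dominate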

    A Framework for Controlling Wheelchair Motion by using Gaze Information

    Users with severe motor disabilities are unable to control their wheelchair using a standard joystick, so an alternative control input is needed. In this paper, a method to enable severely impaired users to control a wheelchair via gaze information is proposed. Since such an input significantly increases the navigation burden on the user, an assistive navigation platform is also proposed to reduce that burden. First, user information is inferred using a camera and a bite-like switch. Then, information about the environment is obtained using a combination of laser and Kinect sensors. Finally, information from both the environment and the user is analyzed to decide on a final control operation that accords with the user's intention and is safe from collision. Experimental results demonstrate the feasibility of the proposed approach.
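
    The fusion step described above amounts to vetoing gaze commands that would cause a collision. A minimal sketch of such a decision rule, with the direction labels and clearance threshold as illustrative assumptions rather than the paper's implementation:

        def safe_command(gaze_direction, clearances_m, min_clear=0.8):
            """gaze_direction: 'left' | 'forward' | 'right';
            clearances_m: closest obstacle distance (m) per direction,
            as fused from the laser and Kinect sensors."""
            if clearances_m.get(gaze_direction, 0.0) >= min_clear:
                return gaze_direction   # user's intention is safe to execute
            return "stop"               # collision risk: veto the motion

        print(safe_command("forward", {"left": 2.0, "forward": 0.4, "right": 1.5}))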

    Upper-limb Kinematic Analysis and Artificial Intelligent Techniques for Neurorehabilitation and Assistive Environments

    Stroke, one of the leading causes of death and disability around the world, usually affects the motor cortex, causing weakness or paralysis in the limbs of one side of the body. Research efforts in neurorehabilitation technology have focused on the development of robotic devices to restore motor and cognitive function in impaired individuals, with the potential to deliver high-intensity and motivating therapy. End-effector-based devices have become a common tool in upper-limb neurorehabilitation due to the ease of adapting them to patients. However, they are unable to measure the joint movements during the exercise. Thus, the first part of this thesis focuses on the development of a kinematic reconstruction algorithm that can be used in a real rehabilitation environment without disturbing the normal patient-clinician interaction. Building on an algorithm found in the literature that presents some instabilities, a new algorithm is developed. The proposed algorithm is the first able to estimate online not only the upper-limb joints but also trunk compensation, using only two non-invasive wearable devices placed on the shoulder and upper arm of the patient. This new tool allows the therapist to perform a comprehensive assessment combining the range of movement with clinical assessment scales.

    Knowing that the intensity of therapy improves neurorehabilitation outcomes, a ‘self-managed’ rehabilitation system can allow patients to continue rehabilitation at home. This thesis proposes a system to measure online a set of upper-limb rehabilitation gestures and intelligently evaluate the quality of the exercise performed by the patient. The assessment is performed by studying the performed movement as a whole as well as by evaluating each joint independently. The first results are promising and suggest that this system can become a new tool to complement clinical therapy at home and improve rehabilitation outcomes.

    Finally, severe motor impairments can remain after the rehabilitation process. Thus, a technological solution for these patients and for people with severe motor disabilities is proposed. An intelligent environmental control interface is developed with the ability to adapt its scan control to the residual capabilities of the user. Furthermore, the system estimates the intention of the user from environmental information and the user's behavior, helping with navigation through the interface and improving the user's independence at home.
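
    One step the kinematic reconstruction implies is recovering a joint rotation from the orientations of the two wearable sensors: the upper-arm orientation expressed in the shoulder sensor's frame yields shoulder joint angles. A minimal sketch of that relative-orientation idea with SciPy, using made-up quaternions; this is not the thesis's actual algorithm.

        from scipy.spatial.transform import Rotation as R

        q_shoulder = R.from_quat([0.0, 0.0, 0.0, 1.0])       # shoulder sensor
        q_upper_arm = R.from_quat([0.0, 0.259, 0.0, 0.966])  # upper-arm sensor

        # Upper-arm orientation relative to the shoulder frame.
        relative = q_shoulder.inv() * q_upper_arm
        angles_deg = relative.as_euler("zyx", degrees=True)
        print(angles_deg)  # ~30 deg about y here: a shoulder elevation estimate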

    Combining brain-computer interfaces and assistive technologies: state-of-the-art and challenges

    In recent years, new research has brought the field of EEG-based Brain-Computer Interfacing (BCI) out of its infancy and into a phase of relative maturity through many demonstrated prototypes such as brain-controlled wheelchairs, keyboards, and computer games. With this proof-of-concept phase in the past, the time is now ripe to focus on the development of practical BCI technologies that can be brought out of the lab and into real-world applications. In particular, we focus on the prospect of improving the lives of countless disabled individuals through a combination of BCI technology with existing assistive technologies (AT). In pursuit of more practical BCIs for use outside of the lab, in this paper we identify four application areas where disabled individuals could greatly benefit from advancements in BCI technology, namely “Communication and Control”, “Motor Substitution”, “Entertainment”, and “Motor Recovery”. We review the current state of the art and possible future developments, while discussing the main research issues in these four areas. In particular, we expect the most progress in the development of technologies such as hybrid BCI architectures, user-machine adaptation algorithms, the exploitation of users’ mental states for BCI reliability and confidence measures, the incorporation of principles from human-computer interaction (HCI) to improve BCI usability, and the development of novel BCI technology, including better EEG devices.

    A multi-modal perception based assistive robotic system for the elderly

    In this paper, we present a multi-modal perception based framework to realize a non-intrusive domestic assistive robotic system. It is non-intrusive in that it only starts interacting with a user when it detects the user's intention to do so. All the robot's actions are based on multi-modal perception, which includes user detection based on RGB-D data, detection of the user's intention for interaction from RGB-D and audio data, and communication via user-distance-mediated speech recognition. The use of multi-modal cues in different parts of the robotic activity paves the way to successful robotic runs (94% success rate). Each presented perceptual component is systematically evaluated using an appropriate dataset and evaluation metrics. Finally, the complete system is fully integrated on the PR2 robotic platform and validated through system sanity-check runs and user studies with the help of 17 volunteer elderly participants.
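
    The non-intrusive gating described above can be pictured as a conjunction of perceptual cues: the robot engages only when the user is detected, roughly facing it, and speaking. A minimal sketch with assumed cue names and thresholds, not the paper's actual pipeline:

        def intends_interaction(user_visible, head_yaw_deg, voice_activity):
            """Fuse RGB-D and audio cues into a binary engage decision:
            user detected, head roughly oriented toward the robot, and
            voice activity present."""
            facing = abs(head_yaw_deg) < 20.0
            return user_visible and facing and voice_activity

        if intends_interaction(True, 12.0, True):
            print("engage: start distance-mediated speech interaction")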