
    Considerations for the Control Design of Augmentative Robots

    Robotic systems that are intended to augment human capabilities commonly require the use of semi-autonomous control and artificial sensing, while at the same time aiming to empower the user to make decisions and take actions. This work identifies principles and techniques from the literature that can help to resolve this apparent contradiction. It is postulated that augmentative robots must function as tools that have partial agency, as collaborative agents that provide conditional transparency, and, ideally, as extensions of the human body. Comment: 7 pages. Presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021) Workshop on Building and Evaluating Ethical Robotic Systems, Prague, Czech Republic, 28-30 September 2021.
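
    The "partial agency" principle described above can be made concrete with a shared-control arbitration rule. The following minimal Python sketch (all names and numbers are illustrative, not taken from the paper) blends a user's velocity command with an autonomous suggestion, capping the autonomy's weight so the human always retains final authority.

```python
import numpy as np

def blend_commands(user_cmd, autonomous_cmd, intent_confidence, alpha_max=0.7):
    """Blend a user's velocity command with an autonomous suggestion.

    The arbitration weight grows with the robot's confidence in its inferred
    goal, but is capped at alpha_max so the user keeps final authority
    (a simple reading of the 'partial agency' idea; the cap value is an
    assumption for illustration).
    """
    alpha = min(max(intent_confidence, 0.0), alpha_max)
    return (1.0 - alpha) * np.asarray(user_cmd) + alpha * np.asarray(autonomous_cmd)

# Example: the user steers mostly forward while the assistance nudges sideways
# toward an inferred grasp target.
user_cmd = [0.10, 0.05, 0.0]        # m/s, operator input
autonomous_cmd = [0.08, 0.12, 0.0]  # m/s, assistance toward the inferred goal
print(blend_commands(user_cmd, autonomous_cmd, intent_confidence=0.5))
```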

    Design of virtual reality environments applicable to assistive robotics systems: a literature review

    Virtual Reality (VR) environments can be applied to assistive robotics to improve the effectiveness of the rehabilitation process and the user's perceived experience, thanks to their novel nature, which helps keep patients engaged while they recover their motor functions. This literature review aims to analyze design principles of VR environments developed for upper limb rehabilitation, identifying features related to the peripheral and central nervous systems and the types of information fed back to the user to increase immersion, and examining their impact on the user's performance and experience during treatment. A total of 32 articles published in Scopus, IEEE, PubMed, and Web of Science in the last four years were reviewed. We present the article selection process, the division into the concepts introduced above, and the guidelines that can be considered for the design of VR environments applicable to assistive robots for upper limb rehabilitation.

    MUNDUS project: MUltimodal neuroprosthesis for daily upper limb support

    Background: MUNDUS is an assistive framework for recovering direct interaction capability of severely motor-impaired people based on arm reaching and hand functions. It aims at achieving personalization, modularity, and maximization of the user's direct involvement in assistive systems. To this end, MUNDUS exploits any residual control of the end-user and can be adapted to the level of severity or to the progression of the disease, allowing the user to voluntarily interact with the environment. MUNDUS target pathologies are high-level spinal cord injury (SCI) and neurodegenerative and genetic neuromuscular diseases, such as amyotrophic lateral sclerosis, Friedreich ataxia, and multiple sclerosis (MS). The system can alternatively be driven by residual voluntary muscular activation, head/eye motion, or brain signals. MUNDUS modularly combines an antigravity, lightweight and non-cumbersome exoskeleton, closed-loop controlled Neuromuscular Electrical Stimulation for arm and hand motion, and potentially a motorized hand orthosis for grasping interactive objects. Methods: The requirements and interaction tasks were defined through a focus group with experts and a questionnaire with 36 potential end-users. Five end-users (3 SCI and 2 MS) tested the system in the configuration suitable to their specific level of impairment. They performed two exemplary tasks: reaching different points in the working volume and drinking. Three experts evaluated the execution of each assisted sub-action on a 3-level score (from 0, unsuccessful, to 2, completely functional). Results: The functionality of all modules was successfully demonstrated. User intention was detected with 100% success. Averaging all subjects and tasks, the minimum evaluation score obtained was 1.13 ± 0.99 for the release of the handle during the drinking task, whilst all the other sub-actions achieved a mean value above 1.6. All users but one subjectively perceived the usefulness of the assistance and could easily control the system. Donning time ranged from 6 to 65 minutes, scaling with the configuration complexity. Conclusions: The MUNDUS platform provides functional assistance for daily life activities; the integration of modules depends on the user's needs, the functionality of the system has been demonstrated for all the possible configurations, and the preliminary assessment of usability and acceptance is promising.
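
    MUNDUS's modularity hinges on routing control to whatever residual channel the user can still command. The sketch below (hypothetical function and flag names, not taken from the MUNDUS software) illustrates one simple way such a priority-based selection could be expressed, preferring residual voluntary muscle activity over head/eye motion and brain signals as described in the abstract.

```python
def select_control_channel(residual_emg_usable, gaze_tracking_usable, bci_available):
    """Pick an input channel in a MUNDUS-like priority order:
    residual voluntary muscle activity > head/eye motion > brain signals.

    The priority order follows the abstract; the exact selection logic used
    by MUNDUS is not given there and is assumed here for illustration.
    """
    if residual_emg_usable:
        return "residual_emg"
    if gaze_tracking_usable:
        return "head_eye_motion"
    if bci_available:
        return "brain_signals"
    raise RuntimeError("no usable control channel for this user")

# A user with high-level SCI but preserved eye movement would be routed to gaze control.
print(select_control_channel(residual_emg_usable=False,
                             gaze_tracking_usable=True,
                             bci_available=True))
```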

    A gaze-contingent framework for perceptually-enabled applications in healthcare

    Patient safety and quality of care remain the focus of the smart operating room of the future. Some of the most influential factors with a detrimental effect are related to suboptimal communication among the staff, poor flow of information, staff workload and fatigue, ergonomics, and sterility in the operating room. While technological developments constantly transform the operating room layout and the interaction between surgical staff and machinery, a vast array of opportunities arises for the design of systems and approaches that can enhance patient safety and improve workflow and efficiency. The aim of this research is to develop a real-time gaze-contingent framework towards a "smart" operating suite that will enhance the operator's ergonomics by allowing perceptually-enabled, touchless and natural interaction with the environment. The main feature of the proposed framework is the ability to acquire and utilise the plethora of information provided by the human visual system to allow touchless interaction with medical devices in the operating room. In this thesis, a gaze-guided robotic scrub nurse, a gaze-controlled robotised flexible endoscope and a gaze-guided assistive robotic system are proposed. Firstly, the gaze-guided robotic scrub nurse is presented: surgical teams performed a simulated surgical task with the assistance of a robotic scrub nurse, which complements the human scrub nurse in the delivery of surgical instruments, following gaze selection by the surgeon. Then, the gaze-controlled robotised flexible endoscope is introduced: experienced endoscopists and novice users performed a simulated examination of the upper gastrointestinal tract using predominantly their natural gaze. Finally, a gaze-guided assistive robotic system is presented, which aims to facilitate activities of daily living. The results of this work provide valuable insights into the feasibility of integrating the developed gaze-contingent framework into clinical practice without significant workflow disruptions.
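
    A common building block for this kind of gaze-contingent interaction is dwell-time selection: an item is selected once the operator's fixation stays inside its screen region for long enough. The Python sketch below is a generic illustration of that mechanism, not the thesis's implementation; the region layout, dwell threshold, and data format are assumptions.

```python
DWELL_THRESHOLD_S = 1.0  # fixation duration required to confirm a selection (assumed)

def dwell_select(gaze_samples, regions, threshold=DWELL_THRESHOLD_S):
    """Return the id of the first region fixated for at least `threshold` seconds.

    gaze_samples: iterable of (timestamp_s, x, y) gaze points in screen coordinates.
    regions: dict mapping region id -> (xmin, ymin, xmax, ymax) bounding box.
    """
    dwell_start = {rid: None for rid in regions}
    for t, x, y in gaze_samples:
        for rid, (xmin, ymin, xmax, ymax) in regions.items():
            if xmin <= x <= xmax and ymin <= y <= ymax:
                if dwell_start[rid] is None:
                    dwell_start[rid] = t          # fixation on this region begins
                elif t - dwell_start[rid] >= threshold:
                    return rid                    # fixation held long enough: select
            else:
                dwell_start[rid] = None           # gaze left the region: reset its timer
    return None
```

    A selected region id would then be mapped to a robot action, for example asking a robotic scrub nurse to deliver the corresponding instrument.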

    Multimodality with Eye tracking and Haptics: A New Horizon for Serious Games?

    The goal of this review is to illustrate the emerging use of multimodal virtual reality that can benefit learning-based games. The review begins with an introduction to multimodal virtual reality in serious games, and we provide a brief discussion of why cognitive processes involved in learning and training are enhanced in immersive virtual environments. We first outline studies that have used eye tracking and haptic feedback independently in serious games, and then review some innovative applications that have combined eye tracking and haptic devices in order to provide applicable multimodal frameworks for learning-based games. Finally, some general conclusions are identified and clarified in order to advance current understanding of multimodal serious game production, as well as to explore possible areas for new applications.

    Usability of Upper Limb Electromyogram Features as Muscle Fatigue Indicators for Better Adaptation of Human-Robot Interactions

    Human-robot interaction (HRI) is the process of humans and robots working together to accomplish a goal, with the objective of making the interaction beneficial to humans. Closed-loop control and adaptability to individuals are some of the important acceptance criteria for human-robot interaction systems. While designing an HRI interaction scheme, it is important to understand the users of the system and evaluate the capabilities of humans and robots. An acceptable HRI solution is expected to adapt by detecting and responding to changes in the environment and its users. Hence, an adaptive robotic interaction requires better sensing of human performance parameters. Human performance is influenced by the state of muscular and mental fatigue during active interactions. Researchers in the field of human-robot interaction have been trying to improve the adaptability of the environment according to the physical state of the human participants. Existing human-robot interactions and robot-assisted trainings are designed without sufficiently considering the implications of fatigue for the users. Given this, identifying whether a better outcome can be achieved during robot-assisted training by adapting to individual muscular status, i.e. with respect to fatigue, is a novel area of research. This has potential applications in scenarios such as rehabilitation robotics. Since robots can deliver a large number of repetitions, they can be used to help stroke patients improve their impaired motor function through repetitive training exercises. The objective of this research is to explore a solution for a longer and less fatiguing robot-assisted interaction, which can adapt based on the muscular state of participants using fatigue indicators derived from electromyogram (EMG) measurements. In the initial part of this research, fatigue indicators from upper limb muscles of healthy participants were identified by analysing the electromyogram signals from the muscles as well as the kinematic data collected by the robot. The tasks were defined as point-to-point upper limb movements, which involved dynamic muscle contractions, while interacting with the HapticMaster robot. The study revealed quantitatively which muscles were involved in the exercise and which muscles were more fatigued. The results also indicated the potential of EMG and kinematic parameters to be used as fatigue indicators. A correlation analysis between EMG features and kinematic parameters revealed that the correlation coefficient was impacted by muscle fatigue. As an extension of this study, the EMG collected at the beginning of the task was also used to predict the type of point-to-point movement using a supervised machine learning algorithm based on Support Vector Machines. The results showed that movement intention could be detected with reasonably good accuracy within the initial milliseconds of the task. The final part of the research implemented a fatigue-adaptive algorithm based on the identified EMG features. An experiment was conducted with thirty healthy participants to test the effectiveness of this adaptive algorithm. The participants interacted with the HapticMaster robot following a progressive muscle strength training protocol similar to a standard sports science protocol for muscle strengthening.
The robotic assistance was altered according to the muscular state of participants, thus offering varying difficulty levels based on the state of fatigue or relaxation while performing the tasks. The results showed that the fatigue-based robotic adaptation resulted in a prolonged training interaction that involved many repetitions of the task. This study showed that, using fatigue indicators, it is possible to alter the level of challenge and thus increase the interaction time. In summary, the research undertaken during this PhD has successfully enhanced the adaptability of human-robot interaction. Apart from its potential use for muscle strength training in healthy individuals, the work presented in this thesis is applicable in a wide range of human-machine interaction research such as rehabilitation robotics. It has a potential application in robot-assisted upper limb rehabilitation training of stroke patients.
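
    One widely used EMG fatigue indicator is the decline in the median frequency of the signal's power spectrum during sustained effort. The sketch below is a generic illustration of such an indicator driving an adaptation rule; the thesis's exact EMG features, sampling settings, and thresholds are not stated in this abstract and are assumed here.

```python
import numpy as np

def median_frequency(emg_window, fs=1000.0):
    """Median frequency of an EMG window's power spectrum.

    A progressive drop in median frequency over repetitions is a classic sign
    of muscle fatigue; the sampling rate and window handling are assumptions.
    """
    spectrum = np.abs(np.fft.rfft(emg_window)) ** 2
    freqs = np.fft.rfftfreq(len(emg_window), d=1.0 / fs)
    cumulative = np.cumsum(spectrum)
    return freqs[np.searchsorted(cumulative, cumulative[-1] / 2.0)]

def adapt_assistance(baseline_mdf, current_mdf, assistance, step=0.1):
    """Raise robotic assistance (i.e. lower task difficulty) when fatigue is
    detected, and withdraw it again as the muscle recovers.

    The 10% / 3% thresholds are illustrative, not the thesis's values.
    """
    if current_mdf < 0.90 * baseline_mdf:      # marked drop: treat as fatigued
        assistance = min(1.0, assistance + step)
    elif current_mdf > 0.97 * baseline_mdf:    # back near baseline: recovered
        assistance = max(0.0, assistance - step)
    return assistance
```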

    In-home and remote use of robotic body surrogates by people with profound motor deficits

    By controlling robots comparable to the human body, people with profound motor deficits could potentially perform a variety of physical tasks for themselves, improving their quality of life. The extent to which this is achievable has been unclear due to the lack of suitable interfaces by which to control robotic body surrogates and a dearth of studies involving substantial numbers of people with profound motor deficits. We developed a novel, web-based augmented reality interface that enables people with profound motor deficits to remotely control a PR2 mobile manipulator from Willow Garage, a human-scale, wheeled robot with two arms. We then conducted two studies to investigate the use of robotic body surrogates. In the first study, 15 novice users with profound motor deficits from across the United States controlled a PR2 in Atlanta, GA to perform a modified Action Research Arm Test (ARAT) and a simulated self-care task. Participants achieved clinically meaningful improvements on the ARAT, and 12 of 15 participants (80%) successfully completed the simulated self-care task. Participants agreed that the robotic system was easy to use, was useful, and would provide a meaningful improvement in their lives. In the second study, one expert user with profound motor deficits had free use of a PR2 in his home for seven days. He performed a variety of self-care and household tasks, and also used the robot in novel ways. Taking both studies together, our results suggest that people with profound motor deficits can improve their quality of life using robotic body surrogates, and that they can gain benefit with only low-level robot autonomy and without invasive interfaces. However, methods to reduce the rate of errors and increase operational speed merit further investigation. Comment: 43 pages, 13 figures.
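
    The abstract's point that benefit was obtained with only low-level robot autonomy can be illustrated by the kind of low-level command such an interface ultimately issues. The rospy sketch below publishes a base velocity to a PR2-style controller; the topic name and rates are conventional assumptions, and the paper's actual web-based augmented reality interface is not reproduced here.

```python
# Requires a ROS 1 environment with rospy and geometry_msgs available.
import rospy
from geometry_msgs.msg import Twist

def drive_base(linear_x, angular_z, duration_s=2.0, topic="base_controller/command"):
    """Publish a constant base velocity for a fixed duration, then stop.

    The topic name follows the usual PR2 base-controller convention and is an
    assumption here; a web teleoperation interface would sit above commands
    like this one.
    """
    pub = rospy.Publisher(topic, Twist, queue_size=1)
    cmd = Twist()
    cmd.linear.x = linear_x
    cmd.angular.z = angular_z
    rate = rospy.Rate(10)  # keep a steady command stream while driving
    end_time = rospy.Time.now() + rospy.Duration(duration_s)
    while rospy.Time.now() < end_time and not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()
    pub.publish(Twist())  # zero velocity: stop the base

if __name__ == "__main__":
    rospy.init_node("surrogate_base_teleop")
    drive_base(0.2, 0.0)  # creep forward at 0.2 m/s for two seconds
```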