
    Design and Development of an Affordable Haptic Robot with Force-Feedback and Compliant Actuation to Improve Therapy for Patients with Severe Hemiparesis

    The study describes the design and development of a single degree-of-freedom haptic robot, Haptic Theradrive, for post-stroke arm rehabilitation for in-home and clinical use. The robot overcomes many of the weaknesses of its predecessor, the TheraDrive system, that used a Logitech steering wheel as the haptic interface for rehabilitation. Although the original TheraDrive system showed success in a pilot study, its wheel was not able to withstand the rigors of use. A new haptic robot was developed that functions as a drop-in replacement for the Logitech wheel. The new robot can apply larger forces in interacting with the patient, thereby extending the functionality of the system to accommodate low-functioning patients. A new software suite offers appreciably more options for tailored and tuned rehabilitation therapies. In addition to describing the design of the hardware and software, the paper presents the results of simulation and experimental case studies examining the system's performance and usability.

    Trajectory Deformations from Physical Human-Robot Interaction

    Robots are finding new applications where physical interaction with a human is necessary: manufacturing, healthcare, and social tasks. Accordingly, the field of physical human-robot interaction (pHRI) has leveraged impedance control approaches, which support compliant interactions between human and robot. However, a limitation of traditional impedance control is that, despite provisions for the human to modify the robot's current trajectory, the human cannot affect the robot's future desired trajectory through pHRI. In this paper, we present an algorithm for physically interactive trajectory deformations which, when combined with impedance control, allows the human to modulate both the actual and desired trajectories of the robot. Unlike related works, our method explicitly deforms the future desired trajectory based on forces applied during pHRI, but does not require constant human guidance. We present our approach and verify that this method is compatible with traditional impedance control. Next, we use constrained optimization to derive the deformation shape. Finally, we describe an algorithm for real-time implementation, and perform simulations to test the arbitration parameters. Experimental results demonstrate a reduction in the human's effort and an improvement in movement quality when compared to pHRI with impedance control alone.
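
    To make the deformation step concrete, here is a minimal sketch of how a measured interaction force could reshape the upcoming segment of the desired trajectory: a smooth deformation shape, obtained from a finite-difference smoothness penalty, is scaled by the applied force and added to the next waypoints. The function names, horizon length, and gain mu are illustrative assumptions rather than the paper's exact constrained-optimization solution.

```python
import numpy as np

def deformation_shape(n):
    """Smooth unit-peak shape over an n-waypoint horizon, built from a
    second-order finite-difference smoothness penalty so the deformation
    tapers to zero at both ends of the horizon."""
    A = np.zeros((n + 2, n))
    for i in range(n):
        A[i, i] = 1.0        # finite-difference (acceleration) stencil
        A[i + 1, i] = -2.0
        A[i + 2, i] = 1.0
    R = A.T @ A              # smoothness metric (positive definite)
    shape = np.linalg.solve(R, np.ones(n))
    return shape / np.max(np.abs(shape))

def deform_trajectory(desired, force, mu=0.05, horizon=20):
    """Shift the next `horizon` waypoints of `desired` (N x d array of
    positions) in the direction of the applied interaction `force`
    (length-d vector), scaled by the admittance-like gain `mu`."""
    deformed = desired.copy()
    n = min(horizon, len(desired))
    deformed[:n] += mu * np.outer(deformation_shape(n), force)
    return deformed

# Example: a 1-D straight-line desired trajectory nudged by a 2 N push.
desired = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
new_desired = deform_trajectory(desired, np.array([2.0]))
```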

    Autonomy Infused Teleoperation with Application to BCI Manipulation

    Robot teleoperation systems face a common set of challenges, including latency, low-dimensional user commands, and asymmetric control inputs. User control with Brain-Computer Interfaces (BCIs) exacerbates these problems through especially noisy and erratic low-dimensional motion commands due to the difficulty in decoding neural activity. We introduce a general framework to address these challenges through a combination of computer vision, user intent inference, and arbitration between the human input and autonomous control schemes. Adjustable levels of assistance allow the system to balance the operator's capabilities and feelings of comfort and control while compensating for a task's difficulty. We present experimental results demonstrating significant performance improvement using the shared-control assistance framework on adapted rehabilitation benchmarks with two subjects implanted with intracortical brain-computer interfaces controlling a seven-degree-of-freedom robotic manipulator as a prosthetic. Our results further indicate that shared assistance mitigates perceived user difficulty and even enables successful performance on previously infeasible tasks. We showcase the extensibility of our architecture with applications to quality-of-life tasks such as opening a door, pouring liquids from containers, and manipulation with novel objects in densely cluttered environments.
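
    A minimal sketch of the arbitration idea, assuming a simple cosine-similarity intent model and a linear blending rule; the function names and parameters below are hypothetical and do not reproduce the paper's actual vision-based inference or arbitration scheme.

```python
import numpy as np

def infer_goal_confidence(user_cmd, goal_dirs):
    """Crude intent inference: the confidence in each candidate goal is the
    clipped cosine similarity between the (noisy) user command direction and
    the unit direction toward that goal, normalized to sum to one."""
    u = user_cmd / (np.linalg.norm(user_cmd) + 1e-9)
    sims = np.clip(goal_dirs @ u, 0.0, None)
    if sims.sum() == 0.0:
        return np.full(len(goal_dirs), 1.0 / len(goal_dirs))
    return sims / sims.sum()

def arbitrate(user_cmd, auto_cmds, confidences, max_assist=0.8):
    """Blend the user's velocity command with the autonomous command for the
    most likely goal; assistance grows with confidence but is capped so the
    operator always retains some control authority."""
    g = int(np.argmax(confidences))
    alpha = min(max_assist, confidences[g])       # arbitration weight
    return alpha * auto_cmds[g] + (1.0 - alpha) * user_cmd

# Example: a noisy 3-D BCI command and two candidate grasp targets.
user_cmd = np.array([0.9, 0.1, 0.0])
goal_dirs = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
auto_cmds = 0.5 * goal_dirs                       # head straight for each goal
confidence = infer_goal_confidence(user_cmd, goal_dirs)
blended_cmd = arbitrate(user_cmd, auto_cmds, confidence)
```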

    Training modalities in robot-mediated upper limb rehabilitation in stroke: A framework for classification based on a systematic review

    Robot-mediated post-stroke therapy for the upper extremity dates back to the 1990s. Since then, a number of robotic devices have become commercially available. There is clear evidence that robotic interventions improve upper limb motor scores and strength, but these improvements are often not transferred to performance of activities of daily living. We wish to better understand why. Our systematic review of 74 papers focuses on the targeted stage of recovery, the part of the limb trained, the different modalities used, and the effectiveness of each. The review shows that most of the studies so far focus on training of the proximal arm for chronic stroke patients. Regarding training modalities, studies typically refer to active, active-assisted, and passive interaction. Robot therapy in active-assisted mode was associated with consistent improvements in arm function. More specifically, the use of HRI features stressing active contribution by the patient, such as EMG-modulated forces or a pushing force in combination with spring-damper guidance, may be beneficial. Our work also highlights that the current literature frequently lacks information regarding the mechanisms of physical human-robot interaction (HRI). It is often unclear how the different modalities are implemented by different research groups (using different robots and platforms). In order to obtain better and more reliable evidence of the usefulness of these technologies, it is recommended that the HRI be better described and documented, so that the work of various teams can be grouped and categorised, allowing more suitable approaches to be inferred. We propose a framework for categorisation of HRI modalities and features that will allow their therapeutic benefits to be compared.

    Overcoming barriers and increasing independence: service robots for elderly and disabled people

    This paper discusses the potential for service robots to overcome barriers and increase the independence of elderly and disabled people. It includes a brief overview of the existing uses of service robots by disabled and elderly people, outlines advances in technology that will make new uses possible, and provides suggestions for some of these new applications. The paper also considers the design and other conditions to be met for user acceptance. It further discusses the complementarity of assistive service robots and personal assistance, and considers the types of applications and users for which service robots are and are not suitable.

    Gaze-based teleprosthetic enables intuitive continuous control of complex robot arm use: Writing & drawing

    Eye tracking is a powerful means of assistive technology for people with movement disorders, paralysis, or amputations. We present a highly intuitive eye-tracking-controlled robot arm operating in 3-dimensional space based on the user's gaze target point, which enables tele-writing and drawing. Usability and intuitiveness were assessed in a "tele"-writing experiment with 8 subjects who learned to operate the system within minutes of first-time use. These subjects were naive to the system and the task, and had to write three letters on a whiteboard with a whiteboard pen attached to the robot arm's endpoint. They were instructed to imagine they were writing text with the pen and to look where the pen should go, and to write the letters as quickly and as accurately as possible, given a letter-size template. Subjects were able to perform the task with facility and accuracy, and movements of the arm did not interfere with subjects' ability to control their visual attention so as to enable smooth writing. Over five consecutive trials there was a significant decrease in the total time used and the total number of commands sent to move the robot arm from the first to the second trial, but no further improvement thereafter, suggesting that within writing 6 letters subjects had mastered the ability to control the system. Our work demonstrates that eye tracking is a powerful means to control robot arms in closed loop and in real time, outperforming other invasive and non-invasive approaches to Brain-Machine-Interfaces in terms of calibration time (<2 minutes), training time (<10 minutes), and interface technology costs. We suggest that gaze-based decoding of action intention may well become one of the most efficient ways to interface with robotic actuators - i.e. Brain-Robot-Interfaces - and become useful beyond paralysed and amputee users for the general teleoperation of robots and exoskeletons in human augmentation.
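
    As a rough illustration of closed-loop gaze control, the sketch below maps a fixation point to a velocity command for the pen tip using a proportional rule with a deadband and a speed cap; all names and parameter values are assumptions for illustration, not the cited system's implementation.

```python
import numpy as np

def gaze_to_endpoint_command(gaze_point, pen_tip, gain=1.5,
                             deadband=0.01, max_speed=0.15):
    """Map a 3-D gaze target (metres, robot frame) to an endpoint velocity
    command for the pen tip: proportional pursuit of the fixation point,
    with a deadband to ignore fixation jitter and a speed cap for safety."""
    error = np.asarray(gaze_point, dtype=float) - np.asarray(pen_tip, dtype=float)
    if np.linalg.norm(error) < deadband:
        return np.zeros(3)                  # close enough: hold the pen still
    velocity = gain * error
    speed = np.linalg.norm(velocity)
    if speed > max_speed:
        velocity *= max_speed / speed       # saturate the commanded speed
    return velocity

# Example: the tracker reports a fixation 4 cm to the right of the pen tip.
cmd = gaze_to_endpoint_command([0.54, 0.20, 0.30], [0.50, 0.20, 0.30])
```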

    Collaborative Control for a Robotic Wheelchair: Evaluation of Performance, Attention, and Workload

    Powered wheelchair users often struggle to drive safely and effectively, and in more critical cases can only get around when accompanied by an assistant. To address these issues, we propose a collaborative control mechanism that assists the user as and when they require help. The system uses a multiple-hypotheses method to predict the driver's intentions and, if necessary, adjusts the control signals to achieve the desired goal safely. The main emphasis of this paper is on a comprehensive evaluation, where we not only look at the system performance but, perhaps more importantly, characterise the user performance, in an experiment that combines eye tracking with a secondary task. Without assistance, participants experienced multiple collisions whilst driving around the predefined route. Conversely, when they were assisted by the collaborative controller, not only did they drive more safely, but they were able to pay less attention to their driving, resulting in a reduced cognitive workload. We discuss the importance of these results and their implications for other applications of shared control, such as brain-machine interfaces, where it could be used to compensate for both the low frequency and the low resolution of the user input.
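
    The sketch below illustrates one plausible form of the multiple-hypotheses idea, assuming a simple Bayesian update over candidate destinations and a confidence threshold before assistance kicks in; the functions and parameters are hypothetical and are not taken from the paper.

```python
import numpy as np

def update_hypotheses(prior, joystick_dir, goal_dirs, kappa=4.0):
    """Recursive Bayesian update over candidate destinations: the likelihood
    of each goal grows with the alignment (cosine similarity) between the
    latest joystick direction and the bearing toward that goal."""
    u = joystick_dir / (np.linalg.norm(joystick_dir) + 1e-9)
    likelihood = np.exp(kappa * (goal_dirs @ u))
    posterior = prior * likelihood
    return posterior / posterior.sum()

def assisted_command(joystick_cmd, planner_cmds, posterior, threshold=0.6):
    """Pass the driver's command through unchanged unless one hypothesis is
    sufficiently likely; then blend in the safe planner command for it."""
    g = int(np.argmax(posterior))
    if posterior[g] < threshold:
        return joystick_cmd
    w = float(posterior[g])
    return w * planner_cmds[g] + (1.0 - w) * joystick_cmd

# Example: two doorways ahead; the driver keeps steering toward the left one.
belief = np.array([0.5, 0.5])
goal_dirs = np.array([[0.7, 0.7], [0.7, -0.7]])
for _ in range(3):
    belief = update_hypotheses(belief, np.array([1.0, 0.9]), goal_dirs)
cmd = assisted_command(np.array([1.0, 0.9]), 0.4 * goal_dirs, belief)
```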

    Brain computer interface based robotic rehabilitation with online modification of task speed

    We present a systematic approach that enables online modification/adaptation of robot-assisted rehabilitation exercises by continuously monitoring intention levels of patients utilizing an electroencephalogram (EEG) based Brain-Computer Interface (BCI). In particular, we use Linear Discriminant Analysis (LDA) to classify event-related synchronization (ERS) and desynchronization (ERD) patterns associated with motor imagery; however, instead of providing a binary classification output, we utilize the posterior probabilities extracted from the LDA classifier as continuous-valued outputs to control a rehabilitation robot. Passive velocity field control (PVFC) is used as the underlying robot controller to map instantaneous levels of motor imagery during the movement to the speed of contour-following tasks. In other words, PVFC changes the speed of contour-following tasks with respect to the intention levels of motor imagery. PVFC also allows decoupling of the task and the speed of the task from each other, and ensures coupled stability of the overall robot-patient system. The proposed framework is implemented on AssistOn-Mobile, a holonomic mobile platform with series elastic actuation, and feasibility studies with healthy volunteers have been conducted to test the effectiveness of the proposed approach. By giving patients online control over the speed of the task, the proposed approach ensures active involvement of patients throughout exercise routines and has the potential to increase the efficacy of robot-assisted therapies.
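
    A minimal sketch of how a continuous classifier output could modulate task speed, assuming a linear mapping from LDA posterior probability to speed with exponential smoothing; this stands in for, and does not reproduce, the paper's PVFC-based speed modulation.

```python
import numpy as np

def intention_to_speed(posteriors, v_min=0.02, v_max=0.12, smooth=0.8):
    """Map a stream of LDA posterior probabilities of motor imagery (values
    in [0, 1]) to a low-pass-filtered speed for the contour-following task,
    so stronger imagery drives the exercise faster."""
    speeds, v = [], v_min
    for p in posteriors:
        target = v_min + (v_max - v_min) * float(np.clip(p, 0.0, 1.0))
        v = smooth * v + (1.0 - smooth) * target     # exponential smoothing
        speeds.append(v)
    return np.array(speeds)

# Example: classifier confidence rises as the patient engages in motor imagery.
posterior_stream = [0.2, 0.3, 0.55, 0.7, 0.85, 0.9]
speed_profile = intention_to_speed(posterior_stream)   # m/s along the contour
```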

    Feedback Control of an Exoskeleton for Paraplegics: Toward Robustly Stable Hands-free Dynamic Walking

    This manuscript presents control of a high-DOF fully actuated lower-limb exoskeleton for paraplegic individuals. The key novelty is the ability for the user to walk without the use of crutches or other external means of stabilization. We harness the power of modern optimization techniques and supervised machine learning to develop a smooth feedback control policy that provides robust velocity regulation and perturbation rejection. Preliminary evaluation of the stability and robustness of the proposed approach is demonstrated through the Gazebo simulation environment. In addition, preliminary experimental results with (complete) paraplegic individuals are included for the previous version of the controller.
    Comment: Submitted to IEEE Control Systems Magazine. This version addresses reviewers' concerns about the robustness of the algorithm and the motivation for using such an exoskeleton.