Autonomy Infused Teleoperation with Application to BCI Manipulation
Robot teleoperation systems face a common set of challenges including
latency, low-dimensional user commands, and asymmetric control inputs. User
control with Brain-Computer Interfaces (BCIs) exacerbates these problems
through especially noisy and erratic low-dimensional motion commands due to the
difficulty in decoding neural activity. We introduce a general framework to
address these challenges through a combination of computer vision, user intent
inference, and arbitration between the human input and autonomous control
schemes. Adjustable levels of assistance allow the system to balance the
operator's capabilities and feelings of comfort and control while compensating
for a task's difficulty. We present experimental results demonstrating
significant performance improvement using the shared-control assistance
framework on adapted rehabilitation benchmarks with two subjects implanted with
intracortical brain-computer interfaces controlling a seven degree-of-freedom
robotic manipulator as a prosthetic. Our results further indicate that shared
assistance mitigates perceived user difficulty and even enables successful
performance on previously infeasible tasks. We showcase the extensibility of
our architecture with applications to quality-of-life tasks such as opening a
door, pouring liquids from containers, and manipulation with novel objects in
densely cluttered environments.
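The arbitration between human input and autonomous control described in this abstract can be illustrated with a simple confidence-weighted blend. This is a minimal sketch, not the paper's actual method: the function name `arbitrate`, the `alpha_max` cap, and the linear blending policy are all assumptions introduced for illustration.

```python
import numpy as np

def arbitrate(u_human, u_auto, confidence, alpha_max=0.8):
    """Blend a noisy human command with an autonomous policy.

    `confidence` in [0, 1] is the intent-inference confidence;
    the assistance level scales with it, capped at alpha_max so
    the operator always retains some direct control.
    """
    alpha = alpha_max * confidence  # adjustable level of assistance
    return alpha * np.asarray(u_auto) + (1.0 - alpha) * np.asarray(u_human)

# Example: a noisy 3-D velocity command from a BCI decoder, blended
# with an autonomous policy that points toward the inferred goal.
u_bci = np.array([0.2, -0.9, 0.1])   # decoded (noisy) user command
u_goal = np.array([0.0, -1.0, 0.0])  # autonomous command toward goal
u_cmd = arbitrate(u_bci, u_goal, confidence=0.75)
```

With zero confidence the user's command passes through unmodified; with full confidence the output is still 20% user-driven, reflecting the abstract's emphasis on preserving the operator's feeling of control.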
JNER at 15 years: analysis of the state of neuroengineering and rehabilitation.
On JNER's 15th anniversary, this editorial analyzes the state of the field of neuroengineering and rehabilitation. I first discuss some ways that the nature of neurorehabilitation research has evolved in the past 15 years, based on my perspective as editor-in-chief of JNER and a researcher in the field. I highlight the increasing reliance on advanced technologies, improved rigor and openness of research, and three related new paradigms (wearable devices, the Cybathlon competition, and human augmentation studies), indicators that neurorehabilitation is squarely in the age of wearability. I then briefly speculate on how the field might make progress going forward, highlighting the need for new models of training and learning driven by big data, better personalization and targeting, and an increase in the quantity and quality of usability and uptake studies to improve translation.
Brain-Computer Interface meets ROS: A robotic approach to mentally drive telepresence robots
This paper shows and evaluates a novel approach to integrate a non-invasive
Brain-Computer Interface (BCI) with the Robot Operating System (ROS) to
mentally drive a telepresence robot. Controlling a mobile device with human
brain signals can improve the quality of life of people with severe physical
disabilities or of elderly people with limited mobility. The BCI user can
actively interact with relatives and friends located in different rooms
through a video streaming connection to the robot. To
facilitate the control of the robot via BCI, we explore new ROS-based
algorithms for navigation and obstacle avoidance, making the system safer and
more reliable. To this end, the robot exploits two maps of the environment,
one for localization and one for navigation; both are also available to the
BCI user for monitoring the robot's position while it moves. As
demonstrated by the experimental results, the user's cognitive workload is
reduced, decreasing the number of commands necessary to complete the task and
helping him/her to keep attention for longer periods of time.Comment: Accepted in the Proceedings of the 2018 IEEE International Conference
on Robotics and Automatio
A survey on bio-signal analysis for human-robot interaction
The use of bio-signal analysis in human-robot interaction (HRI) is rapidly increasing, with urgent demand in various applications including health care, rehabilitation, research, technology, and manufacturing. Despite several state-of-the-art bio-signal analyses in HRI research, it remains unclear which is best. This paper first discusses robotic systems that should be prioritized for the rehabilitation and aid of amputees and people with disabilities; it then reviews the feature extraction approaches currently in use, which fall into three main domains (time, frequency, and time-frequency). Each domain is discussed along with its benefits and drawbacks, and finally a new strategy for robotic systems is recommended.
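The three feature-extraction domains the survey contrasts can be made concrete with a short sketch. This is an illustrative example, not code from the survey: the function names and the particular features chosen (MAV, RMS, zero crossings; mean and median frequency) are common bio-signal features used here as assumptions.

```python
import numpy as np

def time_domain_features(x):
    """Classic time-domain features for a bio-signal window:
    mean absolute value, root mean square, zero-crossing count."""
    x = np.asarray(x, dtype=float)
    mav = np.mean(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    zc = int(np.sum(np.signbit(x[:-1]) != np.signbit(x[1:])))
    return {"MAV": mav, "RMS": rms, "ZC": zc}

def frequency_domain_features(x, fs):
    """Frequency-domain features: mean and median power frequency
    computed from the signal's power spectrum."""
    x = np.asarray(x, dtype=float)
    psd = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mnf = np.sum(freqs * psd) / np.sum(psd)           # mean frequency
    cum = np.cumsum(psd)
    mdf = freqs[np.searchsorted(cum, cum[-1] / 2.0)]  # median frequency
    return {"MNF": mnf, "MDF": mdf}
```

Time-frequency features (the survey's third domain) would follow the same pattern but apply the spectral computation over short sliding windows, e.g. via a short-time Fourier transform or wavelet decomposition.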
Autonomous Grasping of 3-D Objects by a Vision-Actuated Robot Arm using Brain-Computer Interface
A major drawback of Brain-Computer Interface-based robotic manipulation is the complex trajectory planning of the robot arm that the user must carry out to reach and grasp an object. The present paper proposes an intelligent solution to this problem by incorporating a novel Convolutional Neural Network (CNN)-based grasp detection network that enables the robot to reach and grasp the desired object (including overlapping objects) autonomously using an RGB-D camera. The network performs simultaneous object and grasp detection to affiliate each estimated grasp with its corresponding object. The subject uses motor-imagery brain signals to control the pan and tilt angles of an RGB-D camera mounted on a robot link, bringing the desired object into its field of view as presented on a display screen, while the objects appearing on the screen are selected using the P300 brain pattern. The robot uses inverse kinematics along with the RGB-D camera information to autonomously reach the selected object, which is then grasped using the proposed grasping strategy. The overall BCI system significantly outperforms comparable systems that rely on manual trajectory planning. The overall accuracy, steady-state error, and settling time of the proposed system are 93.4%, 0.05%, and 15.92 s, respectively. The system also substantially reduces the workload of the operating subjects in comparison to approaches based on manual trajectory planning for reaching and grasping.
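The inverse-kinematics step mentioned in this abstract can be sketched for the simplest case. The paper's arm has seven degrees of freedom and would use a numerical solver; the analytic two-link planar solution below is only a minimal stand-in to show what "computing joint angles from a target position" means. The function name and link lengths are assumptions.

```python
import numpy as np

def two_link_ik(x, y, l1=0.3, l2=0.25):
    """Analytic inverse kinematics for a planar two-link arm:
    returns elbow-down joint angles (q1, q2) reaching point (x, y)."""
    r2 = x * x + y * y
    c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)  # law of cosines
    if abs(c2) > 1.0:
        raise ValueError("target out of reach")
    q2 = np.arccos(c2)                               # elbow angle
    q1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(q2),
                                       l1 + l2 * np.cos(q2))
    return q1, q2
```

A real BCI-driven system would feed the 3-D object position recovered from the RGB-D camera into such a solver (generalized to the full arm) instead of asking the user to plan the trajectory.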
Impact of Shared Control Modalities on Performance and Usability of Semi-autonomous Prostheses
Semi-autonomous (SA) control of upper-limb prostheses can improve the performance and decrease the cognitive burden of a user. In this approach, a prosthesis is equipped with additional sensors (e.g., computer vision) that provide contextual information and enable the system to accomplish some tasks automatically. Autonomous control is fused with the volitional input of a user to compute the commands that are sent to the prosthesis. Although several promising prototypes demonstrating the potential of this approach have been presented, methods to integrate the two control streams (i.e., autonomous and volitional) have not been systematically investigated. In the present study, we implemented three shared control modalities (i.e., sequential, simultaneous, and continuous) and compared their performance, as well as the cognitive and physical burdens imposed on the user. In the sequential approach, the volitional input disabled the autonomous control. In the simultaneous approach, the volitional input to a specific degree of freedom (DoF) activated autonomous control of other DoFs, whereas in the continuous approach, autonomous control was always active except for the DoFs controlled by the user. The experiment was conducted with ten able-bodied subjects, who used an SA prosthesis to perform reach-and-grasp tasks while reacting to audio cues (dual tasking). The results demonstrated that, compared to the manual baseline (volitional control only), all three SA modalities accomplished the task in a shorter time and required less volitional control input. The simultaneous SA modality performed worse than the sequential and continuous SA approaches. When systematic errors were introduced in the autonomous controller to generate a mismatch between the goals of the user and controller, the performance of the SA modalities decreased substantially, even below the manual baseline. The sequential SA scheme was the least affected by such errors.
The present study demonstrates that the specific approach for integrating volitional and autonomous control is indeed an important factor that significantly affects performance and physical and cognitive load, and it should therefore be considered when designing SA prostheses.
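The three shared-control modalities defined in this abstract can be expressed as per-DoF fusion rules. The sketch below is one reading of the abstract's descriptions, not the authors' implementation; the function name `fuse` and the convention that a nonzero volitional command marks a DoF as user-driven are assumptions.

```python
import numpy as np

def fuse(u_vol, u_auto, modality):
    """Per-DoF fusion of volitional and autonomous commands.

    sequential  : any volitional activity disables autonomy entirely
    simultaneous: autonomy drives the other DoFs, but only while the
                  user is actively driving at least one DoF
    continuous  : autonomy always drives every DoF the user is not
    """
    u_vol = np.asarray(u_vol, dtype=float)
    u_auto = np.asarray(u_auto, dtype=float)
    active = u_vol != 0.0  # DoFs the user is currently driving
    if modality == "sequential":
        return u_vol if active.any() else u_auto
    if modality == "simultaneous":
        return np.where(active, u_vol, u_auto if active.any() else 0.0)
    if modality == "continuous":
        return np.where(active, u_vol, u_auto)
    raise ValueError(f"unknown modality: {modality}")
```

Writing the rules this way makes the study's finding plausible: under a goal mismatch, the sequential rule is the only one in which the user's input fully overrides the (wrong) autonomous commands on every DoF.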