Combining brain-computer interfaces and assistive technologies: state-of-the-art and challenges
In recent years, new research has brought the field of EEG-based Brain-Computer Interfacing (BCI) out of its infancy and into a phase of relative maturity through many demonstrated prototypes such as brain-controlled wheelchairs, keyboards, and computer games. With this proof-of-concept phase in the past, the time is now ripe to focus on the development of practical BCI technologies that can be brought out of the lab and into real-world applications. In particular, we focus on the prospect of improving the lives of countless disabled individuals through a combination of BCI technology with existing assistive technologies (AT). In pursuit of more practical BCIs for use outside of the lab, in this paper we identify four application areas where disabled individuals could greatly benefit from advancements in BCI technology, namely “Communication and Control”, “Motor Substitution”, “Entertainment”, and “Motor Recovery”. We review the current state of the art and possible future developments, while discussing the main research issues in these four areas. In particular, we expect the most progress in the development of technologies such as hybrid BCI architectures, user-machine adaptation algorithms, the exploitation of users’ mental states for BCI reliability and confidence measures, the incorporation of principles of human-computer interaction (HCI) to improve BCI usability, and the development of novel BCI technology, including better EEG devices.
In-home and remote use of robotic body surrogates by people with profound motor deficits
By controlling robots comparable to the human body, people with profound
motor deficits could potentially perform a variety of physical tasks for
themselves, improving their quality of life. The extent to which this is
achievable has been unclear due to the lack of suitable interfaces by which to
control robotic body surrogates and a dearth of studies involving substantial
numbers of people with profound motor deficits. We developed a novel, web-based
augmented reality interface that enables people with profound motor deficits to
remotely control a PR2 mobile manipulator from Willow Garage, which is a
human-scale, wheeled robot with two arms. We then conducted two studies to
investigate the use of robotic body surrogates. In the first study, 15 novice
users with profound motor deficits from across the United States controlled a
PR2 in Atlanta, GA to perform a modified Action Research Arm Test (ARAT) and a
simulated self-care task. Participants achieved clinically meaningful
improvements on the ARAT and 12 of 15 participants (80%) successfully completed
the simulated self-care task. Participants agreed that the robotic system was
easy to use, was useful, and would provide a meaningful improvement in their
lives. In the second study, one expert user with profound motor deficits had
free use of a PR2 in his home for seven days. He performed a variety of
self-care and household tasks, and also used the robot in novel ways. Taking
both studies together, our results suggest that people with profound motor
deficits can improve their quality of life using robotic body surrogates, and
that they can gain benefit with only low-level robot autonomy and without
invasive interfaces. However, methods to reduce the rate of errors and increase
operational speed merit further investigation.
Comment: 43 pages, 13 figures.
Manual 3D Control of an Assistive Robotic Manipulator Using Alpha Rhythms and an Auditory Menu: A Proof-of-Concept
Brain–Computer Interfaces (BCIs) have been regarded as potential tools for individuals with severe motor disabilities, such as those with amyotrophic lateral sclerosis, that render interfaces relying on movement unusable. This study aims to develop a dependent BCI system for manual end-point control of a robotic arm. A proof-of-concept system was devised using parieto-occipital alpha wave modulation and a cyclic menu with auditory cues. Users choose a movement to be executed and asynchronously stop said action when necessary. Tolerance intervals allowed users to cancel or confirm actions. Eight able-bodied subjects used the system to perform a pick-and-place task. To investigate potential learning effects, the experiment was conducted twice over the course of two consecutive days. Subjects obtained satisfactory completion rates (84.0 ± 15.0% and 74.4 ± 34.5% for the first and second day, respectively) and high path efficiency (88.9 ± 11.7% and 92.2 ± 9.6%). Subjects took on average 439.7 ± 203.3 s to complete each task, but the robot was only in motion 10% of the time. There was no significant difference in performance between the two days. The developed control scheme provided users with intuitive control, but a considerable amount of time is spent waiting for the right target (auditory cue). Implementing other brain signals may increase its speed.
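The cyclic-menu control scheme described above can be caricatured as a small state machine. This is a hypothetical sketch: the threshold, tolerance window, and alpha-power estimator below are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a cyclic auditory menu driven by alpha-band power.
# All numbers (threshold, tolerance window) are illustrative assumptions.

MENU = ["move_left", "move_right", "move_up", "move_down", "grasp"]
ALPHA_THRESHOLD = 0.6   # normalized parieto-occipital alpha power
TOLERANCE_STEPS = 2     # cue steps in which a selection can still be cancelled

def run_menu(alpha_power_stream):
    """Cycle through menu items; select when alpha power exceeds the
    threshold, then allow cancellation within the tolerance interval."""
    pending = None
    steps_since_select = 0
    for step, power in enumerate(alpha_power_stream):
        item = MENU[step % len(MENU)]
        if pending is None:
            if power > ALPHA_THRESHOLD:        # alpha burst selects item
                pending = item
                steps_since_select = 0
        else:
            steps_since_select += 1
            if power > ALPHA_THRESHOLD:        # second burst cancels
                pending = None
            elif steps_since_select >= TOLERANCE_STEPS:
                return pending                 # confirmed by inaction
    return None

# Simulated power trace: a burst at step 2 selects "move_up"; no cancel
# follows, so the tolerance interval elapses and the action is confirmed.
print(run_menu([0.2, 0.3, 0.9, 0.1, 0.2]))    # -> move_up
```

The tolerance interval is what makes the interface forgiving: an accidental alpha burst can be undone by a second burst before the action executes.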
Defining brain–machine interface applications by matching interface performance with device requirements
Interaction with machines is mediated by human-machine interfaces (HMIs). Brain-machine interfaces (BMIs) are a particular class of HMIs and have so far been studied as a communication means for people who have little or no voluntary control of muscle activity. In this context, low-performing interfaces can be considered as prosthetic applications. On the other hand, for able-bodied users, a BMI would only be practical if conceived as an augmenting interface. In this paper, a method is introduced for pointing out effective combinations of interfaces and devices for creating real-world applications. First, devices for domotics, rehabilitation and assistive robotics, and their requirements, in terms of throughput and latency, are described. Second, HMIs are classified and their performance described, still in terms of throughput and latency. Then device requirements are matched with the performance of available interfaces. Simple rehabilitation and domotics devices can be easily controlled by means of BMI technology. Prosthetic hands and wheelchairs are suitable applications but do not attain optimal interactivity. Regarding humanoid robotics, the head and the trunk can be controlled by means of BMIs, while other parts require too much throughput. Robotic arms, which have been controlled by means of cortical invasive interfaces in animal studies, could be the next frontier for non-invasive BMIs. Combining smart controllers with BMIs could improve interactivity and boost BMI applications. © 2007 Elsevier B.V. All rights reserved.
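The matching method this abstract describes — comparing device requirements with interface performance in terms of throughput and latency — can be illustrated with a toy table. All figures and device/interface names below are invented for the sketch, not taken from the paper.

```python
# Toy illustration of matching interfaces to devices by throughput (bit/s)
# and latency (s). The numbers below are made-up assumptions.

devices = {              # required_throughput, max_latency
    "domotics_switch":   (0.5, 5.0),
    "wheelchair":        (3.0, 1.0),
    "robotic_arm":       (10.0, 0.5),
}
interfaces = {           # achievable_throughput, typical_latency
    "P300_BCI":          (0.8, 4.0),
    "motor_imagery_BCI": (2.0, 2.0),
    "joystick":          (20.0, 0.1),
}

def feasible_pairs(devices, interfaces):
    """An interface can drive a device if it delivers at least the
    required throughput within the allowed latency."""
    return [(d, i)
            for d, (req_tp, max_lat) in devices.items()
            for i, (tp, lat) in interfaces.items()
            if tp >= req_tp and lat <= max_lat]

for pair in feasible_pairs(devices, interfaces):
    print(pair)
```

With these invented numbers, the simple domotics switch is reachable by every interface, while the wheelchair and robotic arm demand more throughput than the toy BCIs deliver — mirroring the paper's qualitative conclusion.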
Brain-Computer Interface meets ROS: A robotic approach to mentally drive telepresence robots
This paper shows and evaluates a novel approach to integrate a non-invasive
Brain-Computer Interface (BCI) with the Robot Operating System (ROS) to
mentally drive a telepresence robot. Controlling a mobile device by using human
brain signals might improve the quality of life of people suffering from severe
physical disabilities or elderly people who cannot move anymore. Thus, the BCI
user is able to actively interact with relatives and friends located in
different rooms thanks to a video streaming connection to the robot. To
facilitate the control of the robot via BCI, we explore new ROS-based
algorithms for navigation and obstacle avoidance, making the system safer and
more reliable. In this regard, the robot can exploit two maps of the
environment, one for localization and one for navigation, and both can be used
also by the BCI user to watch the position of the robot while it is moving. As
demonstrated by the experimental results, the user's cognitive workload is
reduced, decreasing the number of commands necessary to complete the task and
helping him/her to keep attention for longer periods of time.
Comment: Accepted in the Proceedings of the 2018 IEEE International Conference on Robotics and Automation.
Cross-Platform Implementation of an SSVEP-Based BCI for the Control of a 6-DOF Robotic Arm
Robotics has been successfully applied in the design of collaborative robots for assistance to people with motor disabilities. However, man-machine interaction is difficult for those who suffer severe motor disabilities. The aim of this study was to test the feasibility of a low-cost robotic arm control system with an EEG-based brain-computer interface (BCI). The BCI system relies on the Steady-State Visually Evoked Potentials (SSVEP) paradigm. A cross-platform application was obtained in C++. This C++ platform, together with the open-source software Openvibe, was used to control a Staubli robot arm, model TX60. Communication between Openvibe and the robot was carried out through the Virtual Reality Peripheral Network (VRPN) protocol. EEG signals were acquired with the 8-channel Enobio amplifier from Neuroelectrics. For the processing of the EEG signals, Common Spatial Pattern (CSP) filters and a Linear Discriminant Analysis (LDA) classifier were used. Five healthy subjects tried the BCI. This work allowed the communication and integration of a well-known BCI development platform, Openvibe, with the specific control software of a robot arm such as the Staubli TX60 using the VRPN protocol. It can be concluded from this study that it is possible to control the robotic arm with an SSVEP-based BCI with a reduced number of dry electrodes, facilitating the use of the system.
Funding for open access charge: Universitat Politecnica de Valencia.
Quiles Cucarella, E.; Dadone, J.; Chio, N.; García Moreno, E. (2022). Cross-Platform Implementation of an SSVEP-Based BCI for the Control of a 6-DOF Robotic Arm. Sensors 22(13):1-26. https://doi.org/10.3390/s22135000
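The CSP + LDA processing chain the abstract names can be sketched in a few dozen lines on synthetic data. This is a minimal NumPy/SciPy illustration of the general technique, not the authors' Openvibe configuration; the epoch shapes, channel counts, and variance boosts are all assumptions.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Synthetic 2-class EEG epochs: (n_epochs, n_channels, n_samples).
# Class A has extra variance on channel 0, class B on channel 3 --
# a crude stand-in for responses at two SSVEP stimulation frequencies.
def make_epochs(n, boost_ch):
    X = rng.standard_normal((n, 4, 128))
    X[:, boost_ch, :] *= 3.0
    return X

Xa, Xb = make_epochs(40, 0), make_epochs(40, 3)

def mean_cov(X):
    return np.mean([np.cov(e) for e in X], axis=0)

# CSP: jointly diagonalize the two class covariance matrices and keep
# the two most discriminative spatial filters (extreme eigenvalues).
w, V = eigh(mean_cov(Xa), mean_cov(Xa) + mean_cov(Xb))
W = V[:, [0, -1]].T

def features(X):
    # Log-variance of the spatially filtered epochs.
    return np.array([np.log(np.var(W @ e, axis=1)) for e in X])

Fa, Fb = features(Xa), features(Xb)

# Two-class Fisher LDA on the CSP features.
mu_a, mu_b = Fa.mean(0), Fb.mean(0)
Sw = np.cov(Fa.T) + np.cov(Fb.T)
w_lda = np.linalg.solve(Sw, mu_a - mu_b)
bias = w_lda @ (mu_a + mu_b) / 2

def predict(F):
    return np.where(F @ w_lda - bias > 0, "class_a", "class_b")

acc = np.concatenate([predict(Fa) == "class_a",
                      predict(Fb) == "class_b"]).mean()
print(f"training accuracy: {acc:.2f}")
```

The generalized eigendecomposition is the standard CSP trick: eigenvectors at both ends of the spectrum maximize variance for one class while minimizing it for the other, so the log-variance features become nearly linearly separable.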
Chapter BCI Integration: Application Interfaces
Brain-Switches for Asynchronous Brain−Computer Interfaces: A Systematic Review
A brain–computer interface (BCI) has been extensively studied to develop a novel communication system for disabled people using their brain activities. An asynchronous BCI system is more realistic and practical than a synchronous BCI system in that BCI commands can be generated whenever the user wants. However, the relatively low performance of asynchronous BCI systems is problematic because redundant BCI commands are required to correct false-positive operations. To significantly reduce the number of false-positive operations of an asynchronous BCI system, a two-step approach has been proposed using a brain-switch that first determines whether the user wants to use the asynchronous BCI system before it is operated. This study presents a systematic review of state-of-the-art brain-switch techniques and future research directions. To this end, we reviewed brain-switch research articles published from 2000 to 2019 in terms of their (a) neuroimaging modality, (b) paradigm, (c) operation algorithm, and (d) performance.
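The two-step scheme reviewed above can be sketched as a gate in front of a command classifier. The threshold and the toy "classifiers" below are illustrative assumptions, not any reviewed system.

```python
# Sketch of the two-step asynchronous BCI scheme: a brain-switch first
# decides *whether* the user intends to issue a command; only then is
# the command classifier consulted. Values are illustrative assumptions.

SWITCH_THRESHOLD = 0.8

def brain_switch(intent_prob):
    """Step 1: gate that suppresses false positives at rest."""
    return intent_prob >= SWITCH_THRESHOLD

def command_classifier(features):
    """Step 2: toy command decoder (placeholder for a real classifier)."""
    return "left" if features < 0 else "right"

def asynchronous_bci(stream):
    """stream yields (intent_prob, features) per analysis window."""
    out = []
    for intent_prob, features in stream:
        if brain_switch(intent_prob):       # user wants to act
            out.append(command_classifier(features))
        else:
            out.append(None)                # idle: no command emitted
    return out

# Rest windows (low intent probability) produce no commands, even though
# the command classifier alone would have fired on the noisy features.
print(asynchronous_bci([(0.1, -0.5), (0.9, -0.7), (0.2, 0.4), (0.95, 0.6)]))
# -> [None, 'left', None, 'right']
```

This is exactly why the gate reduces false positives: the command classifier is never evaluated during rest periods, so its errors there cannot reach the device.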
Co-adaptive control strategies in assistive Brain-Machine Interfaces
A large number of people with severe motor disabilities cannot access any of the
available control inputs of current assistive products, which typically rely on residual
motor functions. These patients are therefore unable to fully benefit from existing
assistive technologies, including communication interfaces and assistive robotics. In
this context, electroencephalography-based Brain-Machine Interfaces (BMIs) offer a
potential non-invasive solution to exploit a non-muscular channel for communication
and control of assistive robotic devices, such as a wheelchair, a telepresence
robot, or a neuroprosthesis. Still, non-invasive BMIs currently suffer from limitations,
such as lack of precision, robustness and comfort, which prevent their practical
implementation in assistive technologies.
The goal of this PhD research is to produce scientific and technical developments
to advance the state of the art of assistive interfaces and service robotics based on
BMI paradigms. Two main research paths to the design of effective control strategies
were considered in this project. The first one is the design of hybrid systems, based on
the combination of the BMI together with gaze control, which is a long-lasting motor
function in many paralyzed patients. This approach increases the degrees
of freedom available for control. The second approach incorporates
adaptive techniques into the BMI design, transforming robotic tools and
devices into active assistants able to co-evolve with the user, and learn new rules of
behavior to solve tasks, rather than passively executing external commands.
Following these strategies, the contributions of this work can be categorized
based on the typology of mental signal exploited for the control. These include:
1) the use of active signals for the development and implementation of hybrid eye-tracking
and BMI control policies, for both communication and control of robotic
systems; 2) the exploitation of passive mental processes to increase the adaptability
of an autonomous controller to the user's intention and psychophysiological state,
in a reinforcement learning framework; 3) the integration of brain active and passive
control signals, to achieve adaptation within the BMI architecture at the level of
feature extraction and classification.
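The second contribution — using passive error-related signals as feedback in a reinforcement-learning framework — can be caricatured in a few lines. This is a hypothetical toy: the hidden "preferred" actions, the 10% decoding noise, and the bandit-style update are assumptions standing in for the thesis' far richer decoder and task.

```python
import random
random.seed(0)

# Toy RL loop where the "reward" is a simulated error-potential detector:
# the agent learns which of two actions the user prefers in each state,
# purely from noisy decoded error/no-error feedback.

N_STATES, ACTIONS = 3, (0, 1)
preferred = {0: 1, 1: 0, 2: 1}      # hidden user intention per state
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
ALPHA, EPSILON = 0.2, 0.1

def error_potential(state, action):
    """Simulated passive signal: flags a mismatch with the user's
    intention, with 10% decoding noise."""
    err = action != preferred[state]
    return err if random.random() > 0.1 else not err

for _ in range(2000):
    s = random.randrange(N_STATES)
    a = random.choice(ACTIONS) if random.random() < EPSILON else \
        max(ACTIONS, key=lambda x: Q[(s, x)])
    r = -1.0 if error_potential(s, a) else 1.0       # implicit reward
    Q[(s, a)] += ALPHA * (r - Q[(s, a)])

learned = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)}
print(learned)   # recovers the hidden intentions despite the noisy decoder
```

The point of the sketch is the feedback channel: the user never issues explicit commands; the controller adapts from passively decoded error signals alone, which is the co-adaptation idea the abstract describes.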