Induction of Neural Plasticity Using a Low-Cost Open Source Brain-Computer Interface and a 3D-Printed Wrist Exoskeleton
Brain-computer interfaces (BCIs) have been proven useful for stroke rehabilitation, but several factors impede the use of this technology in rehabilitation clinics and at home, the major ones being the usability and cost of the BCI system. The aims of this study were to develop a low-cost 3D-printed wrist exoskeleton that can be controlled by a low-cost open-source BCI platform (OpenViBE), and to determine whether training with such a setup can induce neural plasticity. Eleven healthy volunteers imagined wrist extensions, which were detected from single-trial electroencephalography (EEG); in response, the wrist exoskeleton replicated the intended movement. Motor-evoked potentials (MEPs) elicited using transcranial magnetic stimulation were measured before, immediately after, and 30 min after BCI training with the exoskeleton. The BCI system had a true positive rate of 86 ± 12% with 1.20 ± 0.57 false detections per minute. Compared with the measurement before BCI training, the MEPs increased by 35 ± 60% immediately after and by 67 ± 60% 30 min after the training. There was no association between BCI performance and the induction of plasticity. In conclusion, it is possible to detect imagined movements using an open-source BCI setup and to control a low-cost 3D-printed exoskeleton that, combined with the BCI, can induce neural plasticity. These findings may promote the availability of BCI technology for rehabilitation clinics and home use. However, usability must be improved, and further tests with stroke patients are needed.
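The true-positive-rate and false-detections-per-minute figures above can be computed by matching each BCI detection against the movement-imagery cues. The following is a minimal sketch of such scoring; the function name, the matching tolerance, and the example timestamps are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: scoring single-trial movement detections against cue times.
# The 1 s matching tolerance and all timestamps below are illustrative.

def score_detections(cue_times, detections, session_minutes, tol=1.0):
    """Count a detection as a true positive if it falls within `tol`
    seconds of an unmatched movement-imagery cue; everything else is a
    false detection. Returns (true_positive_rate, false_detections_per_min)."""
    matched = set()
    false_count = 0
    for d in detections:
        hit = next((c for c in cue_times
                    if abs(d - c) <= tol and c not in matched), None)
        if hit is not None:
            matched.add(hit)
        else:
            false_count += 1
    tpr = len(matched) / len(cue_times) if cue_times else 0.0
    return tpr, false_count / session_minutes

cues = [5.0, 15.0, 25.0, 35.0]    # cue onsets for imagined wrist extensions (s)
dets = [5.3, 15.8, 26.2, 40.0]    # BCI detections: two on time, one late, one spurious
tpr, fd_per_min = score_detections(cues, dets, session_minutes=1.0)
```

With the example timings, two detections match cues, so `tpr` is 0.5 with 2 false detections per minute.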
BCIAUT-P300: A Multi-Session and Multi-Subject Benchmark Dataset on Autism for P300-Based Brain-Computer-Interfaces
There is a lack of multi-session P300 datasets for Brain-Computer Interfaces (BCI).
Publicly available datasets are usually limited to a small number of participants
with few BCI sessions each. The absence of large, comprehensive datasets spanning
many individuals and multiple sessions has limited progress in developing more
effective data processing and analysis methods for BCI systems. This is particularly
evident when exploring the feasibility of deep learning methods, which require large datasets.
Here we present the BCIAUT-P300 dataset, comprising 15 individuals with autism
spectrum disorder, each undergoing 7 sessions of P300-based BCI joint-attention
training, for a total of 105 sessions. The dataset was used for the 2019 IFMBE
Scientific Challenge organized during MEDICON 2019, where, in two phases, teams
from around the world competed to achieve the best possible object-detection
accuracy based on the P300 signals. This paper presents the characteristics of
the dataset and the approaches followed by the 9 finalist teams during the
competition. The winning team obtained an average accuracy of 92.3% with a
convolutional neural network based on EEGNet. The dataset is now publicly
released and stands as a benchmark for future P300-based BCI algorithms
using multi-session data.
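The core decision step in a P300-based BCI of this kind is to average the EEG epochs recorded for each flashed object and select the object whose average response is largest in the P300 window. Below is a minimal sketch of that selection step with synthetic scores; the object names and numbers are illustrative assumptions, not data from the challenge.

```python
# Hedged sketch of P300 target selection by epoch averaging.
# Each score stands in for, e.g., mean voltage in the 250-500 ms window
# after one flash of one object. All values below are synthetic.

def pick_target(epochs_by_object):
    """Averaging across repeated flashes suppresses noise, so the
    attended object should have the highest mean score."""
    return max(epochs_by_object,
               key=lambda k: sum(epochs_by_object[k]) / len(epochs_by_object[k]))

scores = {
    "obj1": [0.2, -0.1, 0.3],   # non-target: no consistent P300
    "obj2": [1.1, 0.9, 1.3],    # attended object: a P300 on each flash
    "obj3": [0.0, 0.4, -0.2],
}
target = pick_target(scores)    # -> "obj2"
```

Classifiers such as EEGNet replace the simple mean with a learned score per epoch, but the per-object aggregation and argmax stay the same.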
On Tackling Fundamental Constraints in Brain-Computer Interface Decoding via Deep Neural Networks
A Brain-Computer Interface (BCI) is a system that provides a communication and control channel between human cortical signals and external devices, primarily intended to assist patients who suffer from neuromuscular disease. Despite significant recent progress in the area of BCI, there are numerous shortcomings associated with decoding electroencephalography-based BCI signals in real-world environments. These include, but are not limited to, the cumbersome nature of the equipment, the difficulty of collecting large quantities of real-world data, rigid experimentation protocols, and the challenge of accurate signal decoding, especially in real time. Hence, the core purpose of this work is to improve the applicability and usability of BCI systems while preserving signal decoding accuracy.
Recent advances in Deep Neural Networks (DNNs) make it possible for signal processing to automatically learn the best representation of a signal, improving performance even with a noisy input. This thesis therefore focuses on novel DNN-based approaches for tackling some of the key underlying constraints in BCI. For example, recent improvements in acquisition hardware have made it possible to relax the pre-existing rigid experimentation procedure, albeit at the cost of noisier signal capture; with a DNN-based model, however, the accuracy of predictions from the decoded signals can be preserved. Moreover, this research demonstrates that leveraging DNN-based image and signal understanding makes real-time BCI applications feasible in a natural environment. Additionally, the capability of DNNs to generate realistic synthetic data is shown to be a potential solution for reducing the need for costly data collection. Work is also performed on the well-known problem of subject bias in BCI models, by generating data with reduced subject-specific features.
The overall contribution of this thesis is to address key fundamental limitations of BCI systems: the unyielding traditional experimentation procedure, the mandatory extended calibration stage, and the difficulty of sustaining accurate signal decoding in real time. These limitations lead to fragile BCI systems that are demanding to use and suited only for deployment in a controlled laboratory. The contributions of this research aim to improve the robustness of BCI systems and to enable new applications in the real world.
P300, Steady State Visual Evoked Potentials, And Hybrid Paradigms For A Brain Computer Interface Speller
The goal of this research was to evaluate and compare two types of brain-computer interface (BCI) spelling paradigms, P300 and steady-state visually evoked potentials (SSVEP), and to combine them in a hybrid approach. Pilot experiments were performed to design the parameters of the SSVEP spelling paradigm, including peak detection across a range of frequencies, placement of the LEDs, design of the SSVEP stimulus board, and the window length for SSVEP peak-detection processing. The next experiment evaluated the SSVEP spelling paradigm; six subjects participated, and the accuracy of each frequency as well as the average accuracy for each subject were considered. A second experiment compared the performance and accuracy of SSVEP, P300, and the combination of both paradigms as a simultaneous task, with ten subjects. Overall, the average accuracy of the SSVEP spelling paradigm was 80.00%, higher than the P300 paradigm's average accuracy of 72.50%, and both single paradigms were more accurate than the hybrid paradigm, which averaged 64.39%.
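SSVEP peak detection of the kind described above boils down to estimating signal power at each candidate flicker frequency and choosing the strongest. A minimal single-bin DFT sketch follows; the sampling rate, window length, candidate frequencies, and the synthetic sinusoidal "EEG" are illustrative assumptions, not the study's actual parameters.

```python
import math

# Hedged sketch of SSVEP frequency detection via single-bin DFT power.
# A subject gazing at an LED flickering at f Hz produces an EEG peak at f.

def power_at(signal, fs, freq):
    """Magnitude-squared of the DFT of `signal` at `freq` Hz."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    return re * re + im * im

def detect_ssvep(signal, fs, candidates):
    """Return the candidate frequency with the most power in the window."""
    return max(candidates, key=lambda f: power_at(signal, fs, f))

fs = 250.0                                                  # sampling rate, Hz (assumed)
t = [i / fs for i in range(int(fs * 2))]                    # 2-second analysis window
signal = [math.sin(2 * math.pi * 13.0 * ti) for ti in t]    # subject attends the 13 Hz LED
detected = detect_ssvep(signal, fs, [7.0, 9.0, 11.0, 13.0]) # -> 13.0
```

The 2 s window holds an integer number of cycles of every candidate frequency, so the off-target bins are near zero and the 13 Hz bin dominates; a longer window sharpens this frequency resolution at the cost of spelling speed.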
Co-adaptive control strategies in assistive Brain-Machine Interfaces
A large number of people with severe motor disabilities cannot access any of the
available control inputs of current assistive products, which typically rely on
residual motor functions. These patients are therefore unable to fully benefit
from existing assistive technologies, including communication interfaces and
assistive robotics. In this context, electroencephalography-based Brain-Machine
Interfaces (BMIs) offer a potential non-invasive solution that exploits a
non-muscular channel for communication and for control of assistive robotic
devices, such as a wheelchair, a telepresence robot, or a neuroprosthesis.
Still, non-invasive BMIs currently suffer from limitations such as lack of
precision, robustness, and comfort, which prevent their practical deployment
in assistive technologies.
The goal of this PhD research is to produce scientific and technical developments
that advance the state of the art of assistive interfaces and service robotics
based on BMI paradigms. Two main research paths toward effective control
strategies were considered. The first is the design of hybrid systems that
combine the BMI with gaze control, a motor function preserved long-term in many
paralyzed patients; this approach increases the degrees of freedom available for
control. The second is the inclusion of adaptive techniques in the BMI design,
which transforms robotic tools and devices into active assistants able to
co-evolve with the user and learn new rules of behavior to solve tasks, rather
than passively executing external commands.
Following these strategies, the contributions of this work can be categorized by
the type of mental signal exploited for control: 1) the use of active signals
for the development and implementation of hybrid eye-tracking and BMI control
policies, for both communication and control of robotic systems; 2) the
exploitation of passive mental processes to increase the adaptability of an
autonomous controller to the user's intention and psychophysiological state,
within a reinforcement learning framework; and 3) the integration of active and
passive brain control signals to achieve adaptation within the BMI architecture,
at the level of feature extraction and classification.
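Point 2 above, using passive mental processes as feedback in a reinforcement learning framework, can be sketched as a bandit-style controller that treats a detected error-related response as negative reward. Everything below is a toy illustration under stated assumptions: the `errp_detected` stand-in, the two-action setup, and all parameter values are hypothetical, not the thesis's actual architecture.

```python
import random

# Hedged sketch of co-adaptation driven by a passive signal: the controller
# adjusts its action values using a (simulated) error-related potential
# detected after each action as a negative reward.

random.seed(0)

def errp_detected(action, intended):
    """Stand-in for a passive BMI classifier: flags an error-related
    response whenever the robot's action mismatches the user's intent."""
    return action != intended

q = {"left": 0.0, "right": 0.0}    # controller's action-value estimates
alpha = 0.2                        # learning rate (assumed)
intended = "right"                 # user's hidden intention

for _ in range(50):
    # epsilon-greedy choice between the two actions
    if random.random() < 0.1:
        action = random.choice(["left", "right"])
    else:
        action = max(q, key=q.get)
    # passive signal -> reward: errors punish, correct actions reinforce
    reward = -1.0 if errp_detected(action, intended) else 1.0
    q[action] += alpha * (reward - q[action])

best = max(q, key=q.get)           # controller converges to "right"
```

The key design point is that the user never issues explicit commands here: the controller co-evolves with the user by mining a passively generated signal, which is what distinguishes this strategy from the active-signal control in point 1.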