12 research outputs found

    Brain-Switches for Asynchronous Brain−Computer Interfaces: A Systematic Review

    Brain–computer interfaces (BCIs) have been extensively studied as a novel communication channel that lets disabled people interact using their brain activity. An asynchronous BCI system is more realistic and practical than a synchronous one, in that BCI commands can be generated whenever the user wants. However, the relatively low performance of asynchronous BCI systems is problematic because redundant BCI commands are required to correct false-positive operations. To significantly reduce the number of false-positive operations, a two-step approach has been proposed that uses a brain-switch to first determine whether the user intends to operate the asynchronous BCI system before the system itself is activated. This study presents a systematic review of state-of-the-art brain-switch techniques and future research directions. To this end, we reviewed brain-switch research articles published from 2000 to 2019 in terms of their (a) neuroimaging modality, (b) paradigm, (c) operation algorithm, and (d) performance.
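
    The two-step idea above can be summarized in a short sketch. The detector and classifier below (detect_switch_intent, classify_command) are hypothetical placeholders standing in for whichever brain-switch and BCI decoder a given study uses; they are not taken from any of the reviewed articles.

import numpy as np

def detect_switch_intent(eeg_window: np.ndarray, threshold: float = 0.8) -> bool:
    """Step 1: brain-switch. Return True only if intent evidence exceeds the threshold."""
    score = float(np.clip(np.abs(eeg_window).mean(), 0.0, 1.0))  # placeholder evidence, not a real detector
    return score > threshold

def classify_command(eeg_window: np.ndarray) -> int:
    """Step 2: main BCI classifier, run only after the brain-switch has triggered."""
    return int(np.argmax(eeg_window.mean(axis=1)))  # placeholder decision rule

def asynchronous_loop(windows):
    """Gate the command classifier with the brain-switch to suppress false positives."""
    for window in windows:
        if detect_switch_intent(window):      # idle state vs. control state
            yield classify_command(window)    # commands are issued only in the control state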

    Controlling a Mouse Pointer with a Single-Channel EEG Sensor

    (1) Goals: The purpose of this study was to analyze the feasibility of using the information obtained from a single-channel electroencephalography (EEG) signal to control a mouse pointer. We used a low-cost headset, with one dry sensor placed at the FP1 position, to steer a mouse pointer and make selections through a combination of the user’s attention level with the detection of voluntary blinks. There are two types of cursor movements: spinning and linear displacement. A sequence of blinks allows for switching between these movement types, while the attention level modulates the cursor’s speed. The influence of the attention level on performance was studied. Additionally, Fitts’ model and the evolution of the emotional states of participants, among other trajectory indicators, were analyzed. (2) Methods: Twenty participants distributed into two groups (Attention and No-Attention) performed three runs, on different days, in which 40 targets had to be reached and selected. Target positions and distances from the cursor’s initial position were chosen to provide eight different indices of difficulty (IDs). A self-assessment manikin (SAM) test and a final survey provided information about the system’s usability and the emotions of participants during the experiment. (3) Results: The performance was similar to some brain–computer interface (BCI) solutions found in the literature, with an average information transfer rate (ITR) of 7 bits/min. Concerning cursor navigation, some trajectory indicators showed our proposed approach to be as good as common pointing devices, such as joysticks, trackballs, and so on. Only one of the 20 participants reported difficulty in managing the cursor and, according to the tests, most of them assessed the experience positively. Movement times and hit rates were significantly better for participants belonging to the Attention group. (4) Conclusions: The proposed approach is a feasible low-cost solution for controlling a mouse pointer.
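
    For reference, the index of difficulty used to grade pointing tasks like the one above is typically computed with the Shannon formulation of Fitts’ law; the distance and width values in this sketch are illustrative, not the ones used in the study.

import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1.0)

# Example: a target 400 px away with a 50 px width gives ID = log2(9), about 3.17 bits.
print(round(index_of_difficulty(400, 50), 2))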

    A robotic arm control system with simultaneous and sequential modes combining eye-tracking with steady-state visual evoked potential in virtual reality environment

    At present, single-modal brain-computer interface (BCI) systems still have limitations in practical application, such as low flexibility, poor autonomy, and the ease with which subjects fatigue. This study developed an asynchronous robotic arm control system based on steady-state visual evoked potentials (SSVEP) and eye-tracking in a virtual reality (VR) environment, with simultaneous and sequential modes. In the simultaneous mode, target classification was realized by decision-level fusion of electroencephalography (EEG) and eye-gaze. The stimulus duration for each subject was not fixed; it was determined by an adjustable-window method. Subjects could autonomously start and stop the system using a triple blink and eye closure, respectively. In the sequential mode, no calibration was conducted before operation. First, the subject’s gaze area was obtained through eye-gaze, and then only a few stimulus blocks began to flicker. Next, the target classification was determined using EEG. Additionally, subjects could reject falsely triggered commands using eye closure. The system’s effectiveness was verified through an offline experiment and an online robotic-arm grasping experiment. Twenty subjects participated in the offline experiment. For the simultaneous mode, the average accuracy (ACC) and information transfer rate (ITR) at a stimulus duration of 0.9 s were 90.50% and 60.02 bits/min, respectively. For the sequential mode, the average ACC and ITR at a stimulus duration of 1.4 s were 90.47% and 45.38 bits/min, respectively. Fifteen subjects successfully completed the online ball-grasping tasks in both modes, and most subjects preferred the sequential mode. The proposed hybrid brain-computer interface (h-BCI) system can increase autonomy, reduce visual fatigue, meet individual needs, and improve system efficiency.
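
    ITR values like those quoted above are typically computed with the Wolpaw formula; a minimal sketch follows, in which the number of targets and the per-selection time are illustrative assumptions, since the abstract does not state them.

import math

def wolpaw_itr(n_targets: int, accuracy: float, seconds_per_selection: float) -> float:
    """Information transfer rate in bits/min for an N-target BCI (Wolpaw formula)."""
    p = accuracy
    bits = math.log2(n_targets)
    if 0.0 < p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n_targets - 1))
    return bits * 60.0 / seconds_per_selection

# Example with assumed values: 4 targets, 90.5% accuracy, 1.4 s per selection
# (0.9 s stimulus plus an assumed 0.5 s gaze shift) lands close to 60 bits/min.
print(round(wolpaw_itr(4, 0.905, 1.4), 1))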

    On the Relative Contribution of Deep Convolutional Neural Networks for SSVEP-based Bio-Signal Decoding in BCI Speller Applications

    Brain-computer interfaces (BCI) harnessing Steady-State Visual Evoked Potentials (SSVEP) manipulate the frequency and phase of visual stimuli to generate predictable oscillations in neural activity. For BCI spellers, oscillations are matched with alphanumeric characters, allowing users to select target numbers and letters. Advances in BCI spellers can, in part, be credited to subject-specific optimization, including: 1) custom electrode arrangements, 2) filter sub-band assessments, and 3) stimulus parameter tuning. Here we apply deep convolutional neural networks (DCNN), demonstrating cross-subject functionality for the classification of frequency- and phase-encoded SSVEP. Electroencephalogram (EEG) data are collected and classified using the same parameters across subjects. Subjects fixate forty randomly cued flickering characters (5 × 8 keyboard array) during concurrent wet-EEG acquisition. These data are provided by an open-source SSVEP dataset. Our proposed DCNN, PodNet, achieves 86% and 77% offline classification accuracy across subjects for two data capture periods, respectively: 6 seconds (information transfer rate = 40 bpm) and 2 seconds (information transfer rate = 101 bpm). Subjects demonstrating sub-optimal (< 70%) performance are classified to similar levels after a short subject-specific training period. PodNet outperforms filter-bank canonical correlation analysis (FBCCA) for a low-volume (3-channel), clinically feasible occipital electrode configuration. The networks defined in this study achieve functional performance for the largest number of SSVEP classes decoded via DCNN to date. Our results demonstrate that PodNet achieves cross-subject, calibrationless classification and adaptability to sub-optimal subject data and low-volume EEG electrode arrangements.
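
    As context for the FBCCA baseline mentioned above, the standard single-band CCA decision rule for SSVEP is sketched below; the sampling rate, harmonic count, and synthetic data are illustrative only, and FBCCA additionally combines weighted correlations over several filter sub-bands.

import numpy as np
from sklearn.cross_decomposition import CCA

def reference_signals(freq: float, n_samples: int, fs: float, n_harmonics: int = 3) -> np.ndarray:
    """Sine/cosine reference matrix of shape (n_samples, 2 * n_harmonics)."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)

def cca_classify(eeg: np.ndarray, freqs, fs: float) -> int:
    """Return the index of the candidate frequency with the largest canonical correlation.
    `eeg` has shape (n_samples, n_channels)."""
    scores = []
    for f in freqs:
        refs = reference_signals(f, eeg.shape[0], fs)
        cca = CCA(n_components=1)
        x_c, y_c = cca.fit_transform(eeg, refs)
        scores.append(abs(np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1]))
    return int(np.argmax(scores))

# Illustrative use on synthetic data: 1 s of 3-channel EEG sampled at 250 Hz
# containing a 10 Hz component plus noise.
rng = np.random.default_rng(0)
fs, n = 250.0, 250
t = np.arange(n) / fs
eeg = rng.normal(size=(n, 3)) + np.sin(2 * np.pi * 10 * t)[:, None]
print(cca_classify(eeg, freqs=[8.0, 10.0, 12.0], fs=fs))  # expected: index 1 (10 Hz)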

    Classification of Frequency and Phase Encoded Steady State Visual Evoked Potentials for Brain Computer Interface Speller Applications using Convolutional Neural Networks

    Over the past decade there have been substantial improvements in vision-based Brain-Computer Interface (BCI) spellers for quadriplegic patient populations. This thesis contains a review of the numerous bio-signals available to BCI researchers, as well as a brief chronology of the foremost decoding methodologies used to date. Recent advances in classification accuracy and information transfer rate can be primarily attributed to time-consuming, patient-specific parameter optimization procedures. The aim of the current study was to develop analysis software with potential ‘plug-in-and-play’ functionality. To this end, convolutional neural networks, presently established as state-of-the-art analytical techniques for image processing, were utilized. The thesis herein defines a deep convolutional neural network architecture for the offline classification of phase- and frequency-encoded SSVEP bio-signals. Networks were trained using an extensive 35-participant open-source electroencephalographic (EEG) benchmark dataset (Department of Bio-medical Engineering, Tsinghua University, Beijing). Average classification accuracies of 82.24% and information transfer rates of 22.22 bpm were achieved on a BCI-naïve participant dataset for a 40-target alphanumeric display, in the absence of any patient-specific parameter optimization.
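
    To make this kind of architecture concrete, the sketch below shows a deliberately small convolutional network for (channels × samples) SSVEP segments with a 40-way output. It is not the network defined in the thesis; all layer sizes and input shapes are illustrative assumptions.

import torch
import torch.nn as nn

class TinySSVEPNet(nn.Module):
    def __init__(self, n_channels: int = 9, n_samples: int = 250, n_classes: int = 40):
        super().__init__()
        self.features = nn.Sequential(
            # temporal convolution across samples, shared over electrodes
            nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12)),
            nn.BatchNorm2d(16),
            nn.ELU(),
            # spatial convolution collapsing the electrode dimension
            nn.Conv2d(16, 32, kernel_size=(n_channels, 1)),
            nn.BatchNorm2d(32),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 4)),
            nn.Dropout(0.5),
        )
        self.classifier = nn.Linear(32 * (n_samples // 4), n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_channels, n_samples)
        z = self.features(x)
        return self.classifier(z.flatten(start_dim=1))

# Shape check on a random batch: 8 segments, 9 electrodes, 1 s at 250 Hz.
logits = TinySSVEPNet()(torch.randn(8, 1, 9, 250))
print(logits.shape)  # torch.Size([8, 40])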

    Co-adaptive control strategies in assistive Brain-Machine Interfaces

    A large number of people with severe motor disabilities cannot access any of the available control inputs of current assistive products, which typically rely on residual motor functions. These patients are therefore unable to fully benefit from existing assistive technologies, including communication interfaces and assistive robotics. In this context, electroencephalography-based Brain-Machine Interfaces (BMIs) offer a potential non-invasive solution that exploits a non-muscular channel for communication and control of assistive robotic devices, such as a wheelchair, a telepresence robot, or a neuroprosthesis. Still, non-invasive BMIs currently suffer from limitations, such as a lack of precision, robustness, and comfort, which prevent their practical implementation in assistive technologies. The goal of this PhD research is to produce scientific and technical developments that advance the state of the art of assistive interfaces and service robotics based on BMI paradigms. Two main research paths toward the design of effective control strategies were considered in this project. The first is the design of hybrid systems based on the combination of the BMI with gaze control, which is a long-lasting motor function in many paralyzed patients. This approach increases the degrees of freedom available for control. The second approach consists in the inclusion of adaptive techniques in the BMI design. This makes it possible to transform robotic tools and devices into active assistants able to co-evolve with the user and learn new rules of behavior to solve tasks, rather than passively executing external commands. Following these strategies, the contributions of this work can be categorized based on the type of mental signal exploited for control. These include: 1) the use of active signals for the development and implementation of hybrid eye-tracking and BMI control policies, for both communication and control of robotic systems; 2) the exploitation of passive mental processes to increase the adaptability of an autonomous controller to the user’s intention and psychophysiological state, in a reinforcement learning framework; and 3) the integration of active and passive brain control signals to achieve adaptation within the BMI architecture at the level of feature extraction and classification.

    A Feasibility Study of Robot-Assisted Ankle Training Triggered by Combination of SSVEP Recognition and Motion Characteristics

    In order to encourage subjects to exert more energy and pay more attention during SSVEP-based ankle training, this study introduces motion-intention detection both in the first half cycle of individual training movements and at the beginning of the training. This study also proposes a novel method to recognize subjects’ motion intention by merging the motion characteristics of the ankle training into the identification of SSVEP signals. Five healthy subjects participated in the training, and all accomplished it with a success rate of more than 80%. Compared with identification based on SSVEP signals alone, the proposed hybrid method increased the success rate from 50% to 80%.

    A Novel Approach Of Independent Brain-computer Interface Based On SSVEP

    Over the past ten years, brain–computer interfaces (BCIs) based on steady-state visual evoked potentials (SSVEP) have drawn the attention of many researchers due to their promising results and the high accuracy rates achieved. This type of BCI allows people with severe motor disabilities to communicate with the outside world by modulating their visual attention to lights flickering at specific frequencies. This doctoral thesis aims to develop a new approach within the so-called independent BCIs, in which users do not need to perform neuromuscular tasks to visually select specific targets, a characteristic that distinguishes it from traditional SSVEP BCIs. In this way, people with severe motor disabilities, such as those with Amyotrophic Lateral Sclerosis (ALS), gain a new alternative for communicating through brain signals. Several contributions were made in this work, including: an improvement of the feature-extraction algorithm known as the Multivariate Synchronization Index (MSI) for the detection of evoked potentials; the development of a new method for detecting evoked potentials through correlation between multidimensional models (tensors); the first study on the influence of colored stimuli on SSVEP detection using LEDs; the application of the concept of compression to SSVEP detection; and, finally, the development of a new independent BCI that uses the Figure-Ground Perception (FGP) approach.
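
    As background for the MSI contribution mentioned above, the baseline multivariate synchronization index can be sketched as follows; this is the standard formulation, not the thesis’s improved variant, and the data shapes involved are illustrative.

import numpy as np

def _inv_sqrt(mat: np.ndarray) -> np.ndarray:
    """Symmetric inverse square root via eigendecomposition."""
    vals, vecs = np.linalg.eigh(mat)
    return vecs @ np.diag(1.0 / np.sqrt(np.clip(vals, 1e-12, None))) @ vecs.T

def msi_score(eeg: np.ndarray, refs: np.ndarray) -> float:
    """Multivariate synchronization index between an EEG segment (channels x samples)
    and a sine/cosine reference matrix (2 * harmonics x samples) for one candidate
    stimulus frequency; the frequency with the largest score is selected."""
    x = eeg - eeg.mean(axis=1, keepdims=True)
    y = refs - refs.mean(axis=1, keepdims=True)
    m = x.shape[1]
    c11, c22, c12 = x @ x.T / m, y @ y.T / m, x @ y.T / m
    u1, u2 = _inv_sqrt(c11), _inv_sqrt(c22)
    # joint correlation matrix after whitening: identity diagonal blocks,
    # cross-correlation in the off-diagonal blocks
    r = np.block([[np.eye(x.shape[0]), u1 @ c12 @ u2],
                  [u2 @ c12.T @ u1, np.eye(y.shape[0])]])
    lam = np.linalg.eigvalsh(r)
    lam = lam / lam.sum()
    lam = lam[lam > 1e-12]            # drop numerically-zero eigenvalues before the log
    n_total = x.shape[0] + y.shape[0]
    return 1.0 + float((lam * np.log(lam)).sum()) / np.log(n_total)

# As with CCA above, the candidate frequency whose reference matrix maximizes
# msi_score is taken as the attended stimulus.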