
    BCI-Based Navigation in Virtual and Real Environments

    A Brain-Computer Interface (BCI) is a system that enables people to control an external device with their brain activity, without the need for any muscular activity. Researchers in the BCI field aim to develop applications that improve the quality of life of severely disabled patients, for whom a BCI can be a useful channel for interaction with their environment. Some of these systems are intended to control a mobile device (e.g., a wheelchair). Virtual Reality is a powerful tool that can provide subjects with an opportunity to train and to test different applications in a safe environment. This technical review focuses on systems aimed at navigation, both in virtual and real environments. This work was partially supported by the Innovation, Science and Enterprise Council of the Junta de Andalucía (Spain), project P07-TIC-03310, the Spanish Ministry of Science and Innovation, project TEC 2011-26395, and by the European fund ERDF.

    Combining brain-computer interfaces and assistive technologies: state-of-the-art and challenges

    In recent years, new research has brought the field of EEG-based Brain-Computer Interfacing (BCI) out of its infancy and into a phase of relative maturity through many demonstrated prototypes such as brain-controlled wheelchairs, keyboards, and computer games. With this proof-of-concept phase in the past, the time is now ripe to focus on the development of practical BCI technologies that can be brought out of the lab and into real-world applications. In particular, we focus on the prospect of improving the lives of countless disabled individuals through a combination of BCI technology with existing assistive technologies (AT). In pursuit of more practical BCIs for use outside of the lab, in this paper we identify four application areas where disabled individuals could greatly benefit from advancements in BCI technology, namely “Communication and Control”, “Motor Substitution”, “Entertainment”, and “Motor Recovery”. We review the current state of the art and possible future developments while discussing the main research issues in these four areas. In particular, we expect the most progress in the development of technologies such as hybrid BCI architectures, user-machine adaptation algorithms, the exploitation of users' mental states for BCI reliability and confidence measures, the incorporation of human-computer interaction (HCI) principles to improve BCI usability, and the development of novel BCI technology, including better EEG devices.

    Discriminative methods for classification of asynchronous imaginary motor tasks from EEG data

    In this work, two methods based on statistical models that take into account the temporal changes in the electroencephalographic (EEG) signal are proposed for asynchronous brain-computer interfaces (BCI) based on imaginary motor tasks. Unlike current approaches to asynchronous BCI systems, which use windowed versions of the EEG data combined with static classifiers, the methods proposed here are based on discriminative models that allow sequential labeling of data. In particular, the two methods we propose for asynchronous BCI are based on conditional random fields (CRFs) and latent dynamic CRFs (LDCRFs), respectively. We describe how the asynchronous BCI problem can be posed as a classification problem based on CRFs or LDCRFs, by defining appropriate random variables and their relationships. A CRF models the extrinsic dynamics of the data, making it possible to model the transitions between classes, which in this context correspond to the distinct tasks in an asynchronous BCI system. An LDCRF goes beyond this approach by incorporating latent variables that capture the intrinsic structure of each class while still modeling the extrinsic dynamics. We apply our proposed methods to the publicly available BCI competition III dataset V as well as a data set recorded in our laboratory. The results obtained are compared to the top algorithm in the BCI competition as well as to methods based on hierarchical hidden Markov models (HHMMs), hierarchical hidden CRFs (HHCRFs), neural networks based on particle swarm optimization (IPSONN), and a recently proposed approach based on neural networks and fuzzy theory, the S-dFasArt. Our experimental analysis demonstrates the improvements provided by our proposed methods in terms of classification accuracy.
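    The sequence-labeling framing above can be illustrated with a linear-chain CRF over windowed band-power features. The sketch below is not the authors' implementation: it assumes the sklearn-crfsuite package, hypothetical lists of continuous recordings with one task label per window, and a nominal 512 Hz sampling rate.

```python
# Minimal sketch: asynchronous BCI posed as sequence labeling with a linear-chain CRF.
# Assumptions: train_recordings/test_recordings are lists of (channels x samples)
# arrays, train_label_sequences holds one task label per 1 s window, FS = 512 Hz.
import numpy as np
import sklearn_crfsuite  # pip install sklearn-crfsuite

FS = 512          # assumed sampling rate (Hz)
WIN = FS          # 1-second analysis windows

def band_power(win, fs, lo, hi):
    """Mean power per channel of a (channels x samples) window in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(win.shape[-1], d=1.0 / fs)
    psd = np.abs(np.fft.rfft(win, axis=-1)) ** 2
    return psd[:, (freqs >= lo) & (freqs <= hi)].mean(axis=-1)

def to_features(eeg):
    """Turn a continuous recording into one feature dict per non-overlapping window."""
    seq = []
    for start in range(0, eeg.shape[1] - WIN + 1, WIN):
        w = eeg[:, start:start + WIN]
        mu, beta = band_power(w, FS, 8, 12), band_power(w, FS, 18, 26)
        feats = {f'mu_ch{c}': float(v) for c, v in enumerate(mu)}
        feats.update({f'beta_ch{c}': float(v) for c, v in enumerate(beta)})
        seq.append(feats)
    return seq

# One label per window, e.g. 'left', 'right', 'words' (hypothetical data objects)
X_train = [to_features(rec) for rec in train_recordings]
y_train = train_label_sequences

crf = sklearn_crfsuite.CRF(algorithm='lbfgs', c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X_train, y_train)                 # learns emission and class-transition weights
y_pred = crf.predict([to_features(rec) for rec in test_recordings])
```

    An LDCRF would additionally introduce per-class hidden state chains to capture intrinsic dynamics; sklearn-crfsuite does not provide this, so a dedicated latent-dynamic CRF implementation would be needed for that variant.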

    Unsupervised Short-term Covariate Shift Minimization for Self-paced BCI


    EEG-based brain-computer interface with visual and haptic feedback

    Efficient training of subjects with feedback is essential to brain-computer interface (BCI) research. In most previous studies, subjects have received visual feedback; other feedback modalities could, however, better serve patients with visual impairment and users in tasks that demand visual attention elsewhere. In previous studies, auditory feedback was significantly worse than visual feedback for subject training. Haptic feedback (vibrotactile stimulation) could be better suited for brain-computer communication than auditory feedback. Six able-bodied subjects without previous BCI experience received haptic or visual feedback, or both, in separate sessions while learning to control a two-class BCI using imagery of left and right hand movements. A BCI system, the TKK BCI, was designed and implemented for the experiments; its components are capable of real-time signal acquisition, signal processing, feedback, and control of applications. Feedback was presented once every second, either on a screen or with haptic elements attached to the base of the subject's neck. The subjects achieved average classification accuracies of 67% with haptic and 68% with visual feedback. One subject achieved as high as 88.8% accuracy in a single session. Stable features selected from sensorimotor rhythms within the 8-12 Hz and 18-26 Hz frequency bands provided the highest accuracies. Only minor interference from haptic stimulation was observed within the 8-30 Hz frequency band. The results indicate no clear difference between learning with haptic and visual feedback. Most subjects found haptic feedback natural and comfortable. Haptic feedback could thus substitute for visual feedback and free vision for other concurrent tasks. Further studies, especially with motor-disabled patients in real home environments, will be necessary to confirm the results.
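    As a concrete illustration of the once-per-second feedback loop described above (not the TKK BCI implementation itself), the sketch below classifies each one-second window from mu (8-12 Hz) and beta (18-26 Hz) band power with a linear discriminant and then presents feedback. The sampling rate, the classifier choice, and the read_last_second()/deliver_feedback() helpers are illustrative assumptions.

```python
# Minimal sketch of a two-class motor-imagery feedback loop; not the TKK BCI code.
# X_cal (n_trials x channels x samples), y_cal, session_running(), read_last_second(),
# and deliver_feedback() are hypothetical placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250   # assumed sampling rate (Hz)

def bandpower(window, fs, lo, hi):
    """Per-channel power of a (channels x samples) window in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(window.shape[-1], 1.0 / fs)
    psd = np.abs(np.fft.rfft(window, axis=-1)) ** 2
    return psd[:, (freqs >= lo) & (freqs <= hi)].mean(axis=-1)

def features(window):
    # Sensorimotor-rhythm features: 8-12 Hz (mu) and 18-26 Hz (beta) band power
    return np.concatenate([bandpower(window, FS, 8, 12),
                           bandpower(window, FS, 18, 26)])

# Calibration on cued left/right imagery trials
clf = LinearDiscriminantAnalysis()
clf.fit(np.array([features(w) for w in X_cal]), y_cal)

# Online loop: classify the most recent second of EEG and present feedback
while session_running():
    window = read_last_second()                      # (channels x FS) array
    label = clf.predict(features(window)[None, :])[0]
    deliver_feedback(label, modality='haptic')       # or 'visual'
```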

    A noninvasive brain-actuated wheelchair based on a P300 neurophysiological protocol and automated navigation

    This paper describes a new noninvasive brain-actuated wheelchair that relies on a P300 neurophysiological protocol and automated navigation. When in operation, the user faces a screen displaying a real-time virtual reconstruction of the scenario and concentrates on the location of the space to reach. A visual stimulation process elicits the neurological phenomenon, and the electroencephalogram (EEG) signal processing detects the target location. This location is transferred to the autonomous navigation system, which drives the wheelchair to the desired location while avoiding collisions with obstacles in the environment detected by the laser scanner. This concept gives the user the flexibility to use the device in unknown and evolving scenarios. The prototype was validated with five healthy participants in three consecutive steps: screening (an analysis of three different groups of visual interface designs), virtual-environment driving, and driving sessions with the wheelchair. On the basis of the results, this paper reports the following evaluation studies: 1) a technical evaluation of the device and all its functionalities; 2) a study of the users' behavior; and 3) a variability study. The overall result was that all the participants were able to operate the device successfully and with relative ease, demonstrating good adaptation as well as high robustness and low variability of the system.
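    The selection step can be pictured as averaging stimulus-locked EEG epochs for each candidate location and scoring the averages with a linear P300 detector. The sketch below is not the paper's pipeline; the flash log format, epoch window, and pretrained weights (w, b) are illustrative assumptions.

```python
# Minimal sketch of P300 target selection by averaging stimulus-locked epochs.
# Assumptions: flash_log is a list of (location, sample_index) flash events,
# (w, b) is a pretrained linear scorer, FS = 256 Hz.
import numpy as np

FS = 256                 # assumed sampling rate (Hz)
EPOCH = int(0.8 * FS)    # 0-800 ms post-stimulus window

def epochs_for(location, eeg, flash_log):
    """Collect (channels x EPOCH) epochs time-locked to flashes of one location."""
    return [eeg[:, t:t + EPOCH] for loc, t in flash_log if loc == location]

def p300_score(avg_epoch, w, b):
    """Linear score on the flattened averaged epoch (higher = more P300-like)."""
    return float(w @ avg_epoch.ravel() + b)

def select_target(eeg, flash_log, locations, w, b):
    """Return the candidate location whose averaged response looks most like a P300."""
    scores = {loc: p300_score(np.mean(epochs_for(loc, eeg, flash_log), axis=0), w, b)
              for loc in locations}
    return max(scores, key=scores.get)

# target = select_target(eeg, flash_log, grid_locations, w, b)
# The selected location is then handed to the autonomous navigation layer.
```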

    Decoding Neural Activity to Assess Individual Latent State in Ecologically Valid Contexts

    There exist very few ways to isolate cognitive processes, historically defined via highly controlled laboratory studies, in more ecologically valid contexts. Specifically, it remains unclear to what extent patterns of neural activity observed under such constraints actually manifest outside the laboratory in a manner that can be used to make an accurate inference about the latent state, associated cognitive process, or proximal behavior of the individual. Improving our understanding of when and how specific patterns of neural activity manifest in ecologically valid scenarios would provide validation for laboratory-based approaches that study similar neural phenomena in isolation and meaningful insight into the latent states that occur during complex tasks. We argue that domain generalization methods from the brain-computer interface community have the potential to address this challenge. We previously used such an approach to decode phasic neural responses associated with visual target discrimination. Here, we extend that work to more tonic phenomena such as internal latent states. We use data from two highly controlled laboratory paradigms to train two separate domain-generalized models. We apply the trained models to an ecologically valid paradigm in which participants performed multiple, concurrent driving-related tasks. Using the pretrained models, we derive estimates of the underlying latent state and associated patterns of neural activity. Importantly, as the patterns of neural activity change along the axis defined by the original training data, we find changes in behavior and task performance consistent with the observations from the original laboratory paradigms. We argue that these results lend ecological validity to those experimental designs and provide a methodology for understanding the relationship between observed neural activity and behavior during complex tasks.
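    As a rough picture of the transfer step (not the authors' actual models), the sketch below applies a pretrained, domain-generalized classifier, unchanged, to sliding windows of an ecologically valid recording to obtain a latent-state time course that can be compared with concurrent behavior. The window lengths, the featurize() helper, and the scikit-learn-style decision_function interface are assumptions.

```python
# Minimal sketch: apply a laboratory-trained, domain-generalized model to
# continuous, ecologically valid EEG to derive a latent-state time course.
# FS, WIN, featurize(), and the model interface are illustrative assumptions.
import numpy as np

FS = 250              # assumed sampling rate (Hz)
WIN = 2 * FS          # assumed 2 s analysis windows
STEP = FS             # 1 s hop between windows

def latent_state_timecourse(model, eeg, featurize):
    """Score each sliding window along the axis defined by the training data."""
    scores = []
    for start in range(0, eeg.shape[1] - WIN + 1, STEP):
        x = featurize(eeg[:, start:start + WIN])
        scores.append(model.decision_function(x[None, :])[0])
    return np.array(scores)

# states = latent_state_timecourse(lab_trained_model, driving_eeg, featurize)
# np.corrcoef(states, task_performance)   # relate estimated state to behavior
```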

    Online Classifier Adaptation in Brain-Computer Interfaces

    Brain-computer interfaces (BCIs) aim to provide a new channel of communication by enabling the subject to control external systems using purely mental commands. One method of doing this without invasive surgical procedures is to measure the electrical activity of the brain on the scalp through electroencephalography (EEG). A major obstacle to developing complex EEG-based BCI systems that provide a number of intuitive mental commands is the high variability of EEG signals: signals from the same subject vary considerably within a single session and between sessions on the same or different days. To deal with this, we are investigating methods of adapting the classifier while it is being used by the subject. By keeping the classifier constantly tuned to the EEG signals of the current session, we hope to improve its performance and allow the subject to learn to use the BCI more effectively. This paper discusses preliminary offline and online experiments towards this goal, focusing on the initial training period, when the task the subject is trying to achieve is known and supervised adaptation methods can therefore be used. In these experiments the subjects were asked to perform three mental commands (imagination of left and right hand movements, and a language task), and the EEG signals were classified with a Gaussian classifier.
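    A minimal sketch of the supervised adaptation idea described above, not the paper's actual update rules: during the initial training period the cued command is known, so each new feature vector nudges that class's Gaussian toward the current session's statistics. The class structure, learning rates, and the session_samples() generator are illustrative assumptions.

```python
# Minimal sketch: supervised online adaptation of a Gaussian classifier for a
# three-class BCI. Update rules and learning rates are assumptions for illustration.
import numpy as np

class AdaptiveGaussianClassifier:
    def __init__(self, means, cov, eta_mu=0.05, eta_cov=0.01):
        self.means = {c: np.asarray(m, float) for c, m in means.items()}  # class -> mean
        self.cov = np.asarray(cov, float)                                 # shared covariance
        self.eta_mu, self.eta_cov = eta_mu, eta_cov

    def predict(self, x):
        """Assign x to the class with the smallest Mahalanobis distance (equal priors)."""
        inv = np.linalg.inv(self.cov)
        return min(self.means, key=lambda c: (x - self.means[c]) @ inv @ (x - self.means[c]))

    def adapt(self, x, true_class):
        """Supervised update: move the labeled class mean and shared covariance toward x."""
        m = self.means[true_class]
        self.means[true_class] = (1 - self.eta_mu) * m + self.eta_mu * x
        d = x - self.means[true_class]
        self.cov = (1 - self.eta_cov) * self.cov + self.eta_cov * np.outer(d, d)

# clf = AdaptiveGaussianClassifier(initial_means, initial_cov)   # from a calibration run
# for x, cue in session_samples():      # cue in {'left', 'right', 'words'}; hypothetical
#     command = clf.predict(x)
#     clf.adapt(x, cue)                 # keep the classifier tuned to the current session
```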