
    Bacteria Hunt: Evaluating multi-paradigm BCI interaction

    The multimodal, multi-paradigm brain-computer interfacing (BCI) game Bacteria Hunt was used to evaluate two aspects of BCI interaction in a gaming context. One goal was to examine the effect of feedback on the user's ability to manipulate their mental state of relaxation. This was done by having one condition in which the subject played the game with real feedback, and another with sham feedback. The feedback did not seem to affect the game experience (such as sense of control and tension) or the objective indicators of relaxation, alpha activity and heart rate. The results are discussed with regard to clinical neurofeedback studies. The second goal was to look into possible interactions between the two BCI paradigms used in the game: steady-state visually evoked potentials (SSVEP) as an indicator of concentration, and alpha activity as a measure of relaxation. SSVEP stimulation activates the cortex and can thus block the alpha rhythm. Despite this effect, subjects were able to keep their alpha power up, in compliance with the instructed relaxation task. In addition to the main goals, a new SSVEP detection algorithm was developed and evaluated.
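
    As a rough illustration only (not the new detection algorithm developed in this work), a widely used approach to SSVEP detection is canonical correlation analysis (CCA) against sinusoidal reference signals at the candidate stimulation frequencies. The sketch below assumes a sampling rate, stimulation frequencies, and segment shape chosen purely for illustration.

```python
# Minimal sketch of a standard CCA-based SSVEP detector (not the algorithm
# proposed in the paper). Sampling rate, stimulation frequencies, and the
# shape of `eeg_segment` are illustrative assumptions.
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 256                         # assumed sampling rate in Hz
STIM_FREQS = [7.5, 10.0, 12.0]   # assumed SSVEP stimulation frequencies in Hz
N_HARMONICS = 2


def reference_signals(freq, n_samples, fs=FS, n_harmonics=N_HARMONICS):
    """Sine/cosine reference set for one stimulation frequency."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)                 # (n_samples, 2 * n_harmonics)


def detect_ssvep(eeg_segment):
    """Return the stimulation frequency with the highest canonical correlation.

    eeg_segment: array of shape (n_samples, n_channels), e.g. occipital channels.
    """
    scores = {}
    for freq in STIM_FREQS:
        refs = reference_signals(freq, eeg_segment.shape[0])
        cca = CCA(n_components=1)
        x_c, y_c = cca.fit_transform(eeg_segment, refs)
        scores[freq] = abs(np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1])
    return max(scores, key=scores.get), scores


if __name__ == "__main__":
    # Synthetic 2-second segment from 4 channels, driven by a 10 Hz component plus noise.
    t = np.arange(2 * FS) / FS
    driven = np.sin(2 * np.pi * 10.0 * t)
    eeg = np.column_stack([driven + 0.5 * np.random.randn(t.size) for _ in range(4)])
    print(detect_ssvep(eeg))
```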

    Recent and upcoming BCI progress: overview, analysis, and recommendations

    Brain–computer interfaces (BCIs) are finally moving out of the laboratory and beginning to gain acceptance in real-world situations. As BCIs attract broader groups of users, including persons with different disabilities as well as healthy users, numerous practical questions become important. What are the most practical ways to detect and analyze brain activity in field settings? Which devices and applications are most useful for different people? How can we make BCIs more natural and sensitive, and how can BCI technologies improve usability? What are some general trends and issues, such as combining different BCIs or assessing and comparing performance? This book chapter provides an overview of the different sections of the book and summarizes how the authors address these and other questions. We also present some predictions and recommendations that follow from our experience discussing these and other issues with our authors and with other researchers and developers in the BCI community. We conclude that, although some directions are hard to predict, the field is definitely growing and changing rapidly, and will continue to do so in the next several years.

    Human-Computer Interaction for BCI Games: Usability and User Experience

    Brain-computer interfaces (BCIs) come with many issues, such as delays, poor recognition, long training times, and cumbersome hardware. Gamers are a large potential target group for this new interaction modality, but why would healthy subjects want to use it? BCI provides a combination of information and features that no other input modality can offer. But for general acceptance of this technology, usability and user experience will need to be taken into account when designing such systems. This paper discusses the consequences of applying knowledge from Human-Computer Interaction (HCI) to the design of BCI for games. The integration of HCI with BCI is illustrated by research examples and showcases intended to take this promising technology out of the lab. Future research needs to move beyond feasibility tests to prove that BCI is also applicable in realistic, real-world settings.

    Guidelines for Feature Matching Assessment of Brain–Computer Interfaces for Augmentative and Alternative Communication

    Purpose: Brain–computer interfaces (BCIs) can provide access to augmentative and alternative communication (AAC) devices using neurological activity alone, without voluntary movements. As with traditional AAC access methods, BCI performance may be influenced by the cognitive, sensory, motor, and motor imagery profiles of those who use these devices. Therefore, we propose a person-centered, feature matching framework consistent with clinical AAC best practices to ensure selection of the most appropriate BCI technology to meet individuals' communication needs. Method: The proposed feature matching procedure is based on the current state of the art in BCI technology and published reports on cognitive, sensory, motor, and motor imagery factors important for successful operation of BCI devices. Results: Considerations for successful selection of BCI for accessing AAC are summarized based on interpretation by a multidisciplinary team with experience in AAC, BCI, neuromotor disorders, and cognitive assessment. The set of features that supports each BCI option is discussed in a hypothetical case format to model the possible transition of BCI research from the laboratory into clinical AAC applications. Conclusions: This procedure is an initial step toward feature matching assessment for the full range of BCI devices. Future investigations are needed to fully examine how person-centered factors influence BCI performance across devices.

    Influence of Auditory Cues on the Neuronal Response to Naturalistic Visual Stimuli in a Virtual Reality Setting

    Virtual reality environments offer great opportunities to study the performance of brain-computer interfaces (BCIs) in real-world contexts. As real-world stimuli are typically multimodal, their neuronal integration elicits complex response patterns. To investigate the effect of additional auditory cues on the processing of visual information, we used virtual reality to mimic safety-related events in an industrial environment while we concomitantly recorded electroencephalography (EEG) signals. We simulated a box traveling on a conveyor belt system in which two types of stimuli, an exploding box and a burning box, interrupt regular operation. The recordings from 16 subjects were divided into two subsets, a visual-only and an audio-visual experiment. In the visual-only experiment, both stimuli elicited a similar response pattern: a visual evoked potential (VEP) followed by an event-related potential (ERP) over the occipital-parietal lobe. Moreover, we found the perceived severity of the event to be reflected in the signal amplitude. Interestingly, the additional auditory cues had a twofold effect on these findings: the P1 component was significantly suppressed for the exploding box stimulus, whereas the N2c was enhanced for the burning box stimulus. This result highlights the impact of multisensory integration on the performance of realistic BCI applications. Indeed, we observed alterations in the offline classification accuracy for a detection task based on mixed feature extraction (variance, power spectral density, and discrete wavelet transform) and a support vector machine classifier. For the explosion, the accuracy decreased slightly, by 1.64 percentage points, in the audio-visual experiment compared to the visual-only experiment. Conversely, the classification accuracy for the burning box increased by 5.58 percentage points when additional auditory cues were present. Hence, we conclude that, especially in challenging detection tasks, it is favorable to consider the potential of multisensory integration when BCIs are supposed to operate under (multimodal) real-world conditions.
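
    As a loose sketch of the kind of pipeline described above (mixed feature extraction with variance, power-spectral-density, and discrete-wavelet-transform features, followed by a support vector machine), the following code illustrates one possible implementation. The sampling rate, frequency range, wavelet choice, and hyperparameters are assumptions, not the configuration used in the study.

```python
# Minimal sketch of a mixed-feature EEG event detector: variance + PSD + DWT
# features per channel, fed to an SVM. Window length, wavelet, and channel
# layout are illustrative assumptions, not the paper's exact configuration.
import numpy as np
import pywt                                # PyWavelets
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

FS = 250  # assumed sampling rate in Hz


def epoch_features(epoch):
    """Flatten one epoch (n_channels, n_samples) into a feature vector."""
    feats = []
    for ch in epoch:
        feats.append(np.var(ch))                             # variance
        _, psd = welch(ch, fs=FS, nperseg=FS)                 # power spectral density
        feats.extend(psd[:40])                                # keep roughly 0-40 Hz bins
        coeffs = pywt.wavedec(ch, "db4", level=4)             # discrete wavelet transform
        feats.extend(np.log(np.sum(c ** 2) + 1e-12) for c in coeffs)  # sub-band energies
    return np.asarray(feats)


def fit_detector(epochs, labels):
    """Train an SVM detector on a list of epochs and binary event labels."""
    X = np.vstack([epoch_features(e) for e in epochs])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X, labels)
    return clf
```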

    Bacteria Hunt: A multimodal, multiparadigm BCI game

    Brain-Computer Interfaces (BCIs) allow users to control applications by brain activity. Among their possible applications for non-disabled people, games are promising candidates. BCIs can enrich game play through the mental and affective state information they carry. During the eNTERFACE’09 workshop we developed the Bacteria Hunt game, which can be played by keyboard and BCI, using SSVEP and relative alpha power. We conducted experiments to investigate what difference positive versus negative neurofeedback would make to subjects’ relaxation states and how well the different BCI paradigms can be used together. We observed no significant difference in mean alpha band power (and thus relaxation) or in user experience between the games applying positive and negative feedback. We also found that alpha power before SSVEP stimulation was significantly higher than alpha power during SSVEP stimulation, indicating that there is some interference between the two BCI paradigms.
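
    For illustration, a relative alpha power measure of the kind used here as a relaxation index can be estimated from the EEG power spectrum. The sketch below uses Welch's method with assumed band edges and sampling rate; it does not reproduce the game's actual processing parameters.

```python
# Minimal sketch of a relative alpha power estimate, the relaxation measure
# referred to above. Sampling rate, band edges, and Welch parameters are
# assumed values for illustration only.
import numpy as np
from scipy.signal import welch

FS = 256                    # assumed sampling rate in Hz
ALPHA_BAND = (8.0, 12.0)    # assumed alpha band in Hz
BROAD_BAND = (4.0, 30.0)    # assumed reference band for the "relative" measure


def band_power(freqs, psd, lo, hi):
    """Sum of PSD bins within [lo, hi] Hz."""
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].sum()


def relative_alpha_power(segment, fs=FS):
    """Alpha power divided by broadband power for a 1-D EEG segment."""
    freqs, psd = welch(segment, fs=fs, nperseg=2 * fs)
    return band_power(freqs, psd, *ALPHA_BAND) / band_power(freqs, psd, *BROAD_BAND)
```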

    Functional Source Separation for EEG-fMRI Fusion: Application to Steady-State Visual Evoked Potentials

    Neurorobotics is one of the most ambitious fields in robotics, driving the integration of interdisciplinary data and knowledge. One of the most productive areas of interdisciplinary research in this area has been the implementation of biologically inspired mechanisms in the development of autonomous systems. Specifically, enabling such systems to display adaptive behavior, such as learning from good and bad outcomes, has been achieved by quantifying and understanding the neural mechanisms of the brain networks mediating adaptive behaviors in humans and animals. For example, associative learning from aversive or dangerous outcomes is crucial for an autonomous system to avoid dangerous situations in the future. A body of neuroscience research has suggested that the neurocomputations in the human brain during associative learning involve re-shaping of sensory responses. The nature of these adaptive changes in sensory processing during learning, however, is not yet well enough understood for them to be readily implemented in on-board algorithms for robotics applications. Toward this overall goal, we recorded simultaneous electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) to characterize one candidate mechanism: large-scale brain oscillations. The present report examines the use of Functional Source Separation (FSS) as an optimization step in EEG-fMRI fusion that harnesses timing information to constrain the solutions to those satisfying physiological assumptions. We applied this approach to the voxel-wise correlation of steady-state visual evoked potential (ssVEP) amplitude and the blood-oxygen-level-dependent (BOLD) signal across both time series. The results showed the benefit of FSS for the extraction of robust ssVEP signals during simultaneous EEG-fMRI recordings. Applied to data from a 3-phase aversive conditioning paradigm, the correlation maps across the three phases (habituation, acquisition, extinction) show converging results, notably major overlapping areas in both primary and extended visual cortical regions, including the calcarine sulcus, lingual cortex, and cuneus. In addition, during the acquisition phase, when aversive learning occurs, we observed additional correlations between ssVEP and BOLD in the anterior cingulate cortex (ACC) as well as the precuneus and superior temporal gyrus.
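
    As a minimal sketch of the voxel-wise correlation step described above, the code below correlates a single ssVEP amplitude time course (assumed to be already resampled to the fMRI volume rate and convolved with a haemodynamic response function) with every BOLD voxel time series. Array shapes and preprocessing are assumptions for illustration; this is the correlation mapping only, not the FSS optimization itself.

```python
# Minimal sketch of voxel-wise correlation between an ssVEP amplitude regressor
# and BOLD time series. Shapes and preprocessing are illustrative assumptions.
import numpy as np


def voxelwise_correlation(bold, ssvep_amplitude):
    """Pearson correlation between one regressor and every voxel.

    bold:            array of shape (x, y, z, n_volumes)
    ssvep_amplitude: array of shape (n_volumes,), ssVEP amplitude per volume
    returns:         correlation map of shape (x, y, z)
    """
    shape = bold.shape[:3]
    data = bold.reshape(-1, bold.shape[-1])              # (n_voxels, n_volumes)
    data = data - data.mean(axis=1, keepdims=True)
    reg = ssvep_amplitude - ssvep_amplitude.mean()
    denom = np.linalg.norm(data, axis=1) * np.linalg.norm(reg)
    denom[denom == 0] = np.inf                           # avoid division by zero
    corr = data @ reg / denom
    return corr.reshape(shape)
```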