4 research outputs found

    Driving a Semiautonomous Mobile Robotic Car Controlled by an SSVEP-Based BCI

    Brain-computer interfaces (BCIs) are an established class of technologies that translate brain activity into computer commands. The aim of our research is to develop and evaluate a BCI control application for assistive technologies that can be used for remote telepresence or remote driving. The communication channel to the target device is based on steady-state visual evoked potentials (SSVEPs). To test the control application, a mobile robotic car (MRC) was introduced and a four-class BCI graphical user interface (with live video feedback and stimulation boxes on the same screen) was designed for piloting the MRC. To evaluate a potential real-life scenario for such assistive technology, we present a study in which 61 subjects steered the MRC through a predetermined route. All 61 subjects were able to control the MRC and finish the experiment (mean time 207.08 s, SD 50.25) with a mean (SD) accuracy and information transfer rate (ITR) of 93.03% (5.73) and 14.07 bits/min (4.44), respectively. The results show that our proposed SSVEP-based BCI control application is suitable for mobile robots with a shared-control approach. We also did not observe any negative influence of the simultaneous live video feedback and SSVEP stimulation on the performance of the BCI system.
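
    The reported ITR is presumably computed with the standard Wolpaw formula, which combines the number of classes, the classification accuracy, and the time per selection. Below is a minimal sketch of that formula in Python; the class count, accuracy, and timing values are placeholders for illustration, not the study's exact parameters.

        import math

        def wolpaw_itr(n_classes: int, accuracy: float, selection_time_s: float) -> float:
            """Wolpaw ITR in bits/min for an n-class BCI with accuracy p and one selection every T seconds."""
            n, p = n_classes, accuracy
            bits = math.log2(n)                       # bits per selection at perfect accuracy
            if 0 < p < 1:                             # penalty for misclassifications
                bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
            return bits * (60.0 / selection_time_s)   # selections per minute times bits per selection

        # Placeholder example: 4 classes, 93% accuracy, ~6.5 s per command yields roughly 14 bits/min
        print(round(wolpaw_itr(4, 0.93, 6.5), 2))

    Plugging in other accuracies or selection times shows how strongly the bits/min figure depends on command duration as well as on classification accuracy.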

    EEG-based classification of visual and auditory monitoring tasks

    Using EEG signals for mental workload detection has received particular attention in passive BCI research aimed at increasing safety and performance in high-risk and safety-critical occupations such as piloting and air traffic control. Along with detecting the level of mental workload, it has been suggested that automatically detecting the type of mental workload (e.g., auditory, visual, motor, cognitive) would also be useful. In this work, a novel experimental protocol was developed in which subjects performed a task involving one of two different types of mental workload (specifically, auditory and visual), each under two different levels of task demand (easy and difficult). The tasks were designed to be nearly identical in terms of visual and auditory stimuli, and differed only in the type of stimuli the subject was monitoring and attending to. EEG power spectral features were extracted and used to train linear and non-linear classifiers. Preliminary results on six subjects suggested that the auditory and visual tasks could be distinguished from one another, and individually from a baseline condition (which also contained nearly identical stimuli that the subject did not need to attend to at all), with accuracy significantly exceeding chance. This was true both when classification was done within a workload level and when data from the two workload levels were combined. Preliminary results also showed that easy and difficult trials could be distinguished from one another, within each sensory domain (auditory and visual) as well as with both domains combined. Though further investigation is required, these preliminary results are promising and suggest the feasibility of a passive BCI for detecting both the type and the level of mental workload.
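
    As an illustration of the kind of pipeline described above, the sketch below extracts log band-power features from EEG epochs with Welch's method and trains a linear classifier; the sampling rate, frequency bands, and variable names are assumptions for the example, not details taken from the study.

        import numpy as np
        from scipy.signal import welch
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        FS = 256                                                       # assumed sampling rate (Hz)
        BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # assumed frequency bands

        def band_power_features(epochs):
            """epochs: (n_epochs, n_channels, n_samples) -> (n_epochs, n_channels * n_bands) log band powers."""
            features = []
            for epoch in epochs:
                freqs, psd = welch(epoch, fs=FS, nperseg=FS)           # PSD for every channel
                per_band = [np.log(psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1))
                            for lo, hi in BANDS.values()]
                features.append(np.concatenate(per_band))
            return np.array(features)

        # epochs: EEG segments labelled by attended task type (e.g., 0 = auditory, 1 = visual)
        # X = band_power_features(epochs)
        # clf = LinearDiscriminantAnalysis()
        # print(cross_val_score(clf, X, labels, cv=5).mean())          # compare against the 0.5 chance level

    A non-linear classifier (e.g., an SVM with an RBF kernel) could be swapped in for the LDA step without changing the feature extraction.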

    Die Kraft der Gedanken (The Power of Thoughts)
