168 research outputs found

    Human Computer Interactions for Amyotrophic Lateral Sclerosis Patients


A novel onset detection technique for brain–computer interfaces using sound-production related cognitive tasks in a simulated-online system

Objective. Self-paced EEG-based BCIs (SP-BCIs) have traditionally been avoided due to two sources of uncertainty: (1) precisely when an intentional command is sent by the brain, i.e., the command onset detection problem, and (2) how different the intentional command is when compared to non-specific (or idle) states. Performance evaluation is also a problem, and there are no suitable standard metrics available. In this paper we attempted to tackle these issues. Approach. Self-paced covert sound-production cognitive tasks (i.e., high-pitch and siren-like sounds) were used to distinguish between intentional commands (IC) and idle states. The IC states were chosen for their ease of execution and negligible overlap with common cognitive states. Band power and a discrete wavelet transform were used for feature extraction, and the Davies–Bouldin index was used for feature selection. Classification was performed using linear discriminant analysis. Main results. Performance was evaluated under offline and simulated-online conditions. For the latter, a performance score called the true-false-positive (TFP) rate, ranging from 0 (poor) to 100 (perfect), was created to take into account both classification performance and onset timing errors. Averaging the results from the best performing IC task for all seven participants, a 77.7% true-positive (TP) rate was achieved in offline testing. For simulated-online analysis the best IC average TFP score was 76.67% (87.61% TP rate, 4.05% false-positive rate). Significance. Results were promising when compared to previous IC onset detection studies using motor imagery, in which the best TP rates were reported as 72.0% and 79.7%, and which, crucially, did not take timing errors into account. Moreover, based on our literature review, there is no previous covert sound-production onset detection system for SP-BCIs. Results showed that the proposed onset detection technique and TFP performance metric have good potential for use in SP-BCIs.
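The pipeline described in this abstract (per-feature class-separability ranking with the Davies–Bouldin index, then linear discriminant analysis) can be sketched roughly as follows. This is an illustrative sketch on synthetic data, not the authors' code; the feature dimensions and the choice of keeping three features are assumptions.

```python
# Sketch: rank candidate band-power/DWT features by Davies-Bouldin index
# (lower = better class separation), keep the best few, classify with LDA.
import numpy as np
from sklearn.metrics import davies_bouldin_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_trials, n_features = 200, 10
X = rng.normal(size=(n_trials, n_features))     # stand-in for extracted features
y = rng.integers(0, 2, size=n_trials)           # 0 = idle state, 1 = intentional command
X[y == 1, :3] += 2.0                            # make the first 3 features informative

# Score each feature on its own: davies_bouldin_score expects a 2-D array.
db_scores = [davies_bouldin_score(X[:, [j]], y) for j in range(n_features)]
selected = np.argsort(db_scores)[:3]            # keep the 3 most separable features

clf = LinearDiscriminantAnalysis().fit(X[:, selected], y)
print(sorted(int(j) for j in selected), round(clf.score(X[:, selected], y), 2))
```

With informative features this selection recovers them reliably; in a real SP-BCI the same ranking would be computed on band-power and wavelet coefficients per channel.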

    Proposals and Comparisons from One-Sensor EEG and EOG Human-Machine Interfaces

[Abstract] Human-Machine Interfaces (HMI) allow users to interact with different devices such as computers or home elements. A key part of HMI is the design of simple, non-invasive interfaces to capture the signals associated with the user's intentions. In this work, we have designed two different approaches based on electroencephalography (EEG) and electrooculography (EOG). In both cases, signal acquisition is performed using only one electrode, which makes placement more comfortable compared to multi-channel systems. We have also developed a Graphical User Interface (GUI) that presents objects to the user using two paradigms: one-by-one objects or rows-columns of objects. Both interfaces and paradigms have been compared for several users considering interactions with home elements. This work has been funded by the Xunta de Galicia (by grant ED431C 2020/15, and grant ED431G2019/01 to support the Centro de Investigación de Galicia "CITIC"), the Agencia Estatal de Investigación of Spain (by grants RED2018-102668-T and PID2019-104958RB-C42) and ERDF funds of the EU (FEDER Galicia & AEI/FEDER, UE), and by predoctoral grant ED481A-2018/156 (Francisco Laport).
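The trade-off between the two presentation paradigms mentioned above can be made concrete with a quick count: highlighting objects one by one needs one highlight per object per scan, while a rows-columns layout needs only one highlight per row plus one per column. The function below is a hypothetical illustration, not code from the paper.

```python
# Highlights needed for one full scan of an R x C grid of selectable objects
# under each presentation paradigm described in the abstract.
def highlights_per_scan(rows: int, cols: int) -> dict:
    return {"one_by_one": rows * cols, "rows_columns": rows + cols}

print(highlights_per_scan(4, 4))  # {'one_by_one': 16, 'rows_columns': 8}
```

The gap widens quickly with grid size, which is why rows-columns scanning is the common choice for larger object sets.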

    Towards a home-use BCI: fast asynchronous control and robust non-control state detection

Brain-Computer Interfaces (BCIs) enable users to control a computer using brain activity alone. Their main purpose is to restore functionality for people with motor disabilities, for example, to restore the ability to communicate. Recent BCIs based on visual evoked potentials (VEPs), the brain's responses to visual stimuli, have achieved high-speed communication. However, BCIs have not yet found their way out of the lab. This is mainly because all recent high-speed BCIs are based on synchronous control, meaning commands can only be executed in time slots dictated by the BCI. The user is therefore unable to select a command at his or her own convenience, which poses a problem in real-world applications. Furthermore, these BCIs rely on stimulation paradigms that restrict the number of possible commands. To be suitable for real-world applications, a BCI should be asynchronous, also called self-paced, and must be able to identify whether or not the user intends to control the system. Although some asynchronous BCI approaches exist, none has achieved real-world-suitable performance. In this thesis, the first asynchronous high-speed BCI is proposed, which allows a virtually unlimited number of commands.
Furthermore, it achieves a nearly perfect distinction between intentional control (IC) and non-control (NC), meaning commands are executed only when the user intends them. This was achieved by a fundamentally different approach from recent methods: instead of a classifier trained on specific stimulation patterns, the presented approach is based on a general model that predicts arbitrary stimulation patterns from the measured VEPs. The approach was evaluated with both a "traditional" machine learning method and a deep learning method. The resulting asynchronous BCI outperforms recent methods many times over in multiple disciplines and is an essential step towards moving BCI applications out of the lab and into real life. With further optimization, discussed in this thesis, it could evolve into the very first end-user-suitable BCI, as it is effective (high accuracy), efficient (fast classifications) and easy to use, and allows the user to perform as many different tasks as desired.
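The core idea described above, predicting the attended stimulation pattern from EEG and executing a command only when the prediction clearly matches a known target, can be sketched as a correlation-based decision rule. Everything here (names, the threshold, the simulated signals) is an illustrative assumption, not the thesis implementation.

```python
# Sketch: match a stimulation sequence decoded from EEG against each
# candidate target's known pattern; report non-control (None) when no
# candidate correlates strongly enough.
import numpy as np

rng = np.random.default_rng(1)
patterns = rng.integers(0, 2, size=(8, 60)).astype(float)  # 8 targets, 60 stimulus frames

def classify(predicted, patterns, threshold=0.5):
    """Return the best-matching target index, or None for the non-control state."""
    corrs = [np.corrcoef(predicted, p)[0, 1] for p in patterns]
    best = int(np.argmax(corrs))
    return best if corrs[best] >= threshold else None

# Simulated attended target: its own pattern plus measurement noise.
noisy = patterns[3] + rng.normal(scale=0.4, size=60)
print(classify(noisy, patterns))                  # picks target 3
print(classify(rng.normal(size=60), patterns))    # pure noise -> likely None
```

Because the decision is made against arbitrary candidate patterns rather than a fixed trained set, the same rule scales to a very large number of simultaneous targets, which is the property the thesis exploits.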

    Brain-Computer Interfacing for Intelligent Systems


    Comparison between covert sound-production task (sound-imagery) vs. motor-imagery for onset detection in real-life online self-paced BCIs

Background. Even though the BCI field has grown quickly in the last few years, it is still mainly investigated as a research area. Increased practicality and usability are required to move BCIs into the real world. Self-paced (SP) systems would reduce the problem, but the big challenge known as the 'onset detection problem' remains. Methods. Our previous studies showed that a new sound-imagery (SI) task, high-tone covert sound production, is very effective in onset detection scenarios, and we expect several advantages over the most common asynchronous approach used thus far, motor imagery (MI): (1) intuitiveness; (2) benefits for people with motor disabilities and, especially, those with lesions in cortical motor areas; and (3) no significant overlap with other common, spontaneous cognitive states, making it easier to use in daily-life situations. The approach was compared with MI tasks in online real-life scenarios, i.e., during activities such as watching videos and reading text. In our scenario, when a new message prompt from a messenger program appeared on the screen, participants watching a video (or reading text, or browsing images) were asked to open the message by executing the SI or MI task, respectively, for each experimental condition. Results. The SI task performed statistically significantly better than the MI approach: 84.04% (SI) vs 66.79% (MI) true-false-positive rate for the sliding-image scenario, and 80.84% vs 61.07% for watching video. The classification performance difference between SI and MI was not significant in the text-reading scenario. Furthermore, SI onset responses (4.08 s) were significantly faster than MI (5.46 s). In terms of basic usability, 75% of subjects found SI easier to use. Conclusions. Our novel SI task outperforms typical MI for SP onset detection BCIs, and would therefore be more easily used in daily-life situations.
This could be a significant step forward for the BCI field, which has so far been restricted mainly to research-oriented indoor laboratory settings.

    Sound-production Related Cognitive Tasks for Onset Detection in Self-Paced Brain-Computer Interfaces

Objective. The main goal of this research is to propose a novel onset detection method for self-paced (SP) brain-computer interfaces (BCIs) to increase the usability and practicality of BCIs, moving them from laboratory research settings towards real-world use. Approach. To achieve this goal, various sound-production related cognitive tasks (SPRCTs) were tested against the idle state in offline and simulated-online experiments. An online experiment was then conducted that opened a messenger dialogue when a new message arrived, by executing the sound imagery (SI) onset detection task in real-life scenarios (e.g. watching video, reading text). The SI task was chosen as an onset task because of its advantages over other tasks: (1) intuitiveness; (2) benefits for people with motor disabilities; (3) no significant overlap with other common, spontaneous cognitive states, making it easier to use in daily-life situations; (4) no dependence on the user's mother language. Main results. The final online experimental results showed that the new SI onset task performed significantly better than the motor imagery (MI) approach: 84.04% (SI) vs 66.79% (MI) TFP score for the sliding-image scenario, and 80.84% vs 61.07% for the watching-video task. Furthermore, SI onset responses were significantly faster than MI. In terms of usability, 75% of subjects answered that SI was easier to use. Significance. The new SPRCT outperforms typical MI for SP onset detection BCIs (significantly better performance, faster onset response and easier usability), and would therefore be more easily used in daily-life situations. Another contribution of this thesis is a novel method for selecting and handling EMG artefact-contaminated EEG channels, which showed significant class-separation improvement over typical blind source separation techniques.
A new performance evaluation metric for SP BCIs, called the true-false-positive (TFP) score, was also proposed as a standardised performance assessment method that considers idle period length, which other typical metrics do not.

    Replacing Indirect Manual Assistive Solutions with Hands-Free, Direct Selection

Case study: BK is a teenage male with severe cerebral palsy, which makes communication very difficult with his current assistive technology. His performance with a manual switch was compared to a hands-free system for computer interaction (Cyberlink Brainfingers/NIA). BK uses a switch-scanning menu, which steps through predetermined options until he chooses the option currently being read aloud by pressing a button. A yes/no menu was used for the switch-scanning interface in both the manual and hands-free conditions, as well as in the point-and-click condition. In both hands-free conditions, BK was as fast and accurate as with the manual assistive solution he has been using for almost 10 years. The results indicate that a hands-free system is a valid assistive-technology direction for BK. As in Marler (2004), perhaps the greatest benefit of a point-and-click hands-free system could be increased engagement.

    Design and evaluation of a time adaptive multimodal virtual keyboard

The usability of virtual-keyboard-based eye-typing systems is currently limited by the lack of adaptive, user-centered approaches, leading to a low text entry rate and the need for frequent recalibration. In this work, we propose a set of methods for adapting the dwell time in asynchronous mode and the trial period in synchronous mode for gaze-based virtual keyboards. The rules take into account commands that allow corrections in the application, and they have been tested on a newly developed virtual keyboard for a structurally complex language using a two-stage, tree-based character selection arrangement. We propose several dwell-based and dwell-free mechanisms with a multimodal access facility, wherein the search for a target item is achieved through gaze detection and selection can happen via a dwell time, a soft-switch, or gesture detection using surface electromyography (sEMG) in asynchronous mode; in synchronous mode, both search and selection may be performed with the eye-tracker alone. System performance is evaluated in terms of text entry rate and information transfer rate under 20 different experimental conditions. The proposed strategy for adapting the parameters over time showed a significant improvement (more than 40%) over non-adaptive approaches for new users. The multimodal dwell-free mechanism using a combination of eye-tracking and soft-switch provides better performance than adaptive methods with eye-tracking only. The overall system receives an excellent grade on the adjective rating scale of the System Usability Scale and a low weighted rating on the NASA Task Load Index, demonstrating the user-centered focus of the system.
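A time-adaptive dwell rule of the kind described above can be sketched as a simple feedback loop: lengthen the dwell after a correction (the user had to delete), shorten it slightly after each successful selection. The update rule, step sizes and bounds below are hypothetical illustrations, not the paper's exact algorithm.

```python
# Sketch: adapt the dwell time from selection outcomes, clamped to safe bounds.
MIN_DWELL, MAX_DWELL = 0.4, 2.0   # seconds (assumed limits)

def adapt_dwell(dwell: float, was_correction: bool, step: float = 0.1) -> float:
    """Return the next dwell time given the outcome of the last selection."""
    if was_correction:                # user issued a delete/correction -> slow down
        dwell += step
    else:                             # successful selection -> cautiously speed up
        dwell -= step / 2
    return min(MAX_DWELL, max(MIN_DWELL, dwell))

dwell = 1.0
for correction in [False, False, True, False]:
    dwell = adapt_dwell(dwell, correction)
print(round(dwell, 2))
```

Making the speed-up step smaller than the slow-down step is a common conservative choice, so one error undoes several successes and the dwell never collapses below what the user can sustain.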