283 research outputs found

    Designing an Interval Type-2 Fuzzy Logic System for Handling Uncertainty Effects in Brain–Computer Interface Classification of Motor Imagery Induced EEG Patterns

    One of the urgent challenges in the automated analysis and interpretation of electrical brain activity is the effective handling of uncertainties associated with the complexity and variability of brain dynamics, reflected in the nonstationary nature of brain signals such as the electroencephalogram (EEG). This poses a severe problem for existing approaches to the classification task within brain–computer interface (BCI) systems. The recently emerged type-2 fuzzy logic (T2FL) methodology has shown remarkable potential in dealing with uncertain information given limited insight into the nature of the data-generating mechanism. The objective of this work is thus to examine the applicability of the T2FL approach to the problem of EEG pattern recognition. In particular, the focus is two-fold: i) the design methodology for an interval T2FL system (IT2FLS) that can robustly deal with inter-session as well as within-session manifestations of nonstationary spectral EEG correlates of motor imagery (MI), and ii) the comprehensive examination of the proposed fuzzy classifier in both off-line and on-line EEG classification case studies. The on-line evaluation of IT2FLS-controlled real-time neurofeedback over multiple recording sessions holds special importance for EEG-based BCI technology. In addition, a retrospective comparative analysis accounting for other popular BCI classifiers such as linear discriminant analysis (LDA), kernel Fisher discriminant (KFD) and support vector machines (SVMs), as well as a conventional type-1 FLS (T1FLS), simulated off-line on the recorded EEGs, demonstrated the enhanced potential of the proposed IT2FLS approach to robustly handle uncertainty effects in BCI classification.
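    The abstract gives no implementation details; as a rough, hypothetical illustration of the core IT2FLS idea (parameters and feature names are assumptions, not the authors' system), the sketch below shows an interval type-2 Gaussian membership function with an uncertain mean, whose lower and upper membership grades bound the footprint of uncertainty when fuzzifying a spectral EEG feature.

```python
import numpy as np

def it2_gaussian_membership(x, m1, m2, sigma):
    """Interval type-2 Gaussian membership with uncertain mean m in [m1, m2].

    Returns the (lower, upper) membership grades bounding the footprint of
    uncertainty (FOU) for a crisp input x. Illustrative values only.
    """
    g = lambda x, m: np.exp(-0.5 * ((x - m) / sigma) ** 2)

    # Upper membership: flat at 1 between the two means, Gaussian outside.
    if m1 <= x <= m2:
        upper = 1.0
    elif x < m1:
        upper = g(x, m1)
    else:
        upper = g(x, m2)

    # Lower membership: the smaller of the two boundary Gaussians.
    lower = min(g(x, m1), g(x, m2))
    return lower, upper

# Example: fuzzify a (hypothetical) normalised mu-band power feature.
lo, up = it2_gaussian_membership(x=0.42, m1=0.35, m2=0.50, sigma=0.10)
print(f"membership interval: [{lo:.3f}, {up:.3f}]")
```

    A complete IT2FLS would aggregate such membership intervals across a rule base and apply a type-reduction step (for example the Karnik-Mendel procedure) before producing a crisp class decision.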

    Brain-Computer Interface for Control of Wheelchair Using Fuzzy Neural Networks


    Hand (Motor) Movement Imagery Classification of EEG Using Takagi-Sugeno-Kang Fuzzy-Inference Neural Network

    Approximately 20 million people in the United States suffer from irreversible nerve damage and would benefit from a neuroprosthetic device modulated by a Brain-Computer Interface (BCI). These devices restore independence by replacing peripheral nervous system functions such as peripheral motor control. Although devices are currently under investigation, contemporary methods fail to offer adaptability and proper signal recognition for output devices. Human anatomical differences prevent a fixed-model system from providing consistent classification performance across subjects. Furthermore, notoriously noisy signals such as the electroencephalogram (EEG) require complex measures for signal detection. Therefore, there remains a tremendous need to explore and improve new algorithms. This report investigates a signal-processing model that is better suited for BCI applications because it incorporates machine learning and fuzzy logic. Whereas traditional machine learning techniques use precise functions to map the input into the feature space, fuzzy-neuro systems apply imprecise membership functions to account for uncertainty and can be updated via supervised learning. Thus, this method is better equipped to tolerate uncertainty and improve performance over time. Moreover, the variation of this algorithm used in this study has a higher convergence speed. The proposed two-stage signal-processing model consists of feature extraction and feature translation, with an emphasis on the latter. The feature extraction stage includes Blind Source Separation (BSS) and the Discrete Wavelet Transform (DWT), and the feature translation stage includes the Takagi-Sugeno-Kang Fuzzy-Neural Network (TSKFNN). Performance of the proposed model corresponds to an average classification accuracy of 79.4% across 40 subjects, which is higher than the 75% typically reported in the literature, making this a superior model.
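    As a minimal sketch of the described two-stage pipeline (not the reported TSKFNN), the code below uses PyWavelets for a discrete wavelet decomposition and a zero-order Takagi-Sugeno-Kang inference step with a weighted average of rule consequents; the wavelet choice, rule parameters and class labels are placeholders, and the BSS step and the supervised tuning of the rule parameters are omitted.

```python
import numpy as np
import pywt  # PyWavelets

def dwt_band_features(eeg_channel, wavelet="db4", level=4):
    """Stage 1 (sketch): summarise each DWT sub-band by its log energy."""
    coeffs = pywt.wavedec(eeg_channel, wavelet, level=level)
    return np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])

def tsk_inference(x, centers, sigmas, consequents):
    """Stage 2 (sketch): zero-order TSK output as a firing-strength-weighted
    average of per-rule consequents. Parameters would normally be learned."""
    # Gaussian rule firing strengths, one per rule (product over features).
    w = np.exp(-0.5 * np.sum(((x - centers) / sigmas) ** 2, axis=1))
    return np.dot(w, consequents) / (np.sum(w) + 1e-12)

# Toy usage with random data and two hand-picked rules (hypothetical values).
rng = np.random.default_rng(0)
x = dwt_band_features(rng.standard_normal(512))
centers = np.stack([x + 0.5, x - 0.5])   # rule centres, shape (2, n_features)
sigmas = np.ones_like(centers)           # rule widths
consequents = np.array([1.0, -1.0])      # scores for two imagery classes
print("TSK output:", tsk_inference(x, centers, sigmas, consequents))
```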

    Leveraging EEG-based speech imagery brain-computer interfaces

    Speech Imagery Brain-Computer Interfaces (BCIs) provide an intuitive and flexible way of interacting via brain activity recorded during imagined speech. Imagined speech can be decoded in the form of syllables or words and captured even with non-invasive measurement methods such as electroencephalography (EEG). Over the last decade, research in this field has made tremendous progress, and prototypical implementations of EEG-based Speech Imagery BCIs are numerous. However, most work is still conducted in controlled laboratory environments with offline classification and does not find its way into real online scenarios. Within this thesis we identify three main reasons for this: the mentally and physically exhausting training procedures, insufficient classification accuracies, and cumbersome EEG setups with usually high-resolution headsets. We furthermore elaborate on possible solutions to these problems and present and evaluate new methods in each of the three domains. In detail, we introduce two new training concepts for imagined speech BCIs, one based on EEG activity recorded during silent reading and the other on activity recorded during overtly speaking certain words. Insufficient classification accuracies are addressed by introducing the concept of a Semantic Speech Imagery BCI, which classifies the semantic category of an imagined word before the word itself to increase the performance of the system. Finally, we investigate different techniques for electrode reduction in Speech Imagery BCIs and aim to find a suitable subset of electrodes for EEG-based imagined speech detection, thereby simplifying the cumbersome setups. All of the presented results, together with general remarks on experiences and best practices for study setups concerning imagined speech, are summarized and are intended to serve as guidelines for further research in the field, thereby leveraging Speech Imagery BCIs towards real-world application.
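    To make the semantic-first decoding idea concrete, the sketch below shows a two-level classifier under assumed conditions: a first model predicts the semantic category of the imagined word, and a second, category-specific model then predicts the word itself. The scikit-learn LDA estimators, the feature matrix, and the category/word labels are illustrative assumptions, not the thesis implementation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

class SemanticSpeechImageryClassifier:
    """Sketch of a two-level decoder: semantic category first, then the word
    within that category. EEG feature extraction is assumed to be done."""

    def fit(self, X, categories, words):
        # Level 1: one classifier over semantic categories.
        self.category_clf = LinearDiscriminantAnalysis().fit(X, categories)
        # Level 2: one word classifier per category.
        self.word_clfs = {}
        for cat in np.unique(categories):
            idx = categories == cat
            self.word_clfs[cat] = LinearDiscriminantAnalysis().fit(X[idx], words[idx])
        return self

    def predict(self, X):
        cats = self.category_clf.predict(X)
        return np.array([self.word_clfs[c].predict(x[None, :])[0]
                         for c, x in zip(cats, X)])

# Toy usage with random features and hypothetical labels.
rng = np.random.default_rng(1)
X = rng.standard_normal((40, 16))
categories = np.repeat(["animal", "tool"], 20)
words = np.concatenate([np.tile(["cat", "dog"], 10), np.tile(["saw", "axe"], 10)])
clf = SemanticSpeechImageryClassifier().fit(X, categories, words)
print(clf.predict(X[:5]))
```

    In this toy setup each second-stage classifier only ever has to separate words from its own category, which is what lets the two-level decomposition simplify each individual decision.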