181 research outputs found

    Brain-Computer Interface: Use of Electroencephalogram in Neuro-Rehabilitation

    The brain-computer interface is a technology that has been the subject of intense research over the last few decades. It converts brain signals into actions that control the external environment. A key future application of this technology is the rehabilitation of patients with physical disabilities. This chapter mainly explores the use of the EEG (electroencephalogram), a popular non-invasive method on which many brain-computer interfaces are based. The process of signal extraction, selection and classification is discussed, along with the challenges and techniques in communication and rehabilitation for people with motor impairment and recent research in this field.

    Classification of Frequency and Phase Encoded Steady State Visual Evoked Potentials for Brain Computer Interface Speller Applications using Convolutional Neural Networks

    Over the past decade there have been substantial improvements in vision-based Brain-Computer Interface (BCI) spellers for quadriplegic patient populations. This thesis contains a review of the numerous bio-signals available to BCI researchers, as well as a brief chronology of the foremost decoding methodologies used to date. Recent advances in classification accuracy and information transfer rate can be attributed primarily to time-consuming, patient-specific parameter optimization procedures. The aim of the current study was to develop analysis software with potential 'plug-and-play' functionality. To this end, convolutional neural networks, presently established as state-of-the-art analytical techniques for image processing, were utilized. The thesis defines a deep convolutional neural network architecture for the offline classification of phase- and frequency-encoded SSVEP bio-signals. Networks were trained using an extensive 35-participant open-source electroencephalographic (EEG) benchmark dataset (Department of Bio-medical Engineering, Tsinghua University, Beijing). An average classification accuracy of 82.24% and an information transfer rate of 22.22 bpm were achieved on a BCI-naïve participant dataset for a 40-target alphanumeric display, in the absence of any patient-specific parameter optimization.
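The accuracy and information-transfer-rate figures reported here can be related through the standard Wolpaw ITR formula. A minimal sketch, using the abstract's stated values (40 targets, 82.24% accuracy); the selection rate that converts bits per selection into bits per minute is not computed, since the abstract does not state it directly.

```python
import math

def wolpaw_itr_bits(n_targets: int, accuracy: float) -> float:
    """Information transferred per selection (bits), Wolpaw et al. formula."""
    p = accuracy
    bits = math.log2(n_targets)
    if 0 < p < 1:  # the entropy terms vanish at p = 1
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n_targets - 1))
    return bits

# Values reported in the abstract: 40 targets, 82.24% accuracy.
bits_per_selection = wolpaw_itr_bits(40, 0.8224)
print(round(bits_per_selection, 3))  # ≈ 3.708 bits per selection
```

At roughly 6 selections per minute this yields an ITR on the order of the 22.22 bpm the abstract reports.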

    State-of-the-Art in BCI Research: BCI Award 2010


    Support vector machines to detect physiological patterns for EEG and EMG-based human-computer interaction:a review

    Support vector machines (SVMs) are widely used classifiers for detecting physiological patterns in human-computer interaction (HCI). Their success is due to their versatility, robustness and the wide availability of free dedicated toolboxes. The literature frequently reports insufficient detail about SVM implementation and/or parameter selection, making it impossible to reproduce a study's analysis and results. Performing an optimized classification and reporting results properly requires a comprehensive, critical overview of SVM applications. The aim of this paper is to review the use of SVMs in detecting brain and muscle patterns for HCI, focusing on electroencephalography (EEG) and electromyography (EMG) techniques. In particular, the basic principles of SVM theory are outlined, together with a description of several relevant implementations from the literature. Furthermore, details of the reviewed papers are listed in tables, and statistics on SVM use in the literature are presented. The suitability of SVMs for HCI is discussed and critical comparisons with other classifiers are reported.
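The reproducibility problem the review raises is exactly that the kernel and its hyperparameters often go unreported. A minimal, self-contained sketch of the part that must be reported, the RBF-kernel decision function; the support vectors, labels, dual coefficients and gamma below are all hypothetical, chosen only to illustrate the computation.

```python
import math

def rbf_kernel(x, z, gamma=0.5):
    """RBF kernel: exp(-gamma * ||x - z||^2)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def svm_decision(x, support_vectors, gamma=0.5, bias=0.0):
    """f(x) = sum_i alpha_i * y_i * K(sv_i, x) + b; the sign gives the class."""
    return bias + sum(alpha * y * rbf_kernel(sv, x, gamma)
                      for sv, y, alpha in support_vectors)

# Hypothetical support set: (vector, label, dual coefficient alpha).
svs = [((0.0, 0.0), +1, 1.0), ((2.0, 2.0), -1, 1.0)]

print(svm_decision((0.1, 0.0), svs) > 0)   # near the positive SV -> True
print(svm_decision((2.0, 2.1), svs) > 0)   # near the negative SV -> False
```

Reporting gamma, the regularization constant C used during training, and the feature scaling is what makes such a classifier reproducible.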

    Speech Feature Analysis and Discrimination in Biological Information

    A silent speech interface is a system that allows people to communicate through speech without using their own speech sounds. Today, a variety of speech interfaces have been developed using biological signals such as eye movement and articulatory motion. These interfaces mainly support people with speech disorders in communicating with others, yet many speech disorders remain unaddressed by current technologies. A likely cause is the limited number of biological signals so far used for speech interfaces, so the remaining disorders may be addressed by identifying new ones. We therefore aim to find new biological signals that can be used for speech interface development, focusing on the vibration of the vocal folds and on brain waves. After measuring the data and extracting features, we verified whether these signals can be used to classify speech sounds with machine learning models: a Support Vector Machine for the vocal-fold vibrations and an Echo State Network for the brain waves. Using the vocal-fold vibration signals, Japanese vowels could be classified with 71% accuracy on average; using the brain waves, five different consonants were classified with 28.3% accuracy on average. These findings indicate that vocal-fold vibration signals and brain waves can serve as new biological signals for speech interface development. This study also revealed improvements to be considered in future work that may further raise classification accuracy.

    Exploring Effects of Background Music in A Serious Game on Attention by Means of EEG Signals in Children

    Music and serious games are, separately, useful alternative therapy methods for helping people with cognitive disorders, including Attention Deficit Hyperactivity Disorder (ADHD). The goal of this thesis is to explore the effect of background music on children with and without ADHD. In this study, a simple Tetris game was designed with three background conditions: Beethoven, Mozart, and no music. Among the available brainwave recording techniques, electroencephalography (EEG) allows for the most efficient use of BCI. We recorded the EEG brain signals of typical and ADHD subjects who played our Tetris game according to a protocol consisting of three trials, each with a different background music condition. The attention-related alpha and beta waves of the EEG signals were analyzed using time- and time-frequency-domain features, and the changes over the one-minute Tetris game sections were investigated with the Short-Time Fourier Transform (STFT). The results showed that music has a considerable impact on children's attention. Regarding music types, Mozart generally increased beta waves while decreasing alpha-band waves for subjects without ADHD. On the other hand, Beethoven increased both alpha- and beta-band values for children with ADHD.
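The alpha/beta comparison above boils down to summing spectral power inside two frequency bands per analysis window. A stdlib-only sketch of one such window, using a plain DFT on a synthetic one-second "EEG" signal (the thesis uses real recordings and the STFT across many windows; sampling rate and signal here are assumptions for illustration).

```python
import math

FS = 128  # sampling rate in Hz (assumed)
N = 128   # one-second analysis window

# Synthetic "EEG": a dominant 10 Hz (alpha) tone plus a weaker 20 Hz (beta) tone.
signal = [math.sin(2 * math.pi * 10 * n / FS) +
          0.3 * math.sin(2 * math.pi * 20 * n / FS) for n in range(N)]

def dft_power(x, k):
    """Squared magnitude of DFT bin k (frequency k * FS / N Hz)."""
    re = sum(v * math.cos(2 * math.pi * k * n / len(x)) for n, v in enumerate(x))
    im = -sum(v * math.sin(2 * math.pi * k * n / len(x)) for n, v in enumerate(x))
    return re * re + im * im

def band_power(x, lo_hz, hi_hz):
    """Sum of bin powers whose frequencies fall inside [lo_hz, hi_hz]."""
    return sum(dft_power(x, k) for k in range(1, len(x) // 2)
               if lo_hz <= k * FS / len(x) <= hi_hz)

alpha = band_power(signal, 8, 12)   # alpha band, 8-12 Hz
beta = band_power(signal, 13, 30)   # beta band, 13-30 Hz
print(alpha > beta)  # True: the 10 Hz component dominates
```

Sliding this window along the recording and repeating the computation per window is what the STFT analysis in the thesis does at scale.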

    Language Model Applications to Spelling with Brain-Computer Interfaces

    Within the Ambient Assisted Living (AAL) community, Brain-Computer Interfaces (BCIs) have raised great hopes, as they provide alternative communication means for persons with disabilities, bypassing the need for speech and other motor activities. Although significant advances have been made in the last decade, applications of language models (e.g., word prediction and completion) have only recently started to appear in BCI systems. The main goal of this article is to review the language model applications that supplement non-invasive BCI-based communication systems, discussing their potential and limitations, and to discern future trends. First, a brief overview of the most prominent BCI spelling systems is given, followed by an in-depth discussion of the language models applied in them.
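Word completion of the kind surveyed here can be sketched with a simple frequency-ranked prefix lookup; real speller language models use far larger corpora and n-gram or neural probabilities. The corpus and counts below are purely hypothetical.

```python
from collections import Counter

# Hypothetical word counts standing in for a trained unigram language model.
corpus = ("the quick brown fox jumps over the lazy dog "
          "the brain computer interface helps the user type").split()
unigram = Counter(corpus)

def complete(prefix: str, k: int = 3):
    """Rank known words starting with `prefix` by unigram frequency."""
    matches = [(w, c) for w, c in unigram.items() if w.startswith(prefix)]
    matches.sort(key=lambda wc: (-wc[1], wc[0]))  # frequency, then alphabetic
    return [w for w, _ in matches[:k]]

print(complete("th"))  # the most frequent matching word ranks first
```

Offering the top-ranked completions as extra speller targets is what lets a BCI user select a whole word in one step instead of spelling it letter by letter.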

    Leveraging EEG-based speech imagery brain-computer interfaces

    Speech Imagery Brain-Computer Interfaces (BCIs) provide an intuitive and flexible way of interacting via the brain activity recorded during imagined speech. Imagined speech can be decoded in the form of syllables or words and captured even with non-invasive measurement methods such as electroencephalography (EEG). Over the last decade, research in this field has made tremendous progress, and prototypical implementations of EEG-based Speech Imagery BCIs are numerous. However, most work is still conducted in controlled laboratory environments with offline classification and does not find its way into real online scenarios. In this thesis we identify three main reasons for these circumstances: mentally and physically exhausting training procedures, insufficient classification accuracies, and cumbersome EEG setups with usually high-resolution headsets. We furthermore elaborate on possible solutions to these problems and present and evaluate new methods in each domain. In detail, we introduce two new training concepts for imagined-speech BCIs, one based on EEG activity during silent reading and the other on activity recorded while overtly speaking certain words. Insufficient classification accuracies are addressed by introducing the concept of a Semantic Speech Imagery BCI, which classifies the semantic category of an imagined word before the word itself in order to increase the performance of the system. Finally, we investigate different techniques for electrode reduction in Speech Imagery BCIs and aim to find a suitable subset of electrodes for EEG-based imagined-speech detection, thereby simplifying the cumbersome setups.
All of our results, together with general remarks on experience and best practice for study setups concerning imagined speech, are summarized and intended to act as guidelines for further research in the field, thereby leveraging Speech Imagery BCIs towards real-world application.
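The two-stage idea behind the Semantic Speech Imagery BCI can be illustrated with a toy decoder: classifying the semantic category first shrinks the candidate set the word classifier must separate. Vocabulary, categories and classifier scores below are all hypothetical, not taken from the thesis.

```python
# Hypothetical vocabulary grouped into semantic categories.
vocab = {
    "animals": ["dog", "cat", "bird"],
    "numbers": ["one", "two", "three"],
    "colors":  ["red", "green", "blue"],
}

def classify(category_scores, word_scores):
    """Stage 1 picks the semantic category; stage 2 picks a word within it,
    so the word classifier only separates 3 candidates instead of 9."""
    category = max(category_scores, key=category_scores.get)
    candidates = vocab[category]
    word = max(candidates, key=lambda w: word_scores.get(w, 0.0))
    return category, word

# Hypothetical decoder outputs (e.g. softmax scores from two classifiers).
cat_scores = {"animals": 0.7, "numbers": 0.2, "colors": 0.1}
word_scores = {"dog": 0.3, "cat": 0.6, "bird": 0.1, "one": 0.9}
print(classify(cat_scores, word_scores))  # ('animals', 'cat')
```

Note that "one" scores highest among all words but is excluded once the category decision has narrowed the candidates, which is precisely how the hierarchy can raise overall accuracy.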

    Supervised and unsupervised training of deep autoencoder

    Fall 2017. Includes bibliographical references. Deep learning has proven to be a very useful approach for learning complex data. Recent research in speech recognition, visual object recognition and natural language processing shows that deep generative models, which contain many layers of latent features, can learn complex data very efficiently. An autoencoder neural network with multiple layers can be used as such a deep network to learn complex patterns in data. Because training a multi-layer neural network is time-consuming, a pre-training step is employed to initialize the weights of the deep network and speed up training. In the pre-training step, each layer is trained individually, and the output of each layer is wired to the input of the successive layer. After pre-training, all the layers are stacked together to form the deep network, and post-training, also known as fine-tuning, is done on the whole network to further improve the solution. This way of training a deep network is known as stacked autoencoding, and the resulting architecture is known as a stacked autoencoder. It is a very useful tool for classification as well as dimensionality reduction. In this research we propose two new approaches to pre-train a deep autoencoder. We also propose a new supervised learning algorithm, called Centroid-encoding, which shows promising results in low-dimensional embedding and classification. We use EEG data, gene expression data and MNIST handwritten digit data to demonstrate the usefulness of the proposed methods.
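The greedy layer-wise procedure described above can be sketched at minimal scale: each "layer" here is a scalar linear autoencoder, layer 2 is pre-trained on layer 1's codes, and the stack is then evaluated end to end. The data, learning rate and epoch count are illustrative assumptions, not the thesis's setup.

```python
data = [0.5 + i / 10 for i in range(11)]  # scalar training samples

def pretrain_layer(xs, lr=0.05, epochs=200):
    """Train one scalar linear autoencoder layer: x -> w_e*x -> w_d*h."""
    w_e, w_d = 0.5, 0.5
    for _ in range(epochs):
        for x in xs:
            h = w_e * x
            err = w_d * h - x               # reconstruction error
            w_d -= lr * 2 * err * h         # gradient of squared error w.r.t. w_d
            w_e -= lr * 2 * err * w_d * x   # gradient w.r.t. w_e
    return w_e, w_d

# Greedy pre-training: layer 1 on the data, layer 2 on layer 1's codes.
w_e1, w_d1 = pretrain_layer(data)
codes = [w_e1 * x for x in data]
w_e2, w_d2 = pretrain_layer(codes)

# After pre-training, the stacked encoder/decoder already reconstructs well;
# fine-tuning would now adjust all four weights jointly on the same loss.
x = 1.0
recon = w_d1 * (w_d2 * (w_e2 * (w_e1 * x)))
print(abs(recon - x) < 0.05)  # True
```

The point of the pre-training step is visible even at this scale: the stacked weights start from a configuration that already reconstructs the input, so joint fine-tuning begins near a good solution instead of from random initialization.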

    Identification of EEG signal patterns between adults with dyslexia and normal controls

    Electroencephalography (EEG) is one of the most useful techniques for representing the behaviour of the brain and helps uncover valuable insights through the measurement of the brain's electrical activity. Hence, it plays a vital role in detecting neurological disorders such as epilepsy. Dyslexia is a hidden learning disability with a neurological origin affecting a significant portion of the world's population. Studies show unique brain structures and behaviours in individuals with dyslexia, and these variations have become more evident with techniques such as EEG, Functional Magnetic Resonance Imaging (fMRI), Magnetoencephalography (MEG) and Positron Emission Tomography (PET). In this thesis, we are particularly interested in using EEG to explore the unique brain activity of adults with dyslexia. We attempt to discover EEG signal patterns that distinguish adults with dyslexia from normal controls while they perform tasks that are more challenging for individuals with dyslexia: real-word reading, nonsense-word reading, passage reading, Rapid Automatized Naming (RAN), writing, typing, browsing the web, table interpretation and typing of random numbers. Each participant was instructed to perform these tasks while seated in front of a computer screen wearing the EEG headset. The EEG signals captured during these tasks were examined using a machine learning classification framework comprising signal preprocessing, frequency sub-band decomposition, feature extraction, classification and verification. Cubic Support Vector Machine (CSVM) classifiers were developed for separate brain regions in each task in order to determine the optimal brain regions and EEG sensors producing the most distinctive EEG signal patterns between the two groups.
    The research revealed that adults with dyslexia generated distinctive EEG signal patterns compared to normal controls while performing the specified tasks. One of the key discoveries was that the nonsense-word classifiers produced higher Validation Accuracies (VA) than the real-word classifiers, confirming that the difficulties in phonological decoding seen in individuals with dyslexia are reflected in the EEG signal patterns, detected here in the left parieto-occipital region. It was also found that all three reading tasks shared the same optimal brain region, and that RAN, which is known to be related to reading, showed optimal performance in an overlapping region, demonstrating that the association between reading and RAN is likewise reflected in the EEG signal patterns. Finally, we discovered brain regions producing distinctive EEG signal patterns between the two groups that have not been reported before for writing, typing, web browsing, table interpretation and typing of random numbers.
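The per-region model-selection step described above reduces to comparing held-out validation accuracies across region-specific classifiers. A trivial sketch; the region names are generic and the accuracy values are invented for illustration, not results from the thesis.

```python
# Hypothetical validation accuracies of per-region CSVM classifiers
# for one task (all numbers illustrative).
region_va = {
    "frontal": 0.71,
    "temporal": 0.78,
    "parieto-occipital": 0.86,
    "central": 0.74,
}

# The optimal region/sensor group is the one whose classifier
# generalises best on held-out data.
best_region = max(region_va, key=region_va.get)
print(best_region)  # parieto-occipital
```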