Hierarchical Deep Feature Learning For Decoding Imagined Speech From EEG
We propose a mixed deep neural network strategy, incorporating a parallel
combination of convolutional (CNN) and recurrent neural networks (RNN),
cascaded with deep autoencoders and fully connected layers, for automatic
identification of imagined speech from EEG. Instead of utilizing raw EEG
channel data, we compute the joint variability of the channels in the form of
a covariance matrix, which provides a spatio-temporal representation of the
EEG. The networks are trained hierarchically, and the features extracted at
each level are passed on to the next network in the hierarchy until the final
classification. Using a publicly available EEG-based speech imagery database,
we demonstrate an improvement in accuracy of around 23.45% over the baseline
method. Our approach demonstrates the promise of mixed DNNs for complex
spatio-temporal classification problems.
Comment: Accepted in AAAI 2019 under the Student Abstract and Poster Program
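To make the covariance-based parallel CNN/RNN idea concrete, the sketch below builds the channel covariance representation of one EEG trial and feeds it to parallel CNN and RNN branches whose features are concatenated for classification. This is a minimal sketch assuming PyTorch; the layer sizes are illustrative assumptions, and the paper's hierarchical autoencoder training stage is omitted, so it should not be read as the authors' exact architecture.

    import torch
    import torch.nn as nn

    def channel_covariance(trial: torch.Tensor) -> torch.Tensor:
        # trial: (channels, samples) -> (channels, channels) covariance
        centered = trial - trial.mean(dim=1, keepdim=True)
        return centered @ centered.T / (trial.shape[1] - 1)

    class ParallelCnnRnn(nn.Module):
        # Hypothetical parallel CNN/RNN over a covariance "image";
        # all sizes are assumptions, not the paper's.
        def __init__(self, n_channels: int = 64, n_classes: int = 4):
            super().__init__()
            # CNN branch: treat the covariance matrix as a one-channel image.
            self.cnn = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),  # -> 8 * 4 * 4 = 128
            )
            # RNN branch: treat the matrix rows as a sequence of channel profiles.
            self.rnn = nn.GRU(n_channels, 64, batch_first=True)
            self.fc = nn.Sequential(
                nn.Linear(128 + 64, 64), nn.ReLU(), nn.Linear(64, n_classes),
            )

        def forward(self, cov: torch.Tensor) -> torch.Tensor:
            # cov: (batch, channels, channels)
            cnn_feat = self.cnn(cov.unsqueeze(1))  # (batch, 128)
            _, h = self.rnn(cov)                   # h: (1, batch, 64)
            return self.fc(torch.cat([cnn_feat, h[-1]], dim=1))

    model = ParallelCnnRnn()
    cov = channel_covariance(torch.randn(64, 512)).unsqueeze(0)  # one dummy trial
    print(model(cov).shape)  # torch.Size([1, 4])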
Development of speech prostheses: current status and recent advances
This is an Accepted Manuscript of an article published by Taylor & Francis in Expert Review of Medical Devices in September 2010, available online: http://www.tandfonline.com/10.1586/erd.10.34.
Brain–computer interfaces (BCIs) have been developed over the past decade to restore communication to persons with severe paralysis. In the most severe cases of paralysis, known as locked-in syndrome, patients retain cognition and sensation but are capable of only slight voluntary eye movements. For these patients, no standard communication method is available, although some can use BCIs to communicate by selecting letters or words on a computer. Recent research has sought to improve on existing techniques by using BCIs to create a direct prediction of speech utterances rather than to simply control a spelling device. Such methods are the first steps towards speech prostheses, as they are intended to entirely replace the vocal apparatus of paralyzed users. This article outlines many well-known methods for the restoration of communication by BCI and illustrates the difference between spelling devices and direct speech prediction, or speech prosthesis.
Advancing Pattern Recognition Techniques for Brain-Computer Interfaces: Optimizing Discriminability, Compactness, and Robustness
In this dissertation, we formulate three central objective criteria for the systematic advancement of pattern recognition in modern brain-computer interfaces (BCIs). Building on these, we develop a BCI pattern-recognition framework that unifies the three criteria through a new optimization algorithm. Furthermore, we demonstrate the successful application of our approach to two novel BCI paradigms for which no established pattern-recognition methodology previously existed.
A detailed investigation of classification methods for vowel speech imagery recognition
Accurate and fast decoding of speech imagery from electroencephalographic (EEG) data could serve as the basis for a new generation of brain-computer interfaces (BCIs) that are more portable and easier to use. However, decoding speech imagery from EEG is a hard problem due to many factors. In this paper we focus on the analysis of the classification step of speech imagery decoding for a three-class vowel speech imagery recognition problem. We show empirically that different classification subtasks may require different classifiers for accurate decoding, and we obtain a classification accuracy that improves on the best previously published results. We further investigate the relationship between the classifiers and different sets of features selected by the common spatial patterns (CSP) method. Our results indicate that further improvement in BCIs based on speech imagery could be achieved by carefully selecting an appropriate combination of classifiers for the subtasks involved.
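To make the subtask-specific classifier idea concrete, here is a minimal sketch that decomposes the three-class vowel problem into pairwise subtasks, extracts CSP features with MNE-Python's CSP transformer, and selects a classifier per subtask by cross-validation. The data shapes and candidate classifiers are assumptions for illustration, not the paper's pipeline.

    import numpy as np
    from itertools import combinations
    from mne.decoding import CSP
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.standard_normal((90, 32, 256))  # (trials, channels, samples), dummy EEG
    y = np.repeat([0, 1, 2], 30)            # three vowel classes

    candidates = [SVC(kernel="rbf"), LinearDiscriminantAnalysis()]
    best = {}
    for a, b in combinations(np.unique(y), 2):
        mask = np.isin(y, [a, b])
        # Score each candidate classifier on this pairwise subtask.
        scored = [(cross_val_score(
                       make_pipeline(CSP(n_components=4, log=True), clf),
                       X[mask], y[mask], cv=3).mean(), clf)
                  for clf in candidates]
        best[(a, b)] = max(scored, key=lambda s: s[0])
        print(f"subtask {a} vs {b}: best accuracy {best[(a, b)][0]:.2f}")

A final multi-class decision could then combine the three pairwise predictions, for example by majority vote, which is one common way to assemble per-subtask classifiers into a multi-class decoder.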
Inner speech recognition through electroencephalographic signals
This work focuses on inner speech recognition starting from EEG signals.
Inner speech recognition is defined as the internalized process in which a
person thinks in pure meanings, generally associated with auditory imagery of
one's own inner "voice". The decoding of the EEG into text should be
understood as the classification of a limited number of words (commands) or
the presence of phonemes (units of sound that make up words). Speech-related
BCIs provide effective vocal communication strategies for controlling devices
through speech commands interpreted from brain signals, improving the quality
of life of people who have lost the capability to speak by restoring
communication with their environment. Two public inner speech datasets are
analysed. Using these data, several classification models are studied and
implemented, ranging from basic methods such as Support Vector Machines,
through ensemble methods such as the eXtreme Gradient Boosting classifier, up
to neural networks such as Long Short-Term Memory (LSTM) and Bidirectional
Long Short-Term Memory (BiLSTM). With the LSTM and BiLSTM models, rarely used
in the inner speech recognition literature, results in line with or superior
to the state of the art are obtained.
Comment: Submitted to the Italian Workshop on Artificial Intelligence for
Human Machine Interaction (AIxHMI 2022), December 02, 2022, Udine, Italy
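As a rough sketch of the BiLSTM idea (assuming PyTorch; the channel count, window length, and layer sizes are illustrative assumptions, not those of the cited datasets or paper), each EEG time step is treated as one input vector across channels and the class is read from the final hidden state:

    import torch
    import torch.nn as nn

    class BiLstmClassifier(nn.Module):
        def __init__(self, n_channels: int = 128, hidden: int = 64,
                     n_classes: int = 4):
            super().__init__()
            # Treat each EEG sample (one vector across channels) as a token.
            self.lstm = nn.LSTM(n_channels, hidden, batch_first=True,
                                bidirectional=True)
            self.head = nn.Linear(2 * hidden, n_classes)  # fwd + bwd states

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, time, channels)
            out, _ = self.lstm(x)
            return self.head(out[:, -1, :])  # classify from the final step

    model = BiLstmClassifier()
    logits = model(torch.randn(8, 256, 128))  # 8 windows, 256 samples, 128 channels
    print(logits.shape)  # torch.Size([8, 4])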