BCI applications based on artificial intelligence oriented to deep learning techniques
A Brain-Computer Interface (BCI) can decode the brain signals corresponding to the intentions of individuals who have lost neuromuscular connection, re-establishing communication and control of external devices. To this end, a BCI acquires brain signals such as Electroencephalography (EEG) or Electrocorticography (ECoG), applies signal processing techniques, and extracts features to train classifiers that provide appropriate control instructions. BCI development has accelerated in recent decades, improving performance through different signal processing techniques for feature extraction and artificial intelligence approaches for classification, such as deep learning-oriented classifiers. These can yield more accurate assistive systems and also enable analysis of which signal characteristics the classifier learns for its task. This work first proposes the use of a priori knowledge and a correlation measure to select the most discriminative ECoG signal electrodes. The signals are then processed using spatial filtering and three different types of temporal filtering, followed by a classifier composed of stacked autoencoders and a softmax layer that discriminates between ECoG signals from two types of visual stimuli. Results show an average accuracy of 97% (+/- 0.02%), which is similar to state-of-the-art techniques; however, this method uses minimal prior physiological information and an automated statistical technique to select the electrodes used to train the classifier. This work also presents an analysis of the classifier, identifying the signal features most relevant to visual stimulus classification, and compares these features with physiological information such as the brain areas involved. Finally, this research uses Convolutional Neural Networks (CNNs, or ConvNets) to classify EEG signals from 5 categories of motor tasks.
Movement-related cortical potentials (MRCPs) are used as a priori information to improve the processing of the time-frequency representation of the EEG signals. Results show an increase of more than 25% in average accuracy compared to a state-of-the-art method that uses the same database. In addition, an analysis of the CNN filters and feature maps is performed to find the most relevant signal characteristics for classifying the five types of motor tasks. (Doctoral dissertation, Doctor en Ingeniería Eléctrica y Electrónica.)
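The correlation-based electrode selection described in this abstract can be illustrated with a small sketch. The abstract does not specify the exact correlation measure, so this example assumes per-electrode signal power as the feature and the absolute Pearson correlation with the class label as the ranking score; the function name, data shapes, and synthetic data are illustrative only.

```python
import numpy as np

def select_electrodes(trials, labels, top_k=8):
    """Rank electrodes by how strongly their trial-wise power
    correlates with the stimulus label, and keep the top_k.

    trials: array of shape (n_trials, n_electrodes, n_samples)
    labels: array of shape (n_trials,) with class ids (e.g. 0/1)
    """
    # Per-trial, per-electrode signal power as a simple feature
    power = (trials ** 2).mean(axis=2)          # (n_trials, n_electrodes)
    scores = np.array([
        abs(np.corrcoef(power[:, e], labels)[0, 1])
        for e in range(power.shape[1])
    ])
    # Indices of the most label-correlated electrodes, best first
    return np.argsort(scores)[::-1][:top_k]

# Synthetic example: electrode 3 carries a class-dependent signal
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=100)
trials = rng.normal(size=(100, 16, 64))
trials[:, 3, :] += 2.0 * labels[:, None]        # inject discriminative power
selected = select_electrodes(trials, labels, top_k=3)
print(selected)
```

In a real pipeline, only the selected electrodes would be passed on to the spatial and temporal filtering stages before classification.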
Characterizing Unstructured Motor Behaviors in the Epilepsy Monitoring Unit
Key advancements in recording hardware, data computation, clinical care, and cognitive science continue to open new possibilities for humans and machines to interact directly through thought. Neural data analyses building on these advancements have progressed neuroscience research in functional brain mapping and brain-computer interfaces (BCIs). Much of our knowledge about BCIs is informed by data collected through carefully controlled experiments. Constraining BCI experiments with structured paradigms allows researchers to collect large amounts of consistent data in a short time, while also controlling for external confounds. Very little is currently known about how well these task-based relationships extend to daily life, in part because collecting data outside the lab is challenging. To further understand natural brain activity, we must study more complex behaviors in more environmentally relevant settings. This dissertation addresses three general challenges to studying the neural correlates of unstructured behaviors. First, we continuously monitored unstructured human movements in the epilepsy monitoring unit using a video sensor synchronized to clinical intracortical electrodes. Second, we annotated unstructured behaviors from these videos using both manual and computer vision methods. Finally, we analyzed neural features with respect to unstructured human movements and evaluated the performance of features identified in previous task-based studies. Given the preliminary nature of this work, most of our demonstrations concern whether the continuous paradigm can be leveraged, how one might go about leveraging it, and evaluations that tie our results back to earlier task-based studies. These advances motivate future work focused more intently on which types of behaviors and neural signal features to explore.
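The synchronization step in the first challenge above can be sketched minimally. Assuming both streams carry timestamps on a shared clock, each video frame can be paired with its nearest neural sample; the function name and sampling rates here are illustrative, not the dissertation's actual setup.

```python
import numpy as np

def align_frames_to_samples(frame_times, neural_times):
    """For each video frame timestamp, find the index of the nearest
    neural sample, so behavior annotations can be paired with the
    neural recording. Both inputs are seconds on a shared clock,
    sorted ascending.
    """
    neural_times = np.asarray(neural_times)
    idx = np.searchsorted(neural_times, frame_times)
    idx = np.clip(idx, 1, len(neural_times) - 1)
    # Choose whichever neighboring sample is closer in time
    left_closer = (frame_times - neural_times[idx - 1]) < (neural_times[idx] - frame_times)
    return np.where(left_closer, idx - 1, idx)

# 30 Hz video frames vs 1 kHz neural samples over one second
frames = np.arange(0, 1, 1 / 30)
samples = np.arange(0, 1, 1 / 1000)
pairing = align_frames_to_samples(frames, samples)
print(pairing[:3])
```

Once each annotated frame maps to a sample index, behavior labels can be propagated onto windows of the neural signal for analysis.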
Speech Processes for Brain-Computer Interfaces
Speech interfaces have become widely used and are integrated in many applications and devices. However, speech interfaces require the user to produce intelligible speech, which might be hindered by loud environments, concern about bothering bystanders, or a general inability to produce speech due to disabilities. Decoding a user's imagined speech instead of actual speech would solve this problem. Such a Brain-Computer Interface (BCI) based on imagined speech would enable fast and natural communication without the need to actually speak out loud. These interfaces could provide a voice to otherwise mute people. This dissertation investigates BCIs based on speech processes using functional Near Infrared Spectroscopy (fNIRS) and Electrocorticography (ECoG), two brain activity imaging modalities on opposing ends of the invasiveness scale. Brain activity data have a low signal-to-noise ratio and complex spatio-temporal and spectral coherence. To analyze these data, this dissertation combines techniques from machine learning, neuroscience, and Automatic Speech Recognition to facilitate robust classification of detailed speech processes while simultaneously illustrating the underlying neural processes. fNIRS is an imaging modality based on cerebral blood flow. It requires only affordable hardware and can be set up within minutes in a day-to-day environment, making it ideally suited for convenient user interfaces. However, the hemodynamic processes measured by fNIRS are slow in nature, so the technology offers poor temporal resolution. We investigate speech in fNIRS and demonstrate classification of speech processes for fNIRS-based BCIs. ECoG provides ideal signal properties by invasively measuring electrical potentials artifact-free directly on the brain surface.
High spatial resolution and temporal resolution down to millisecond sampling provide localized information with timing accurate enough to capture the fast processes underlying speech production. This dissertation presents the Brain-to-Text system, which harnesses automatic speech recognition technology to decode a textual representation of continuous speech from ECoG. This could allow users to compose messages or issue commands through a BCI. While decoding a textual representation is unparalleled for device control and typing, direct communication is even more natural if the full expressive power of speech, including emphasis and prosody, can be conveyed. For this purpose, a second system is presented that directly synthesizes neural signals into audible speech, which could enable conversation with friends and family through a BCI. Up to now, both the Brain-to-Text and the synthesis system operate on audibly produced speech. To bridge the gap to the final frontier of neural prostheses based on imagined speech, we investigate the differences between audibly produced and imagined speech and present first results toward BCIs driven by imagined speech processes. This dissertation demonstrates the use of speech processes as a BCI paradigm for the first time. Speech processes offer a fast and natural interaction paradigm that will help patients and healthy users alike communicate efficiently with computers and with friends and family through BCIs.
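The idea of borrowing ASR machinery for neural decoding can be caricatured with a frame-wise sketch. This is not the Brain-to-Text system itself, which uses full ASR decoding with language models; it only shows one core ingredient, fitting a simple Gaussian model per speech unit over neural feature frames. All names and the synthetic data are hypothetical.

```python
import numpy as np

def fit_phone_models(frames, phone_ids):
    """Fit one diagonal Gaussian per phone over neural feature frames.
    frames: (n_frames, n_features), e.g. high-gamma power per electrode
    phone_ids: (n_frames,) integer phone labels aligned to the frames
    """
    models = {}
    for p in np.unique(phone_ids):
        x = frames[phone_ids == p]
        models[p] = (x.mean(axis=0), x.var(axis=0) + 1e-6)
    return models

def decode_frames(frames, models):
    """Assign each frame its most likely phone (frame-wise, no language model)."""
    phones = sorted(models)
    ll = np.stack([
        -0.5 * (((frames - m) ** 2) / v + np.log(2 * np.pi * v)).sum(axis=1)
        for m, v in (models[p] for p in phones)
    ], axis=1)
    return np.array(phones)[ll.argmax(axis=1)]

# Synthetic demo: three "phones" with well-separated neural signatures
rng = np.random.default_rng(1)
phone_ids = rng.integers(0, 3, size=300)
frames = rng.normal(size=(300, 4)) + 3.0 * phone_ids[:, None]
models = fit_phone_models(frames, phone_ids)
decoded = decode_frames(frames, models)
accuracy = (decoded == phone_ids).mean()
print(accuracy)
```

A full ASR-style decoder would add temporal modeling (e.g. a search over phone sequences) on top of these per-frame likelihoods.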
Workshops of the Sixth International Brain–Computer Interface Meeting: brain–computer interfaces past, present, and future
Brain–computer interfaces (BCI), also referred to as brain–machine interfaces (BMI), are, by definition, an interface between the human brain and a technological application. Brain activity for interpretation by the BCI can be acquired with either invasive or non-invasive methods. The key point is that the interpreted signals come directly from the brain, bypassing sensorimotor output channels that may or may not have impaired function. This paper provides a concise glimpse of the breadth of BCI research and development topics covered by the workshops of the 6th International Brain–Computer Interface Meeting.
Data-Driven Transducer Design and Identification for Internally-Paced Motor Brain Computer Interfaces: A Review
Brain-Computer Interfaces (BCIs) are systems that establish a direct communication pathway between a user's brain activity and external effectors. They offer the potential to improve the quality of life of motor-impaired patients. Motor BCIs aim to permit severely motor-impaired users to regain limb mobility by controlling orthoses or prostheses. In particular, motor BCI systems benefit patients when the decoded actions reflect the user's intentions with an accuracy that enables efficient interaction with the environment. One of the main challenges of BCI systems is adapting the BCI's signal translation blocks to the user to reach high decoding accuracy. This paper reviews the literature on data-driven and user-specific transducer design and identification approaches, focusing on internally-paced motor BCIs. In particular, continuous kinematic biomimetic and mental-task decoders are reviewed. Furthermore, static and dynamic decoding approaches, linear and non-linear decoding, and offline and real-time identification algorithms are considered. Current progress and challenges related to the design of clinically compatible motor BCI transducers are also discussed.
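As a concrete instance of the static, linear, biomimetic decoding this review covers, here is a minimal ridge-regression sketch mapping neural features to continuous kinematics. It is a baseline illustration under assumed data shapes, not any specific decoder from the reviewed literature.

```python
import numpy as np

def fit_ridge_decoder(neural, kinematics, lam=1.0):
    """Fit a static linear map from neural features to limb kinematics.
    neural: (n_samples, n_channels), kinematics: (n_samples, n_dims)
    lam: ridge penalty, regularizing the channel weights.
    """
    X = np.hstack([neural, np.ones((len(neural), 1))])  # append bias column
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ kinematics)
    return W

def decode(neural, W):
    """Apply the fitted linear decoder to new neural features."""
    X = np.hstack([neural, np.ones((len(neural), 1))])
    return X @ W

# Synthetic check: recover a known linear neural-to-kinematics mapping
rng = np.random.default_rng(2)
neural = rng.normal(size=(500, 10))
true_W = rng.normal(size=(10, 2))
kin = neural @ true_W + 0.01 * rng.normal(size=(500, 2))
W = fit_ridge_decoder(neural, kin, lam=0.1)
pred = decode(neural, W)
corr = np.corrcoef(pred[:, 0], kin[:, 0])[0, 1]
print(corr)
```

Dynamic approaches discussed in the review (e.g. state-space decoders) would additionally model how kinematics evolve over time rather than mapping each sample independently.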
A Brain-Computer Interface based on Colour Dependent Visual Attention
In this thesis we designed a specific visual protocol for a new application in the brain-computer interface field. We evaluated how coloured stimuli affect brain activity in health