
    Development of an MEG-based brain–computer interface

    Brain–computer interfaces (BCIs) have recently gained interest both in basic neuroscience and in clinical interventions. The majority of noninvasive BCIs measure brain activity with electroencephalography (EEG). However, real-time signal analysis and decoding of brain activity suffer from the low signal-to-noise ratio and poor spatial resolution of EEG. These limitations could be overcome by using magnetoencephalography (MEG) as an alternative measurement modality. The aim of this thesis is to develop an MEG-based BCI for decoding hand motor imagery, which could eventually serve as a therapeutic method for patients recovering from, e.g., cerebral stroke. Here, machine learning methods for decoding motor imagery-related brain activity are validated with MEG measurements of healthy subjects. The first part of the thesis (Study I) compares feature extraction methods for classifying left- vs. right-hand motor imagery (MI), and MI vs. rest. It was found that spatial filtering followed by extraction of band-power features yields better classification accuracy than time–frequency features extracted from parietal gradiometers. Furthermore, prior spatial filtering improved the discrimination capability of time–frequency features. The training data for a BCI are typically collected at the beginning of each measurement session. However, as this can be time-consuming and exhausting for the subject, training data from other subjects' measurements could be used as well. In the second part of the thesis (Study II), methods for across-subject classification of MI were compared. The results showed that a classifier based on multi-task learning with an ℓ2,1-norm regularized logistic regression was the best method for across-subject decoding for both MEG and EEG. In Study II, we also compared the decoding results of simultaneously measured EEG and MEG data and investigated whether MEG responses to passive hand movements could be used to train a classifier to detect MI. Overall, MEG yielded slightly, but not significantly, better results than EEG. Training the classifiers with the subject's own or other subjects' passive movements did not result in high accuracy, which indicates that passive movements should not be used for calibrating an MI-BCI. The methods presented in this thesis are suitable for a real-time MEG-based BCI. The decoding results can be used as a benchmark when developing other classifiers specifically for motor imagery-related MEG data.
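
    For context, the band-power pipeline summarized above can be approximated with a short sketch: bandpass-filter spatially filtered trials in the mu/beta range, take the log-variance of each component as the feature, and feed the features to a linear classifier. This is a minimal illustration with hypothetical array shapes, sampling rate, and variable names, not the thesis's actual code.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def bandpower_features(trials, fs, band=(8.0, 30.0)):
    """Log band-power per (spatially filtered) component for each trial.

    trials: array of shape (n_trials, n_components, n_samples), assumed to be
    MEG signals after spatial filtering; band defaults to the mu/beta range
    typically modulated by motor imagery.
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trials, axis=-1)          # zero-phase bandpass
    return np.log(np.var(filtered, axis=-1))            # (n_trials, n_components)

# Hypothetical usage with trials X of shape (n_trials, n_components, n_samples)
# and class labels y (e.g. left vs. right hand MI):
# feats = bandpower_features(X, fs=1000.0)
# clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
# print(cross_val_score(clf, feats, y, cv=5).mean())
```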

    Electroencephalograph (EEG) signal processing techniques for motor imagery Brain Computer Interface systems

    A Brain-Computer Interface (BCI) system provides a channel for the brain to control external devices using the electrical activity of the brain, without using the peripheral nervous system. BCI systems are being used in various medical applications, for example controlling a wheelchair or neuroprosthesis devices for the disabled, thereby assisting them in activities of daily living. People suffering from Amyotrophic Lateral Sclerosis (ALS), Multiple Sclerosis, or complete locked-in syndrome are unable to perform any body movements because of damage to the peripheral nervous system, but their cognitive function is still intact. BCIs acquire brain signals and convert them into control commands for external devices. Motor-imagery (MI) based BCI systems, in particular, rely on the sensorimotor rhythms generated by imagined movements of the limbs. These signals can be decoded as control commands in BCI applications. Electroencephalogram (EEG) is commonly used for BCI applications because it is non-invasive. The main challenges in decoding the EEG signal are that it is non-stationary and has low spatial resolution. The common spatial pattern (CSP) algorithm is considered the most effective technique for learning discriminative spatial filters but is easily affected by the presence of outliers. Therefore, a robust algorithm is required for extracting discriminative features from motor imagery EEG signals. This thesis mainly aims at developing robust spatial filtering criteria that are effective for the classification of MI movements. We propose two approaches for the robust classification of MI movements. The first approach addresses the classification of multiclass MI movements based on the thinICA (Independent Component Analysis) and mCSP (multiclass Common Spatial Pattern) methods. The observed results indicate that these approaches can be a step towards the development of robust feature extraction for MI-based BCI systems. The main contribution of the thesis is the second criterion, which is based on the Alpha-Beta log-det (AB log-det) divergence for the classification of two-class MI movements. A detailed study establishes the link between the AB log-det divergence and the CSP criterion. We propose a scaling parameter that enables the respective filters to be selected in a manner similar to the CSP algorithm. Additionally, gradient-based optimization of the AB log-det divergence was performed for this application. The Sub-ABLD (Subspace Alpha-Beta Log-Det divergence) algorithm is proposed for the discrimination of two-class MI movements. The robustness of this algorithm is tested with both simulated data and real data from BCI competition datasets. Finally, the performance of the proposed algorithms compares favorably with existing algorithms.
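
    As a point of reference for the CSP criterion discussed above, the standard (non-robust) CSP filters can be obtained from the class covariance matrices via a generalized eigendecomposition. The sketch below is a minimal NumPy/SciPy illustration of that baseline, with assumed trial array shapes; it does not implement the thesis's thinICA/mCSP or Sub-ABLD methods.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2, n_pairs=3):
    """Baseline CSP spatial filters for two classes of EEG trials.

    X1, X2: arrays of shape (n_trials, n_channels, n_samples).
    Returns a (n_channels, 2 * n_pairs) matrix of spatial filters.
    """
    def avg_cov(X):
        covs = []
        for trial in X:
            c = trial @ trial.T
            covs.append(c / np.trace(c))   # normalize each trial by total power
        return np.mean(covs, axis=0)

    C1, C2 = avg_cov(X1), avg_cov(X2)
    # Generalized eigenproblem C1 w = lambda (C1 + C2) w; eigenvalues ascending.
    eigvals, eigvecs = eigh(C1, C1 + C2)
    # Filters at both extremes give maximal variance for one class,
    # minimal for the other.
    return np.hstack([eigvecs[:, :n_pairs], eigvecs[:, -n_pairs:]])

def csp_features(X, W):
    """Normalized log-variance features of spatially filtered trials."""
    feats = []
    for trial in X:
        z = W.T @ trial
        var = np.var(z, axis=1)
        feats.append(np.log(var / var.sum()))
    return np.array(feats)
```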

    Convolutional Neural Network for Functional Near-Infrared Spectroscopy-Based Brain-Computer Interface

    A brain-computer interface (BCI) is a communication system that translates brain signals directly into commands for a computer or external devices. It is a promising solution for patients with neurological disorders, as the system can help restore movement ability. Various neuroimaging modalities have been utilized for brain signal acquisition; however, functional near-infrared spectroscopy (fNIRS) provides many advantages over other modalities and has therefore gained attention for implementation in BCI systems. For developing a BCI system, an appropriate machine learning algorithm and discriminative features from the hemodynamic response signal are desired, as previous studies have reported improved classification accuracy in fNIRS-based BCIs by focusing on both the classifier and the signal features. The aim of this thesis is to improve classification accuracy in fNIRS-based BCI by extracting features and classifying them automatically. A convolutional neural network (CNN) was applied as an automatic feature extractor and classifier, instead of the manual feature extraction used in conventional methods. In the experiment, hemodynamic response signals were measured from four healthy subjects while they performed tasks including rest and right- and left-hand motor execution. Conventional fNIRS-BCI methods, using signal mean, slope, peak, variance, skewness, and kurtosis as features and a support vector machine (SVM) or artificial neural network (ANN) as the classifier, were compared with the CNN-based method. The results show that the CNN-based method improves classification accuracy over the SVM-based and ANN-based methods by 6.92% and 3.75%, respectively. The main contributions of this thesis are (1) a promising feature extraction and classification method for fNIRS-based BCI using a CNN and (2) an analysis of the features extracted by the conventional methods and by the convolutional filters of the CNN.
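
    To make the conventional baseline concrete, the sketch below extracts the six statistical features named above (mean, slope, peak, variance, skewness, kurtosis) per channel and classifies them with an SVM. Array shapes, the time grid, and the cross-validation setup are assumptions for illustration, not the thesis's actual pipeline or data.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def hemodynamic_features(trials):
    """Per-channel statistical features of HbO/HbR time courses.

    trials: array of shape (n_trials, n_channels, n_samples).
    Returns (n_trials, n_channels * 6): mean, slope, peak, variance,
    skewness, kurtosis, as in the conventional pipeline described above.
    """
    _, _, n_samples = trials.shape
    t = np.arange(n_samples)
    feats = []
    for trial in trials:
        per_channel = []
        for ch in trial:
            slope = np.polyfit(t, ch, 1)[0]          # linear trend of the response
            per_channel.extend([ch.mean(), slope, ch.max(), ch.var(),
                                skew(ch), kurtosis(ch)])
        feats.append(per_channel)
    return np.asarray(feats)

# Hypothetical usage with trials X and task labels y (rest / left / right):
# clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
# print(cross_val_score(clf, hemodynamic_features(X), y, cv=5).mean())
```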

    Brain Music: A generative system for creating symbolic music from affective neural responses

    This master's thesis presents an innovative multimodal deep learning methodology that combines an emotion classification model with a music generator, aimed at creating music from electroencephalography (EEG) signals, thus delving into the interplay between emotions and music. The results achieve three specific objectives. First, since the performance of brain-computer interface systems varies significantly among subjects, an approach based on knowledge transfer among subjects is introduced to enhance the performance of individuals facing challenges in motor imagery-based brain-computer interface systems. This approach combines labeled EEG data with structured information, such as psychological questionnaires, through a "Kernel Matching CKA" method. We employ a deep neural network (Deep&Wide) for motor imagery classification. The results underscore its potential to enhance motor skills in brain-computer interfaces. Second, we propose an innovative technique called "Labeled Correlation Alignment" (LCA) to sonify neural responses to stimuli represented in unstructured data, such as affective music. This generates musical features based on emotion-induced brain activity. LCA addresses variability among subjects and within subjects through correlation analysis, enabling the creation of acoustic envelopes and the distinction of different sound information. This makes LCA a promising tool for interpreting neural activity and its response to auditory stimuli. Finally, we develop an end-to-end deep learning methodology for generating MIDI music content (symbolic data) from EEG signals induced by affectively labeled music. This methodology encompasses data preprocessing, feature extraction model training, and a feature matching process using Deep Centered Kernel Alignment, enabling music generation from EEG signals. Together, these achievements represent significant advances in understanding the relationship between emotions and music, as well as in the application of artificial intelligence to music generation from brain signals. They offer new perspectives and tools for musical creation and for research in emotional neuroscience. To conduct our experiments, we used public databases such as GigaScience, Affective Music Listening, and DEAP.
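
    The thesis relies on (Deep) Centered Kernel Alignment for feature matching; as background, the sketch below computes plain linear CKA between two paired feature matrices (for example, EEG-derived features and music features). It is a minimal illustration of the underlying similarity measure, with hypothetical inputs, and not the Deep CKA matching used in the thesis.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two feature matrices.

    X: (n_samples, d1), Y: (n_samples, d2), with rows paired across
    representations (e.g. EEG features vs. musical features).
    Returns a similarity score in [0, 1].
    """
    # Center each feature matrix column-wise.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # CKA(X, Y) = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    hsic = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return hsic / (norm_x * norm_y)
```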

    Electroencephalographic Signal Processing and Classification Techniques for Noninvasive Motor Imagery Based Brain Computer Interface

    In a motor imagery (MI) based brain-computer interface (BCI), success depends on reliable processing of noisy, non-linear, and non-stationary brain activity signals for feature extraction, effective classification of MI activity, and translation into the corresponding intended actions. In this study, signal processing and classification techniques are presented for electroencephalogram (EEG) signals in a motor imagery based brain-computer interface. EEG signals were acquired with electrodes placed according to the international 10-20 system. The acquired signals were pre-processed to remove artifacts using empirical mode decomposition (EMD) and two extended versions of EMD, ensemble empirical mode decomposition (EEMD) and multivariate empirical mode decomposition (MEMD), leading to a better signal-to-noise ratio (SNR) and reduced mean square error (MSE) compared to independent component analysis (ICA). The EEG signals were decomposed into intrinsic mode functions (IMFs), which were further processed to extract features such as sample entropy (SampEn) and band power (BP). The extracted features were used in support vector machines to characterize and identify MI activities. EMD and its variants, EEMD and MEMD, were compared with the common spatial pattern (CSP) method for different MI activities. SNR values from EMD, EEMD, and MEMD (4.3, 7.64, and 10.62, respectively) are much better than from ICA (2.1), but the accuracy of MI activity identification using BP and SampEn is slightly better for ICA than for EMD. Further work is outlined to include more features and a larger database for better classification accuracy.
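
    As an illustration of the feature stage described above, the sketch below computes band power (via Welch's PSD) and sample entropy for each IMF of a trial and feeds the resulting features to an SVM. It assumes the IMFs have already been obtained with an EMD/EEMD/MEMD implementation and that each trial yields the same number of IMFs; names, shapes, and parameters are hypothetical.

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

def band_power(x, fs, fmin=8.0, fmax=30.0):
    """Band power of a 1-D signal estimated from Welch's PSD."""
    f, pxx = welch(x, fs=fs, nperseg=min(256, len(x)))
    mask = (f >= fmin) & (f <= fmax)
    return np.trapz(pxx[mask], f[mask])

def sample_entropy(x, m=2, r=None):
    """Sample entropy of a 1-D signal (plain NumPy sketch)."""
    x = np.asarray(x, dtype=float)
    r = 0.2 * x.std() if r is None else r
    def matches(mm):
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(emb)):
            d = np.max(np.abs(emb - emb[i]), axis=1)
            count += np.sum(d <= r) - 1   # exclude the self-match
        return count
    a, b = matches(m + 1), matches(m)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def imf_features(imfs, fs):
    """Band power and sample entropy for each IMF of one trial/channel."""
    return np.array([[band_power(imf, fs), sample_entropy(imf)]
                     for imf in imfs]).ravel()

# Hypothetical usage: imfs_per_trial is a list of (n_imfs, n_samples) arrays
# produced beforehand by an EMD implementation; y holds the MI labels.
# X = np.array([imf_features(imfs, fs=250.0) for imfs in imfs_per_trial])
# clf = SVC(kernel="rbf").fit(X, y)
```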