Heterogeneous data fusion for brain psychology applications
This thesis aims to apply Empirical Mode Decomposition (EMD), Multiscale Entropy
(MSE), and collaborative adaptive filters for the monitoring of different brain
consciousness states. Both block-based and online approaches are investigated, and
a possible extension to the monitoring and identification of Electromyograph (EMG)
states is provided.
Firstly, EMD is employed as a multiscale, data-driven time-frequency tool to
decompose a signal into a number of band-limited oscillatory components; its data-driven
nature makes EMD an ideal candidate for the analysis of nonlinear and non-stationary
data. This methodology is further extended to process multichannel real
world data, by making use of recent theoretical advances in complex and multivariate
EMD. It is shown that this can be used to robustly measure higher-order features
in multichannel recordings, thereby indicating the quasi-brain-death (QBD) state. In the next stage, analysis is
performed in an information theory setting on multiple scales in time, using MSE.
This enables an insight into the complexity of real world recordings. The results of
the MSE analysis and the corresponding statistical analysis show a clear difference
in MSE between the patients in different brain consciousness states. Finally, an
online method for the assessment of the underlying signal nature is studied. This
method is based on a collaborative adaptive filtering approach, and is shown to be
able to approximately quantify the degree of signal nonlinearity, sparsity, and non-circularity
relative to the constituent subfilters. To further illustrate the usefulness
of the proposed data-driven multiscale signal processing methodology, the final case
study considers a human-robot interface based on a multichannel EMG analysis.
A preliminary analysis shows that the same methodology as that applied to the
analysis of brain cognitive states gives robust and accurate results.
The analysis, simulations, and the scope of applications presented suggest the
great potential of the proposed multiscale data processing framework for feature extraction
in multichannel data analysis. Directions for future work include further development
of real-time feature map approaches and their use across brain-computer
and brain-machine interface applications.
Modelling of brain consciousness based on collaborative adaptive filters
A novel method for the discrimination between discrete states of brain consciousness is proposed, achieved through the examination of nonlinear features within the electroencephalogram (EEG). To allow for real-time modes of operation, a collaborative adaptive filtering architecture, using a convex combination of adaptive filters, is implemented. The evolution of the mixing parameter within this structure is then used as an indication of the predominant nature of the EEG recordings. Simulations with a number of different filter combinations illustrate the suitability of this approach for differentiating between the coma and quasi-brain-death states based upon fundamental signal characteristics.
Collaborative adaptive filtering for machine learning
Quantitative performance criteria for the analysis of machine learning architectures
and algorithms have long been established. However, qualitative performance criteria,
which identify fundamental signal properties and ensure any processing preserves the
desired properties, are still emerging. In many cases, whilst offline statistical tests
exist such as assessment of nonlinearity or stochasticity, online tests which not only
characterise but also track changes in the nature of the signal are lacking. To that end,
by employing recent developments in signal characterisation, criteria are derived for
the assessment of the changes in the nature of the processed signal.
Through the fusion of the outputs of adaptive filters, a single collaborative hybrid
filter is produced. By tracking the dynamics of the mixing parameter of this filter,
rather than the actual filter performance, a clear indication as to the current nature of
the signal is given. Implementations of the proposed method show that it is possible to
quantify the degree of nonlinearity within both real- and complex-valued data. This is
then extended (in the real domain) from dealing with nonlinearity in general, to a more
specific example, namely sparsity. Extensions of adaptive filters from the real to the
complex domain are non-trivial and the differences between the statistics in the real
and complex domains need to be taken into account. In terms of signal characteristics,
nonlinearity can be both split- and fully-complex and complex-valued data can be
considered circular or noncircular. Furthermore, by combining the information obtained
from hybrid filters of different natures it is possible to use this method to gain a more
complete understanding of the nature of the nonlinearity within a signal. This also
paves the way for building multidimensional feature spaces and their application in
data/information fusion.
To produce online tests for sparsity, adaptive filters for sparse environments are
investigated and a unifying framework for the derivation of proportionate normalised
least mean square (PNLMS) algorithms is presented. This is then extended to derive
variants with an adaptive step-size. In order to create an online test for noncircularity,
a study of widely linear autoregressive modelling is presented, from which a proof of
the convergence of the test for noncircularity can be given. Applications of this method
are illustrated on examples such as biomedical signals, speech, and wind data.
Electroencephalogram data platform for application of reduction methods
Long-term electroencephalogram (EEG) monitoring (≥24 h) is a valuable tool for the proper diagnosis of sparse life-threatening events such as non-convulsive seizures and status epilepticus in Intensive Care Unit (ICU) inpatients. Such EEG data require objective methods for data reduction, transmission, and analysis. This work aims to assess the specificity and sensitivity of the HaEEG and aEEG methods, in combination with conventional multichannel EEG, for seizure detection. A database architecture was designed to handle the interoperability, processing, and analysis of EEG data. Using data from the CHB-MIT public EEG database, the reduced signal was obtained by EEG envelope segmentation, with the 10th and 90th percentiles obtained for each segment. The use of asymmetrical filtering (2-15 Hz) and the overall clinical band (1-70 Hz) was compared. The upper and lower margins of the compressed segments were used to classify ictal and non-ictal epochs. This classification was compared with the corresponding specialist seizure annotation for each patient. The difference between the medians of the instantaneous frequencies of ictal and non-ictal periods was assessed using the Wilcoxon rank-sum test, which was significant for signals filtered from 2 to 15 Hz (p = 0.0055) but not for signals filtered from 1 to 70 Hz (p = 0.1816).
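The reduction step, per-segment 10th and 90th percentile margins, can be sketched as follows (rectification by |x| stands in for envelope extraction, and the 2-15 Hz pre-filtering is omitted; both are simplifying assumptions):

```python
def percentile(sorted_vals, p):
    """Linear-interpolation percentile of pre-sorted values."""
    k = (len(sorted_vals) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(sorted_vals) - 1)
    return sorted_vals[lo] + (sorted_vals[hi] - sorted_vals[lo]) * (k - lo)

def envelope_margins(x, seg_len):
    """Reduce a signal to per-segment (10th, 90th) percentile margins
    of its rectified samples, in the spirit of aEEG-style compression.
    Each non-overlapping segment of `seg_len` samples becomes one
    lower/upper margin pair."""
    margins = []
    for start in range(0, len(x) - seg_len + 1, seg_len):
        seg = sorted(abs(v) for v in x[start:start + seg_len])
        margins.append((percentile(seg, 10), percentile(seg, 90)))
    return margins
```

Thresholding the upper margin of each compressed segment is then one simple way to label candidate ictal epochs before comparison against expert annotations.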
Comparative analysis of TMS-EEG signal using different approaches in healthy subjects
The integration of transcranial magnetic stimulation with electroencephalography (TMS-EEG) represents a useful non-invasive approach to assess cortical excitability, plasticity and intra-cortical connectivity in humans in physiological and pathological conditions.
However, biological and environmental noise sources can contaminate the TMS-evoked potentials (TEPs). Therefore, signal preprocessing represents a fundamental step in the analysis of these potentials and is critical to remove artefactual components while preserving the physiological brain activity.
The objective of the present study is to evaluate the effects of different signal processing pipelines (namely those of Leodori et al., Rogasch et al., and Mutanen et al.) applied to TEPs recorded in five healthy volunteers after TMS stimulation of the primary motor cortex (M1) of the dominant hemisphere. These pipelines were compared in their ability to remove artifacts and improve the quality of the recorded signals, laying the foundation for subsequent analyses. Various algorithms, such as Independent Component Analysis (ICA), SOUND, and SSP-SIR, were used in each pipeline.
Furthermore, after signal preprocessing, current localization was performed to map the TMS-induced neural activation in the cortex. This methodology provided valuable information on the spatial distribution of activity and further validated the effectiveness of the signal cleaning pipelines.
Comparing the effects of the different pipelines on the same dataset, we observed considerable variability in how the pipelines affect various signal characteristics. In particular, we found significant differences in the effects on signal amplitude and in the identification and characterisation of the peaks of interest, i.e., P30, N45, P60, N100, and P180. The identification and characteristics of these peaks showed variability, especially for the early peaks, which reflect the cortical excitability of the stimulated area and are the most affected by biological and stimulation-related artifacts.
Despite these differences, the topographies and source localisation, which are the most informative and useful in reconstructing signal dynamics, were consistent and reliable between the different pipelines considered.
The results suggest that the existing methodologies for analysing TEPs produce different effects on the data, but all are capable of reproducing the dynamics of the signal and its components. Future studies evaluating different signal preprocessing methods in larger populations are needed to determine an appropriate workflow that can be shared across the scientific community, in order to make the results obtained in different centres comparable.
C-Trend parameters and possibilities of federated learning
In this observational study, federated learning, a cutting-edge approach to machine learning, was applied to one of the parameters provided by the C-Trend Technology developed by Cerenion Oy. The aim was to compare the performance of federated learning with that of conventional machine learning. Additionally, the potential of federated learning for resolving the privacy concerns that prevent machine learning from realizing its full potential in the medical field was explored.
Federated learning was applied to the machine-learning estimation of the burst-suppression ratio and compared with conventional machine learning of the same quantity on the same dataset. A suitable aggregation method was developed and used to update the global model. Performance metrics were compared, and a descriptive analysis including box plots and histograms was conducted.
As anticipated, towards the end of the training, federated learning's performance was able to approach that of conventional machine learning. The strategy can be regarded as valid because the performance metric values remained below the set test criterion levels. With this strategy, we will potentially be able to make use of data that would normally be kept confidential and, as we gain access to more data, eventually develop machine learning models that perform better.
Federated learning has some great advantages, and utilizing it for machine learning on quantitative EEG (qEEG) could potentially lead to models that achieve better performance by receiving data from multiple institutions without the difficulties of privacy restrictions. Possible future directions include implementations on heterogeneous data and on larger data volumes.
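The aggregation method developed in the study is not specified above; a standard FedAvg-style weighted average, an assumed stand-in rather than the study's actual method, looks like this:

```python
def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: the global model parameters are the
    average of the clients' parameter vectors, weighted by each
    client's local dataset size. No raw data leaves the clients;
    only parameter vectors are shared."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[k] * s for w, s in zip(client_weights, client_sizes)) / total
        for k in range(dim)
    ]
```

In a full round, each institution would first train locally on its private EEG data, send only its updated parameters, and receive the aggregated global model back.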
Interpretable Machine Learning for Electro-encephalography
While behavioral, genetic and psychological markers can provide important information about brain health, research in that area over the last decades has focused largely on imaging devices such as magnetic resonance imaging (MRI) to provide non-invasive information about cognitive processes. Unfortunately, MRI-based approaches, which capture the slow changes in blood oxygenation levels, cannot capture electrical brain activity, which plays out on a time scale up to three orders of magnitude faster. Electroencephalography (EEG), which has been available in clinical settings for over 60 years, is able to measure brain activity based on rapidly changing electrical potentials measured non-invasively on the scalp. Compared to MRI-based research into neurodegeneration, EEG-based research has, over the last decade, received much less interest from the machine learning community. Yet EEG in combination with sophisticated machine learning offers great potential, such that neglecting this source of information, compared to MRI or genetics, is not warranted. When collaborating with clinical experts, the ability to link any results provided by machine learning to the existing body of research is especially important, as it ultimately provides an intuitive or interpretable understanding. Here, interpretable means the possibility for medical experts to translate the insights provided by a statistical model into a working hypothesis relating to brain function. To this end, we propose in our first contribution a method allowing for ultra-sparse regression, which is applied to EEG data in order to identify a small subset of important diagnostic markers highlighting the main differences between healthy brains and brains affected by Parkinson's disease. Our second contribution builds on the idea that in Parkinson's disease impaired functioning of the thalamus causes changes in the complexity of the EEG waveforms.
The thalamus is a small region in the center of the brain affected early in the course of the disease. Furthermore, it is believed that the thalamus functions as a pacemaker - akin to a conductor of an orchestra - such that changes in complexity are expressed and quantifiable based on EEG. We use these changes in complexity to show their association with future cognitive decline. In our third contribution we propose an extension of archetypal analysis embedded into a deep neural network. This generative version of archetypal analysis makes it possible to learn a representation in which every sample of a data set can be decomposed into a weighted sum of extreme representatives, the so-called archetypes. This opens up an interesting possibility of interpreting a data set relative to its most extreme representatives. In contrast, clustering algorithms describe a data set relative to its most average representatives. For Parkinson's disease, we show, based on deep archetypal analysis, that healthy brains produce archetypes different from those produced by brains affected by neurodegeneration.
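The decomposition at the heart of archetypal analysis, writing each sample as a convex combination of archetypes, can be sketched without the deep-network machinery via projected gradient descent onto the probability simplex (an illustrative baseline, not the thesis's generative deep model; archetypes are taken as given here, whereas archetypal analysis also learns them):

```python
def simplex_project(v):
    """Euclidean projection onto the probability simplex
    (Duchi et al.'s sort-based algorithm)."""
    u = sorted(v, reverse=True)
    cumsum = 0.0
    tau = 0.0
    for i, ui in enumerate(u, start=1):
        cumsum += ui
        t = (1.0 - cumsum) / i
        if ui + t > 0:
            tau = t
    return [max(vi + tau, 0.0) for vi in v]

def archetype_weights(x, archetypes, steps=300, lr=0.2):
    """Find simplex weights w minimising ||x - sum_k w_k a_k||^2 by
    projected gradient descent, so x is explained as a convex
    combination of the extreme representatives (archetypes)."""
    K, d = len(archetypes), len(x)
    w = [1.0 / K] * K
    for _ in range(steps):
        recon = [sum(w[k] * archetypes[k][j] for k in range(K))
                 for j in range(d)]
        grad = [2.0 * sum((recon[j] - x[j]) * archetypes[k][j]
                          for j in range(d))
                for k in range(K)]
        w = simplex_project([w[k] - lr * grad[k] for k in range(K)])
    return w
```

The resulting weights are directly interpretable: each coordinate says how strongly the sample resembles one extreme prototype, in contrast to cluster assignments, which compare against average prototypes.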
Toward Super-Creativity
What is super creativity? From the simple creation of a meal to the most sophisticated artificial intelligence system, the human brain is capable of responding to the most diverse challenges and problems in increasingly creative and innovative ways. This book is an attempt to define super creativity by examining creativity in humans, machines, and human-machine interactions. Organized into three sections, the volume covers such topics as increasing personal creativity, the impact of artificial intelligence and digital devices, and the interaction of humans and machines in fields such as healthcare and economics.
Multivariate multiscale complexity analysis
Established dynamical complexity analysis measures operate at a single scale and thus fail
to quantify inherent long-range correlations in real-world data, a key feature of complex
systems. They are designed for scalar time series; however, multivariate observations are
common in modern real-world scenarios, and their simultaneous analysis is a prerequisite for
the understanding of the underlying signal generating model. To that end, this thesis first
introduces a notion of multivariate sample entropy and thus extends the current univariate
complexity analysis to the multivariate case. The proposed multivariate multiscale entropy
(MMSE) algorithm is shown to be capable of addressing the dynamical complexity of such
data directly in the domain where they reside, and at multiple temporal scales, thus
making full use of all the available information, both within and across the multiple data
channels. Next, the intrinsic multivariate scales of the input data are generated adaptively
via the multivariate empirical mode decomposition (MEMD) algorithm. This allows for
both generating comparable scales from multiple data channels, and for temporal scales
of the same length as the input signal, thus removing the critical limitation on
input data length in current complexity analysis methods. The resulting MEMD-enhanced
MMSE method is also shown to be suitable for non-stationary multivariate data analysis
owing to the data-driven nature of the MEMD algorithm, as non-stationarity is the biggest
obstacle for meaningful complexity analysis. This thesis presents a quantum step forward
in this area, by introducing robust and physically meaningful complexity estimates of
real-world systems, which are typically multivariate, finite in duration, and noisy and
heterogeneous in nature. This also allows us to gain a better understanding of the complexity
of the underlying multivariate model and more degrees of freedom and rigor in the analysis.
Simulations on both synthetic and real-world multivariate data sets support the analysis.