
    Optimizing Common Spatial Pattern for a Motor Imagery-based BCI by Eigenvector Filtration

    One of the fundamental criteria for the successful application of a brain-computer interface (BCI) system is to extract significant features that capture invariant characteristics specific to each brain state. Distinct features play an important role in enabling a computer to associate different electroencephalogram (EEG) signals with different brain states. To ease the workload on the feature extractor and enhance separability between different brain states, the data are often transformed or filtered to maximize separability before feature extraction. The common spatial patterns (CSP) approach can achieve this by linearly projecting the multichannel EEG data into a surrogate data space through a weighted summation of the appropriate channels. However, the choice of spatial filters strongly affects how the data are projected and therefore has a direct impact on classification. This paper presents an optimized pattern selection method for the CSP filter that improves classification accuracy. Based on the hypothesis that values close to zero in the CSP filter introduce noise rather than useful information, the CSP filter is modified by analyzing it and removing the degradative or insignificant values. This hypothesis is tested by comparing the BCI results of eight subjects using the conventional CSP filters and the optimized CSP filter. In the majority of cases the latter produces better performance in terms of overall classification accuracy.
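
    To make the filter-pruning idea concrete, below is a minimal sketch of CSP computed via a generalized eigendecomposition, followed by zeroing of near-zero channel weights. The covariance matrices are synthetic and the pruning rule and threshold are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(cov_a, cov_b):
    """Classic CSP: solve the generalized eigenvalue problem
    cov_a @ w = lam * (cov_a + cov_b) @ w and sort filters by eigenvalue."""
    eigvals, eigvecs = eigh(cov_a, cov_a + cov_b)
    order = np.argsort(eigvals)[::-1]          # most discriminative filters at the ends
    return eigvecs[:, order]                   # columns are spatial filters

def prune_small_weights(W, threshold=0.1):
    """Illustrative 'eigenvector filtering': zero channel weights whose magnitude
    is small relative to each filter's largest weight, on the assumption that
    near-zero entries contribute noise rather than information."""
    W_pruned = W.copy()
    W_pruned[np.abs(W_pruned) < threshold * np.abs(W).max(axis=0)] = 0.0
    return W_pruned

# toy example: 8-channel covariance matrices estimated from band-passed trials
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 200))
B = rng.standard_normal((8, 200))
cov_a, cov_b = A @ A.T / 200, B @ B.T / 200

W = csp_filters(cov_a, cov_b)
W_opt = prune_small_weights(W)                 # modified CSP filter
# trials would then be projected as Z = W_opt.T @ X, with log-variance of the
# first and last rows of Z used as classification features
```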

    Using novel stimuli and alternative signal processing techniques to enhance BCI paradigms

    A Brain-Computer Interface (BCI) is a device that uses the brain activity of a person as an input to select desired outputs on a computer. BCIs that use surface electroencephalogram (EEG) recordings as their input are the least invasive but also suffer from a very low signal-to-noise ratio (SNR) due to the very low amplitude of the person’s brain activity and the presence of many signal artefacts and background noise. This can be compensated for by subjecting the signals to extensive signal processing, and by using stimuli to trigger a large but consistent change in the signal – these changes are called evoked potentials. The method used to stimulate the evoked potential, and introduce an element of conscious selection in order to allow the user’s intent to modify the evoked potential produced, is called the BCI paradigm. However, even with these additions the performance of BCIs used for assistive communication and control is still significantly below that of other assistive solutions, such as keypads or eye-tracking devices. This thesis examines the paradigm and signal processing components of BCIs and puts forward several methods meant to enhance BCIs’ performance and efficiency. Firstly, two novel signal processing methods based on Empirical Mode Decomposition (EMD) were developed and evaluated. EMD is a technique that divides any oscillating signal into groups of frequency harmonics, called Intrinsic Mode Functions (IMFs). Furthermore, by using Takens’ theorem, a single channel of EEG can be converted into a multi-temporal channel signal by transforming the channel into multiple snapshots of its signal content in time using a series of delay vectors. This signal can then be decomposed into IMFs using a multi-channel variation of EMD, called Multi-variate EMD (MEMD), which uses the spatial information from the signal’s neighbouring channels to inform its decomposition. In the case of a multi-temporal channel signal, this allows the temporal dynamics of the signal to be incorporated into the IMFs. This is called Temporal MEMD (T-MEMD). The second signal processing method based on EMD decomposed both the spatial and temporal channels simultaneously, allowing both spatial and temporal dynamics to be incorporated into the resulting IMFs. This is called Spatio-temporal MEMD (ST-MEMD). Both methods were applied to a large pre-recorded Motor Imagery BCI dataset along with EMD and MEMD for comparison. These results were also compared to those from other studies in the literature that had used the same dataset. T-MEMD performed with an average classification accuracy of 70.2%, performing on a par with EMD that had an average classification accuracy of 68.9%. Both ST-MEMD and MEMD outperformed them with ST-MEMD having an average classification accuracy of 73.6%, and MEMD having an average classification accuracy of 75.3%. The methods containing spatial dynamics, i.e. MEMD and ST-MEMD, outperformed those with only temporal dynamics, i.e. EMD and T-MEMD. The two methods with temporal dynamics each performed on a par with the non-temporal method that had the same level of spatial dynamics. This shows that only the presence of spatial dynamics resulted in a performance increase. This was concluded to be because the differences between the classes of motor-imagery are inherently spatial in nature, not temporal. Next a novel BCI paradigm was developed based on the standard Steady-state Somatosensory Evoked Potential (SSSEP) BCI paradigm. 
    This paradigm uses a tactile stimulus applied to the skin at a certain frequency, generating a resonance signal in the brain's activity. If two stimuli of different frequency are applied, two resonance signals will be present. However, if the user attends one stimulus over the other, its corresponding SSSEP will increase in amplitude. Unfortunately these changes in amplitude can be very minute. To counter this, a stimulus generator was constructed that could alter the amplitude and frequency of the vibrotactile stimuli. It was hypothesised that if the stimuli were of the same frequency, but one's amplitude was just below the user's conscious level of perception and the other was above it, the changes in the SSSEP between classes would be the same as those between an SSSEP being generated and neutral EEG, with differences in α activity between the low-amplitude SSSEP and neutral activity due to the differences in the user's level of concentration from attending the low-amplitude stimulus. The novel SSSEP BCI paradigm performed on a par with the standard paradigm, with an average classification accuracy of 61.8% over 16 participants compared to 63.3% for the standard paradigm, indicating that the hypothesis was false. However, the large presence of electro-magnetic interference (EMI) in the EEG recordings may have compromised the data. Many different noise suppression methods were applied to the stimulus device and the data, and whilst the EMI artefacts were reduced in magnitude they were not eliminated completely. Even with the noise the standard SSSEP stimulus paradigm performed on a par with studies that used the same paradigm, indicating that the results may not have been invalidated by the EMI. Overall the thesis shows that the differences between motor-imagery signals are inherently spatial in nature, and that the novel methods of T-MEMD and ST-MEMD may yet out-perform the existing methods of EMD and MEMD if applied to signals that are temporal in nature, such as functional Magnetic Resonance Imaging (fMRI) data. Whilst the novel SSSEP paradigm did not result in an increase in performance, it highlighted the impact of EMI from stimulus equipment on EEG recordings and potentially confirmed that the amplitude of SSSEP stimuli is a minor factor in a BCI paradigm.
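
    The single-channel-to-multichannel conversion via delay vectors described in this abstract can be sketched in a few lines of NumPy. This is only an illustration of Takens-style delay embedding under assumed parameters (synthetic channel, lag, embedding dimension); the thesis's T-MEMD additionally decomposes the embedded signal with multivariate EMD, which is not shown here.

```python
import numpy as np

def delay_embed(x, n_delays=8, lag=1):
    """Turn a single-channel signal x (length T) into a stack of time-shifted
    copies with shape (n_delays, T - (n_delays - 1) * lag). Each row acts as a
    'temporal channel': a delayed snapshot of the same signal."""
    span = (n_delays - 1) * lag
    return np.stack([x[i * lag: len(x) - span + i * lag] for i in range(n_delays)])

# toy single EEG channel: a 10 Hz oscillation plus noise, sampled at 256 Hz
fs = 256
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)

X_temporal = delay_embed(x, n_delays=8, lag=2)
print(X_temporal.shape)   # (8, 498): a multi-'temporal-channel' signal that a
                          # multivariate EMD routine could then decompose jointly
```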

    Decision-based data fusion of complementary features for the early diagnosis of Alzheimer's disease

    As the average life expectancy increases, particularly in developing countries, the prevalence of Alzheimer's disease (AD), the most common form of dementia worldwide, has increased dramatically. As there is no cure to stop or reverse the effects of AD, early diagnosis and detection is of utmost concern. Recent pharmacological advances have shown the ability to slow the progression of AD; however, the efficacy of these treatments depends on the ability to detect the disease at the earliest stage possible. Many patients are limited to small community clinics by geographic and/or financial constraints. Making diagnosis possible at these clinics through an accurate, inexpensive, and noninvasive tool is of great interest. Many tools have been shown to be effective for the early diagnosis of AD. Three in particular are focused upon in this study: event-related potentials (ERPs) in electroencephalogram (EEG) recordings, magnetic resonance imaging (MRI), and positron emission tomography (PET). These biomarkers have been shown to contain diagnostically useful information regarding the development of AD in an individual. The combination of these biomarkers, if they provide complementary information, can boost the overall diagnostic accuracy of an automated system. EEG data acquired from an auditory oddball paradigm, along with volumetric T2-weighted MRI data and PET imagery representative of metabolic glucose activity in the brain, were collected from a cohort of 447 patients, along with other biomarkers and metrics relating to neurodegenerative disease. This study focuses on AD-versus-control diagnostic ability within the cohort, in addition to AD severity analysis. An assortment of feature extraction methods was employed to extract diagnostically relevant information from the raw data. EEG signals were decomposed into frequency bands of interest through the discrete wavelet transform (DWT). MRI images were preprocessed to provide volumetric representations of specific regions of interest in the cranium. The PET imagery was segmented into regions of interest representing glucose metabolic rates within the brain. Multi-layer perceptron neural networks were used as the base classifier for the augmented stacked generalization algorithm, creating three overall biomarker experts for AD diagnosis. The features extracted from each biomarker were used to train classifiers on various subsets of the cohort data; the decisions from these classifiers were then combined to achieve decision-based data fusion. This study found that EEG, MRI and PET data each hold complementary information for the diagnosis of AD. The use of all three in tandem provides greater diagnostic accuracy than using any single biomarker alone. The highest accuracy obtained through the EEG expert was 86.1 ±3.2%, with MRI and PET reaching 91.1 ±3.2% and 91.2 ±3.9%, respectively. The maximum diagnostic accuracy of these systems averaged 95.0 ±3.1% when all three biomarkers were combined through the decision fusion algorithm described in this study. The severity analysis for AD showed similar results, with combination performance exceeding that of any biomarker expert alone.
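
    As a rough illustration of decision-level fusion of biomarker-specific classifiers, the sketch below trains one classifier per modality and combines their class probabilities by simple soft voting. The synthetic features, the classifier settings, and the averaging rule are placeholders, not the study's augmented stacked-generalization system.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
n = 200
y = rng.integers(0, 2, n)                      # 0 = control, 1 = AD (synthetic labels)

# placeholder feature matrices for each biomarker (e.g. DWT coefficients, regional
# volumes, regional metabolic rates); in the study these come from the real data
X_eeg = rng.standard_normal((n, 32)) + y[:, None] * 0.3
X_mri = rng.standard_normal((n, 16)) + y[:, None] * 0.5
X_pet = rng.standard_normal((n, 16)) + y[:, None] * 0.5

experts = {}
for name, X in {"EEG": X_eeg, "MRI": X_mri, "PET": X_pet}.items():
    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
    experts[name] = clf.fit(X[:150], y[:150])  # train each modality expert separately

# decision-level fusion: average the experts' class-probability outputs
test = {"EEG": X_eeg[150:], "MRI": X_mri[150:], "PET": X_pet[150:]}
proba = np.mean([experts[k].predict_proba(test[k]) for k in experts], axis=0)
fused_pred = proba.argmax(axis=1)
print("fused accuracy:", (fused_pred == y[150:]).mean())
```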

    Ensemble of classifiers based data fusion of EEG and MRI for diagnosis of neurodegenerative disorders

    The prevalence of Alzheimer's disease (AD), Parkinson's disease (PD), and mild cognitive impairment (MCI) is rising at an alarming rate as the average age of the population increases, especially in developing nations. The efficacy of new medical treatments critically depends on the ability to diagnose these diseases at the earliest stages. To facilitate the availability of early diagnosis in community hospitals, an accurate, inexpensive, and noninvasive diagnostic tool must be made available. As biomarkers, event-related potentials (ERPs) of the electroencephalogram (EEG), which have previously shown promise in automated diagnosis, together with volumetric magnetic resonance imaging (MRI), are relatively low-cost and readily available tools for automated diagnosis. 16-electrode EEG data were collected from 175 subjects afflicted with Alzheimer's disease, Parkinson's disease, or mild cognitive impairment, as well as from non-diseased (normal control) subjects. T2-weighted MRI volumetric data were also collected from 161 of these subjects. Feature extraction methods were used to separate diagnostic information from the raw data. The EEG signals were decomposed using the discrete wavelet transform in order to isolate informative frequency bands. The MR images were processed through segmentation software to provide volumetric data of various brain regions in order to quantify potential brain tissue atrophy. Both of these data sources were utilized in a pattern recognition based classification algorithm to serve as a diagnostic tool for Alzheimer's and Parkinson's disease. Support vector machine and multilayer perceptron classifiers were used to create a classification algorithm trained with the EEG and MRI data. Extracted features were used to train individual classifiers, each learning a particular subset of the training data, whose decisions were combined using decision-level fusion. Additionally, a severity analysis was performed to distinguish between various stages of AD as well as a cognitively normal state. The study found that EEG and MRI data hold complementary information for the diagnosis of AD as well as PD. The use of both data types with decision-level fusion improves diagnostic accuracy over that of each individual data source. For AD-only diagnosis, ERP data alone provided 78% diagnostic performance, MRI alone 89%, and ERP and MRI combined 94%. For PD-only diagnosis, ERP-only performance was 67%, MRI-only was 70%, and combined performance was 78%. MCI-only diagnosis exhibited a similar effect, with 71% ERP performance, 82% MRI performance, and 85% combined performance. Diagnosis among three subject groups showed the same trend: for PD, AD, and normal diagnosis, ERP-only performance was 43%, MRI-only was 66%, and combined performance was 71%. The severity analysis for mild AD, severe AD, and normal subjects showed the same combined effect.
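
    As a small illustration of the wavelet-based band separation mentioned above, the sketch below decomposes a synthetic EEG trace with a discrete wavelet transform using PyWavelets. The wavelet, decomposition level, sampling rate, and band mapping are illustrative assumptions, not the study's exact configuration.

```python
import numpy as np
import pywt

fs = 128                                   # assumed sampling rate in Hz
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)   # synthetic channel

# 5-level DWT; at fs = 128 Hz the sub-bands roughly correspond to
# D1 ~32-64 Hz, D2 ~16-32 Hz (beta), D3 ~8-16 Hz (alpha),
# D4 ~4-8 Hz (theta), D5 ~2-4 Hz and A5 ~0-2 Hz (delta)
coeffs = pywt.wavedec(eeg, wavelet="db4", level=5)
a5, d5, d4, d3, d2, d1 = coeffs

# simple band-power-style features from each sub-band's coefficients
features = [np.mean(c ** 2) for c in coeffs]
print(len(features), features[:3])
```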

    Enhancing brain-computer interfacing through advanced independent component analysis techniques

    A brain-computer interface (BCI) is a direct communication system between a brain and an external device in which messages or commands sent by an individual do not pass through the brain's normal output pathways but are instead detected from brain signals. Severe motor impairments caused by amyotrophic lateral sclerosis, head trauma, spinal injuries and other diseases may leave patients unable to control their muscles and communicate with the outside environment. Currently no effective cure or treatment has been found for these conditions, so using a BCI system to rebuild the communication pathway becomes a possible alternative solution. Among the different types of BCI, electroencephalogram (EEG) based BCIs are becoming popular due to EEG's fine temporal resolution, ease of use, portability and low set-up cost. However, EEG's susceptibility to noise is a major obstacle to developing a robust BCI. Signal processing techniques such as coherent averaging, filtering, FFT and AR modelling are used to reduce the noise and extract components of interest. However, these methods process the data in the observed mixture domain, which mixes components of interest with noise. This limitation means that the extracted EEG signals may still contain residual noise, or conversely that the removed noise may still contain part of the EEG signal. Independent Component Analysis (ICA), a Blind Source Separation (BSS) technique, is able to extract relevant information from noisy signals and separate the underlying sources into independent components (ICs). The most common assumption of ICA is that the source signals are unknown and statistically independent; under this assumption, ICA is able to recover the source signals. Since the ICA concept appeared in the fields of neural networks and signal processing in the 1980s, many ICA applications in telecommunications, biomedical data analysis, feature extraction, speech separation, time-series analysis and data mining have been reported in the literature. In this thesis several ICA techniques are proposed to address two major issues for BCI applications: reducing the recording time needed in order to speed up the signal processing, and reducing the number of recording channels whilst improving the final classification performance, or at least keeping it at the current level. These improvements would make BCI a more practical prospect for everyday use. The thesis first defines BCI and the diverse BCI models based on different control patterns. After the general idea of ICA is introduced along with some modifications, several new ICA approaches are proposed. The practical work in this thesis starts with preliminary analyses of the Southampton BCI pilot datasets, using basic and then advanced signal processing techniques. The proposed ICA techniques are then presented using a multi-channel event related potential (ERP) based BCI. Next, the ICA algorithm is applied to a multi-channel spontaneous activity based BCI. The final ICA approach examines the possibility of using ICA based on just one or a few channel recordings in an ERP based BCI. The novel ICA approaches for BCI systems presented in this thesis show that ICA is able to accurately and repeatedly extract the relevant information buried within noisy signals, enhancing the signal quality so that even a simple classifier can achieve good classification accuracy.
    In the ERP based BCI application, after multichannel ICA, data from just eight averages/epochs can achieve 83.9% classification accuracy, whilst data obtained by coherent averaging reach only 32.3% accuracy. In the spontaneous activity based BCI, the multi-channel ICA algorithm can effectively extract discriminatory information from two types of single-trial EEG data; the classification accuracy is improved by about 25%, on average, compared to the performance on the unpreprocessed data. The single-channel ICA technique on the ERP based BCI produces much better results than low-pass filtering, and an appropriate number of averages improves the signal-to-noise ratio of the P300 activity, which helps to achieve better classification. These advantages will lead to a reliable and practical BCI for use outside of the clinical laboratory.
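
    A bare-bones illustration of ICA-based separation on multichannel EEG is sketched below using scikit-learn's FastICA. The toy sources, the random mixing, and the component-rejection rule are assumptions made for the example; they do not reproduce the ICA algorithms proposed in the thesis.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
t = np.linspace(0, 2, 512)

# toy sources: an 'ERP-like' bump, an alpha rhythm, and an eye-blink artefact
erp = np.exp(-((t - 0.3) ** 2) / 0.002)
alpha = 0.5 * np.sin(2 * np.pi * 10 * t)
blink = (t > 1.0) * (t < 1.1) * 3.0
S = np.c_[erp, alpha, blink]

A = rng.standard_normal((8, 3))          # random mixing into 8 EEG channels
X = S @ A.T + 0.05 * rng.standard_normal((512, 8))

ica = FastICA(n_components=3, random_state=0)
components = ica.fit_transform(X)        # (512, 3) estimated independent components

# toy component selection: drop the component most correlated with the blink,
# then reconstruct the 'cleaned' channels from the remaining components
corr = [abs(np.corrcoef(components[:, i], blink)[0, 1]) for i in range(3)]
keep = [i for i in range(3) if i != int(np.argmax(corr))]
X_clean = components[:, keep] @ ica.mixing_[:, keep].T + ica.mean_
```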

    Probabilistic graphical models for brain computer interfaces

    Brain computer interfaces (BCI) are systems that aim to establish a new communication path for subjects who suffer from motor disabilities, allowing interaction with the environment through computer systems. BCIs make use of a diverse group of physiological phenomena recorded using electrodes placed on the scalp (Electroencephalography, EEG) or electrodes placed directly over the brain cortex (Electrocorticography, ECoG). One commonly used phenomenon is the activity observed in specific areas of the brain in response to external events, called Event Related Potentials (ERP). Among these, the P300 response is the most widely used; it has found application in spellers that make use of the brain's response to the presentation of a sequence of visual stimuli. Another commonly used phenomenon is the synchronization or desynchronization of brain rhythms during the execution or imagination of a motor task, which can be used to differentiate between two or more subject intentions. In the most basic scenario, a BCI system calculates the differences in the power of the EEG rhythms during execution of different tasks and, based on those differences, decides which task has been executed (e.g., motor imagination of the left or right hand). Current approaches are mainly based on machine learning techniques that learn the distribution of the power values of the brain signals for each of the possible classes. In this thesis, making use of EEG and ECoG recordings, we propose the use of probabilistic graphical models for brain computer interfaces. In the case of ERPs, in particular P300-based spellers, we propose the incorporation of language models at the level of words to significantly increase the performance of the spelling system. The proposed framework also allows the incorporation of methods that take into account n-gram language models, all within an integrated structure whose parameters can be efficiently learned. In the context of execution or imagination of motor tasks, we propose techniques that take into account the temporal structure of the signals. Stochastic processes that model the temporal dynamics of the brain signals in different frequency bands, such as non-parametric Bayesian hidden Markov models, are proposed in order to select the number of brain states during the execution of motor tasks as well as the number of components used to model the distribution of the brain signals. Following the same line of thought, hidden conditional random fields are proposed for classification of synchronous motor tasks; the combination of hidden states with the discriminative power of conditional random fields is shown to increase the classification performance for imagined motor movements. In the context of asynchronous BCIs, we propose a method based on latent dynamic conditional random fields that is capable of modeling the internal temporal dynamics related to the generation of the brain signals and the external brain dynamics related to the execution of different mental tasks. Finally, also in the context of asynchronous BCIs, a model based on discriminative graphical models is presented for continuous classification of finger movements from ECoG data.
    We show that incorporating the temporal dynamics of the brain signals into the classification stages significantly increases the classification accuracy for different mental states, which can lead to a more effective interaction between the subject and the environment.
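
    To make the language-model idea concrete, the following sketch fuses a per-letter EEG likelihood with a bigram language-model prior via a simple Bayesian update. The probabilities are made up for illustration, and the sketch does not reproduce the thesis's word-level graphical model.

```python
import numpy as np

letters = np.array(list("ABCDEFGHIJKLMNOPQRSTUVWXYZ"))

# hypothetical EEG evidence: likelihood of each letter given the P300 responses
# (in a real speller this would come from the ERP classifier's scores)
rng = np.random.default_rng(3)
eeg_likelihood = rng.dirichlet(np.ones(26))

# hypothetical bigram prior P(next letter | previous letter = 'T'), heavily
# favouring 'H' as in "TH"; a real system would estimate this from a text corpus
bigram_prior = np.full(26, 1.0 / 50)
bigram_prior[letters == "H"] = 0.5
bigram_prior /= bigram_prior.sum()

# Bayesian fusion: posterior is proportional to likelihood times prior
posterior = eeg_likelihood * bigram_prior
posterior /= posterior.sum()

print("EEG-only choice:", letters[eeg_likelihood.argmax()])
print("Fused choice:   ", letters[posterior.argmax()])
```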

    Intelligent Biosignal Processing in Wearable and Implantable Sensors

    This reprint provides a collection of papers illustrating the state-of-the-art of smart processing of data coming from wearable, implantable or portable sensors. Each paper presents the design, databases used, methodological background, obtained results, and their interpretation for biomedical applications. Revealing examples are brain–machine interfaces for medical rehabilitation, the evaluation of sympathetic nerve activity, a novel automated diagnostic tool based on ECG data to diagnose COVID-19, machine learning-based hypertension risk assessment by means of photoplethysmography and electrocardiography signals, Parkinsonian gait assessment using machine learning tools, thorough analysis of compressive sensing of ECG signals, development of a nanotechnology application for decoding vagus-nerve activity, detection of liver dysfunction using a wearable electronic nose system, prosthetic hand control using surface electromyography, epileptic seizure detection using a CNN, and premature ventricular contraction detection using deep metric learning. Thus, this reprint presents significant clinical applications as well as valuable new research issues, providing current illustrations of this new field of research by addressing the promises, challenges, and hurdles associated with the synergy of biosignal processing and AI through 16 different pertinent studies. Covering a wide range of research and application areas, this book is an excellent resource for researchers, physicians, academics, and PhD or master's students working on (bio)signal and image processing, AI, biomaterials, biomechanics, and biotechnology with applications in medicine.