
    Multi-Label/Multi-Class Deep Learning Classification of Spatiotemporal Data

    Human senses allow for the detection of simultaneous changes in our environments. An unobstructed field of view allows us to notice concurrent variations in different parts of what we are looking at. For example, when playing a video game, a player oftentimes needs to be aware of what is happening in the entire scene. Likewise, our hearing makes us aware of various simultaneous sounds occurring around us. Human perception can be affected by the cognitive ability of the brain and the acuity of the senses. This is not a factor with machines: as long as a system is given a signal and instructed how to analyze it and extract useful information, it will be able to complete this task repeatedly, given enough processing power. Automated and simultaneous detection of activity in machine learning requires the use of multi-labels. In order to detect concurrent occurrences spatially, the labels should represent the regions of interest for a particular application. For example, in this thesis, the regions of interest are either different quadrants of a parking lot as captured on surveillance videos, four auscultation sites on patients' lungs, or the two sides of the brain's motor cortex (left and right). Since the labels within the multi-labels are used to represent not only certain spatial locations but also different levels or types of occurrences, a multi-class/multi-level schema is necessary. In the first study, each label is appointed one of three levels of activity within the specific quadrant. In the second study, each label is assigned one of four different types of respiratory sounds. In the third study, each label is designated one of three different finger tapping frequencies. This novel multi-label/multi-class schema is one part of being able to detect useful information in the data. The other part of the process lies in the machine learning algorithm, the network model.
In order to capture the spatiotemporal characteristics of the data, selecting Convolutional Neural Network and Long Short-Term Memory Network-based algorithms as the basis of the network is fitting. The following classifications are described in this thesis: 1. In the first study, one of three different motion densities is identified simultaneously in each of four quadrants of two sets of surveillance videos. Publicly available video recordings are the spatiotemporal data. 2. In the second study, one of four types of breathing sounds is classified simultaneously at each of four auscultation sites. The spatiotemporal data are publicly available respiratory sound recordings. 3. In the third study, one of three finger tapping rates is detected simultaneously in two regions of interest, the right and left sides of the brain's motor cortex. The spatiotemporal data are fNIRS channel readings gathered during an index finger tapping experiment. Classification results are based on testing data that is not part of model training and validation. The success of the results is based on measures of Hamming Loss and Subset Accuracy as well as Accuracy, F-Score, Sensitivity, and Specificity metrics. In the last study, model explanation is performed by computing Shapley Additive Explanation (SHAP) values and plotting them on an image-like background, a representation of the fNIRS channel layout used as data input. Overall, promising findings support the use of this approach in classifying spatiotemporal data with the interest of detecting different levels or types of occurrences simultaneously in several regions of interest.
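As an illustration of the evaluation measures named above, Hamming Loss and Subset Accuracy can be computed directly from binary multi-label matrices. The following is a minimal sketch; the label layout and toy values are illustrative, not taken from the thesis:

```python
import numpy as np

def hamming_loss(y_true, y_pred):
    # Fraction of individual label assignments that are wrong.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean(y_true != y_pred)

def subset_accuracy(y_true, y_pred):
    # Fraction of samples whose entire label vector is predicted exactly.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean(np.all(y_true == y_pred, axis=1))

# Toy example: 3 samples, 4 regions of interest, binarised labels.
y_true = np.array([[1, 0, 1, 0],
                   [0, 1, 0, 1],
                   [1, 1, 0, 0]])
y_pred = np.array([[1, 0, 1, 0],
                   [0, 1, 1, 1],
                   [1, 1, 0, 0]])
print(hamming_loss(y_true, y_pred))     # 1 wrong bit of 12 -> 1/12
print(subset_accuracy(y_true, y_pred))  # 2 of 3 vectors exact -> 2/3
```

Note the contrast the two measures draw: a single wrong label barely moves the Hamming Loss but removes the whole sample from Subset Accuracy, which is why both are reported.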

    Towards simultaneous electroencephalography and functional near-infrared spectroscopy for improving diagnostic accuracy in prolonged disorders of consciousness: a healthy cohort study

    Qualitative clinical assessments of the recovery of awareness after severe brain injury require an assessor to differentiate purposeful behaviour from spontaneous behaviour. As many such behaviours are minimal and inconsistent, behavioural assessments are susceptible to diagnostic errors. Advanced neuroimaging tools such as functional magnetic resonance imaging and electroencephalography (EEG) can bypass behavioural responsiveness and reveal evidence of covert awareness and cognition within the brains of some patients, thus providing a means for more accurate diagnoses, more accurate prognoses, and, in some instances, facilitated communication. As each individual neuroimaging method has its own advantages and disadvantages (e.g., signal resolution, accessibility, etc.), this thesis studies, in healthy individuals, a burgeoning technique of non-invasive electrical and optical neuroimaging that can be applied at the bedside: simultaneous EEG and functional near-infrared spectroscopy (fNIRS). Measuring reliable covert behaviours is correlated with participant engagement, instrumental sensitivity and the accurate localisation of responses, aspects which are further addressed over three studies. Experiment 1 quantifies the typical EEG changes in response to covert commands in the absence and presence of an object, to determine whether a goal-directed task can yield greater EEG control accuracy than simple monotonous imagined single-joint actions. Experiment 2 characterises frequency-domain NIRS changes in response to overt and covert hand movements. A method for reconstructing haemodynamics using the less frequently investigated phase parameter is outlined, and the impact of noise-contaminated NIRS measurements is discussed. Furthermore, classification performances between frequency-domain and continuous-wave-like signals are compared. Experiment 3 lastly applies these techniques to determine the potential of simultaneous EEG-fNIRS classification.
Here, a sparse channel montage that would ultimately favour clinical utility is used to demonstrate whether such a hybrid method, containing rich spatial and temporal information, can improve the classification of covert responses in comparison to unimodal classification of signals. The findings and discussions presented within this thesis identify a direction for future research in order to more accurately translate the brain state of patients with a prolonged disorder of consciousness.

    Signal Processing Using Non-invasive Physiological Sensors

    This book concerns non-invasive biomedical sensors for monitoring physiological parameters from the human body for potential future therapies and healthcare solutions. Today, a critical factor in providing a cost-effective healthcare system is improving patients' quality of life and mobility, which can be achieved by developing non-invasive sensor systems that can then be deployed at the point of care, used at home, or integrated into wearable devices for long-term data collection. Another factor that plays an integral part in a cost-effective healthcare system is the signal processing of the data recorded with non-invasive biomedical sensors. In this book, we aimed to attract researchers who are interested in the application of signal processing methods to different biomedical signals, such as the electroencephalogram (EEG), electromyogram (EMG), functional near-infrared spectroscopy (fNIRS), electrocardiogram (ECG), galvanic skin response, pulse oximetry, photoplethysmogram (PPG), etc. We encouraged new signal processing methods, or the novel application of existing signal processing methods to physiological signals, to help healthcare providers make better decisions.

    Signal Processing Combined with Machine Learning for Biomedical Applications

    The Master's thesis comprises four projects in the realm of machine learning and signal processing. The abstract of the thesis is divided into four parts and presented as follows. Abstract 1: A Kullback-Leibler Divergence-Based Predictor for Inter-Subject Associative BCI. Inherent inter-subject variability in sensorimotor brain dynamics hinders the transferability of brain-computer interface (BCI) model parameters across subjects. An individual training session is essential for effective BCI control to compensate for this variability. We report a Kullback-Leibler Divergence (KLD)-based predictor for inter-subject associative BCI. An online dataset comprising left/right hand, both feet, and tongue motor imagery tasks was used to show the correlation between the proposed inter-subject predictor and BCI performance. Linear regression between the KLD predictor and BCI performance showed a strong inverse correlation (r = -0.62). The KLD predictor can act as an indicator for generalized inter-subject associative BCI designs. Abstract 2: Multiclass Sensorimotor BCI Based on Simultaneous EEG and fNIRS. A hybrid BCI (hBCI) utilizes multiple data modalities to acquire brain signals during motor execution (ME) tasks. Studies have shown significant enhancements in the classification of binary-class ME-hBCIs; however, four-class ME-hBCI classification had yet to be done using multiclass algorithms. We present a quad-class classification of ME-hBCI tasks from simultaneous EEG-fNIRS recordings. Appropriate features were extracted from the EEG-fNIRS signals, combined into hybrid features, and classified with a support vector machine. Results showed a significant increase in hybrid accuracy over single modalities and demonstrate the hybrid method's capacity for performance enhancement. Abstract 3: Deep Learning for Improved Inter-Subject EEG-fNIRS Hybrid BCI Performance.
Multimodality-based hybrid BCIs have become popular for performance improvement; however, the inherent inter-subject and inter-session variation in participants' brain dynamics poses obstacles to achieving high performance. This work presents an inter-subject hBCI to classify right/left-hand MI tasks from simultaneous EEG-fNIRS recordings of 29 healthy subjects. State-of-the-art features were extracted from the EEG-fNIRS signals, combined into hybrid features, and finally classified using a deep Long Short-Term Memory (LSTM) classifier. Results showed an increase in inter-subject performance for the hybrid system while making the system more robust to changes in brain dynamics, and hint at the feasibility of an EEG-fNIRS-based inter-subject hBCI. Abstract 4: Microwave-Based Glucose Concentration Classification by Machine Learning. Non-invasive blood sugar measurement has attracted increasing attention in recent years, given the increase in diabetes-related complications and the inconvenience of traditional blood-based methods. This work utilized machine learning (ML) algorithms to classify glucose concentration (GC) from measured broadband microwave scattering signals (S11). An N-type microwave adapter pair was utilized to measure the sweeping-frequency scattering parameter (S-parameter) of glucose solutions with GC varying from 50-10,000 dg/dL. Dielectric parameters were retrieved from the measured wideband complex S-parameters based on the modified Debye dielectric dispersion model. Results indicate that the best algorithm can achieve perfect classification accuracy, suggesting an alternative way to develop a GC detection method using ML algorithms.
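The KLD-based predictor of Abstract 1 can be illustrated, in greatly simplified form, as a divergence between per-subject feature distributions. The sketch below assumes univariate Gaussian-distributed features and a symmetrised closed-form KLD; the thesis's actual feature space and divergence formulation may differ:

```python
import numpy as np

def gaussian_kld(mu_p, var_p, mu_q, var_q):
    # Closed-form KL divergence D_KL(P || Q) for univariate Gaussians.
    return 0.5 * (np.log(var_q / var_p)
                  + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

def subject_divergence(feats_a, feats_b):
    # Fit a Gaussian to each subject's 1-D feature samples and take the
    # symmetrised KLD as a dissimilarity score between subjects.
    mu_a, var_a = np.mean(feats_a), np.var(feats_a)
    mu_b, var_b = np.mean(feats_b), np.var(feats_b)
    return (gaussian_kld(mu_a, var_a, mu_b, var_b)
            + gaussian_kld(mu_b, var_b, mu_a, var_a))

rng = np.random.default_rng(0)
subj_a = rng.normal(0.0, 1.0, 500)  # e.g. band-power features, subject A
subj_b = rng.normal(0.5, 1.2, 500)  # a subject with shifted brain dynamics
subj_c = rng.normal(0.0, 1.0, 500)  # a subject similar to A
print(subject_divergence(subj_a, subj_b) > subject_divergence(subj_a, subj_c))
```

The intuition matches the reported inverse correlation: the larger the divergence between a new subject's feature distribution and the training subject's, the worse a transferred model is expected to perform.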

    Integrated Real-Time Control And Processing Systems For Multi-Channel Near-Infrared Spectroscopy Based Brain Computer Interfaces

    This thesis outlines approaches to improve the signal processing and analysis of near-infrared spectroscopy (NIRS) based brain-computer interfaces (BCIs). These approaches were developed in conjunction with the implementation of a new customized flexible multi-channel NIRS-based BCI hardware system (Soraghan, 2010). Using a comparable functional imaging modality, the assumptions on which NIRS-BCIs are based have been reassessed with regard to cognitive task selection, active area locations, and lateralized motor cortex activation separability. This dissertation also presents methods that have been implemented to allow reduced hardware requirements in future NIRS-BCI development. We also examine the sources of homeostatic physiological interference and present new approaches for their analysis and attenuation within a real-time NIRS-BCI paradigm.
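Attenuation of homeostatic physiological interference of the kind mentioned above is often approached with band-stop filtering. As one hedged illustration (the band edges, sampling rate, and synthetic signals are assumptions, not the thesis's actual pipeline), a zero-phase Butterworth band-stop filter can suppress a Mayer-wave-like component near 0.1 Hz while leaving a slower task-related component largely intact:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandstop(signal, fs, low, high, order=4):
    # Zero-phase Butterworth band-stop filter in second-order sections
    # (SOS) form for numerical stability at low normalized frequencies.
    sos = butter(order, [low / (fs / 2), high / (fs / 2)],
                 btype="bandstop", output="sos")
    return sosfiltfilt(sos, signal)

fs = 10.0                                    # assumed NIRS sampling rate (Hz)
t = np.arange(0, 120, 1 / fs)
task = 0.5 * np.sin(2 * np.pi * 0.025 * t)   # slow task-related component
mayer = 0.8 * np.sin(2 * np.pi * 0.1 * t)    # Mayer-wave-like interference
cleaned = bandstop(task + mayer, fs, 0.07, 0.13)

# Residual interference power on the central portion of the record
# (edges trimmed to avoid filter transients).
core = slice(300, -300)
resid_var = np.var(cleaned[core] - task[core])
print(resid_var < 0.1 * np.var(mayer))
```

In practice adaptive or regression-based methods (using short-separation channels or auxiliary physiological recordings) are also common; the fixed notch here is only the simplest variant.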

    A Multifaceted Approach to Covert Attention Brain-Computer Interfaces

    Over the past years, brain-computer interfaces (BCIs) have shown their value for assistive technology and neurorehabilitation. Recently, a BCI approach for the rehabilitation of hemispatial neglect has been proposed on the basis of covert visuospatial attention (CVSA). CVSA is an internal action which can be described as shifting one's attention to the visual periphery without moving the actual point of gaze. Such attention shifts induce a lateralization in parietooccipital blood flow and in oscillations in the so-called alpha band (8-14 Hz), which can be detected via electroencephalography (EEG), magnetoencephalography (MEG) or functional magnetic resonance imaging (fMRI). Previous studies have proven the technical feasibility of using CVSA as a control signal for BCIs, but unfortunately, these BCIs could not provide every subject with sufficient control. The aim of this thesis was to investigate the possibility of amplifying the weak lateralization patterns in the alpha band, the main reason behind insufficient CVSA BCI performance. To this end, I explored three different approaches that could lead to better performing and more inclusive CVSA BCI systems. The first approach illuminated the changes in behavior and brain patterns brought about by closing the loop between subject and system with continuous real-time feedback at the instructed locus of attention. I could observe that even short (20 minute) stretches of real-time feedback have an effect on behavioral correlates of attention, even when the changes observed in the EEG remained less conclusive. The second approach attempted to complement the information extracted from the EEG signal with another sensing modality that could provide additional information about the state of CVSA. For this reason, I first combined functional near-infrared spectroscopy (fNIRS) with EEG measurements.
The results showed that, while the EEG was able to pick up the expected lateralization in the alpha band, the fNIRS was not able to reliably image changes in blood circulation in the parietooccipital cortex. Second, I successfully combined data from the EEG with measures of pupil size changes, induced by a high illumination contrast between the covertly attended target regions, which resulted in improved BCI decoding performance. The third approach examined the option of using noninvasive electrical brain stimulation to boost the power of the alpha band oscillations and therefore render the lateralization pattern in the alpha band more visible against the background activity. However, I could not observe any impact of the stimulation on the ongoing alpha band power, and thus results of the subsequent effect on the lateralization remain inconclusive. Overall, these studies helped to further understand CVSA and lay out a useful basis for further exploration of the connection between behavior and alpha power oscillations in CVSA tasks, as well as potential directions to improve CVSA-based BCIs.
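The alpha-band lateralization that CVSA BCIs decode can be quantified with a simple index contrasting band power over the two hemispheres. Below is a minimal sketch using FFT band power on synthetic channels; the channel selection, windowing and exact index used in the thesis are not specified here, and the signals are illustrative:

```python
import numpy as np

def alpha_power(x, fs, band=(8.0, 14.0)):
    # Band power via the FFT: sum of squared spectral magnitudes
    # over the alpha band.
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    spec = np.abs(np.fft.rfft(x)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spec[mask].sum()

def lateralization_index(left, right, fs):
    # Normalized power difference in [-1, 1]; the sign indicates which
    # channel carries the stronger alpha oscillation.
    pl, pr = alpha_power(left, fs), alpha_power(right, fs)
    return (pl - pr) / (pl + pr)

fs = 250.0
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(1)
# Synthetic trial with a stronger 10 Hz rhythm in the "left" channel.
left = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)
right = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)
li = lateralization_index(left, right, fs)
print(li > 0)
```

A BCI decoder then thresholds (or classifies) this index over time to infer the attended hemifield; the thesis's point is that the underlying lateralization is often too weak for this to work in every subject.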

    Toward an Imagined Speech-Based Brain Computer Interface Using EEG Signals

    Individuals with physical disabilities face difficulties in communication. A number of neuromuscular impairments can prevent people from using available communication aids, because such aids require some degree of muscle movement. This makes brain-computer interfaces (BCIs) a potentially promising alternative communication technology for these people. Electroencephalographic (EEG) signals are commonly used in BCI systems to capture non-invasively the neural representations of intended, internal and imagined activities that are not physically or verbally evident. Examples include motor and speech imagery activities. Since 2006, researchers have become increasingly interested in classifying different types of imagined speech from EEG signals. However, the field still has a limited understanding of several issues, including experiment design, stimulus type, training, calibration and the examined features. The main aim of the research in this thesis is to advance automatic recognition of imagined speech using EEG signals by addressing a variety of issues that have not been solved in previous studies. These include (1) improving the discrimination between imagined speech and non-speech tasks, (2) examining temporal parameters to optimise the recognition of imagined words and (3) providing a new feature extraction framework for improving EEG-based imagined speech recognition by considering temporal information after reducing within-session temporal non-stationarities. For the discrimination of speech versus non-speech, EEG data were collected during the imagination of randomly presented and semantically varying words. The non-speech tasks involved attention to visual stimuli and resting. Time-domain and spatio-spectral features were examined in different time intervals. Above-chance-level classification accuracies were achieved for each word and for groups of words compared to the non-speech tasks.
To classify imagined words, EEG data related to the imagination of five words were collected. In addition to word classification, the impacts of experimental parameters on classification accuracy were examined. The optimization of these parameters is important to improve the rate and speed of recognizing unspoken speech in on-line applications. These parameters included different training sizes, classification algorithms, feature extraction in different time intervals and the use of imagination time length as a classification feature. Our extensive results showed that a Random Forest classifier, with features extracted using the Discrete Wavelet Transform from a fixed 4-second EEG time frame, yielded the highest average classification accuracy of 87.93% in the classification of five imagined words. To minimise within-class temporal variations, a novel feature extraction framework based on dynamic time warping (DTW) was developed. Using linear discriminant analysis as the classifier, the proposed framework yielded an average 72.02% accuracy in the classification of imagined speech versus silence and 52.5% accuracy in the classification of five words. These results significantly outperformed a baseline configuration of state-of-the-art time-domain features.
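The DTW underlying the proposed feature extraction framework can be sketched with the classic dynamic-programming recurrence. This toy example (pure NumPy, illustrative sequences, not the thesis's actual feature pipeline) shows why DTW tolerates the temporal misalignment that inflates pointwise distances:

```python
import numpy as np

def dtw_distance(a, b):
    # Classic dynamic-programming DTW between two 1-D sequences:
    # D[i, j] = local cost + min(insert, delete, match).
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

# A phase-shifted copy of a template is far under pointwise (L1) distance
# but close under DTW, which is the motivation for DTW-based features.
template = np.sin(np.linspace(0, 2 * np.pi, 50))
shifted = np.sin(np.linspace(0, 2 * np.pi, 50) + 0.6)
print(dtw_distance(template, shifted) < np.abs(template - shifted).sum())
```

In a framework like the one described, DTW alignment (against a class template or between trials) removes within-session timing jitter before feature extraction, so that a downstream linear classifier sees temporally normalized inputs.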

    Detecting Command-Driven Brain Activity in Patients with Disorders of Consciousness Using TR-fNIRS

    Vegetative state (VS) is a disorder of consciousness often referred to as “wakefulness without awareness”. Patients in this condition experience normal sleep-wake cycles but lack all awareness of themselves and their surroundings. Clinically, assessing consciousness relies on behavioural tests to determine a patient's ability to follow commands. This subjective approach often leads to a high rate of misdiagnosis (~40%), in which patients who retain residual awareness are misdiagnosed as being in a VS. Recently, functional neuroimaging techniques, such as functional magnetic resonance imaging (fMRI), have allowed researchers to use command-driven brain activity to infer consciousness. Although promising, the cost and accessibility of fMRI hinder its use for frequent examinations. Functional near-infrared spectroscopy (fNIRS) is an emerging optical technology that is a promising alternative to fMRI. The technology is safe, portable and inexpensive, allowing for true bedside assessment of brain function. This thesis focuses on using time-resolved (TR) fNIRS, a variant of fNIRS with enhanced sensitivity to the brain, to detect brain function in healthy controls and patients with disorders of consciousness (DOC). Motor imagery (MI) was used to assess command-driven brain activity since this task has been extensively validated with fMRI. The feasibility of TR-fNIRS to detect MI activity was first assessed in healthy controls, with fMRI used for validation. The results revealed excellent agreement between the two techniques, with an overall sensitivity of 93% in comparison to fMRI. Following these promising results, TR-fNIRS was used for rudimentary mental communication, using MI as affirmation to questions. Testing this approach on healthy controls revealed an overall accuracy of 76%. More interestingly, the same approach was used to communicate with a locked-in patient under intensive care.
The patient had residual eye movement, which provided a unique opportunity to confirm the fNIRS results. The TR-fNIRS results were in full agreement with the eye responses, demonstrating for the first time the ability of fNIRS to communicate with a patient without prior training. Finally, this approach was used to assess awareness in DOC patients, revealing residual brain function in two patients who had also previously shown significant MI activity with fMRI.
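The MI-as-affirmation scheme described above reduces, at its core, to a decision rule: if task-evoked activation during a question window exceeds a baseline-derived threshold, the answer is read as "yes". The sketch below is a minimal illustration, not the thesis's actual analysis; the signal model, window lengths and threshold are assumptions:

```python
import numpy as np

def answer_from_trial(baseline, question, n_std=3.0):
    # Affirmative if the question-window mean exceeds the baseline mean
    # by n_std standard errors of the baseline mean.
    se = baseline.std() / np.sqrt(baseline.size)
    return "yes" if question.mean() > baseline.mean() + n_std * se else "no"

rng = np.random.default_rng(42)
baseline = rng.normal(0.0, 1.0, 300)  # rest-period, HbO-like samples
mi_trial = baseline + 1.5             # clear task-evoked activation shift
print(answer_from_trial(baseline, mi_trial))   # yes: MI was performed
print(answer_from_trial(baseline, baseline))   # no: baseline-like window
```

A real pipeline would add haemodynamic modelling, channel selection over motor cortex, and repetition of each question across trials before committing to an answer.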