3 research outputs found

    Multimodal spatio-temporal-spectral fusion for deep learning applications in physiological time series processing: a case study in monitoring the depth of anesthesia

    No full text
    Abstract Physiological signal processing brings challenges including dimensionality (due to the number of channels), heterogeneity (due to the different ranges of values) and multimodality (due to the different sources). In this regard, the current study intended, first, to use time-frequency ridge mapping to explore the use of fused information from joint EEG-ECG recordings in tracking the transition between different states of anesthesia. Second, it investigated the effectiveness of pre-trained state-of-the-art deep learning architectures for learning discriminative features in the fused data in order to classify the states during anesthesia. Experimental data from patients with healthy brains undergoing surgery (N = 20) were used for this study. Data were recorded from the BrainStatus device with a single ECG channel and 10 EEG channels. The obtained results support the hypothesis that not only can ridge fusion capture temporal-spectral progression patterns across all modalities and channels, but this simplified interpretation of the time-frequency representation also accelerates the training process while significantly improving the efficiency of deep models. Classification outcomes demonstrate that this fusion yields better performance, in terms of 94.14% precision and a 0.28 s prediction time, compared to commonly used data-level fusion methods. To conclude, the proposed fusion technique makes it possible to embed time-frequency information as well as spatial dependencies over modalities and channels in just a 2D array. This integration technique shows significant benefit in obtaining a more unified and global view of different aspects of the physiological data at hand while maintaining the desired performance level in decision making.
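    The core of the ridge-mapping idea above — collapsing each channel's time-frequency representation to its dominant-frequency trajectory and stacking the trajectories from all channels into a single 2D array — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name, spectrogram settings, and peak-picking rule are assumptions:

    ```python
    import numpy as np
    from scipy.signal import spectrogram

    def ridge_map(channels, fs):
        """Stack per-channel time-frequency ridges into one 2D array.

        channels: (n_channels, n_samples) array of EEG/ECG signals.
        Returns an (n_channels, n_frames) array in which each row is the
        dominant-frequency trajectory (the "ridge") of one channel,
        taken here simply as the peak-energy frequency per time frame.
        """
        ridges = []
        for x in channels:
            f, t, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=128)
            ridges.append(f[np.argmax(Sxx, axis=0)])  # peak frequency per frame
        return np.vstack(ridges)
    ```

    The resulting array is a compact image-like input, which is consistent with the abstract's point that pre-trained 2D deep architectures can be applied directly to the fused representation.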

    Reconstruction of missing channel in electroencephalogram using spatiotemporal correlation-based averaging

    No full text
    Abstract Objective: Electroencephalogram (EEG) recordings often contain large segments with missing signals due to poor electrode contact or other artifact contamination. Recovering missing values, contaminated segments and lost channels could be highly beneficial, especially for automatic classification algorithms, such as machine/deep learning models, whose performance relies heavily on high-quality data. The current study proposes a new method for recovering missing segments in EEG. Approach: In the proposed method, the reconstructed segment is estimated by substituting the missing part of the signal with the normalized weighted sum of the other channels. The weighting is based on the inter-channel correlation of the non-missing preceding and succeeding temporal windows. The algorithm was designed to be computationally efficient. Experimental data from patients (N = 20) undergoing general anesthesia for elective surgery were used for the validation of the algorithm. The data were recorded using a portable EEG device with ten channels and a self-adhesive frontal electrode during induction of anesthesia with propofol, from the waking state until the burst suppression level, and contain substantial variation in both amplitude and frequency properties. The proposed imputation technique was compared with another simple-structure technique. Distance correlation (DC) was used as the evaluation measure. Main results: The proposed method, with an average DC of 82.48 ± 10.01 (µ ± σ)%, outperformed its competitor with an average DC of 67.89 ± 14.12 (µ ± σ)%. The algorithm also performed better as the number of missing channels increased. Significance: The proposed technique provides an easy-to-implement and computationally efficient approach for the reliable reconstruction of missing or contaminated EEG segments.
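    The approach described above — filling a gap with a normalized, correlation-weighted sum of the other channels, with weights taken from the windows just before and after the gap — can be sketched as below. The exact normalization and amplitude-matching details are assumptions; only the overall scheme comes from the abstract:

    ```python
    import numpy as np

    def reconstruct_missing(data, ch, start, stop, win=500):
        """Estimate data[ch, start:stop] from the other channels.

        data: (n_channels, n_samples) EEG array; ch: missing channel index.
        Each donor channel is weighted by the absolute correlation with
        channel `ch` over the non-missing windows flanking the gap, and
        rescaled to the missing channel's amplitude (a hypothetical
        normalization choice, not necessarily the paper's).
        """
        pre = slice(max(0, start - win), start)
        post = slice(stop, min(data.shape[1], stop + win))
        ref = np.concatenate([data[ch, pre], data[ch, post]])
        weights = np.zeros(data.shape[0])
        filled = np.zeros(stop - start)
        for j in range(data.shape[0]):
            if j == ch:
                continue
            other = np.concatenate([data[j, pre], data[j, post]])
            w = abs(np.corrcoef(ref, other)[0, 1])
            seg = data[j, start:stop]
            scale = ref.std() / (other.std() + 1e-12)  # amplitude matching
            filled += w * scale * (seg - seg.mean())
            weights[j] = w
        return filled / (weights.sum() + 1e-12) + ref.mean()
    ```

    Because the weights are computed from two short flanking windows and the fill is a single weighted average, the cost stays linear in the gap length, which matches the abstract's emphasis on computational efficiency.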

    Morphology-preserving reconstruction of time series with missing data for enhancing deep learning-based classification

    No full text
    Abstract Overfitting is a growing concern for deep learning-based decoding methods used with biomedical time series. On small datasets, particularly those that rely mainly on subject-specific analyses, these decoding techniques fit too closely to the available data and may consequently be unable to generalize well to future observations. Given this overfitting issue, expanding datasets without introducing extra noise or losing important information is in high demand. To this end, this work introduces the novel idea of using delay-embedding-based nonlinear principal component analysis (DE-NLPCA) to generate synthetic time series. The idea was inspired by extracting a topological representation of the input space through unsupervised learning, which can benefit the augmentation of biomedical time series, as these tend to be high-dimensional and morphologically complex. Different types of time series with different temporal complexity were used for evaluation. One was an open dataset of activities of daily living (ADL), collected from 10 healthy participants performing 186 ADL-related instances of activity while wearing 9-axis Inertial Measurement Units. The other was experimental data from patients with healthy brains undergoing surgery (N = 20), recorded from the BrainStatus device with 10 EEG channels. Under leave-one-subject-out cross-validation, an increase of up to 14.72% in classification performance (in terms of accuracy) was observed on the anesthesia dataset when DE-NLPCA-based augmented data were introduced during training. Classification performance also improved more with the DE-NLPCA-based technique than with augmentation using a conditional generative adversarial network (CGAN). The DE-NLPCA-based approach was also shown to be able to recover time–frequency characteristics of contaminated signals.
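    The delay-embedding pipeline underlying DE-NLPCA — embed the series into a trajectory matrix, compress it to a few latent components, reconstruct, and un-embed — can be sketched with plain (linear) PCA standing in for the nonlinear autoencoder stage. This is a simplified stand-in for illustration, not the paper's method; the diagonal-averaging step is borrowed from singular spectrum analysis:

    ```python
    import numpy as np

    def delay_embed(x, dim, tau=1):
        """Trajectory matrix: row i = [x[i], x[i+tau], ..., x[i+(dim-1)*tau]]."""
        n = len(x) - (dim - 1) * tau
        return np.stack([x[i:i + n] for i in range(0, dim * tau, tau)], axis=1)

    def de_pca_resynthesize(x, dim=20, k=3):
        """Embed, project onto the top-k principal components, reconstruct,
        and un-embed by diagonal averaging. Linear PCA here is a stand-in
        for the nonlinear (autoencoder) stage of DE-NLPCA."""
        X = delay_embed(x, dim)
        mu = X.mean(axis=0)
        # principal directions via SVD of the centered trajectory matrix
        U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
        Xr = (U[:, :k] * s[:k]) @ Vt[:k] + mu
        # diagonal averaging back to a 1-D series
        out = np.zeros(len(x))
        cnt = np.zeros(len(x))
        for j in range(dim):
            out[j:j + Xr.shape[0]] += Xr[:, j]
            cnt[j:j + Xr.shape[0]] += 1
        return out / cnt
    ```

    Resynthesizing from a low-dimensional latent space preserves the dominant waveform morphology while discarding incoherent detail, which is the property the abstract exploits both for augmentation and for recovering the time–frequency content of contaminated signals.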