30 research outputs found

    Block Sparse Compressed Sensing of Electroencephalogram (EEG) Signals by Exploiting Linear and Non-Linear Dependencies

    This paper proposes a compressive sensing (CS) method for multi-channel electroencephalogram (EEG) signals in Wireless Body Area Network (WBAN) applications, where the battery life of sensors is limited. For the single EEG channel case, known as the single measurement vector (SMV) problem, the Block Sparse Bayesian Learning with Bound Optimization (BSBL-BO) method has been shown to yield good results. This method exploits the block sparsity and the intra-correlation (i.e., the linear dependency) within the measurement vector of a single channel. For the multi-channel case, known as the multiple measurement vector (MMV) problem, the Spatio-Temporal Sparse Bayesian Learning (STSBL-EM) method has been proposed. This method learns the joint correlation structure in the multi-channel signals by whitening the model in the temporal and the spatial domains. Our proposed method represents the multi-channel signal data as a vector that is constructed in a specific way, so that it has a better block sparsity structure than the conventional representation obtained by stacking the measurement vectors of the different channels. To reconstruct the multi-channel EEG signals, we modify the parameters of the BSBL-BO algorithm so that it can exploit not only the linear but also the non-linear dependency structures in a vector. The modified BSBL-BO is then applied to the vector with the better sparsity structure. The proposed method is shown to significantly outperform existing SMV and MMV methods, achieving significantly lower compression errors on three different datasets, even at high compression ratios such as 10:1.
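
    The key step above is representational: instead of concatenating whole channels, the samples taken at the same instant in every channel are grouped together, so that inter-channel correlation shows up as contiguous blocks in one long vector. The sketch below illustrates this interleaving and a compressive measurement step; the channel count, compression ratio, and sparse binary sensing matrix are illustrative assumptions rather than the paper's exact configuration, and reconstruction itself would be done with the modified BSBL-BO solver.

```python
# Minimal sketch of the interleaved-vector construction and a CS measurement.
# All sizes and the sparse binary sensing matrix are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_channels, n_samples = 4, 256          # X holds one EEG epoch per channel
X = rng.standard_normal((n_channels, n_samples))

# Conventional MMV stacking: concatenates whole channels end to end.
x_stacked = X.reshape(-1)

# Proposed-style interleaving: group the t-th sample of every channel
# together, so inter-channel correlation yields contiguous (block) structure.
x_interleaved = X.T.reshape(-1)

# Sparse binary sensing matrix (a common low-cost choice for WBAN encoders).
m = n_samples // 4                       # 4:1 compression per channel
Phi = (rng.random((m * n_channels, n_channels * n_samples)) < 0.03).astype(float)

y = Phi @ x_interleaved                  # compressed measurements to transmit
print(y.shape)                           # (m * n_channels,)
```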

    A New Orientation-Adaptive Interpolation Method


    Toward Generating More Diagnostic Features from Photoplethysmogram Waveforms

    Photoplethysmogram (PPG) signals collected using a pulse oximeter are increasingly being used for screening and diagnosis purposes. Because of the non-invasive, cost-effective, and easy-to-use nature of the pulse oximeter, clinicians and biomedical engineers are investigating how PPG signals can help in the management of many medical conditions, especially for global health applications. The study of PPG signal analysis is relatively new compared to research in electrocardiogram signals, for instance; however, given the signal's vast potential, we anticipate that in the near future blood pressure, cardiac output, and other clinical parameters will be measured from wearable devices that collect PPG signals. This article attempts to organize and standardize the names of PPG waveforms to ensure consistent terminology, thereby helping the rapid development of this research area, decreasing the disconnect within and among different disciplines, and increasing the number of features generated from PPG waveforms.
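
    Purely as an illustration of the kind of waveform landmarks whose naming the article standardizes, the sketch below locates a systolic peak and a dicrotic notch on a synthetic PPG beat; the waveform shape and detection thresholds are demonstration assumptions, not material from the article.

```python
# Illustrative only: detecting two commonly named PPG fiducial points
# (systolic peak, dicrotic notch) on a synthetic single beat.
import numpy as np
from scipy.signal import find_peaks

fs = 125                                  # Hz, a typical pulse-oximeter rate
t = np.arange(0, 1.0, 1 / fs)
# Synthetic beat: a systolic wave plus a smaller, delayed diastolic wave.
beat = np.exp(-((t - 0.20) / 0.06) ** 2) + 0.4 * np.exp(-((t - 0.45) / 0.08) ** 2)

peaks, _ = find_peaks(beat, prominence=0.05)
systolic_idx = peaks[np.argmax(beat[peaks])]      # tallest peak = systolic

# Dicrotic notch: the local minimum between systolic and diastolic peaks.
notches, _ = find_peaks(-beat[systolic_idx:])
notch_idx = systolic_idx + notches[0]

print(f"systolic peak at {t[systolic_idx]:.2f}s, notch at {t[notch_idx]:.2f}s")
```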

    Application of a hybrid wavelet feature selection method in the design of a self-paced brain interface system

    Background: Recently, successful applications of the discrete wavelet transform have been reported in brain interface (BI) systems with one or two EEG channels. For a multi-channel BI system, however, the high dimensionality of the generated wavelet feature space poses a challenging problem. Methods: In this paper, a feature selection method that effectively reduces the dimensionality of the feature space of a multi-channel, self-paced BI system is proposed. The proposed method uses a two-stage feature selection scheme to select the most suitable movement-related potential features from the feature space. The first stage employs mutual information to filter out the least discriminant features, resulting in a reduced feature space. Then a genetic algorithm is applied to the reduced feature space to further reduce its dimensionality and select the best set of features. Results: An offline analysis of the EEG signals (18 bipolar EEG channels) of four able-bodied subjects showed that the proposed method achieves low false positive rates at a reasonably high true positive rate. The results also show that the features selected from different channels varied considerably from one subject to another. Conclusion: The proposed hybrid method effectively reduces the high dimensionality of the feature space. The variability in features among subjects indicates that a user-customized BI system needs to be developed for individual users.
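
    A minimal sketch of such a two-stage scheme is given below: a mutual information filter followed by a small genetic algorithm over binary feature masks. The population size, generation count, classifier, and fitness function are illustrative assumptions, not the parameters used in the paper.

```python
# Hedged sketch of a two-stage feature selection: MI filter, then a small GA.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 500))            # 200 epochs x 500 wavelet features
y = (X[:, :5].sum(axis=1) > 0).astype(int)     # toy labels driven by 5 features

# Stage 1: mutual information filter keeps the top-k features.
mi = mutual_info_classif(X, y, random_state=0)
keep = np.argsort(mi)[-50:]
X_f = X[:, keep]

# Stage 2: genetic algorithm over binary masks of the reduced feature set.
def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X_f[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.random((20, X_f.shape[1])) < 0.3     # 20 random initial masks
for _ in range(10):                            # 10 generations
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]    # truncation selection
    cut = rng.integers(1, X_f.shape[1], size=10)
    children = np.array([np.concatenate([parents[i % 10][:c],
                                         parents[(i + 1) % 10][c:]])
                         for i, c in enumerate(cut)])  # one-point crossover
    children ^= rng.random(children.shape) < 0.02      # bit-flip mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```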

    Multi-Channel Vision Transformer for Epileptic Seizure Prediction

    Epilepsy is a neurological disorder that causes recurrent seizures and sometimes loss of awareness. Around 30% of epileptic patients continue to have seizures despite taking anti-seizure medication. The ability to predict the future occurrence of seizures would enable patients to take precautions against probable injuries and to administer timely treatment to abort or control impending seizures. In this study, we introduce a Transformer-based approach called the Multi-channel Vision Transformer (MViT) for automated and simultaneous learning of the spatio-temporal-spectral features in multi-channel EEG data. The continuous wavelet transform, a simple yet efficient pre-processing approach, is first used to turn the time-series EEG signals into image-like time-frequency representations called scalograms. Each scalogram is split into a sequence of fixed-size non-overlapping patches, which are then fed as inputs to the MViT for EEG classification. Extensive experiments on three benchmark EEG datasets demonstrate the superiority of the proposed MViT algorithm over state-of-the-art seizure prediction methods, achieving an average prediction sensitivity of 99.80% for surface EEG and 90.28–91.15% for invasive EEG data.
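
    The pre-processing pipeline described above can be sketched as follows: a continuous wavelet transform turns one EEG channel into a scalogram, which is then split into fixed-size non-overlapping patches that serve as Transformer tokens. The wavelet ('morl' via pywt), the scale range, and the 16x16 patch size are assumptions for illustration, not the paper's exact settings.

```python
# Sketch of the scalogram-and-patch pipeline on one synthetic EEG channel.
import numpy as np
import pywt

fs = 256                                      # Hz, assumed EEG sampling rate
sig = np.random.default_rng(0).standard_normal(fs * 2)   # one 2-s EEG segment

scales = np.arange(1, 65)
coeffs, _ = pywt.cwt(sig, scales, 'morl', sampling_period=1 / fs)
scalogram = np.abs(coeffs)                    # (64 scales, 512 time points)

# Split into fixed-size non-overlapping patches, then flatten each patch
# into one token for the Transformer.
p = 16
patches = (scalogram.reshape(scalogram.shape[0] // p, p,
                             scalogram.shape[1] // p, p)
           .transpose(0, 2, 1, 3)
           .reshape(-1, p * p))
print(patches.shape)                          # (128, 256): 128 tokens of dim 256
```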

    A sinogram denoising algorithm for low-dose computed tomography

    Background: From the viewpoint of the patients' health, reducing the radiation dose in computed tomography (CT) is highly desirable. However, projection measurements acquired under low-dose conditions contain substantial noise. Therefore, reconstruction of high-quality images from low-dose scans requires effective denoising of the projection measurements. Methods: We propose a denoising algorithm that is based on maximizing the data likelihood and sparsity in the gradient domain. For Poisson noise, this formulation automatically leads to a locally adaptive denoising scheme. Because the resulting optimization problem is hard to solve and may also lead to artifacts, we suggest an explicitly local denoising method by adapting an existing algorithm for normally-distributed noise. We apply the proposed method on sets of simulated and real cone-beam projections and compare its performance with two other algorithms. Results: The proposed algorithm effectively suppresses the noise in simulated and real CT projections. Denoising of the projections with the proposed algorithm leads to a substantial improvement of the reconstructed image in terms of noise level, spatial resolution, and visual quality. Conclusion: The proposed algorithm can suppress very strong quantum noise in CT projections. Therefore, it can be used as an effective tool in low-dose CT.
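
    A simplified sketch of the stated objective, Poisson data likelihood plus sparsity of the image gradient, is shown below as plain gradient descent on a smoothed total-variation penalty. This is only a schematic of the formulation; the paper's actual algorithm is a locally adaptive scheme, and the step size, penalty weight, and periodic boundary handling here are assumptions.

```python
# Schematic: minimize sum(x - y*log x) + lam * smoothed-TV(x) for Poisson
# counts y, via plain gradient descent. Parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
clean = np.ones((64, 64)) * 200.0
clean[16:48, 16:48] = 80.0                     # piecewise-constant "sinogram"
y = rng.poisson(clean).astype(float)           # noisy low-dose counts

lam, eps, step = 0.5, 1e-3, 0.002
x = y.copy().clip(min=1.0)

def grad_tv(u):
    # Gradient of sum(sqrt(|grad u|^2 + eps)); periodic boundaries for brevity.
    dx = np.diff(u, axis=0, append=u[-1:, :])
    dy = np.diff(u, axis=1, append=u[:, -1:])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    px, py = dx / mag, dy / mag
    div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
    return -div

for _ in range(300):
    g = (1.0 - y / x) + lam * grad_tv(x)       # d/dx [x - y log x] = 1 - y/x
    x = np.clip(x - step * (x * g), 1.0, None) # x-scaled step; clip keeps x > 0

print(float(np.abs(x - clean).mean()))         # mean error vs. ground truth
```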

    Hypertension Assessment via ECG and PPG Signals: An Evaluation Using MIMIC Database

    Cardiovascular diseases (CVDs) have become the biggest threat to human health, and they are accelerated by hypertension. The best way to avoid the many complications of CVDs is to manage and prevent hypertension at an early stage. However, most types of hypertension, especially prehypertension, present no symptoms at all, and the awareness and control rates of hypertension are extremely low. In this study, a novel hypertension management method based on arterial wave propagation theory and photoplethysmography (PPG) morphological theory was investigated to explore the physiological changes associated with different blood pressure (BP) levels. Pulse Arrival Time (PAT) and PPG features were extracted from electrocardiogram (ECG) and PPG signals to represent the arterial wave propagation theory and the PPG morphological theory, respectively. Three feature sets, one containing PAT only, one containing PPG features only, and one containing both PAT and PPG features, were used to classify the different BP categories, defined as normotension, prehypertension, and hypertension. PPG features were shown to classify BP categories more accurately than PAT. Furthermore, the combined PAT and PPG features improved the BP classification performance. The F1 scores reached 84.34% for classifying normotension versus prehypertension, 94.84% for normotension versus hypertension, and 88.49% for normotension plus prehypertension versus hypertension. This indicates that the simultaneous collection of ECG and PPG signals could support hypertension detection.
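
    The PAT feature named above is the delay from each ECG R-peak to a landmark (here the foot) of the following PPG pulse. The sketch below computes it on synthetic signals; the sampling rate, detection thresholds, and the fixed ECG-to-PPG delay are illustrative assumptions.

```python
# Hedged sketch: PAT as the R-peak-to-PPG-foot delay, on synthetic signals.
import numpy as np
from scipy.signal import find_peaks

fs = 125                                           # Hz (MIMIC waveform rate)
t = np.arange(0, 10, 1 / fs)
hr = 1.2                                           # ~72 bpm
ecg = np.sin(2 * np.pi * hr * t) ** 63             # sharp R-like spikes
ppg = np.sin(2 * np.pi * hr * t - 4.0)             # pulse delayed vs. ECG

r_peaks, _ = find_peaks(ecg, height=0.8, distance=int(0.5 * fs))

pats = []
for r in r_peaks:
    window = ppg[r:r + int(0.4 * fs)]              # search 400 ms after each R
    if len(window):
        pats.append(np.argmin(window) / fs)        # foot = local minimum
print(f"mean PAT: {np.mean(pats) * 1000:.0f} ms")
```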

    Photoplethysmography and Deep Learning: Enhancing Hypertension Risk Stratification

    Blood pressure is a basic physiological parameter in the cardiovascular circulatory system. Long-term abnormal blood pressure leads to various cardiovascular diseases, making the early detection and assessment of hypertension profoundly significant for the prevention and treatment of cardiovascular diseases. In this paper, we investigate whether or not deep learning can provide better results for hypertension risk stratification than classical signal processing and feature extraction methods. We tested a deep learning method for the classification and evaluation of hypertension using photoplethysmography (PPG) signals, based on the continuous wavelet transform (using the Morse wavelet) and a pretrained convolutional neural network (GoogLeNet). We collected 121 data recordings from the Multiparameter Intelligent Monitoring in Intensive Care (MIMIC) Database, each containing arterial blood pressure (ABP) and PPG signals. The ABP signals were used to extract blood pressure category labels, and the PPG signals were used to train and test the model. According to the seventh report of the Joint National Committee, blood pressure levels are categorized as normotension (NT), prehypertension (PHT), and hypertension (HT). For the early diagnosis and assessment of HT, the timely detection of PHT and the accurate diagnosis of HT are significant. Therefore, three HT classification trials were set up: NT vs. PHT, NT vs. HT, and (NT + PHT) vs. HT. The F-scores of these three classification trials were 80.52%, 92.55%, and 82.95%, respectively. The tested deep learning method achieved higher accuracy for hypertension risk stratification than the classical signal processing and feature extraction method, and achieved comparable results to another approach that requires both electrocardiogram and PPG signals.
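
    A PyTorch analogue of the transfer-learning step described above is sketched below: a pretrained GoogLeNet has its classifier head replaced for the three blood pressure classes and takes one training step on dummy scalogram-like inputs. The optimizer, learning rate, and inputs are assumptions for illustration and do not reproduce the paper's training setup.

```python
# Hedged sketch: fine-tuning a pretrained GoogLeNet for NT / PHT / HT.
import torch
import torch.nn as nn
from torchvision.models import googlenet, GoogLeNet_Weights

model = googlenet(weights=GoogLeNet_Weights.IMAGENET1K_V1)
model.aux_logits = False                        # drop auxiliary classifiers
model.aux1 = None
model.aux2 = None
model.fc = nn.Linear(model.fc.in_features, 3)   # new head: NT / PHT / HT

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of scalogram "images"
# (3 x 224 x 224 tensors that a CWT pipeline would produce from PPG segments).
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, 3, (8,))

model.train()
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```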