21 research outputs found

    Electrochemical Biosensing and Deep Learning-Based Approaches in the Diagnosis of COVID-19: A Review

    COVID-19, caused by transmission of the SARS-CoV-2 virus, has taken a huge toll on global health, causing life-threatening medical complications and elevated mortality rates, especially among older adults and people with existing morbidities. Current evidence suggests that the virus spreads primarily through respiratory droplets emitted by infected persons when breathing, coughing, sneezing, or speaking. These droplets can reach another person through the mouth, nose, or eyes, resulting in infection. The "gold standard" for clinical diagnosis of SARS-CoV-2 is the laboratory-based nucleic acid amplification test, which includes the reverse transcription-polymerase chain reaction (RT-PCR) test on nasopharyngeal swab samples. The main concerns with this type of test are the relatively high cost, long processing time, and considerable false-positive or false-negative rates. Alternative approaches have been suggested for detecting the SARS-CoV-2 virus so that infected people and their contacts can be quickly isolated to break the transmission chains and, hopefully, control the pandemic. These alternative approaches include electrochemical biosensing and deep learning. In this review, we discuss the current state-of-the-art technology used in both fields for public health surveillance of SARS-CoV-2 and present a comparison of the two methods in terms of cost, sampling, timing, accuracy, instrument complexity, global accessibility, feasibility, and adaptability to mutations. Finally, we discuss open issues and potential future research directions for detecting the SARS-CoV-2 virus using electrochemical biosensing and deep learning.

    An efficient compression of ECG signals using deep convolutional autoencoders

    Background and objective: Advances in information technology have facilitated the retrieval and processing of biomedical data. Especially with wearable technologies and mobile platforms, we are able to follow our healthcare data, such as electrocardiograms (ECG), in real time. However, the hardware resources of these technologies are limited. For this reason, the optimal storage and safe transmission of personal health data are critical. This study proposes a new deep convolutional autoencoder (CAE) model for compressing ECG signals. Methods: In this paper, a deep network structure of 27 layers consisting of encoder and decoder parts is designed. In the encoder section of this model, the signals are reduced to low-dimensional vectors; in the decoder section, the signals are reconstructed. The deep learning approach provides representations of the low and high levels of signals in the hidden layers of the model. Hence, the original signal can be reconstructed with minimal loss. Unlike traditional linear transformation methods, a deep compression approach can learn to adapt to different ECG records automatically. Results: The performance was evaluated on an experimental data set comprising 4800 ECG fragments from 48 unique clinical patients. The compression rate (CR) of the proposed model was 32.25, and the average PRD value was 2.73%. These favourable observations suggest that our deep model can allow secure data transfer in a low-dimensional form to remote medical centers. We present an effective compression approach that can potentially be used in wearable devices, e-health applications, telemetry, and Holter systems.
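The two figures of merit above, compression ratio (CR) and percent root-mean-square difference (PRD), are standard in ECG compression work. A minimal numpy sketch of both metrics (the function names and the synthetic test signal are ours, not the authors'):

```python
import numpy as np

def compression_ratio(original_size: int, compressed_size: int) -> float:
    """CR: how many times smaller the compressed representation is."""
    return original_size / compressed_size

def prd(x: np.ndarray, x_rec: np.ndarray) -> float:
    """Percent root-mean-square difference between a signal and its
    reconstruction (lower is better)."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

# Toy example: a synthetic "ECG" fragment with a small reconstruction error.
t = np.linspace(0, 1, 512)
x = np.sin(2 * np.pi * 5 * t)
x_rec = x + 0.01 * np.cos(2 * np.pi * 50 * t)  # small high-frequency error
print(compression_ratio(3225, 100), round(prd(x, x_rec), 3))
```

A perfect reconstruction gives PRD = 0; the paper's reported 2.73% average indicates near-lossless recovery at a 32.25x size reduction.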

    A Deep Learning Model for Automated Sleep Stages Classification Using PSG Signals

    Sleep disorder is a symptom of many neurological diseases that may significantly affect the quality of daily life. Traditional methods are time-consuming and involve the manual scoring of polysomnogram (PSG) signals obtained in a laboratory environment. However, the automated monitoring of sleep stages can also help detect neurological disorders accurately. In this study, a flexible deep learning model is proposed using raw PSG signals. A one-dimensional convolutional neural network (1D-CNN) is developed using electroencephalogram (EEG) and electrooculogram (EOG) signals for the classification of sleep stages. The performance of the system is evaluated using two public databases (sleep-edf and sleep-edfx). The developed model yielded the highest accuracies of 98.06%, 94.64%, 92.36%, 91.22%, and 91.00% for two to six sleep classes, respectively, using the sleep-edf database. Further, the proposed model obtained the highest accuracies of 97.62%, 94.34%, 92.33%, 90.98%, and 89.54%, respectively, for the same two to six sleep classes using the sleep-edfx dataset. The developed deep learning model is ready for clinical use and can be tested with big PSG data.
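A 1D-CNN like the one described slides learned filters over raw signal epochs and downsamples the result. A minimal numpy sketch of the core building blocks (convolution, ReLU, max pooling); the filter values and sizes here are illustrative stand-ins, not the published architecture:

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1D convolution: slide the kernel along the signal."""
    n = len(signal) - len(kernel) + 1
    return np.array([np.dot(signal[i:i + len(kernel)], kernel) for i in range(n)])

def relu(x):
    return np.maximum(x, 0)

def max_pool1d(x, size):
    """Non-overlapping max pooling; trims any leftover samples."""
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(axis=1)

# A 30 s epoch at 100 Hz (sleep-edf EEG sampling rate) -> 3000 samples.
rng = np.random.default_rng(0)
epoch = rng.standard_normal(3000)
kernel = rng.standard_normal(50)   # one "learned" filter (random stand-in)
feature_map = max_pool1d(relu(conv1d(epoch, kernel)), size=8)
print(feature_map.shape)
```

In the real model, many such filters are stacked in successive layers and trained end to end, with a final dense layer mapping the pooled features to the sleep-stage classes.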

    A new measure for community structures through indirect social connections

    Brain disorders may cause loss of critical functions such as thinking, speech, and movement, so the early detection of brain diseases can help patients obtain timely and appropriate treatment. One of the conventional methods used to diagnose these disorders is the magnetic resonance imaging (MRI) technique. Manual diagnosis of brain abnormalities is time-consuming, and it is difficult to perceive the minute changes in MRI images, especially in the early stages of abnormalities. Proper selection of features and classifiers to obtain the highest performance is a challenging task. Hence, deep learning models have been widely used for medical image analysis over the past few years. In this study, we employed the AlexNet, Vgg-16, ResNet-18, ResNet-34, and ResNet-50 pre-trained models to automatically classify MR images into normal, cerebrovascular, neoplastic, degenerative, and inflammatory disease classes, and compared the classification performance of these state-of-the-art pre-trained architectures. We obtained the best classification accuracy of 95.23% ± 0.6 with the ResNet-50 model among the five pre-trained models. Our model is ready to be tested with large MRI datasets of brain abnormalities. The outcome of the model will also help clinicians to validate their findings after manual reading of the MRI images.
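Fine-tuning a pre-trained network typically means replacing its final fully connected layer with a new classification head for the target classes; ResNet-50's penultimate layer yields a 2048-dimensional feature vector per image. A numpy sketch of a five-class softmax head over such features (the weights here are random stand-ins for the fine-tuned ones, and the feature vector is synthetic):

```python
import numpy as np

CLASSES = ["normal", "cerebrovascular", "neoplastic", "degenerative", "inflammatory"]

def softmax(z):
    z = z - z.max()            # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(42)
features = rng.standard_normal(2048)        # ResNet-50 penultimate features (synthetic)
W = rng.standard_normal((5, 2048)) * 0.01   # new 5-class head (random stand-in)
b = np.zeros(5)

probs = softmax(W @ features + b)
print(CLASSES[int(np.argmax(probs))])
```

During fine-tuning, only this head (and optionally the later convolutional blocks) is trained on the MR images, which is why pre-trained models work well even with modest medical datasets.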

    An Automated Wavelet-Based Sleep Scoring Model Using EEG, EMG, and EOG Signals with More Than 8000 Subjects

    Human life necessitates high-quality sleep. However, humans suffer from a lower quality of life because of sleep disorders. The identification of sleep stages is necessary to predict the quality of sleep. Manual sleep-stage scoring is frequently conducted through sleep experts' visual evaluation of a patient's neurophysiological data, gathered in sleep laboratories. Manually scoring sleep is a tough, time-intensive, tiresome, and highly subjective activity. Hence, the need for automatic sleep-stage classification has risen due to the limitations imposed by manual sleep-stage scoring methods. In this study, a novel machine learning model is developed using dual-channel unipolar electroencephalogram (EEG), chin electromyogram (EMG), and dual-channel electrooculogram (EOG) signals. Using an optimum orthogonal filter bank, sub-bands are obtained by decomposing 30 s epochs of signals. Tsallis entropies are then calculated from the coefficients of these sub-bands. These features are then fed to an ensemble bagged tree (EBT) classifier for automated sleep classification. We developed our automated sleep classification model using the Sleep Heart Health Study (SHHS) database, which consists of two parts, SHHS-1 and SHHS-2, containing more than 8,455 subjects with more than 75,000 h of recordings. The proposed model separated three classes of sleep: rapid eye movement (REM), non-REM, and wake, with classification accuracies of 90.70% and 91.80% using the SHHS-1 and SHHS-2 datasets, respectively. For the five-class problem (wake, N1, N2, N3, and REM), the model produced classification accuracies of 84.3% and 86.3% on the SHHS-1 and SHHS-2 databases, respectively. The model achieved Cohen's kappa (κ) coefficients of 0.838 with SHHS-1 and 0.86 with SHHS-2 for the three-class problem, and κ of 0.7746 for SHHS-1 and 0.8007 for SHHS-2 in the five-class task.
    The proposed model outperforms the best existing methods. Moreover, it has been developed to classify sleep stages for both good sleepers and patients suffering from sleep disorders. Thus, the proposed wavelet Tsallis entropy-based model is robust and accurate and may help clinicians comprehend and interpret sleep stages efficiently.
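The features above are Tsallis entropies of wavelet sub-band coefficients. Tsallis entropy generalizes Shannon entropy with an order parameter q: S_q = (1 − Σ p_i^q)/(q − 1). A numpy sketch estimating it from a histogram of coefficients (the sub-band here is synthetic, and the histogram bin count and q value are our illustrative choices; the abstract does not specify the authors' settings):

```python
import numpy as np

def tsallis_entropy(coeffs, q=2.0, bins=16):
    """Tsallis entropy S_q = (1 - sum(p_i^q)) / (q - 1) of a coefficient
    distribution, estimated from a normalized histogram."""
    hist, _ = np.histogram(coeffs, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

rng = np.random.default_rng(7)
subband = rng.standard_normal(3000)    # stand-in wavelet sub-band of a 30 s epoch
print(round(tsallis_entropy(subband, q=2.0), 4))
```

A spread-out coefficient distribution yields a higher S_q than a concentrated one, so each sub-band's entropy summarizes how its energy is distributed; stacking these values across sub-bands gives the feature vector fed to the EBT classifier.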