8 research outputs found

    EDITH : ECG Biometrics Aided by Deep Learning for Reliable Individual Authentication

    In recent years, physiological signal-based authentication has shown great promise owing to its inherent robustness against forgery. The electrocardiogram (ECG), being the most widely studied biosignal, has received the most attention in this regard: numerous studies have shown that, by analyzing ECG signals from different persons, it is possible to identify them with acceptable accuracy. In this work, we present EDITH, a deep learning-based framework for ECG biometric authentication. Moreover, we hypothesize and demonstrate that Siamese architectures can be used in place of typical distance metrics for improved performance. We have evaluated EDITH on 4 commonly used datasets and outperformed prior works while using fewer beats. EDITH performs competitively using just a single heartbeat (96–99.75% accuracy) and can be further enhanced by fusing multiple beats (100% accuracy from 3 to 6 beats). Furthermore, the proposed Siamese architecture reduces the identity verification Equal Error Rate (EER) to 1.29%. A limited case study of EDITH with real-world experimental data also suggests its potential as a practical authentication system.
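    The core verification idea, a shared encoder whose beat embeddings are compared by a learned Siamese head rather than a fixed distance metric, can be sketched in PyTorch as follows. All layer sizes and the absolute-difference head are illustrative assumptions, not the published EDITH architecture.

```python
import torch
import torch.nn as nn

class BeatEncoder(nn.Module):
    """Shared encoder mapping one ECG beat (1 x beat_len samples) to an embedding."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, x):                      # x: (batch, 1, beat_len)
        return self.fc(self.features(x).squeeze(-1))

class SiameseVerifier(nn.Module):
    """Scores whether two beats belong to the same person (learned similarity)."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.encoder = BeatEncoder(embed_dim)
        self.head = nn.Sequential(nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, beat_a, beat_b):
        za, zb = self.encoder(beat_a), self.encoder(beat_b)
        return torch.sigmoid(self.head(torch.abs(za - zb)))  # 1 = same identity

# Usage sketch: accept the probe if the score exceeds a threshold tuned at the EER point.
# score = SiameseVerifier()(enrolled_beat, probe_beat)
```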

    Knowledge distillation from multi-modal to mono-modal segmentation networks

    The joint use of multiple imaging modalities for medical image segmentation has been widely studied in recent years. Fusing information from different modalities has been shown to improve segmentation accuracy, with respect to mono-modal segmentation, in several applications. However, acquiring multiple modalities is usually not possible in a clinical setting, due to the limited number of physicians and scanners and to constraints on cost and scan time; most of the time, only one modality is acquired. In this paper, we propose KD-Net, a framework to transfer knowledge from a trained multi-modal network (teacher) to a mono-modal one (student). The proposed method is an adaptation of the generalized distillation framework, where the student network is trained on a subset (1 modality) of the teacher's inputs (n modalities). We illustrate the effectiveness of the proposed framework on brain tumor segmentation with the BraTS 2018 dataset. Using different architectures, we show that the student network effectively learns from the teacher and always outperforms the baseline mono-modal network in terms of segmentation accuracy.
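    The generalized distillation setup described above can be summarized by a loss that mixes the usual supervised term with a soft target from the multi-modal teacher. A minimal PyTorch sketch follows; the temperature and mixing weight are illustrative assumptions, not KD-Net's exact recipe.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, target, alpha=0.5, T=2.0):
    """Voxel-wise KD loss: (1 - alpha) * supervised CE + alpha * soft KL to the teacher.

    student_logits / teacher_logits: (batch, classes, D, H, W); target: (batch, D, H, W).
    """
    hard = F.cross_entropy(student_logits, target)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits.detach() / T, dim=1),   # teacher is kept frozen
        reduction="batchmean",
    ) * (T * T)
    return (1.0 - alpha) * hard + alpha * soft
```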

    Robust biometric system using session invariant multimodal EEG and keystroke dynamics by the ensemble of self-ONNs

    Harnessing the inherent anti-spoofing quality of electroencephalogram (EEG) signals has become a promising field of research in recent years. Although several studies have been conducted, some vital challenges still stand in the way of deploying EEG-based biometrics that are stable and capable of handling real-world scenarios. One of the key challenges is the large variability of EEG signals recorded on different days or sessions, which significantly impedes the performance of biometric systems. To address this issue, a session-invariant, multimodal Self-organized Operational Neural Network (Self-ONN) based ensemble model combining EEG and keystroke dynamics is proposed in this paper. Our model is tested successfully on a large number of sessions (10 recording days) under challenging noisy and variable conditions for both identification and authentication tasks. In most previous studies, training and testing were performed either over a single recording session (same day) only or without ensuring appropriate splitting of the data across multiple recording days. Unlike those studies, in our work we have rigorously split the data so that the train and test sets do not share data from the same recording day. The proposed multimodal Self-ONN based ensemble model achieves an identification accuracy of 98% under this rigorous validation and outperforms the equivalent ensemble of deep CNN models. A novel Self-ONN Siamese network has also been proposed to measure the similarity of templates during the authentication task, instead of the commonly used simple distance measures. The multimodal Siamese network reduces the Equal Error Rate (EER) to 1.56% under rigorous authentication. The obtained results indicate that the proposed multimodal Self-ONN model can automatically extract session-invariant, unique non-linear features to identify and authenticate users with high accuracy.
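    For readers unfamiliar with Self-ONNs, a rough PyTorch sketch of a 1D self-organized operational layer, as commonly formulated, is given below: the nodal operator is approximated by a truncated Maclaurin series, i.e. a sum of Q convolutions applied to element-wise powers of a bounded input. Q and the layer sizes are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SelfONN1d(nn.Module):
    """1D operational layer: sum over q of conv_q(x**q), q = 1..Q."""
    def __init__(self, in_ch, out_ch, kernel_size, Q=3):
        super().__init__()
        self.Q = Q
        # one convolution per power term x, x^2, ..., x^Q (bias only on the first term)
        self.convs = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, kernel_size, padding=kernel_size // 2, bias=(q == 0))
            for q in range(Q)
        )

    def forward(self, x):
        x = torch.tanh(x)                      # keep the power terms bounded
        return sum(conv(x ** (q + 1)) for q, conv in enumerate(self.convs))
```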

    Detection and severity classification of COVID-19 in CT images using deep learning

    Detecting COVID-19 at an early stage is essential to reduce patients' mortality risk. In this study, a cascaded system is proposed to segment the lung and to detect, localize, and quantify COVID-19 infections from computed tomography (CT) images. An extensive set of experiments was performed using Encoder-Decoder Convolutional Neural Networks (ED-CNNs), U-Net, and Feature Pyramid Network (FPN), with different backbone (encoder) structures based on variants of DenseNet and ResNet. The experiments on lung region segmentation showed a Dice Similarity Coefficient (DSC) of 97.19% and an Intersection over Union (IoU) of 95.10% using the U-Net model with the DenseNet161 encoder. Furthermore, the proposed system achieved excellent performance for COVID-19 infection segmentation, with a DSC of 94.13% and an IoU of 91.85% using the FPN with the DenseNet201 encoder. The proposed system can reliably localize infections of various shapes and sizes, especially small infection regions, which are rarely considered in recent studies. Moreover, the proposed system achieved high COVID-19 detection performance, with 99.64% sensitivity and 98.72% specificity. Finally, the system was able to discriminate between different severity levels of COVID-19 infection over a dataset of 1110 subjects, with sensitivity values of 98.3%, 71.2%, 77.8%, and 100% for mild, moderate, severe, and critical cases, respectively.
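    A minimal sketch of how such encoder-decoder networks can be instantiated, assuming the segmentation_models_pytorch package (the abstract does not state which implementation was used); the single-channel input and binary output are likewise assumptions for illustration.

```python
import segmentation_models_pytorch as smp

# Lung segmentation: U-Net decoder over an ImageNet-pretrained DenseNet161 encoder.
lung_net = smp.Unet(
    encoder_name="densenet161",
    encoder_weights="imagenet",
    in_channels=1,                   # assumed single-channel CT slice
    classes=1,                       # binary lung mask
)

# Infection segmentation: FPN decoder over a DenseNet201 encoder.
infection_net = smp.FPN(
    encoder_name="densenet201",
    encoder_weights="imagenet",
    in_channels=1,
    classes=1,                       # binary COVID-19 infection mask
)
```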

    Development and Validation of an Early Scoring System for Prediction of Disease Severity in COVID-19 Using Complete Blood Count Parameters

    The coronavirus disease 2019 (COVID-19), after breaking out in Wuhan, increasingly spread throughout the world. Fast, reliable, and easily accessible clinical assessment of the severity of the disease can help in allocating and prioritizing resources to reduce mortality. The objective of the study was to develop and validate an early scoring tool to stratify the risk of death using readily available complete blood count (CBC) biomarkers. A retrospective study was conducted on twenty-three CBC blood biomarkers for predicting disease mortality in 375 COVID-19 patients admitted to Tongji Hospital, China, from January 10 to February 18, 2020. Machine learning was used to identify the key CBC parameters serving as mortality predictors. A multivariate logistic regression-based nomogram and scoring system was developed to categorize the patients into three risk groups (low, moderate, and high) for predicting mortality risk among COVID-19 patients. Lymphocyte count, neutrophil count, age, white blood cell count, monocytes (%), platelet count, and red blood cell distribution width, collected at hospital admission, were selected as important biomarkers for death prediction using the random forest feature selection technique. A CBC score was devised to calculate each patient's death probability and to categorize patients into three sub-risk groups, low, moderate, and high, with predicted death probabilities of <5%, 5–50%, and >50%, respectively. The area under the curve (AUC) of the model for the development and internal validation cohorts was 0.961 and 0.88, respectively. The proposed model was further validated with an external cohort of 103 patients from Dhaka Medical College, Bangladesh, yielding an AUC of 0.963. The proposed CBC parameter-based prognostic model and the associated web application can help medical doctors improve patient management through early prediction of the mortality risk of COVID-19 patients in low-resource countries. This work was supported by Qatar National Research Fund (QNRF) under Grant UREP28-144-3-046 and Qatar University Emergency Response Grant (QUERG-CENG-2020-1) through Qatar University. Open Access publication is funded by Qatar National Library (QNL).
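    The pipeline described above, random-forest feature ranking over the CBC parameters, a logistic-regression risk model on the selected biomarkers, and thresholding of the predicted death probability into low/moderate/high groups, can be sketched with scikit-learn as follows. Column names, the number of selected features, and the exact probability cut-offs are assumptions for illustration, not the published scoring system.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

def fit_cbc_risk_model(X: pd.DataFrame, y: np.ndarray, n_features: int = 7):
    """Rank CBC biomarkers with a random forest, then fit a logistic risk model on the top ones."""
    rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
    top = X.columns[np.argsort(rf.feature_importances_)[::-1][:n_features]]
    lr = LogisticRegression(max_iter=1000).fit(X[top], y)
    return lr, list(top)

def risk_group(prob_death: float) -> str:
    """Bucket a predicted death probability into an assumed low / moderate / high grouping."""
    if prob_death < 0.05:
        return "low"
    return "moderate" if prob_death <= 0.50 else "high"

# Usage sketch (hypothetical column names):
# lr, biomarkers = fit_cbc_risk_model(train_df[cbc_columns], train_df["died"].values)
# group = risk_group(lr.predict_proba(patient_df[biomarkers])[0, 1])
```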

    COVID-19 infection localization and severity grading from chest X-ray images

    The immense spread of coronavirus disease 2019 (COVID-19) has left healthcare systems unable to diagnose and test patients at the required rate. Given the effects of COVID-19 on pulmonary tissue, chest radiographic imaging has become a necessity for screening and monitoring the disease. Numerous studies have proposed Deep Learning approaches for the automatic diagnosis of COVID-19. Although these methods achieved outstanding detection performance, they used limited chest X-ray (CXR) repositories for evaluation, usually with only a few hundred COVID-19 CXR images. Such data scarcity prevents reliable evaluation of Deep Learning models and carries the potential of overfitting. In addition, most studies showed no or limited capability for infection localization and severity grading of COVID-19 pneumonia. In this study, we address this urgent need by proposing a systematic and unified approach for lung segmentation and COVID-19 localization with infection quantification from CXR images. To accomplish this, we have constructed the largest benchmark dataset, with 33,920 CXR images including 11,956 COVID-19 samples, where the ground-truth lung segmentation masks were annotated through a human-machine collaborative approach. An extensive set of experiments was performed using state-of-the-art segmentation networks: U-Net, U-Net++, and Feature Pyramid Networks (FPN). The developed network, after an iterative process, reached superior performance for lung region segmentation, with an Intersection over Union (IoU) of 96.11% and a Dice Similarity Coefficient (DSC) of 97.99%. Furthermore, COVID-19 infections of various shapes and types were reliably localized with 83.05% IoU and 88.21% DSC. Finally, the proposed approach achieved outstanding COVID-19 detection performance, with both sensitivity and specificity above 99%.
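    The DSC and IoU figures quoted above are standard overlap metrics; a minimal NumPy sketch for binary masks (illustrative, not the authors' evaluation code):

```python
import numpy as np

def dice_and_iou(pred, truth, eps=1e-7):
    """Compute Dice Similarity Coefficient and Intersection over Union for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum() + eps)
    iou = inter / (union + eps)
    return dice, iou
```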

    A semantically flexible feature fusion network for retinal vessel segmentation

    The automatic detection of retinal blood vessels by computer-aided techniques plays an important role in the diagnosis of diabetic retinopathy, glaucoma, and macular degeneration. In this paper we present a semantically flexible feature fusion network that employs residual skip connections between adjacent neurons to improve retinal vessel detection; this yields a method that can be trained by residual learning. To illustrate the utility of our method for retinal blood vessel detection, we show results on two publicly available data sets, DRIVE and STARE. In our experimental evaluation we include widely used evaluation metrics and compare our results with alternatives reported elsewhere in the literature. In our experiments, our method is quite competitive, delivering improvements in sensitivity and accuracy compared to the alternatives under consideration.
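    The residual-learning idea referenced above, adjacent layers joined by identity skip connections so that each block learns a correction to its input, can be sketched in PyTorch as follows; the sizes are illustrative assumptions, not the paper's exact fusion architecture.

```python
import torch.nn as nn

class ResidualFusionBlock(nn.Module):
    """Two adjacent conv layers with an identity skip, so the block learns a residual of its input."""
    def __init__(self, channels=32):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                                   nn.BatchNorm2d(channels), nn.ReLU(inplace=True))
        self.conv2 = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                                   nn.BatchNorm2d(channels))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.conv2(self.conv1(x))
        return self.relu(out + x)      # residual skip connection to the adjacent layer's input
```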