
    Deep Learning Technique for Congenital Heart Disease Detection Using Stacking-Based CNN-LSTM Models from Fetal Echocardiogram: A Pilot Study

    Congenital heart defects (CHDs) are a leading cause of death in infants under 1 year of age. Prenatal intervention can reduce postnatal risk for patients with serious CHD, but current diagnosis relies on qualitative criteria, which can lead to variability between clinicians. Objectives: To detect morphological and temporal changes in cardiac ultrasound (US) videos of fetuses with hypoplastic left heart syndrome (HLHS) using deep learning models. A small cohort of 9 healthy and 13 HLHS patients was enrolled, and ultrasound videos were collected at three gestational time points. The videos were preprocessed and segmented into cardiac-cycle clips, and five different CNN-LSTM deep learning models were trained (MobileNetV2, ResNet18, ResNet50, DenseNet121, and GoogLeNet). The three top-performing models were used to develop a novel stacking CNN-LSTM model, which was trained using five-fold cross-validation to classify HLHS and healthy patients. The stacking CNN-LSTM model outperformed the individual pre-trained CNN-LSTM models, achieving accuracy, precision, sensitivity, F1 score, and specificity of 90.5%, 92.5%, 92.5%, 92.5%, and 85%, respectively, for both video-wise and subject-wise classification of the ultrasound videos. This study demonstrates the potential of deep learning models to classify prenatal CHD patients from ultrasound videos, which can aid in the objective assessment of the disease in a clinical setting. This study was funded by the Qatar National Research Fund (QNRF), National Priorities Research Program (NPRP 10-0123-170222). The open access publication of this article was funded by the Qatar National Library.
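    To make the modelling idea concrete, the sketch below shows a generic CNN-LSTM video classifier and a stacking meta-classifier over base-model probabilities. It is not the published code: the MobileNetV2 backbone, hidden size, clip tensor layout, and logistic-regression meta-learner are illustrative assumptions.

    ```python
    # Minimal sketch (not the authors' code): a frame-level CNN feeding an LSTM for
    # binary video classification, assuming clips shaped (batch, frames, 3, H, W).
    import torch
    import torch.nn as nn
    from torchvision import models

    class CNNLSTM(nn.Module):
        def __init__(self, hidden_size=128, num_classes=2):
            super().__init__()
            backbone = models.mobilenet_v2(weights="DEFAULT")   # assumed backbone
            self.cnn = backbone.features                        # frame feature extractor
            self.pool = nn.AdaptiveAvgPool2d(1)
            self.lstm = nn.LSTM(input_size=1280, hidden_size=hidden_size, batch_first=True)
            self.fc = nn.Linear(hidden_size, num_classes)

        def forward(self, clips):                               # clips: (B, T, 3, H, W)
            b, t = clips.shape[:2]
            x = clips.flatten(0, 1)                             # (B*T, 3, H, W)
            feats = self.pool(self.cnn(x)).flatten(1)           # (B*T, 1280)
            feats = feats.view(b, t, -1)                        # (B, T, 1280)
            _, (h, _) = self.lstm(feats)                        # temporal aggregation
            return self.fc(h[-1])                               # (B, num_classes)

    # Stacking idea: out-of-fold positive-class probabilities from several base
    # CNN-LSTM models become features for a simple meta-classifier (an assumption).
    from sklearn.linear_model import LogisticRegression

    def fit_meta_classifier(oof_probs, labels):
        """oof_probs: array of shape (n_clips, n_base_models); labels: (n_clips,)."""
        meta = LogisticRegression()
        meta.fit(oof_probs, labels)
        return meta
    ```

    In this sketch the meta-classifier is trained only on out-of-fold predictions, which is the usual way to keep a stacking ensemble from overfitting to its base models.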

    Deep Learning Framework for Liver Segmentation from T1-Weighted MRI Images

    The human liver exhibits variable characteristics and anatomical features, which are often ambiguous in radiological images. Machine learning can be of great assistance in automatically segmenting the liver in radiological images, which can then be further processed for computer-aided diagnosis. Clinicians prefer magnetic resonance imaging (MRI) over volumetric abdominal computerized tomography (CT) scans for liver pathology diagnosis because of its superior representation of soft tissue. However, the convenience of Hounsfield unit (HU)-based preprocessing available for CT scans is absent in MRI, making automatic segmentation of MR images challenging. This study investigates multiple state-of-the-art segmentation networks for liver segmentation from volumetric MRI images. T1-weighted (in-phase) scans are investigated using expert-labeled liver masks from a public dataset of 20 patients (647 MR slices) from the Combined Healthy Abdominal Organ Segmentation (CHAOS) grand challenge. T1-weighted images were chosen because they show brighter fat content, providing enhanced contrast for the segmentation task. Twenty-four state-of-the-art segmentation networks with varying depths of dense, residual, and inception encoder and decoder backbones were investigated, and a novel cascaded network is proposed to segment axial liver slices. The proposed framework outperforms existing approaches reported in the literature for the liver segmentation task (on the same test set), with a Dice similarity coefficient (DSC) of 95.15% and an intersection over union (IoU) of 92.10%. This research was funded by Qatar University High Impact grant QUHI-CENG-23/24-216 and student grant QUST-1-CENG-2023-796, and was also supported by funding from Prince Sattam Bin Abdulaziz University, project number PSAU/2023/R/1444. The open-access publication cost is covered by the Qatar National Library.
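    For reference, the two reported metrics can be computed from binary masks as in the short sketch below; this reflects the standard definitions of DSC and IoU under assumed conventions (1 = liver, 0 = background), not the paper's own evaluation code.

    ```python
    # Minimal sketch: Dice similarity coefficient (DSC) and intersection over union
    # (IoU) for binary liver masks of identical shape.
    import numpy as np

    def dice_iou(pred, target, eps=1e-7):
        """pred, target: binary NumPy arrays (1 = liver, 0 = background)."""
        pred = pred.astype(bool)
        target = target.astype(bool)
        inter = np.logical_and(pred, target).sum()
        union = np.logical_or(pred, target).sum()
        dsc = (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
        iou = (inter + eps) / (union + eps)
        return dsc, iou
    ```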