
    Evaluation of Early Esophageal Adenocarcinoma Detection Using Deep Learning

    Esophageal Adenocarcinoma (EAC) is an early stage of esophageal cancer that develops mainly from pre-malignant changes in the esophagus lining known as Barrett’s Esophagus (BE). During gastrointestinal tract examination, premalignant and early cancer stages in the esophagus are often overlooked, as they are challenging to detect and require significant experience. Computer Aided Detection (CAD) systems could therefore help to automatically detect early cancerous lesions. With recent advances in deep learning, the performance of object detection methods has increased to a great extent. In this paper, we aim to evaluate the performance of different state-of-the-art deep learning detection methods (R-CNN, Fast R-CNN, Faster R-CNN, SSD) in automatically locating BE abnormalities. To achieve that, a dataset of High-Definition white light endoscopy images from 39 patients, manually annotated by five experienced clinicians, has been evaluated. Experimental results show that the Single-Shot Multibox Detector (SSD) outperforms the other methods in terms of the evaluation measures.

    Early esophageal adenocarcinoma detection using deep learning methods

    Purpose: This study aims to adapt and evaluate the performance of different state-of-the-art deep learning object detection methods to automatically identify esophageal adenocarcinoma (EAC) regions in high-definition white light endoscopy (HD-WLE) images.
    Method: Several state-of-the-art object detection methods using Convolutional Neural Networks (CNNs) were adapted to automatically detect abnormal regions in esophageal HD-WLE images, utilizing VGG’16 as the backbone architecture for feature extraction. Those methods are the Regional-based Convolutional Neural Network (R-CNN), Fast R-CNN, Faster R-CNN and the Single-Shot Multibox Detector (SSD). For the evaluation of the different methods, 100 images from 39 patients, manually annotated by five experienced clinicians as ground truth, were tested.
    Results: Experimental results illustrate that the SSD and Faster R-CNN networks show promising results, with the SSD outperforming the other methods, achieving a sensitivity of 0.96, specificity of 0.92 and F-measure of 0.94. Additionally, the Average Recall Rate of the Faster R-CNN in accurately locating the EAC region is 0.83.
    Conclusion: In this paper, recent deep learning object detection methods are adapted to detect esophageal abnormalities automatically. The evaluation demonstrated their ability to locate abnormal regions in the esophagus from endoscopic images. Automatic detection is a crucial step that may support early detection and treatment of EAC, and can also improve automatic tumor segmentation to monitor its growth and treatment outcome.
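As a concrete illustration of the reported measures, here is a minimal sketch of how sensitivity, specificity and F-measure could be computed from detection counts, together with an IoU helper for deciding whether a predicted box matches an annotation. The matching threshold and counting protocol are assumptions for illustration, not taken from the paper.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def detection_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and F-measure from raw counts.
    F-measure here is the harmonic mean of precision and sensitivity."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, f_measure
```

A prediction would typically count as a true positive when its IoU with an annotated box exceeds some threshold such as 0.5.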

    GFD Faster R-CNN: Gabor Fractal DenseNet Faster R-CNN for automatic detection of esophageal abnormalities in endoscopic images

    Esophageal cancer is ranked as the sixth most fatal cancer type. Most esophageal cancers are believed to arise from overlooked abnormalities in the esophagus tube. The early detection of these abnormalities is considered challenging due to their varied appearance and random location throughout the esophagus tube. In this paper, a novel Gabor Fractal DenseNet Faster R-CNN (GFD Faster R-CNN) is proposed, a two-input network adapted from the Faster R-CNN to address the challenges of esophageal abnormality detection. First, a Gabor Fractal (GF) image is generated using various Gabor filter responses at different orientations and scales, obtained from the original endoscopic image, which strengthens the fractal texture information within the image. Second, we incorporate a Densely Connected Convolutional Network (DenseNet) as the backbone network to extract features from the original endoscopic image and the generated GF image separately; the DenseNet reduces the number of trained parameters while supporting the network accuracy and enabling a maximum flow of information. Features extracted from the GF and endoscopic images are fused through bilinear fusion before the ROI pooling stage in Faster R-CNN, providing a rich feature representation that boosts the performance of the final detection. The proposed architecture was trained and tested on two different datasets independently: Kvasir (1000 images) and MICCAI’15 (100 images). Extensive experiments have been carried out to evaluate the performance of the model, with a recall of 0.927 and precision of 0.942 for the Kvasir dataset, and a recall of 0.97 and precision of 0.92 for the MICCAI’15 dataset, demonstrating a high detection performance compared to the state-of-the-art.
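The Gabor filter-bank step can be sketched in plain NumPy; the kernel sizes, orientation count and per-pixel-maximum fusion below are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def gabor_kernel(size=21, sigma=4.0, theta=0.0, lambd=10.0, gamma=0.5, psi=0.0):
    """Real part of a Gabor filter kernel at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + gamma**2 * y_t**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * x_t / lambd + psi)

def gabor_fractal_image(image, orientations=8, scales=(7, 11, 15)):
    """Combine filter responses over orientations and scales into one map,
    here via per-pixel maximum (the exact fusion rule is an assumption)."""
    responses = []
    for size in scales:
        for k in range(orientations):
            theta = k * np.pi / orientations
            kern = gabor_kernel(size=size, sigma=size / 4, theta=theta)
            # circular convolution via FFT, kernel zero-padded to image size
            pad = np.zeros_like(image, dtype=float)
            pad[:kern.shape[0], :kern.shape[1]] = kern
            resp = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(pad)))
            responses.append(np.abs(resp))
    return np.max(responses, axis=0)
```

In practice the GF image and the original endoscopic image would then be fed to the two DenseNet streams described above.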

    ResDUnet: Residual Dilated UNet for Left Ventricle Segmentation from Echocardiographic Images

    Echocardiography is the modality of choice for the assessment of left ventricle function. The left ventricle is responsible for pumping oxygen-rich blood to all parts of the body. Segmentation of this chamber from echocardiographic images is a challenging task, due to the ambiguous boundary and inhomogeneous intensity distribution. In this paper, we propose a novel deep learning model named ResDUnet. The model is based on U-net incorporated with dilated convolution, where residual blocks are employed instead of the basic U-net units to ease the training process. Each block is enriched with a squeeze-and-excitation unit for channel-wise attention and adaptive feature re-calibration. To tackle the variability in left ventricle shape and size, we enrich the feature concatenation in U-net by integrating feature maps generated by cascaded dilation. Cascaded dilation broadens the receptive field compared with traditional convolution, which allows the generation of multi-scale information and in turn results in a more robust segmentation. Performance was evaluated on a publicly available dataset of 500 patients with large variability in terms of image quality and patient pathology. The proposed model shows a Dice similarity increase of 8.4% compared to DeepLabv3 and 1.2% compared to the basic U-net architecture. Experimental results demonstrate its potential use in the clinical domain.
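The effect of cascaded dilation on the receptive field can be illustrated with a short calculation; stride-1 convolutions are assumed, and the dilation rates 1, 2, 4 are an example rather than the paper's configuration.

```python
def receptive_field(kernel_size, dilations):
    """Receptive field of a stack of stride-1 dilated convolutions:
    each layer adds (kernel_size - 1) * dilation to the field."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# Three plain 3x3 convolutions vs. a cascade with dilation rates 1, 2, 4
plain = receptive_field(3, [1, 1, 1])      # covers 7 pixels
cascaded = receptive_field(3, [1, 2, 4])   # covers 15 pixels
```

With the same parameter count, the dilated cascade sees a much wider context, which is the multi-scale benefit the abstract describes.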

    The development of automated methods for the reproducible assessment of aortic stenosis

    Background: Aside from symptoms, the principal determinant of surgical timing in aortic stenosis is echocardiography. Our previous work found that the dominant sources of variation were not differences between operators or readers, but differences in probe placement, beat-to-beat variability, and tracing. This knowledge allowed the development of a novel automated algorithm, the “instantaneous Dimensionless Index (iDI)”, that reduces the influence of these factors to improve precision.
    Methods: Vendor-independent, open-source software for processing Doppler tracings both in real time and retrospectively was developed (Figure 1). Runs of multiple beats are acquired using continuous wave Doppler (A). Automatic thresholding, quality control (B), tracing and extraction are performed before overlaying and combining the multiple beats (C). From this single ensemble beat (D), the outer aortic and inner LVOT tracings are simultaneously extracted (E) to produce the iDI. Thirty patients with aortic stenosis were prospectively recruited. We obtained multiple Doppler recordings using both the traditional peak velocity dimensionless index (DI) and our new method in each patient.
    Results: The mean coefficient of variation was substantially smaller with the iDI than with the DI (5.5% vs 12.1%, p<0.001). The mean DI was, however, lower than the iDI (-0.04, p=0.01), often due to underestimation of the LVOT velocity and overestimation of the aortic velocity with the traditional technique.
    Conclusions: The reduction or elimination of beat-to-beat, tracing, probe-placement, and reader variability led to a halving of the coefficient of variation. This doubling of precision permits trials using valve severity as an endpoint to require four times fewer patients. In routine clinical practice, clinicians would have greater confidence in measurements and patients would require fewer visits.
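Two simple quantities underpin the comparison above: an ensemble beat averaged over aligned beats, and the coefficient of variation of repeated measurements. A minimal sketch, assuming the beats have already been aligned to a common length:

```python
import numpy as np

def ensemble_beat(beats):
    """Average several aligned single-beat traces into one ensemble beat."""
    return np.mean(np.asarray(beats, dtype=float), axis=0)

def coefficient_of_variation(measurements):
    """CoV = sample standard deviation / mean, as a percentage."""
    m = np.asarray(measurements, dtype=float)
    return 100.0 * m.std(ddof=1) / m.mean()
```

Averaging suppresses beat-to-beat variability before any index is measured, which is one reason the iDI's coefficient of variation is smaller.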

    Automated Segmentation of Left Ventricle in 2D echocardiography using deep learning

    Following the successful application of the U-Net to medical images, different encoder-decoder models have been proposed as improvements on the original U-Net for segmenting echocardiographic images. This study aims to examine the performance of state-of-the-art models claimed to have better accuracy, as well as the original U-Net model, by applying them to an independent dataset of patients to automatically segment the endocardium of the Left Ventricle in 2D. The prediction outputs of the models are used to evaluate their performance by comparing the automated results against the expert annotations (the gold standard). Our results reveal that the original U-Net model outperforms the other models, achieving an average Dice coefficient of 0.92±0.05 and a Hausdorff distance of 3.97±0.82.
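The two reported metrics can be sketched directly in NumPy; the Hausdorff helper below works on contour point sets and is an illustrative implementation, not the evaluation code used in the study.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(a_pts, b_pts):
    """Symmetric Hausdorff distance between two point sets of shape (N, 2):
    the largest distance from any point in one set to the other set."""
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Dice rewards region overlap, while the Hausdorff distance penalises the single worst boundary disagreement, so the two numbers capture complementary aspects of segmentation quality.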

    Segmentation of Left Ventricle in 2D Echocardiography Using Deep Learning

    The segmentation of the Left Ventricle (LV) is currently carried out manually by experts, and the automation of this process has proved challenging due to the presence of speckle noise and the inherently poor quality of ultrasound images. This study aims to evaluate the performance of different state-of-the-art Convolutional Neural Network (CNN) segmentation models in automatically segmenting the LV endocardium in echocardiography images. The adopted methods include U-Net, SegNet, and fully convolutional DenseNets (FC-DenseNet). The prediction outputs of the models are used to assess their performance by comparing the automated results against the expert annotations (the gold standard). Results reveal that the U-Net model outperforms the other models, achieving an average Dice coefficient of 0.93 ± 0.04 and a Hausdorff distance of 4.52 ± 0.90.

    Doppler assessment of aortic stenosis: reading the peak velocity is superior to velocity time integral

    Introduction: Previous studies of the reproducibility of echocardiographic assessment of aortic stenosis have compared only a pair of observers. The aim of this study was to assess reproducibility across a large group of observers and to compare the reproducibility of reading the peak velocity versus the velocity time integral.
    Methods: 25 observers reviewed continuous wave (CW) aortic valve and pulsed wave (PW) LVOT Doppler traces from 20 sequential cases of aortic stenosis in random order. Each operator unknowingly measured the peak velocity and velocity time integral (VTI) twice for each case, with the traces stored for analysis. We undertook a mixed-model analysis of the sources of variance for peak and VTI measurements.
    Results: Measuring the peak is more reproducible than the VTI for both PW (coefficient of variation 9.6% versus 15.9%, p<0.001) and CW traces (coefficient of variation 4.0% versus 9.6%, p<0.001), as shown in Figure 1. VTI is inferior because, compared to the middle, it is difficult to reproducibly trace the steep beginning (standard deviation 3.7x and 1.8x larger for CW and PW respectively) and end (standard deviation 2.4x and 1.5x larger for CW and PW respectively). The dimensionless index reduces the coefficient of variation (19% reduction for VTI, 11% reduction for peak) partly because it cancels correlated errors: an operator who over-measures a CW trace is likely to over-measure the matching PW trace (r=0.39, p<0.001 for VTI; r=0.41, p<0.001 for peak), as shown in Figure 2.
    Conclusions: It is more reproducible to measure the peak of a Doppler trace than the VTI, because it is difficult to trace the steep slopes at the beginning and end reproducibly. The difference is non-trivial: an average operator would be 95% confident detecting an 11.1% change in peak velocity but a much larger 27.4% change in VTI. A clinical trial of an intervention for aortic stenosis with a VTI endpoint would need to be 2.4 times larger than one with a peak velocity endpoint. Part of the benefit of the dimensionless index in improving reproducibility arises because it cancels individual operators' tendencies to consistently over- or under-read traces.
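The two quantities being compared are simple to state: the peak is the maximum of the traced velocity envelope, while the VTI is its time integral. A minimal sketch, assuming a uniformly sampled envelope and trapezoidal integration:

```python
import numpy as np

def peak_velocity(velocity):
    """Peak velocity of a traced Doppler envelope (same units as input)."""
    return float(np.max(velocity))

def vti(velocity, dt):
    """Velocity-time integral: area under the traced envelope,
    via trapezoidal integration at sampling interval dt (seconds)."""
    v = np.asarray(velocity, dtype=float)
    return float(np.sum((v[1:] + v[:-1]) / 2) * dt)
```

The peak depends on a single well-defined point of the trace, whereas the VTI accumulates any tracing error along the steep upstroke and downstroke, which is consistent with the reproducibility gap reported above.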

    Open-source, vendor-independent, automated multi-beat tissue Doppler echocardiography analysis

    Current guidelines for measuring cardiac function by tissue Doppler recommend using multiple beats, but this has a time cost for human operators. We present open-source, vendor-independent, drag-and-drop software capable of automating the measurement process. A database of ~8000 tissue Doppler beats (48 patients) from the septal and lateral annuli was analyzed by three expert echocardiographers. We developed an intensity- and gradient-based automated algorithm to measure tissue Doppler velocities and tested its performance against manual measurements from the expert human operators. Our algorithm showed strong agreement with the expert human operators. Performance was indistinguishable from a human operator: for the algorithm, the mean difference and SDD from the mean of the human operators' estimates was 0.48 ± 1.12 cm/s (R² = 0.82); for the humans individually this was 0.43 ± 1.11 cm/s (R² = 0.84), −0.88 ± 1.12 cm/s (R² = 0.84) and 0.41 ± 1.30 cm/s (R² = 0.78). Agreement between operators and the automated algorithm was preserved when measuring at either the edge or middle of the trace. The algorithm was 10-fold quicker than manual measurements (p …).
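A column-by-column envelope trace is one way to picture an intensity-based Doppler measurement; the thresholding rule below is an illustrative assumption, not the published algorithm.

```python
import numpy as np

def extract_envelope(spectrogram, frac=0.5):
    """Trace a Doppler spectrogram column by column: for each time step,
    return the highest velocity bin whose intensity reaches a fraction of
    that column's peak (row 0 is taken as zero velocity)."""
    n_bins, n_steps = spectrogram.shape
    env = np.zeros(n_steps, dtype=int)
    for t in range(n_steps):
        col = spectrogram[:, t]
        peak = col.max()
        if peak <= 0:      # empty column: no signal at this time step
            continue
        env[t] = np.nonzero(col >= frac * peak)[0][-1]
    return env
```

A gradient-based refinement, as the abstract mentions, would then snap each column's estimate to the strongest local intensity transition, and the velocity at each beat's systolic peak could be read off the resulting trace.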