40 research outputs found

    Near real-time early cancer detection using a graphics processing unit

    Automatically detecting early cancer in medical images is challenging, yet crucial: catching cancer in its early stages could help save millions of lives. In this work, we improved a method originally developed by Yamaguchi et al. at Saga University in Saga, Japan. The original method first decomposes the endoscopic image into four color components: red, green, blue, and luminance (RGBL). Each component is then divided into non-overlapping blocks of smaller images. Each block undergoes two passes of the discrete wavelet transform (DWT), and finally the fractal dimension (FD) is calculated per block, from which abnormal regions can be detected. Our proposed method not only uses GPU technology to speed up processing but also applies edge enhancement via Gaussian fuzzy edge enhancement. After edge enhancement, multiple thresholds (or tuning variables) were identified and adjusted to reduce computational requirements, decrease false positives, and increase the accuracy of detecting early cancer. Most lesions that a physician had manually flagged as areas of concern were detected quickly, in under four seconds, roughly 25x faster than the existing work. The false positive rate was reduced but still needs improvement. In the future, a support vector machine (SVM) would be an ideal solution for reducing the false positive rate while also improving detection, and SVM technology has already been implemented on GPUs. Once a technology such as an SVM is implemented with better results, video processing will be the final step toward 'Near Real Time Automatic Detection of Early Esophageal Cancer from an Endoscopic Image'.
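    A minimal sketch of the per-block pipeline summarised above, assuming a Haar wavelet, 32x32 blocks, and a box-counting estimate of the fractal dimension (all illustrative choices; the published method's exact wavelet, block size, and thresholds may differ):

        import numpy as np
        import pywt  # PyWavelets

        def box_counting_dimension(block, threshold):
            # Binarise the block, then count occupied boxes at several scales.
            binary = block > threshold
            sizes, counts = [1, 2, 4], []
            for s in sizes:
                h, w = binary.shape
                boxes = binary[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
                counts.append(max(np.count_nonzero(boxes.any(axis=(1, 3))), 1))
            # FD is the slope of log N(s) against log(1/s).
            return np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)[0]

        def fd_map(channel, block=32):
            # One colour component (R, G, B, or luminance) -> grid of FD values.
            fds = []
            for i in range(0, channel.shape[0] - block + 1, block):
                row = []
                for j in range(0, channel.shape[1] - block + 1, block):
                    patch = channel[i:i + block, j:j + block].astype(float)
                    cA = pywt.dwt2(patch, "haar")[0]   # first DWT pass (approximation)
                    cA = pywt.dwt2(cA, "haar")[0]      # second DWT pass
                    row.append(box_counting_dimension(cA, cA.mean()))
                fds.append(row)
            return np.array(fds)  # unusually high FD blocks are candidate lesions

    Each block is processed independently, which is what makes the method amenable to the GPU parallelisation the abstract describes.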

    Esophageal Abnormality Detection Using DenseNet Based Faster R-CNN With Gabor Features

    Early detection of esophageal abnormalities can help prevent the progression of the disease into later stages. During esophagus examination, abnormalities are often overlooked due to their irregular shape, variable size, and complex surrounding area, which requires significant effort and experience. In this paper, a novel deep learning model based on the faster region-based convolutional neural network (Faster R-CNN) is presented to automatically detect abnormalities in the esophagus from endoscopic images. The proposed detection system is based on a combination of Gabor handcrafted features with the CNN features. The densely connected convolutional networks (DenseNets) architecture is embraced to extract the CNN features, providing strengthened feature propagation between the layers and alleviating the vanishing gradient problem. To address the challenges of detecting abnormal complex regions, we propose fusing the extracted Gabor features with the CNN features through concatenation to enhance texture details in the detection stage. Our newly designed architecture is validated on two datasets (Kvasir and MICCAI 2015). On the Kvasir dataset, the results show an outstanding performance with a recall of 90.2%, a precision of 92.1%, and a mean average precision (mAP) of 75.9%. For the MICCAI 2015 dataset, the model surpasses the state-of-the-art performance with 95% recall, 91% precision, and an mAP of 84%. The experimental results demonstrate that the system is able to detect abnormalities in endoscopic images with good performance without any human intervention.
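    The fusion step described above can be sketched as follows, assuming a small OpenCV Gabor filter bank and a torchvision DenseNet-121 backbone; the filter parameters and the exact fusion point inside Faster R-CNN are illustrative assumptions, not the paper's reported configuration:

        import cv2
        import numpy as np
        import torch
        import torchvision

        def gabor_bank(img_gray):
            # Four Gabor orientations stacked as channels: (4, H, W).
            responses = []
            for theta in np.arange(0, np.pi, np.pi / 4):
                kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5)
                responses.append(cv2.filter2D(img_gray, cv2.CV_32F, kernel))
            return torch.from_numpy(np.stack(responses))

        # DenseNet backbone producing the CNN feature maps.
        backbone = torchvision.models.densenet121(weights="DEFAULT").features

        def fused_features(img_rgb, img_gray):
            cnn_feat = backbone(img_rgb.unsqueeze(0))     # (1, 1024, h, w)
            gabor = gabor_bank(img_gray).unsqueeze(0)     # (1, 4, H, W)
            # Resize the Gabor maps to the CNN feature resolution and concatenate,
            # so the detection stage sees both texture and learned features.
            gabor = torch.nn.functional.interpolate(gabor, size=cnn_feat.shape[-2:])
            return torch.cat([cnn_feat, gabor], dim=1)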

    Automatic Esophageal Abnormality Detection and Classification

    Esophageal cancer is counted as one of the deadliest cancers worldwide, ranking sixth among all types of cancers. Early esophageal cancer typically causes no symptoms and mainly arises from overlooked/untreated premalignant abnormalities in the esophagus tube. Endoscopy is the main tool used for the detection of abnormalities, and the cell deformation stage is confirmed by taking biopsy samples. The process of detection and classification is considered challenging for several reasons: different types of abnormalities (including early cancer stages) can be located randomly throughout the esophagus tube; abnormal regions can have various sizes and appearances, which makes them difficult to capture; and discriminating the columnar mucosa from the metaplastic epithelium can fail. Although many studies have been conducted, it remains a challenging task, and improving the accuracy of automatically classifying and detecting different esophageal abnormalities is an ongoing field. This thesis aims to develop novel automated methods for the detection and classification of abnormal esophageal regions (precancerous and cancerous) from endoscopic images and videos.

    In this thesis, firstly, the abnormality stage of the esophageal cell deformation is classified from confocal laser endomicroscopy (CLE) images. The CLE is an endoscopic tool that provides a digital pathology view of the esophagus cells. The classification is achieved by enhancing the internal features of the CLE image using a novel enhancement filter that utilizes fractional integration and differentiation. Different imaging features, including multi-scale pyramid rotation LBP (MP-RLBP), gray level co-occurrence matrices (GLCM), fractal analysis, fuzzy LBP, and maximally stable extremal regions (MSER), are calculated from the enhanced image to assure a robust classification result. The support vector machine (SVM) and random forest (RF) classifiers are employed to classify each image into its pathology stage.

    Secondly, we propose an automatic detection method to locate abnormality regions from high definition white light (HD-WLE) endoscopic images. We first investigate the performance of different deep learning detection methods on our dataset. Then we propose an approach that combines hand-designed Gabor features with extracted convolutional neural network features that are used by the Faster R-CNN to detect abnormal regions. Moreover, to further improve the detection performance, we propose a novel two-input network named GFD-Faster RCNN. The proposed method generates a Gabor fractal image from the original endoscopic image using Gabor filters; features are then learned separately from the endoscopic image and the generated Gabor fractal image using the densely connected convolutional network to detect abnormal esophageal regions.

    Thirdly, we present a novel model to detect abnormal regions from endoscopic videos. We design a 3D Sequential DenseConvLstm network to extract spatiotemporal features from the input videos that are utilized by a region proposal network and an ROI pooling layer to detect abnormality regions in each frame throughout the video. Additionally, we suggest an FS-CRF post-processing method that incorporates the conditional random field (CRF) on a frame-based level to recover missed abnormal regions in neighborhood frames within the same clip.

    The methods are evaluated on four datasets: (1) a CLE dataset used for the classification model; (2) the publicly available Kvasir dataset; (3) the MICCAI'15 EndoVis challenge dataset, where datasets (2) and (3) are used for the evaluation of the detection model from endoscopic images; and (4) the Gastrointestinal Atlas dataset, used for the evaluation of the video detection model. The experimental results demonstrate promising performance across the different models, which have outperformed the state-of-the-art methods.
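    A minimal sketch of the first (classification) stage, assuming scikit-image texture descriptors and a scikit-learn SVM; the feature set and parameters here are illustrative, not the thesis's exact MP-RLBP/fuzzy-LBP configuration:

        import numpy as np
        from skimage.feature import local_binary_pattern, graycomatrix, graycoprops
        from sklearn.svm import SVC

        def texture_features(img):
            # img: 2-D uint8 grayscale CLE image.
            lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
            hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
            glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                                symmetric=True, normed=True)
            stats = [graycoprops(glcm, p)[0, 0]
                     for p in ("contrast", "homogeneity", "energy")]
            return np.concatenate([hist, stats])

        # Train on labelled CLE images (hypothetical arrays X_train, y_train),
        # then predict the pathology stage of a new image:
        # clf = SVC(kernel="rbf").fit([texture_features(im) for im in X_train], y_train)
        # stage = clf.predict([texture_features(new_image)])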

    Early esophageal adenocarcinoma detection using deep learning methods

    Purpose: This study aims to adapt and evaluate the performance of different state-of-the-art deep learning object detection methods to automatically identify esophageal adenocarcinoma (EAC) regions from high-definition white light endoscopy (HD-WLE) images. Method: Several state-of-the-art object detection methods using convolutional neural networks (CNNs) were adapted to automatically detect abnormal regions in esophagus HD-WLE images, utilizing VGG’16 as the backbone architecture for feature extraction. These methods are the region-based convolutional neural network (R-CNN), Fast R-CNN, Faster R-CNN, and the Single-Shot Multibox Detector (SSD). For the evaluation of the different methods, 100 images from 39 patients, manually annotated by five experienced clinicians as ground truth, were tested. Results: Experimental results illustrate that the SSD and Faster R-CNN networks show promising results, with the SSD outperforming the other methods, achieving a sensitivity of 0.96, specificity of 0.92, and F-measure of 0.94. Additionally, the average recall rate of the Faster R-CNN in locating the EAC region accurately is 0.83. Conclusion: In this paper, recent deep learning object detection methods are adapted to detect esophageal abnormalities automatically. The evaluation proved their ability to locate abnormal regions in the esophagus from endoscopic images. Automatic detection is a crucial step that may help early detection and treatment of EAC, and it can also improve automatic tumor segmentation for monitoring tumor growth and treatment outcome.
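    The reported figures follow the standard definitions, shown here with hypothetical detection counts chosen only to reproduce the SSD numbers above:

        def sensitivity(tp, fn):      # fraction of abnormal regions found
            return tp / (tp + fn)

        def specificity(tn, fp):      # fraction of normal regions correctly passed
            return tn / (tn + fp)

        def f_measure(tp, fp, fn):    # harmonic mean of precision and recall
            precision, recall = tp / (tp + fp), tp / (tp + fn)
            return 2 * precision * recall / (precision + recall)

        # e.g. tp=96, fn=4, tn=92, fp=8 gives 0.96, 0.92, and ~0.94
        print(sensitivity(96, 4), specificity(92, 8), f_measure(96, 8, 4))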

    Guest Editorial : Special issue on advanced computing for image-guided intervention

    In the past few years, we have witnessed a growing number of applications of minimally invasive or non-invasive interventions in clinical practice, where imaging plays an essential role in the success of both diagnosis and therapy. In particular, advanced signal and image processing algorithms are receiving increasing attention, aiming to provide accurate and reliable information directly to physicians. We have seen applications of these technologies during all stages of an intervention, including pre-operational planning, intra-operational guidance, and post-operational verification.

    For this special issue, we received a significant number of submissions from both academia and industry, out of which we carefully selected eleven articles of outstanding quality. These articles cover the topics of anatomic structure identification and tracking, image registration, data visualization, and newly emerging applications. The authors of [1] address the image registration problem between pre- and post-radiation MRI to facilitate the evaluation of the therapeutic response after External Beam Radiation Treatment (EBRT) for prostate cancer, while a different approach is employed in another contribution. We have also included three papers on ultrasound-guided image interventions, and two papers on tissue characterization from endoscopic images, among them a method proposed by Nawarathna et al.

    With the increasing use of various imaging modalities in image-guided intervention and therapy, how to optimally present and visualize the data also becomes an important issue. In [10], the authors address the use of autostereoscopic volumetric visualization of the patient's anatomy, which has the potential to be combined with augmented reality. The paper especially addresses the latency problem in the visualization chain, and a few improvements are proposed. A new adjacent application is also presented in this issue.

    In summary, we have seen from the submissions to this special issue a growing interest in applying advanced signal and image processing technologies to image-guided interventions. The submissions cover a wide range of clinical applications using various imaging modalities. Image feature extraction remains an important subject, and it has to be specifically designed to suit the needs of each application. Learning-based approaches have also attracted a lot of attention, especially in applications requiring automatic tissue characterization and classification. We are also very happy to have received newly emerging applications that extend traditional interventional imaging into greater application areas.

    Acknowledgments: We would like to thank all the reviewers who helped peer-review the submitted papers; their constructive comments are much appreciated.

    Novel developments in endoscopic mucosal imaging

    Endoscopic techniques such as high-definition endoscopy and optical chromoendoscopy have had an enormous impact on endoscopy practice. Since these techniques allow assessment of the most subtle morphological mucosal abnormalities, further improvements in endoscopic practice lie in increasing the detection efficacy of endoscopists. Several new developments could assist in this. First, web-based training tools could improve the skills of endoscopists in detecting and classifying lesions. Second, incorporation of computer-aided detection will be the next step to raise the endoscopic quality of the captured data. These systems will aid the endoscopist in interpreting the increasing amount of visual information in endoscopic images by providing a real-time, objective second reading. In addition, developments in the field of molecular imaging open opportunities to add functional imaging data of the gastrointestinal tract, visualizing biological parameters, to white-light morphological imaging. For the successful implementation of the above-mentioned techniques, a true multi-disciplinary approach is of vital importance.

    Towards real-world clinical colonoscopy deep learning models for video-based bowel preparation and generalisable polyp segmentation

    Colorectal cancer is the most prevalent cancer of the digestive system. Early screening and removal of precancerous growths in the colon decrease the mortality rate. The gold standard for colon screening is colonoscopy, which is conducted by a medical expert (i.e., a colonoscopist). Nevertheless, human bias, fatigue, and the colonoscopist's level of experience all worsen the colorectal cancer miss rate. Artificial intelligence (AI) methods hold immense promise not just for automating colonoscopy tasks but also for enhancing the performance of colonoscopy screening in general. The recent development of computationally powerful GPUs has enabled a computationally demanding AI method (i.e., deep learning) to be utilised in various medical applications. However, given the gap between clinical practice and the deep learning models proposed in the literature, the actual effectiveness of such methods is questionable. Hence, this thesis highlights the gaps that arise from the separation between the theoretical and practical aspects of deep learning methods applied to colonoscopy.

    The aim is to evaluate the current state of deep learning models applied in colonoscopy from a clinical angle and, accordingly, to propose better evaluation strategies and deep learning models. This aim is translated into three distinct objectives. The first objective is to develop a systematic evaluation method to assess deep learning models from a clinical perspective. The second objective is to develop a novel deep learning architecture that leverages spatial information within colonoscopy videos to enhance the effectiveness of deep learning models in real clinical environments. The third objective is to enhance the generalisability of deep learning models on unseen test images by developing a novel deep learning framework.

    To translate these objectives into practice, two critical colonoscopy tasks, namely automatic bowel preparation assessment and polyp segmentation, are addressed. In both tasks, subtle overestimations found in the literature are discussed theoretically and demonstrated empirically. These overestimations are induced by improper validation sets that do not represent the real-world clinical environment: arbitrarily splitting colonoscopy datasets for deep learning evaluation can produce similar train and test distributions and, hence, unrealistic results. Accordingly, these factors are considered in the thesis to avoid such subtle overestimation.

    For the automatic bowel preparation task, colonoscopy videos that closely resemble clinical settings are used as input, which shapes the design of the proposed model as well as the evaluation experiments. The proposed model's architecture utilises both temporal and spatial information within colonoscopy videos, using a gated recurrent unit (GRU) and a proposed Multiplexer unit, respectively. Meanwhile, for the polyp segmentation task, the efficiency of current deep learning models is tested in terms of their generalisation capabilities using unseen test sets from different medical centres. The proposed framework consists of two connected models: the first gradually transforms the textures of input images and arbitrarily changes their colours, while the second is a segmentation model that outlines polyp regions.

    Exposing the segmentation model to such transformed images gives it texture/colour-invariant properties and hence enhances its generalisability. In this thesis, rigorous experiments are conducted to evaluate the proposed models against the state-of-the-art models. The results indicate that the proposed models outperform the state-of-the-art models under different settings.
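    A minimal sketch of the texture/colour-invariance idea, with standard torchvision transforms standing in for the thesis's learned transformation network (the abstract describes a separate model for this, so the sketch below is only an approximation of the concept):

        import torchvision.transforms as T

        # Perturb colour and texture of the image only; the polyp mask is
        # untouched, so the segmentation model must rely on shape and
        # structure rather than the exact colours of one medical centre.
        colour_texture_transform = T.Compose([
            T.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.5, hue=0.1),
            T.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),
        ])

        def training_pair(image, mask):
            return colour_texture_transform(image), mask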

    A deep learning system for detection of early Barrett's neoplasia: a model development and validation study

    BACKGROUND: Computer-aided detection (CADe) systems could assist endoscopists in detecting early neoplasia in Barrett's oesophagus, which can be difficult to detect in endoscopic images. The aim of this study was to develop, test, and benchmark a CADe system for early neoplasia in Barrett's oesophagus. METHODS: The CADe system was first pretrained with ImageNet followed by domain-specific pretraining with GastroNet. We trained the CADe system on a dataset of 14 046 images (2506 patients) of confirmed Barrett's oesophagus neoplasia and non-dysplastic Barrett's oesophagus from 15 centres. Neoplasia was delineated by 14 Barrett's oesophagus experts for all datasets. We tested the performance of the CADe system on two independent test sets. The all-comers test set comprised 327 (73 patients) non-dysplastic Barrett's oesophagus images, 82 (46 patients) neoplastic images, 180 (66 of the same patients) non-dysplastic Barrett's oesophagus videos, and 71 (45 of the same patients) neoplastic videos. The benchmarking test set comprised 100 (50 patients) neoplastic images, 300 (125 patients) non-dysplastic images, 47 (47 of the same patients) neoplastic videos, and 141 (82 of the same patients) non-dysplastic videos, and was enriched with subtle neoplasia cases. The benchmarking test set was evaluated by 112 endoscopists from six countries (first without CADe and, after 6 weeks, with CADe) and by 28 external international Barrett's oesophagus experts. The primary outcome was the sensitivity of Barrett's neoplasia detection by general endoscopists without CADe assistance versus with CADe assistance on the benchmarking test set. We compared sensitivity using a mixed-effects logistic regression model with conditional odds ratios (ORs; likelihood profile 95% CIs). FINDINGS: Sensitivity for neoplasia detection among endoscopists increased from 74% to 88% with CADe assistance (OR 2.04; 95% CI 1.73-2.42; p<0.0001 for images and from 67% to 79% [2.35; 1.90-2.94; p<0.0001] for video) without compromising specificity (from 89% to 90% [1.07; 0.96-1.19; p=0.20] for images and from 96% to 94% [0.94; 0.79-1.11; p=0.46] for video). In the all-comers test set, CADe detected neoplastic lesions in 95% (88-98) of images and 97% (90-99) of videos. In the benchmarking test set, the CADe system was superior to endoscopists in detecting neoplasia (90% vs 74% [OR 3.75; 95% CI 1.93-8.05; p=0.0002] for images and 91% vs 67% [11.68; 3.85-47.53; p<0.0001] for video) and non-inferior to Barrett's oesophagus experts (90% vs 87% [OR 1.74; 95% CI 0.83-3.65] for images and 91% vs 86% [2.94; 0.99-11.40] for video). INTERPRETATION: CADe outperformed endoscopists in detecting Barrett's oesophagus neoplasia and, when used as an assistive tool, improved their detection rate. CADe detected virtually all neoplasia in a test set of consecutive cases. FUNDING: Olympus.
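    As a quick sanity check on how the reported odds ratios relate to the sensitivity changes (the abstract's ORs come from a mixed-effects model conditioned on case and reader, so a simple pooled calculation will not match them exactly):

        def odds_ratio(p_without, p_with):
            odds = lambda p: p / (1 - p)
            return odds(p_with) / odds(p_without)

        # Image sensitivity rose from 74% to 88% with CADe assistance:
        print(round(odds_ratio(0.74, 0.88), 2))  # ~2.58 pooled vs the conditional 2.04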