PICCOLO White-Light and Narrow-Band Imaging Colonoscopic Dataset: A Performance Comparative of Models and Datasets
Colorectal cancer is one of the world's leading causes of death. Fortunately, an early diagnosis allows for effective treatment, increasing the survival rate. Deep learning techniques have shown their utility for increasing the adenoma detection rate at colonoscopy, but a dataset is usually required so the model can automatically learn features that characterize the polyps. In this work, we present the PICCOLO dataset, which comprises 3433 manually annotated images (2131 white-light images and 1302 narrow-band images) originating from 76 lesions in 40 patients, distributed into training (2203), validation (897) and test (333) sets while assuring patient independence between sets. Furthermore, clinical metadata are also provided for each lesion. Four different models, obtained by combining two backbones and two encoder-decoder architectures, are trained with the PICCOLO dataset and two other publicly available datasets for comparison. Results are provided for the test set of each dataset. Models trained with the PICCOLO dataset have a better generalization capacity, as they perform more uniformly across the test sets of all datasets, rather than obtaining the best results only on their own test set. This dataset is available at the website of the Basque Biobank, so it is expected that it will contribute to the further development of deep learning methods for polyp detection, localisation and classification, which would eventually result in a better and earlier diagnosis of colorectal cancer, hence improving patient outcomes. This work was partially supported by the PICCOLO project, which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 732111. Furthermore, this publication has also been partially supported by GR18199 from Consejería de Economía, Ciencia y Agenda Digital of Junta de Extremadura (co-funded by the European Regional Development Fund (ERDF), "A way to make Europe"/"Investing in your future"). This work has been performed by the ICTS "NANBIOSIS" at the Jesús Usón Minimally Invasive Surgery Centre.
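The patient-independent split described above (no patient contributing images to more than one of the training, validation and test sets) can be sketched as follows. This is a minimal illustration, not code from the PICCOLO release; the record layout (`patient_id` keys) and function name are assumptions for the example.

```python
import random
from collections import defaultdict

def patient_independent_split(lesions, ratios=(0.7, 0.2, 0.1), seed=42):
    """Split lesion records into train/val/test so that all lesions from
    one patient land in the same set (no patient appears in two sets)."""
    by_patient = defaultdict(list)
    for lesion in lesions:
        by_patient[lesion["patient_id"]].append(lesion)
    patients = sorted(by_patient)
    random.Random(seed).shuffle(patients)
    n = len(patients)
    n_train = round(n * ratios[0])
    n_val = round(n * ratios[1])
    return {
        "train": [l for p in patients[:n_train] for l in by_patient[p]],
        "val":   [l for p in patients[n_train:n_train + n_val] for l in by_patient[p]],
        "test":  [l for p in patients[n_train + n_val:] for l in by_patient[p]],
    }
```

Splitting by patient rather than by image is what prevents near-duplicate frames of the same lesion from leaking across sets and inflating test scores.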
Automatic Segmentation of Polyps in Colonoscopic Narrow-Band Imaging Data
Colorectal cancer is the third most common type of cancer worldwide. However, this disease can be prevented by detection and removal of precursor adenomatous polyps during optical colonoscopy (OC). During OC, the endoscopist looks for colon polyps. While hyperplastic polyps are benign lesions, adenomatous polyps are likely to become cancerous. Hence, it is common practice to remove all identified polyps and send them for subsequent histological analysis. But removal of hyperplastic polyps poses unnecessary risk to patients and incurs unnecessary costs for histological analysis. In this paper, we develop the first part of a novel optical biopsy application based on narrow-band imaging (NBI). A barrier to an automatic system is that polyp classification algorithms require manual segmentations of the polyps, so we automatically segment polyps in colonoscopic NBI data. We propose an algorithm, Shape-UCM, which is an extension of the gPb-OWT-UCM algorithm, a state-of-the-art algorithm for boundary detection and segmentation. Shape-UCM solves the intrinsic scale-selection problem of gPb-OWT-UCM by including prior knowledge about the shape of the polyps. Shape-UCM outperforms previous methods with a specificity of 92%, a sensitivity of 71%, and an accuracy of 88% for automatic segmentation of a test set of 87 images.
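The three figures reported above are standard ratios over true/false positive and negative counts; as a quick reference, they can be computed as:

```python
def segmentation_metrics(tp, tn, fp, fn):
    """Classification metrics as reported for Shape-UCM:
    specificity (true negative rate), sensitivity (true positive
    rate, i.e. recall), and overall accuracy."""
    specificity = tn / (tn + fp)
    sensitivity = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return specificity, sensitivity, accuracy
```

For example, counts of tp=71, tn=92, fp=8, fn=29 (illustrative numbers, not the paper's confusion matrix) give a specificity of 0.92 and a sensitivity of 0.71.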
Assessing generalisability of deep learning-based polyp detection and segmentation methods through a computer vision challenge
Polyps are well-known cancer precursors identified by colonoscopy. However, variability in their size, appearance, and location makes the detection of polyps challenging. Moreover, colonoscopy surveillance and removal of polyps are highly operator-dependent procedures and occur in a highly complex organ topology. There exists a high missed-detection rate and incomplete removal of colonic polyps. To assist in clinical procedures and reduce miss rates, automated methods for detecting and segmenting polyps using machine learning have been developed in recent years. However, the major drawback of most of these methods is their limited ability to generalise to out-of-sample unseen datasets from different centres, populations, modalities, and acquisition systems. To test this hypothesis rigorously, we, together with expert gastroenterologists, curated a multi-centre and multi-population dataset acquired from six different colonoscopy systems and challenged computational expert teams to develop robust automated detection and segmentation methods in a crowd-sourced endoscopic computer vision challenge. This work puts forward rigorous generalisability tests and assesses the usability of the devised deep learning methods in dynamic and actual clinical colonoscopy procedures. We analyse the results of the four top-performing teams for the detection task and the five top-performing teams for the segmentation task. Our analyses demonstrate that the top-ranking teams concentrated mainly on accuracy over the real-time performance required for clinical applicability. We further dissect the devised methods and provide an experiment-based hypothesis that reveals the need for improved generalisability to tackle the diversity present in multi-centre datasets and routine clinical procedures.
Deep Learning Methods for Improving Clinical Performance: Application to Colonoscopy Diagnosis and Robotic Surgical Skill Assessment
Thesis (Ph.D.) -- Seoul National University Graduate School, August 2020. This paper presents deep learning-based methods for improving the performance of clinicians. Novel methods were applied to the following two clinical cases and the results were evaluated.
In the first study, a deep learning-based polyp classification algorithm was developed to improve the clinical performance of endoscopists during colonoscopy diagnosis. Colonoscopy is the main method for diagnosing adenomatous polyps, which can develop into colorectal cancer, and hyperplastic polyps. The classification algorithm was developed using a convolutional neural network (CNN) trained with colorectal polyp images taken by narrow-band imaging colonoscopy. The proposed method is built around automatic machine learning (AutoML), which searches for the optimal CNN architecture for colorectal polyp image classification and trains the weights of that architecture. In addition, the gradient-weighted class activation mapping technique was used to overlay the probabilistic basis of the prediction result on the polyp location to aid the endoscopists visually. To verify the improvement in diagnostic performance, the efficacy of endoscopists with varying proficiency levels was compared with and without the aid of the proposed polyp classification algorithm. The results confirmed that, on average, diagnostic accuracy was improved and diagnosis time was shortened significantly in all proficiency groups.
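The gradient-weighted class activation mapping (Grad-CAM) step mentioned above reduces, for one convolutional layer, to weighting each feature map by its spatially averaged gradient and applying a ReLU. A minimal pure-Python sketch of that arithmetic (nested lists stand in for the tensors a deep learning framework would supply; this is not the thesis's implementation):

```python
def grad_cam(feature_maps, gradients):
    """Grad-CAM heat map from one conv layer's activations and the
    gradients of the class score w.r.t. those activations.
    feature_maps, gradients: [K][H][W] nested lists (K channels)."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    # Channel weights: global-average-pooled gradients (the alpha_k terms).
    weights = [sum(sum(row) for row in g) / (h * w) for g in gradients]
    # Weighted sum of feature maps per pixel, then ReLU.
    return [[max(0.0, sum(weights[k] * feature_maps[k][i][j]
                          for k in range(len(feature_maps))))
             for j in range(w)] for i in range(h)]
```

The resulting H-by-W map is then upsampled and overlaid on the input image, which is how the probabilistic basis of a prediction can be shown at the polyp location.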
In the second study, a surgical instrument tracking algorithm for robotic surgery video was developed, and a model for quantitatively evaluating a surgeon's surgical skill based on the acquired motion information of the surgical instruments was proposed. The movement of surgical instruments is the main component of surgical skill evaluation. Therefore, the focus of this study was to develop an automatic surgical instrument tracking algorithm and to overcome the limitations of previous methods. An instance segmentation framework was developed to solve the instrument occlusion issue, and a tracking framework composed of a tracker and a re-identification algorithm was developed to maintain the identity of the surgical instruments being tracked in the video. In addition, algorithms for detecting the instrument tip position and the arm-indicator were developed to acquire the movement of devices specific to robotic surgery video. The performance of the proposed method was evaluated by measuring the difference between the predicted tip position and the ground-truth position of the instruments using root mean square error, area under the curve, and Pearson's correlation analysis. Furthermore, motion metrics were calculated from the movement of the surgical instruments, and a machine learning-based robotic surgical skill evaluation model was developed from these metrics. The models were used to evaluate clinicians, and their results were similar to those of the Objective Structured Assessment of Technical Skill (OSATS) and the Global Evaluative Assessment of Robotic Surgery (GEARS) evaluation methods.
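Two of the evaluation statistics named above are standard; minimal reference implementations of RMSE between predicted and ground-truth tip coordinates and of Pearson's correlation could look like this (illustrative code, not the thesis's evaluation scripts):

```python
from math import sqrt

def rmse(pred, truth):
    """Root mean square error between predicted and ground-truth values."""
    return sqrt(sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(pred))

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

RMSE captures the absolute positional error of the tip detector, while Pearson's correlation checks whether the predicted trajectory moves in step with the ground truth even when a constant offset remains.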
In this study, deep learning technology was applied to colorectal polyp images for polyp classification and to robotic surgery videos for surgical instrument tracking. The improvement in clinical performance with the aid of these methods was evaluated and verified.
Chapter 1 General Introduction
1.1 Deep Learning for Medical Image Analysis
1.2 Deep Learning for Colonoscopic Diagnosis
1.3 Deep Learning for Robotic Surgical Skill Assessment
1.4 Thesis Objectives
Chapter 2 Optical Diagnosis of Colorectal Polyps using Deep Learning with Visual Explanations
2.1 Introduction
2.1.1 Background
2.1.2 Needs
2.1.3 Related Work
2.2 Methods
2.2.1 Study Design
2.2.2 Dataset
2.2.3 Preprocessing
2.2.4 Convolutional Neural Networks (CNN)
2.2.4.1 Standard CNN
2.2.4.2 Search for CNN Architecture
2.2.4.3 Searched CNN Training
2.2.4.4 Visual Explanation
2.2.5 Evaluation of CNN and Endoscopist Performances
2.3 Experiments and Results
2.3.1 CNN Performance
2.3.2 Results of Visual Explanation
2.3.3 Endoscopist with CNN Performance
2.4 Discussion
2.4.1 Research Significance
2.4.2 Limitations
2.5 Conclusion
Chapter 3 Surgical Skill Assessment during Robotic Surgery by Deep Learning-based Surgical Instrument Tracking
3.1 Introduction
3.1.1 Background
3.1.2 Needs
3.1.3 Related Work
3.2 Methods
3.2.1 Study Design
3.2.2 Dataset
3.2.3 Instance Segmentation Framework
3.2.4 Tracking Framework
3.2.4.1 Tracker
3.2.4.2 Re-identification
3.2.5 Surgical Instrument Tip Detection
3.2.6 Arm-Indicator Recognition
3.2.7 Surgical Skill Prediction Model
3.3 Experiments and Results
3.3.1 Performance of Instance Segmentation Framework
3.3.2 Performance of Tracking Framework
3.3.3 Evaluation of Surgical Instruments Trajectory
3.3.4 Evaluation of Surgical Skill Prediction Model
3.4 Discussion
3.4.1 Research Significance
3.4.2 Limitations
3.5 Conclusion
Chapter 4 Summary and Future Works
4.1 Thesis Summary
4.2 Limitations and Future Works
Bibliography
Abstract in Korean
Acknowledgement
Deep Learning for Improved Polyp Detection from Synthetic Narrow-Band Imaging
To cope with the growing prevalence of colorectal cancer (CRC), screening programs for polyp detection and removal have proven their usefulness. Colonoscopy is considered the best-performing procedure for CRC screening. To ease the examination, deep learning-based methods for automatic polyp detection have been developed for conventional white-light imaging (WLI). Compared with WLI, narrow-band imaging (NBI) can improve polyp classification during colonoscopy but requires special equipment. We propose a CycleGAN-based framework to convert images captured with regular WLI to synthetic NBI (SNBI) as a pre-processing method for improving object detection on WLI when NBI is unavailable. This paper first shows that better results for polyp detection can be achieved on NBI compared to a relatively similar dataset of WLI. Secondly, experimental results demonstrate that our proposed modality translation can achieve improved polyp detection on SNBI images generated from WLI compared to the original WLI. This is because our WLI-to-SNBI translation model can enhance the observation of polyp surface patterns in the generated SNBI images.
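The cycle-consistency term at the heart of the CycleGAN objective penalises an image that does not survive a round trip through the two generators. A toy sketch of that term, with plain functions over flat pixel lists standing in for the convolutional generators (an illustration of the loss, not the paper's model):

```python
def cycle_consistency_loss(g_wli_to_nbi, g_nbi_to_wli, wli_batch, nbi_batch):
    """L1 cycle-consistency term of the CycleGAN objective: an image
    translated to the other modality and back should match the original.
    Generators here are plain functions over flat pixel lists."""
    def l1(a, b):
        return sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    loss = 0.0
    for img in wli_batch:   # WLI -> synthetic NBI -> reconstructed WLI
        loss += l1(g_nbi_to_wli(g_wli_to_nbi(img)), img)
    for img in nbi_batch:   # NBI -> synthetic WLI -> reconstructed NBI
        loss += l1(g_wli_to_nbi(g_nbi_to_wli(img)), img)
    return loss / (len(wli_batch) + len(nbi_batch))
```

This term is what lets CycleGAN learn from unpaired WLI and NBI images: without it, a generator could map every WLI frame to an arbitrary NBI-looking output with no relation to the input anatomy.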
Building up the Future of Colonoscopy - A Synergy between Clinicians and Computer Scientists
Recent advances in endoscopic technology have generated an increasing interest in strengthening the collaboration between clinicians and computer scientists to develop intelligent systems that can provide additional information to clinicians in the different stages of an intervention. The objective of this chapter is to identify clinical drawbacks of colonoscopy in order to define potential areas of collaboration. Once those areas are defined, we present the challenges that colonoscopy images pose for computational methods to produce meaningful output, including challenges related to image formation and acquisition, as they are proven to have an impact on the performance of an intelligent system. Finally, we also propose how to define validation frameworks to assess the performance of a given method, placing special emphasis on how databases should be created and annotated and on which metrics should be used to evaluate systems correctly.
Artificial intelligence and computer-aided diagnosis in colonoscopy: current evidence and future directions
Computer-aided diagnosis offers a promising solution to reduce variation in colonoscopy performance. Pooled miss rates for polyps are as high as 22%, and associated interval colorectal cancers after colonoscopy are of concern. Optical biopsy, whereby in-vivo classification of polyps based on enhanced imaging replaces histopathology, has not been incorporated into routine practice because it is limited by interobserver variability and generally only meets accepted standards in expert settings. Real-time decision-support software has been developed to detect and characterise polyps, and also to offer feedback on the technical quality of inspection. Some of the current algorithms, particularly with recent advances in artificial intelligence techniques, match human expert performance for optical biopsy. In this Review, we summarise the evidence for clinical applications of computer-aided diagnosis and artificial intelligence in colonoscopy.