169 research outputs found
YONA: You Only Need One Adjacent Reference-frame for Accurate and Fast Video Polyp Detection
Accurate polyp detection is essential for assisting clinical rectal cancer
diagnoses. Colonoscopy videos contain richer information than still images,
making them a valuable resource for deep learning methods. Great efforts have
been made to conduct video polyp detection through multi-frame temporal/spatial
aggregation. However, unlike common fixed-camera video, the camera-moving scene
in colonoscopy videos can cause rapid video jitters, leading to unstable
training for existing video detection models. Additionally, the concealed
nature of some polyps and the complex background environment further hinder the
performance of existing video detectors. In this paper, we propose the
\textbf{YONA} (\textbf{Y}ou \textbf{O}nly \textbf{N}eed one \textbf{A}djacent
Reference-frame) method, an efficient end-to-end training framework for video
polyp detection. YONA fully exploits the information of one previous adjacent
frame and conducts polyp detection on the current frame without multi-frame
collaborations. Specifically, for the foreground, YONA adaptively aligns the
current frame's channel activation patterns with its adjacent reference frame
according to their foreground similarity. For the background, YONA conducts
background dynamic alignment guided by inter-frame difference to eliminate the
invalid features produced by drastic spatial jitters. Moreover, YONA applies
cross-frame contrastive learning during training, leveraging the ground truth
bounding boxes to improve the model's perception of polyps and the background.
Quantitative and qualitative experiments on three public challenging benchmarks
demonstrate that our proposed YONA outperforms previous state-of-the-art
competitors by a large margin in both accuracy and speed. Comment: 11 pages, 3 figures, Accepted by MICCAI202
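The foreground alignment step can be illustrated in miniature. The sketch below is a deliberately simplified assumption, not YONA's actual module: global-average-pooled channel features, a single cosine similarity, and a linear blend (the function name and blending rule are hypothetical):

```python
import numpy as np

def align_channels(cur_feat, ref_feat):
    """Blend the current frame's channel activations toward the adjacent
    reference frame, gated by their foreground cosine similarity.
    cur_feat, ref_feat: (C, H, W) feature maps. Illustrative sketch only."""
    cur_vec = cur_feat.mean(axis=(1, 2))   # global average pool -> (C,)
    ref_vec = ref_feat.mean(axis=(1, 2))
    sim = float(cur_vec @ ref_vec /
                (np.linalg.norm(cur_vec) * np.linalg.norm(ref_vec) + 1e-8))
    # Similar frames borrow more from the reference, dissimilar ones less.
    aligned = (1 - sim) * cur_feat + sim * ref_feat
    return sim, aligned

rng = np.random.default_rng(0)
cur = rng.random((8, 4, 4)).astype(np.float32)
sim, out = align_channels(cur, cur)   # identical frames: similarity ~ 1
```

In the paper the alignment is learned and channel-wise; here one scalar similarity gates the whole blend purely to convey the idea.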
An Efficient Approach for Polyps Detection in Endoscopic Videos Based on Faster R-CNN
Polyps have long been considered one of the major precursors of colorectal
cancer, a fatal disease around the world; thus, early detection and
recognition of polyps play a crucial role in clinical routines. Accurate
diagnosis of polyps through endoscopes operated by physicians is a
challenging task, not only due to the varying expertise of physicians but
also due to the inherent nature of endoscopic inspections. To facilitate this process,
computer-aided techniques, ranging from conventional image processing to
novel machine learning-enhanced approaches, have been designed for
polyp detection in endoscopic videos or images. Among all proposed algorithms,
deep learning-based methods take the lead across multiple metrics in
evaluations of algorithmic performance. In this work, a highly effective model,
namely the faster region-based convolutional neural network (Faster R-CNN) is
implemented for polyp detection. In comparison with the reported results of the
state-of-the-art approaches on polyps detection, extensive experiments
demonstrate that the Faster R-CNN achieves highly competitive results, and that it is an
efficient approach for clinical practice. Comment: 6 pages, 10 figures, 2018 International Conference on Pattern
Recognition
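The comparisons against reported state-of-the-art results rest on standard detection metrics. As a minimal sketch of how such metrics are computed, here is greedy IoU matching at the common 0.5 threshold (the function names and matching convention are our illustration, not this paper's evaluation code):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(preds, gts, thr=0.5):
    """Greedily match predicted boxes to unmatched ground-truth boxes."""
    matched, tp = set(), 0
    for p in preds:
        for i, g in enumerate(gts):
            if i not in matched and iou(p, g) >= thr:
                matched.add(i)
                tp += 1
                break
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(gts) if gts else 0.0
    return precision, recall

# One of two predictions overlaps the single ground-truth polyp:
p, r = precision_recall([(0, 0, 10, 10), (50, 50, 60, 60)], [(1, 1, 11, 11)])
```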
Colo-SCRL: Self-Supervised Contrastive Representation Learning for Colonoscopic Video Retrieval
Colonoscopic video retrieval, which is a critical part of polyp treatment,
has great clinical significance for the prevention and treatment of colorectal
cancer. However, retrieval models trained on action recognition datasets
usually produce unsatisfactory retrieval results on colonoscopic datasets due
to the large domain gap between them. To seek a solution to this problem, we
construct a large-scale colonoscopic dataset named Colo-Pair for medical
practice. Based on this dataset, a simple yet effective training method called
Colo-SCRL is proposed for more robust representation learning. It aims to
refine general knowledge from colonoscopies through masked autoencoder-based
reconstruction and momentum contrast to improve retrieval performance. To the
best of our knowledge, this is the first attempt to employ the contrastive
learning paradigm for medical video retrieval. Empirical results show that our
method significantly outperforms current state-of-the-art methods in the
colonoscopic video retrieval task. Comment: Accepted by ICME 202
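The momentum-contrast component mentioned above maintains a slowly moving key encoder. A minimal sketch of the MoCo-style momentum update, assuming plain NumPy arrays stand in for encoder weights (an illustration of the general technique, not Colo-SCRL's code):

```python
import numpy as np

def momentum_update(query_params, key_params, m=0.999):
    """MoCo-style update: the key encoder trails the query encoder as an
    exponential moving average of its weights, giving stable contrastive
    targets while only the query encoder receives gradients."""
    return [m * k + (1 - m) * q for q, k in zip(query_params, key_params)]

q = [np.ones(3)]    # query-encoder weights after a gradient step
k = [np.zeros(3)]   # key-encoder weights
k = momentum_update(q, k, m=0.9)   # keys move 10% toward the queries
```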
A Study on Deep Learning Methods for Improving Clinical Skills: Applications to Colonoscopy Diagnosis and Robotic Surgical Skill Assessment
Thesis (Ph.D.) -- Seoul National University Graduate School, College of Engineering, August 2020. Advisor: Hee-chan Kim. This thesis presents deep learning-based methods for improving the performance of clinicians. Novel methods were applied to the following two clinical cases and the results were evaluated.
In the first study, a deep learning-based polyp classification algorithm was developed to improve the clinical performance of endoscopists during colonoscopy diagnosis. Colonoscopy is the main method for diagnosing adenomatous polyps, which can develop into colorectal cancer, and hyperplastic polyps. The classification algorithm was developed using a convolutional neural network (CNN) trained on colorectal polyp images taken with narrow-band imaging colonoscopy. The proposed method is built around automated machine learning (AutoML), which searches for the optimal CNN architecture for colorectal polyp image classification and trains the weights of that architecture. In addition, the gradient-weighted class activation mapping technique was used to overlay the probabilistic basis of the prediction on the polyp location, to aid the endoscopists visually. To verify the improvement in diagnostic performance, the efficacy of endoscopists with varying proficiency levels was compared with and without the aid of the proposed polyp classification algorithm. The results confirmed that, on average, diagnostic accuracy was significantly improved and diagnosis time was shortened in all proficiency groups.
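The gradient-weighted class activation mapping step can be sketched as follows, assuming the (C, H, W) feature maps and gradients of the last convolutional layer are already in hand (a simplified NumPy illustration, not the thesis code):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Gradient-weighted class activation map: weight each feature map by
    its spatially averaged gradient, sum over channels, clip negatives
    (ReLU), and normalise to [0, 1] for overlay on the input image."""
    weights = gradients.mean(axis=(1, 2))                          # (C,)
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0)
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy example: only the first channel's gradients are non-zero.
fm = np.ones((2, 3, 3))
grads = np.stack([np.ones((3, 3)), np.zeros((3, 3))])
cam = grad_cam(fm, grads)
```

The resulting heatmap is what would be overlaid on the polyp location to show endoscopists the basis of the prediction.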
In the second study, a surgical instrument tracking algorithm for robotic surgery video was developed, and a model for quantitatively evaluating a surgeon's skill based on the acquired motion information of the surgical instruments was proposed. The movement of surgical instruments is the main component of surgical skill evaluation. Therefore, the focus of this study was to develop an automatic surgical instrument tracking algorithm and to overcome the limitations of previous methods. An instance segmentation framework was developed to solve the instrument occlusion issue, and a tracking framework composed of a tracker and a re-identification algorithm was developed to maintain the identity of the surgical instruments being tracked in the video. In addition, algorithms for detecting the instrument tip position and the arm-indicator were developed to capture the movement of devices specific to robotic surgery video. The performance of the proposed method was evaluated by measuring the difference between the predicted tip position and the ground-truth position of the instruments using root mean square error, area under the curve, and Pearson's correlation analysis. Furthermore, motion metrics were calculated from the movement of the surgical instruments, and a machine learning-based robotic surgical skill evaluation model was developed from these metrics. These models were used to evaluate clinicians, and the results were consistent with those of the Objective Structured Assessment of Technical Skill (OSATS) and the Global Evaluative Assessment of Robotic Surgery (GEARS) evaluation methods.
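The tip-position evaluation rests on root mean square error and Pearson's correlation, which can be sketched in plain Python (1D per-axis coordinate sequences are assumed for simplicity; these helpers are our illustration):

```python
import math

def rmse(pred, truth):
    """Root mean square error between predicted and ground-truth positions."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(pred))

def pearson(x, y):
    """Pearson correlation coefficient between two coordinate sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

pred = [1.0, 2.0, 3.0]    # predicted tip x-coordinates per frame
truth = [1.0, 2.0, 4.0]   # ground-truth x-coordinates
```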
In this study, deep learning technology was applied to colorectal polyp images for polyp classification, and to robotic surgery videos for surgical instrument tracking. The improvement in clinical performance with the aid of these methods was evaluated and verified, and the proposed methods are expected to support the diagnostic and assessment methods currently used in the clinic.
Chapter 1 General Introduction 1
1.1 Deep Learning for Medical Image Analysis 1
1.2 Deep Learning for Colonoscopic Diagnosis 2
1.3 Deep Learning for Robotic Surgical Skill Assessment 3
1.4 Thesis Objectives 5
Chapter 2 Optical Diagnosis of Colorectal Polyps using Deep Learning with Visual Explanations 7
2.1 Introduction 7
2.1.1 Background 7
2.1.2 Needs 8
2.1.3 Related Work 9
2.2 Methods 11
2.2.1 Study Design 11
2.2.2 Dataset 14
2.2.3 Preprocessing 17
2.2.4 Convolutional Neural Networks (CNN) 21
2.2.4.1 Standard CNN 21
2.2.4.2 Search for CNN Architecture 22
2.2.4.3 Searched CNN Training 23
2.2.4.4 Visual Explanation 24
2.2.5 Evaluation of CNN and Endoscopist Performances 25
2.3 Experiments and Results 27
2.3.1 CNN Performance 27
2.3.2 Results of Visual Explanation 31
2.3.3 Endoscopist with CNN Performance 33
2.4 Discussion 45
2.4.1 Research Significance 45
2.4.2 Limitations 47
2.5 Conclusion 49
Chapter 3 Surgical Skill Assessment during Robotic Surgery by Deep Learning-based Surgical Instrument Tracking 50
3.1 Introduction 50
3.1.1 Background 50
3.1.2 Needs 51
3.1.3 Related Work 52
3.2 Methods 56
3.2.1 Study Design 56
3.2.2 Dataset 59
3.2.3 Instance Segmentation Framework 63
3.2.4 Tracking Framework 66
3.2.4.1 Tracker 66
3.2.4.2 Re-identification 68
3.2.5 Surgical Instrument Tip Detection 69
3.2.6 Arm-Indicator Recognition 71
3.2.7 Surgical Skill Prediction Model 71
3.3 Experiments and Results 78
3.3.1 Performance of Instance Segmentation Framework 78
3.3.2 Performance of Tracking Framework 82
3.3.3 Evaluation of Surgical Instruments Trajectory 83
3.3.4 Evaluation of Surgical Skill Prediction Model 86
3.4 Discussion 90
3.4.1 Research Significance 90
3.4.2 Limitations 92
3.5 Conclusion 96
Chapter 4 Summary and Future Works 97
4.1 Thesis Summary 97
4.2 Limitations and Future Works 98
Bibliography 100
Abstract in Korean 116
Acknowledgement 119
Endoscopic Polyp Segmentation Using a Hybrid 2D/3D CNN
Colonoscopy is the gold standard for early diagnosis and pre-emptive treatment of colorectal cancer by detecting and removing colonic polyps. Deep learning approaches to polyp detection have shown potential for enhancing polyp detection rates. However, the majority of these systems are developed and evaluated on static images from colonoscopies, whilst applied treatment is performed on a real-time video feed. Non-curated video data includes a high proportion of low-quality frames in comparison to selected images but also embeds temporal information that can be used for more stable predictions. To exploit this, a hybrid 2D/3D convolutional neural network architecture is presented. The network is used to improve polyp detection by encompassing spatial and temporal correlation of the predictions while preserving real-time detections. Extensive experiments show that the hybrid method outperforms a 2D baseline. The proposed architecture is validated on videos from 46 patients. The results show that real-world clinical implementations of automated polyp detection can benefit from the hybrid algorithm.
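One simple way to see why temporal context stabilises per-frame predictions is exponential smoothing of detection confidences. The hybrid network learns far richer spatio-temporal features; the sketch below is our own illustration of the stabilising effect, not the paper's method:

```python
def smooth_scores(frame_scores, alpha=0.6):
    """Exponential moving average over per-frame detection confidences.
    Higher alpha weights the past more, damping single-frame flicker."""
    smoothed, s = [], None
    for x in frame_scores:
        s = x if s is None else alpha * s + (1 - alpha) * x
        smoothed.append(s)
    return smoothed

# A single dropped frame (score 0.0) no longer zeroes the detection:
scores = smooth_scores([1.0, 0.0, 1.0])
```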
Enhancing endoscopic navigation and polyp detection using artificial intelligence
Colorectal cancer (CRC) is one of the most common and deadly forms of cancer. It has a very high mortality rate if the disease advances to late stages; however, early diagnosis and treatment can be curative, and are hence essential to enhancing disease management. Colonoscopy is considered the gold standard for CRC screening and early therapeutic treatment. The effectiveness of colonoscopy is highly dependent on the operator's skill, as a high level of hand-eye coordination is required to control the endoscope and fully examine the colon wall. Because of this, detection rates can vary between gastroenterologists, and technologies have been proposed to assist disease detection and standardise detection rates. This thesis focuses on developing artificial intelligence algorithms to assist gastroenterologists during colonoscopy, with the potential to ensure a baseline standard of quality in CRC screening. To achieve such assistance, the technical contributions develop deep learning methods and architectures for automated endoscopic image analysis, addressing both the detection of lesions in the endoscopic image and the 3D mapping of the endoluminal environment. The proposed detection models can run in real time and assist visualisation of different polyp types. Meanwhile, the 3D reconstruction and mapping models developed are the basis for ensuring that the entire colon has been examined appropriately and for supporting quantitative measurement of polyp sizes from the image during a procedure. Results and validation studies presented within the thesis demonstrate how the developed algorithms perform both on general scenes and on clinical data. The feasibility of clinical translation is demonstrated for all of the models on endoscopic data from human participants during CRC screening examinations.
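Quantitative polyp-size measurement from a 3D reconstruction reduces, in the simplest case, to the pinhole camera model: real size equals pixel extent times depth divided by focal length in pixels. The helper below is hypothetical and assumes per-pixel depth and calibrated focal length are available:

```python
def polyp_size_mm(pixel_extent, depth_mm, focal_px):
    """Back-project an image-plane measurement to metric size using the
    pinhole camera model: size = pixels * depth / focal_length_in_pixels.
    Assumes the polyp face is roughly fronto-parallel to the camera."""
    return pixel_extent * depth_mm / focal_px

# A polyp spanning 100 px, 30 mm from a camera with a 600 px focal length:
size = polyp_size_mm(100, 30.0, 600.0)
```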
An objective validation of polyp and instrument segmentation methods in colonoscopy through Medico 2020 polyp segmentation and MedAI 2021 transparency challenges
Automatic analysis of colonoscopy images has been an active field of research
motivated by the importance of early detection of precancerous polyps. However,
detecting polyps during the live examination can be challenging due to various
factors such as variation of skills and experience among the endoscopists, lack
of attentiveness, and fatigue leading to a high polyp miss-rate. Deep learning
has emerged as a promising solution to this challenge as it can assist
endoscopists in detecting and classifying overlooked polyps and abnormalities
in real time. In addition to the algorithm's accuracy, transparency and
interpretability are crucial to explaining the whys and hows of the algorithm's
prediction. Further, most algorithms are developed on private data, as closed
source, or with proprietary software, and the methods lack reproducibility. Therefore,
to promote the development of efficient and transparent methods, we have
organized the "Medico automatic polyp segmentation (Medico 2020)" and "MedAI:
Transparency in Medical Image Segmentation (MedAI 2021)" competitions. We
present a comprehensive summary and analyze each contribution, highlight the
strengths of the best-performing methods, and discuss the possibility of
translating such methods into the clinic. For the transparency
task, a multi-disciplinary team, including expert gastroenterologists, assessed
each submission and evaluated the teams based on open-source practices, failure
case analysis, ablation studies, and the usability and understandability of their evaluations,
to gain a deeper understanding of the models' credibility for clinical
deployment. Through the comprehensive analysis of the challenge, we not only
highlight the advancements in polyp and surgical instrument segmentation but
also encourage qualitative evaluation for building more transparent and
understandable AI-based colonoscopy systems.
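Segmentation challenges such as Medico 2020 are typically scored with overlap metrics like the Dice coefficient and intersection-over-union, which for binary masks can be sketched as follows (a standard textbook computation, with our own function name):

```python
def dice_and_iou(pred, truth):
    """Dice coefficient and IoU for binary masks given as flat 0/1 lists.
    Both metrics reward overlap; Dice weights the intersection twice."""
    inter = sum(p & t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

# Half-overlapping toy masks:
d, i = dice_and_iou([1, 1, 0, 0], [1, 0, 1, 0])
```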
Spatio-temporal classification for polyp diagnosis
Colonoscopy remains the gold standard investigation for colorectal cancer screening as it offers the opportunity to both detect and resect pre-cancerous polyps. Computer-aided polyp characterisation can determine which polyps need polypectomy, and recent deep learning-based approaches have shown promising results as clinical decision support tools. Yet polyp appearance during a procedure can vary, making automatic predictions unstable. In this paper, we investigate the use of spatio-temporal information to improve the performance of lesion classification as adenoma or non-adenoma. Two methods are implemented, showing an increase in performance and robustness during extensive experiments on both internal and openly available benchmark datasets.
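A simple baseline for exploiting spatio-temporal information in classification is a sliding majority vote over per-frame predictions. The methods in the paper are more sophisticated, but this sketch (with a hypothetical `temporal_vote` helper) conveys the stabilisation idea:

```python
from collections import Counter

def temporal_vote(frame_labels, window=3):
    """Stabilise per-frame polyp classifications with a sliding majority
    vote over the last `window` frames, suppressing isolated flips."""
    out = []
    for i in range(len(frame_labels)):
        recent = frame_labels[max(0, i - window + 1): i + 1]
        out.append(Counter(recent).most_common(1)[0][0])
    return out

# A single "other" flip in an adenoma run is voted away:
labels = ["adenoma", "adenoma", "other", "adenoma", "other"]
voted = temporal_vote(labels)
```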
- …