45 research outputs found

    Study to integrate CNN inside a WCE to realize a screening tool

    International audience. Screening is a method to improve the early detection of colorectal cancer. Currently, screening is based on an immunochemical test that looks for blood in faecal samples, but imaging is the best modality to detect the marker of colorectal cancer: polyps. In 2003, Wireless Capsule Endoscopy (WCE) was introduced and opened a way to integrate automatic image processing to realize a screening tool. In parallel, Convolutional Neural Networks (CNNs) have demonstrated their high capacity to detect polyps in many scientific studies, but remain difficult to integrate into such a device. In this article we present our work to integrate a CNN, or image processing based on a CNN, inside a WCE to realize a powerful screening tool.

    Artificial intelligence in colorectal cancer: a review

    The study objective: to examine the use of artificial intelligence (AI) in the diagnosis, treatment, and prognosis of colorectal cancer (CRC) and to discuss the future potential of AI in CRC. Material and Methods. The Web of Science, Scopus, PubMed, Medline, and eLIBRARY databases were used to search for publications. More than 100 sources on the application of AI to the diagnosis, treatment, and prognosis of CRC were identified, and data from 83 articles were incorporated in the review. Results. The review explores the use of AI in medicine, focusing on its applications in colorectal cancer. It discusses the stages of AI development for CRC, including molecular understanding, image-based diagnosis, drug design, and individualized treatment. The benefits of AI in medical image analysis are highlighted, improving diagnostic accuracy and inspection quality. Challenges in AI development are addressed, such as data standardization and the interpretability of machine learning algorithms. The potential of AI in treatment decision support, precision medicine, and prognosis prediction is discussed, emphasizing the role of AI in selecting optimal treatments and improving surgical precision. Ethical and regulatory considerations in integrating AI are mentioned, including patient trust, data security, and liability in AI-assisted surgeries. The review emphasizes the importance of an AI standard system, dataset standardization, and integrating clinical knowledge into AI algorithms.
Overall, the article provides an overview of the current research on AI in CRC diagnosis, treatment, and prognosis, discussing its benefits, challenges, and future prospects in improving medical outcomes.

    Deep learning to find colorectal polyps in colonoscopy: A systematic literature review

    Colorectal cancer has a high incidence worldwide, but its early detection significantly increases the survival rate. Colonoscopy is the gold-standard procedure for the diagnosis and removal of colorectal lesions with the potential to evolve into cancer, and computer-aided detection systems can help gastroenterologists increase the adenoma detection rate, one of the main indicators of colonoscopy quality and a predictor of colorectal cancer prevention. The recent success of deep learning approaches in computer vision has also reached this field and has boosted the number of proposed methods for polyp detection, localization and segmentation. Through a systematic search, 35 works were retrieved. The current systematic review provides an analysis of these methods, stating advantages and disadvantages of the different categories used; comments on seven publicly available datasets of colonoscopy images; analyses the metrics used for reporting; and identifies future challenges and recommendations. Convolutional neural networks are the most used architecture, together with an important presence of data augmentation strategies, mainly based on image transformations and the use of patches. End-to-end methods are preferred over hybrid methods, with a rising tendency. As for detection and localization tasks, the most used metric for reporting is recall, while Intersection over Union is widely used in segmentation. One of the major concerns is the difficulty of fair comparison and reproducibility of methods. Despite the organization of challenges, there is still a need for a common validation framework based on a large, annotated and publicly available database, which also includes the most convenient metrics to report results.
    Finally, it is also important to highlight that future efforts should focus on proving the clinical value of deep learning-based methods by increasing the adenoma detection rate. This work was partially supported by the PICCOLO project, which has received funding from the European Union's Horizon 2020 Research and Innovation Programme under grant agreement No. 732111. The sole responsibility of this publication lies with the author. The European Union is not responsible for any use that may be made of the information contained therein. The authors would also like to thank Dr. Federico Soria for his support on this manuscript and Dr. José Carlos Marín, from Hospital 12 de Octubre, and Dr. Ángel Calderón and Dr. Francisco Polo, from Hospital de Basurto, for the images in Fig. 4.
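
The two reporting metrics this review highlights, recall for detection and localization and Intersection over Union for segmentation, reduce to a few lines of array arithmetic. The sketch below is illustrative only, assuming simple NumPy boolean masks rather than any particular paper's evaluation code:

```python
import numpy as np

def recall(pred: np.ndarray, truth: np.ndarray) -> float:
    """Fraction of ground-truth positive pixels that were detected."""
    tp = np.logical_and(pred, truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    return tp / (tp + fn)

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over Union between predicted and ground-truth masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union

# Toy masks: the prediction covers exactly half of the ground-truth region,
# so both recall and IoU come out to 0.5.
truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True          # 4x4 ground-truth polyp region
pred = np.zeros((8, 8), dtype=bool)
pred[2:6, 2:4] = True           # predicted mask covers half of it
```

Note that a high recall with a low IoU is possible (an over-segmented blob that swallows the polyp), which is one reason the review recommends reporting both.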

    A study of deep learning techniques for improving clinical skills: application to colonoscopy diagnosis and robotic surgery skill assessment

    Doctoral dissertation, Interdisciplinary Program in Bioengineering, College of Engineering, Seoul National University, August 2020. Advisor: Hee Chan Kim.

    This paper presents deep learning-based methods for improving the performance of clinicians. Novel methods were applied to the following two clinical cases and the results were evaluated. In the first study, a deep learning-based polyp classification algorithm was developed to improve the clinical performance of endoscopists during colonoscopy diagnosis. Colonoscopy is the main method for distinguishing adenomatous polyps, which can develop into colorectal cancer, from hyperplastic polyps. The classification algorithm was developed using a convolutional neural network (CNN) trained with colorectal polyp images taken by narrow-band imaging colonoscopy. The proposed method is built around automatic machine learning (AutoML), which searches for the optimal CNN architecture for colorectal polyp image classification and trains the weights of that architecture. In addition, the gradient-weighted class activation mapping technique was used to overlay the probabilistic basis of the prediction result on the polyp location to aid the endoscopists visually. To verify the improvement in diagnostic performance, the efficacy of endoscopists with varying proficiency levels was compared with and without the aid of the proposed polyp classification algorithm. The results confirmed that, on average, diagnostic accuracy was significantly improved and diagnosis time was shortened in all proficiency groups. In the second study, a surgical instrument tracking algorithm for robotic surgery video was developed, and a model for quantitatively evaluating a surgeon's skill based on the acquired motion information of the surgical instruments was proposed. The movement of surgical instruments is the main component of surgical skill evaluation; therefore, the focus of this study was to develop an automatic surgical instrument tracking algorithm and to overcome the limitations of previous methods. An instance segmentation framework was developed to solve the instrument occlusion issue, and a tracking framework composed of a tracker and a re-identification algorithm was developed to maintain the identity of the surgical instruments being tracked in the video. In addition, algorithms for detecting the tip position of instruments and the arm-indicator were developed to capture the movement of devices specific to robotic surgery video. The performance of the proposed method was evaluated by measuring the difference between the predicted tip position and the ground-truth position of the instruments using root mean square error, area under the curve, and Pearson's correlation analysis. Furthermore, motion metrics were calculated from the movement of surgical instruments, and a machine learning-based robotic surgical skill evaluation model was developed from these metrics. The developed models were used to evaluate clinicians, and their results were similar to those of the Objective Structured Assessment of Technical Skill (OSATS) and the Global Evaluative Assessment of Robotic Surgery (GEARS) evaluation methods. In this study, deep learning technology was applied to colorectal polyp images for polyp classification and to robotic surgery videos for surgical instrument tracking, and the resulting improvement in clinical performance was evaluated and verified.
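
The gradient-weighted class activation mapping step mentioned in this abstract amounts, at its final stage, to a ReLU over a weighted sum of the last convolutional feature maps, normalised for overlay. A minimal NumPy sketch of that combination step only, with invented feature maps and channel weights standing in for real activations and gradient-derived weights:

```python
import numpy as np

def grad_cam(feature_maps: np.ndarray, channel_weights: np.ndarray) -> np.ndarray:
    """Combine feature maps (C, H, W) with per-channel weights into a class
    activation map: weighted sum over channels, ReLU, then normalisation to
    [0, 1] so the map can be overlaid on the input image as a heatmap."""
    cam = np.tensordot(channel_weights, feature_maps, axes=1)  # (H, W)
    cam = np.maximum(cam, 0.0)                                 # ReLU
    if cam.max() > 0:
        cam = cam / cam.max()                                  # scale to [0, 1]
    return cam

# Toy example: two 4x4 feature maps; the second channel is weighted negatively,
# so only the diagonal pattern of the first channel survives the ReLU.
fmaps = np.stack([np.eye(4), np.ones((4, 4))])
cam = grad_cam(fmaps, np.array([1.0, -0.5]))
```

In the real method the channel weights come from averaged gradients of the class score, and the map is upsampled to image resolution before overlay; both steps are omitted here for brevity.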

    Endoscopic Polyp Segmentation Using a Hybrid 2D/3D CNN

    Colonoscopy is the gold standard for early diagnosis and pre-emptive treatment of colorectal cancer by detecting and removing colonic polyps. Deep learning approaches to polyp detection have shown potential for enhancing polyp detection rates. However, the majority of these systems are developed and evaluated on static images from colonoscopies, whilst in practice treatment is performed on a real-time video feed. Non-curated video data include a high proportion of low-quality frames in comparison to selected images, but also embed temporal information that can be used for more stable predictions. To exploit this, a hybrid 2D/3D convolutional neural network architecture is presented. The network is used to improve polyp detection by encompassing spatial and temporal correlation of the predictions while preserving real-time detections. Extensive experiments show that the hybrid method outperforms a 2D baseline. The proposed architecture is validated on videos from 46 patients. The results show that real-world clinical implementations of automated polyp detection can benefit from the hybrid algorithm.

    Polyp detection on video colonoscopy using a hybrid 2D/3D CNN

    Colonoscopy is the gold standard for early diagnosis and pre-emptive treatment of colorectal cancer by detecting and removing colonic polyps. Deep learning approaches to polyp detection have shown potential for enhancing polyp detection rates. However, the majority of these systems are developed and evaluated on static images from colonoscopies, whilst in clinical practice the treatment is performed on a real-time video feed. Non-curated video data remain a challenge, as they contain low-quality frames when compared to still, selected images often obtained from diagnostic records. Nevertheless, such data also embed temporal information that can be exploited to increase prediction stability. A hybrid 2D/3D convolutional neural network architecture for polyp segmentation is presented in this paper. The network is used to improve polyp detection by encompassing spatial and temporal correlation of the predictions while preserving real-time detections. Extensive experiments show that the hybrid method outperforms a 2D baseline. The proposed architecture is validated on videos from 46 patients and on the publicly available SUN polyp database. Higher performance and increased generalisability indicate that real-world clinical implementations of automated polyp detection can benefit from the hybrid algorithm and the inclusion of temporal information.
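
The core intuition behind exploiting temporal correlation, as these two abstracts describe, is that a detection persisting across neighbouring frames should be reinforced while a single-frame flicker is damped. This is not the authors' 2D/3D architecture; the sketch below only illustrates that principle with a sliding temporal average over per-frame score maps, using invented toy scores:

```python
import numpy as np

def temporal_smooth(frame_scores: np.ndarray, window: int = 3) -> np.ndarray:
    """Average per-frame polyp score maps (T, H, W) over a sliding temporal
    window. A response appearing in a single frame is attenuated, while a
    response stable across frames keeps its strength."""
    T = frame_scores.shape[0]
    pad = window // 2
    # Edge-pad along time so the first and last frames keep a full window.
    padded = np.pad(frame_scores, ((pad, pad), (0, 0), (0, 0)), mode="edge")
    return np.stack([padded[t:t + window].mean(axis=0) for t in range(T)])

# Toy sequence of 5 frames of 2x2 score maps.
scores = np.zeros((5, 2, 2))
scores[2, 0, 0] = 1.0        # spurious one-frame flicker at pixel (0, 0)
scores[:, 1, 1] = 0.9        # stable detection at pixel (1, 1)
smoothed = temporal_smooth(scores)
```

After smoothing, the flicker at (0, 0) drops to one third of its value while the stable detection at (1, 1) is unchanged, which is exactly the stabilising effect the hybrid network learns end to end instead of hard-coding.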

    Assessing generalisability of deep learning-based polyp detection and segmentation methods through a computer vision challenge

    Polyps are well-known cancer precursors identified by colonoscopy. However, variability in their size, appearance, and location makes the detection of polyps challenging. Moreover, colonoscopy surveillance and removal of polyps are highly operator-dependent procedures that take place in a highly complex organ topology, and there is a high rate of missed detections and incomplete removal of colonic polyps. To assist in clinical procedures and reduce miss rates, automated machine learning methods for detecting and segmenting polyps have been developed in recent years. However, the major drawback of most of these methods is their limited ability to generalise to out-of-sample unseen datasets from different centres, populations, modalities, and acquisition systems. To test this rigorously, we, together with expert gastroenterologists, curated a multi-centre and multi-population dataset acquired from six different colonoscopy systems and challenged computational expert teams to develop robust automated detection and segmentation methods in a crowd-sourced endoscopic computer vision challenge. This work puts forward rigorous generalisability tests and assesses the usability of the devised deep learning methods in dynamic and real clinical colonoscopy procedures. We analyse the results of the four top-performing teams for the detection task and the five top-performing teams for the segmentation task. Our analyses demonstrate that the top-ranking teams concentrated mainly on accuracy over the real-time performance required for clinical applicability. We further dissect the devised methods and provide an experiment-based hypothesis that reveals the need for improved generalisability to tackle the diversity present in multi-centre datasets and routine clinical procedures.

    AFP-Net: Realtime Anchor-Free Polyp Detection in Colonoscopy

    Colorectal cancer (CRC) is a common and lethal disease. Globally, CRC is the third most commonly diagnosed cancer in males and the second in females. For colorectal cancer, the best screening test available is colonoscopy. During a colonoscopic procedure, a tiny camera at the tip of the endoscope generates a video of the internal mucosa of the colon. The video data are displayed on a monitor for the physician to examine the lining of the entire colon and check for colorectal polyps. Detection and removal of colorectal polyps are associated with a reduction in mortality from colorectal cancer. However, the miss rate of polyp detection during colonoscopy procedures is often high, even for very experienced physicians, because polyps vary greatly in shape, size, texture, color and illumination. Though challenging, with the great advances in object detection techniques, automated polyp detection demonstrates great potential in reducing the false negative rate while maintaining high precision. In this paper, we propose a novel anchor-free polyp detector that can localize polyps without using predefined anchor boxes. To further strengthen the model, we leverage a Context Enhancement Module and Cosine Ground truth Projection. Our approach can respond in real time while achieving state-of-the-art performance with 99.36% precision and 96.44% recall.
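
Anchor-free detectors of this family predict, at each feature-map location, an objectness score plus box geometry directly, instead of classifying a bank of predefined anchor boxes. The decoding step below is a hypothetical minimal illustration of that idea, not AFP-Net's actual implementation; the heatmap, size tensor, and threshold are all made up:

```python
import numpy as np

def decode_centers(heatmap: np.ndarray, sizes: np.ndarray, thresh: float = 0.5):
    """Turn a per-pixel polyp-centre heatmap (H, W) and per-pixel predicted
    box sizes (H, W, 2) into boxes (x0, y0, x1, y1). Each location above
    the score threshold emits its own box directly: no anchor boxes, and
    therefore no anchor-matching or anchor hyperparameters."""
    boxes = []
    ys, xs = np.where(heatmap > thresh)
    for y, x in zip(ys, xs):
        w, h = sizes[y, x]
        boxes.append((x - w / 2, y - h / 2, x + w / 2, y + h / 2))
    return boxes

# One confident centre at (x=3, y=4) predicting a 4x2 box around itself.
heat = np.zeros((8, 8))
heat[4, 3] = 0.9
sizes = np.zeros((8, 8, 2))
sizes[4, 3] = (4.0, 2.0)              # predicted width, height at that centre
boxes = decode_centers(heat, sizes)
```

A real detector would add peak suppression so that neighbouring above-threshold pixels do not emit duplicate boxes; that refinement is omitted here.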

    PraNet: Parallel Reverse Attention Network for Polyp Segmentation

    Colonoscopy is an effective technique for detecting colorectal polyps, which are highly related to colorectal cancer. In clinical practice, segmenting polyps from colonoscopy images is of great importance since it provides valuable information for diagnosis and surgery. However, accurate polyp segmentation is a challenging task for two major reasons: (i) polyps of the same type vary in size, color and texture; and (ii) the boundary between a polyp and its surrounding mucosa is not sharp. To address these challenges, we propose a parallel reverse attention network (PraNet) for accurate polyp segmentation in colonoscopy images. Specifically, we first aggregate the features in high-level layers using a parallel partial decoder (PPD). Based on the combined feature, we then generate a global map as the initial guidance area for the following components. In addition, we mine the boundary cues using a reverse attention (RA) module, which is able to establish the relationship between areas and boundary cues. Thanks to the recurrent cooperation mechanism between areas and boundaries, our PraNet is capable of calibrating misaligned predictions, improving segmentation accuracy. Quantitative and qualitative evaluations on five challenging datasets across six metrics show that our PraNet improves segmentation accuracy significantly and presents a number of advantages in terms of generalizability and real-time segmentation efficiency. Accepted to MICCAI 2020.
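
The reverse attention idea described in this abstract can be read as "erase what is already predicted, so the network focuses on the residual boundary region". A toy NumPy rendering of that single weighting step, with the coarse prediction map and feature tensor invented purely for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reverse_attention(features: np.ndarray, coarse_map: np.ndarray) -> np.ndarray:
    """Weight feature maps (C, H, W) by the *complement* of the current
    coarse prediction (H, W, in logits). Regions already judged to be polyp
    are suppressed, so subsequent layers attend to the uncertain boundary."""
    attn = 1.0 - sigmoid(coarse_map)        # high where the prediction is low
    return features * attn[None, :, :]      # broadcast the map over channels

# Toy input: uniform features, confidently-background logits everywhere
# except a confident polyp core in the middle, which gets erased.
feats = np.ones((2, 4, 4))
coarse = np.full((4, 4), -10.0)
coarse[1:3, 1:3] = 10.0
out = reverse_attention(feats, coarse)
```

In PraNet this weighting is applied at several decoder stages, with each stage's refined output feeding the next coarse map; the sketch shows one such stage in isolation.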