Novel developments in endoscopic mucosal imaging
Endoscopic techniques such as high-definition endoscopy and optical chromoendoscopy have had an enormous impact on endoscopic practice. Since these techniques allow assessment of even the most subtle morphological mucosal abnormalities, further improvement in endoscopic practice lies in increasing the detection efficacy of endoscopists. Several new developments could assist in this. First, web-based training tools could improve endoscopists' skills in detecting and classifying lesions. Second, incorporating computer-aided detection will be the next step in raising the quality of the captured endoscopic data. These systems will aid the endoscopist in interpreting the increasing amount of visual information in endoscopic images by providing a real-time, objective second reading. In addition, developments in the field of molecular imaging open opportunities to add functional imaging data of the gastrointestinal tract, visualizing biological parameters, to white-light morphological imaging. For the successful implementation of the above-mentioned techniques, a true multidisciplinary approach is of vital importance.
Deep learning-based recognition of key anatomical structures during robot-assisted minimally invasive esophagectomy
Objective: To develop a deep learning algorithm for anatomy recognition in thoracoscopic video frames from robot-assisted minimally invasive esophagectomy (RAMIE) procedures. Background: RAMIE is a complex operation with substantial perioperative morbidity and a considerable learning curve. Automatic anatomy recognition may improve surgical orientation and recognition of anatomical structures and might contribute to reducing morbidity or learning curves. Studies regarding anatomy recognition in complex surgical procedures are currently lacking. Methods: Eighty-three videos of consecutive RAMIE procedures between 2018 and 2022 were retrospectively collected at University Medical Center Utrecht. A surgical PhD candidate and an expert surgeon annotated the azygos vein and vena cava, aorta, and right lung on 1050 thoracoscopic frames. Of these, 850 frames were used to train a convolutional neural network (CNN) to segment the anatomical structures; the remaining 200 frames were used for testing. The Dice coefficient and 95% Hausdorff distance (95HD) were calculated to assess algorithm accuracy. Results: The median Dice of the algorithm was 0.79 (IQR = 0.20) for segmentation of the azygos vein and/or vena cava. Median Dice coefficients of 0.74 (IQR = 0.86) and 0.89 (IQR = 0.30) were obtained for segmentation of the aorta and lung, respectively. Inference time was 0.026 s (39 Hz). The predictions of the deep learning algorithm were also compared with the expert surgeon's annotations, yielding median Dice values of 0.70 (IQR = 0.19), 0.88 (IQR = 0.07), and 0.90 (IQR = 0.10) for the vena cava and/or azygos vein, aorta, and lung, respectively. Conclusion: This study shows that deep learning-based semantic segmentation has potential for anatomy recognition in RAMIE video frames. The inference time of the algorithm enables real-time anatomy recognition. Clinical applicability should be assessed in prospective clinical studies.
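For readers unfamiliar with the two reported metrics, the sketch below shows one common way to compute the Dice coefficient and a 95th-percentile Hausdorff distance for a predicted versus a reference binary mask. It is a minimal illustration, not the authors' evaluation code, and all function and variable names are hypothetical.

```python
import numpy as np
from scipy.spatial.distance import cdist

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice overlap between two binary masks (1 = structure, 0 = background)."""
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def hausdorff95(pred: np.ndarray, gt: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance in pixels.

    Simplified sketch: uses all foreground pixels rather than extracted
    boundaries, and assumes both masks are non-empty.
    """
    p_pts = np.argwhere(pred)              # coordinates of predicted pixels
    g_pts = np.argwhere(gt)                # coordinates of reference pixels
    d = cdist(p_pts, g_pts)                # all pairwise distances
    return np.percentile(np.hstack([d.min(axis=1), d.min(axis=0)]), 95)
```

Summary figures such as "median Dice 0.79 (IQR = 0.20)" would then follow from applying `dice` to each of the 200 test frames and taking the median and interquartile range over the per-frame scores.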
Optical diagnosis of colorectal polyp images using a newly developed computer-aided diagnosis system (CADx) compared with intuitive optical diagnosis
Background: Optical diagnosis of colorectal polyps remains challenging. Image-enhancement techniques such as narrow-band imaging and blue-light imaging (BLI) can improve optical diagnosis. We developed and prospectively validated a computer-aided diagnosis system (CADx) using high-definition white-light (HDWL) and BLI images, and compared the system with the optical diagnoses of expert and novice endoscopists. Methods: CADx characterized colorectal polyps using artificial neural networks. Six experts and 13 novices optically diagnosed 60 colorectal polyps based on intuition. After 4 weeks, the same set of images was permuted and optically diagnosed using the BLI Adenoma Serrated International Classification (BASIC). Results: CADx had a diagnostic accuracy of 88.3% using HDWL images and 86.7% using BLI images. The overall diagnostic accuracy combining HDWL and BLI (multimodal imaging) was 95.0%, which was significantly higher than that of experts (81.7%, P = 0.03) and novices (66.7%, P < 0.001). Sensitivity was also higher for CADx (95.6% vs. 61.1% and 55.4%), whereas specificity was higher for experts compared with CADx and novices (95.6% vs. 93.3% and 93.2%). For endoscopists, diagnostic accuracy did not increase when using BASIC, either for experts (intuition 79.5% vs. BASIC 81.7%, P = 0.14) or for novices (intuition 66.7% vs. BASIC 66.5%, P = 0.95). Conclusion: CADx had a significantly higher diagnostic accuracy than experts and novices for the optical diagnosis of colorectal polyps. Multimodal imaging, incorporating both HDWL and BLI, improved the diagnostic accuracy of CADx. BASIC did not increase the diagnostic accuracy of endoscopists compared with intuitive optical diagnosis.
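The abstract does not specify how the HDWL and BLI predictions are combined into the multimodal result, so the snippet below shows only one plausible fusion scheme: averaging the class probabilities of two modality-specific models. All names and numbers here are hypothetical.

```python
import numpy as np

def fuse_multimodal(p_hdwl: np.ndarray, p_bli: np.ndarray) -> np.ndarray:
    """Late fusion of per-class probabilities from HDWL and BLI models.

    p_hdwl, p_bli: arrays of shape (n_classes,), each summing to 1.
    Returns the averaged distribution; argmax gives the fused diagnosis.
    """
    return (p_hdwl + p_bli) / 2.0

# Hypothetical two-class example (adenoma vs. serrated/hyperplastic):
p_hdwl = np.array([0.70, 0.30])   # HDWL model leans toward adenoma
p_bli = np.array([0.55, 0.45])    # BLI model is less certain
fused = fuse_multimodal(p_hdwl, p_bli)
print(fused.argmax())             # 0 -> adenoma under this fusion rule
```

The appeal of such late fusion is that each imaging mode contributes complementary evidence, which is consistent with the multimodal accuracy exceeding either single-mode accuracy in the results above.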
Early esophageal adenocarcinoma detection using deep learning methods
Purpose This study aims to adapt and evaluate the performance of different state-of-the-art deep learning object detection methods to automatically identify esophageal adenocarcinoma (EAC) regions from high-definition white light endoscopy (HD-WLE) images.
Method Several state-of-the-art object detection methods based on convolutional neural networks (CNNs) were adapted to automatically detect abnormal regions in esophageal HD-WLE images, using VGG16 as the backbone architecture for feature extraction. These methods are the Region-based Convolutional Neural Network (R-CNN), Fast R-CNN, Faster R-CNN, and the Single-Shot Multibox Detector (SSD). The methods were evaluated on 100 images from 39 patients that had been manually annotated by five experienced clinicians as the ground truth.
Results The SSD and Faster R-CNN networks show promising performance, with the SSD outperforming the other methods, achieving a sensitivity of 0.96, a specificity of 0.92, and an F-measure of 0.94. Additionally, the average recall rate of the Faster R-CNN in accurately locating the EAC region is 0.83.
Conclusion In this paper, recent deep learning object detection methods are adapted to detect esophageal abnormalities automatically. The evaluation demonstrated their ability to locate abnormal regions in the esophagus from endoscopic images. Automatic detection is a crucial step that may support early detection and treatment of EAC, and it can also improve automatic tumor segmentation for monitoring tumor growth and treatment outcome.
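As a reading aid for the reported numbers, the sketch below derives sensitivity, specificity, and F-measure from confusion-matrix counts. The counts are invented (chosen so the outputs roughly match the values reported above); the abstract does not give the underlying counts.

```python
def detection_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard detection metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)             # abnormal regions found (recall)
    specificity = tn / (tn + fp)             # normal regions correctly passed
    precision = tp / (tp + fp)
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": round(sensitivity, 2),
            "specificity": round(specificity, 2),
            "f_measure": round(f_measure, 2)}

# Invented counts, shown only to illustrate the arithmetic:
print(detection_metrics(tp=48, fp=4, tn=46, fn=2))
# {'sensitivity': 0.96, 'specificity': 0.92, 'f_measure': 0.94}
```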
Chloroquine dosing recommendations for pediatric COVID-19 supported by modeling and simulation
As chloroquine (CHQ) is part of the Dutch Centre for Infectious Disease Control COVID-19 experimental treatment guideline, pediatric dosing guidelines are needed. Recent pediatric data suggest that the existing WHO dosing guidelines for children with malaria are suboptimal. The aim of our study was to establish the best evidence to inform pediatric CHQ doses for children infected with COVID-19. A previously developed physiologically based pharmacokinetic (PBPK) model for CHQ was used to simulate exposure in adults and children and was verified against published pharmacokinetic data. The recommended adult COVID-19 dosage regimen of 44 mg/kg total was tested in adults and children to evaluate the extent of variation in exposure. Based on differences in AUC0-70h, the optimal CHQ dose was determined in children of different ages relative to adults. Revised doses were re-introduced into the model to verify that overall CHQ exposure in each age band was within 5% of the predicted adult value. Simulations showed differences in drug exposure between children of different ages and adults when the same body-weight-based dose is given. As such, we propose the following total cumulative doses: 35 mg/kg (CHQ base) for children aged 0-1 month, 47 mg/kg for 1-6 months, 55 mg/kg for 6 months-12 years, and 44 mg/kg for adolescents and adults, not to exceed 3300 mg in any patient. Our study supports age-adjusted CHQ dosing in children with COVID-19 in order to avoid suboptimal or toxic doses. The knowledge-driven, model-informed dose selection paradigm can serve as a science-
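The proposed age-banded regimen reduces to a simple lookup with a hard cap, sketched below purely as an illustration of the arithmetic in the abstract. This is not the authors' PBPK model code and must not be used for clinical dosing.

```python
def chq_total_dose_mg(age_years: float, weight_kg: float) -> float:
    """Total cumulative chloroquine base dose (mg) per the proposed age bands.

    Illustrative sketch of the dosing table in the abstract; not clinical advice.
    """
    if age_years < 1 / 12:          # 0-1 month
        mg_per_kg = 35
    elif age_years < 0.5:           # 1-6 months
        mg_per_kg = 47
    elif age_years < 12:            # 6 months-12 years
        mg_per_kg = 55
    else:                           # adolescents and adults
        mg_per_kg = 44
    return min(mg_per_kg * weight_kg, 3300)  # never exceed 3300 mg total

print(chq_total_dose_mg(age_years=8, weight_kg=26))   # 1430.0
```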
A deep learning system for detection of early Barrett's neoplasia: a model development and validation study
BACKGROUND: Computer-aided detection (CADe) systems could assist endoscopists in detecting early neoplasia in Barrett's oesophagus, which can be difficult to detect in endoscopic images. The aim of this study was to develop, test, and benchmark a CADe system for early neoplasia in Barrett's oesophagus. METHODS: The CADe system was first pretrained with ImageNet, followed by domain-specific pretraining with GastroNet. We trained the CADe system on a dataset of 14 046 images (2506 patients) of confirmed Barrett's oesophagus neoplasia and non-dysplastic Barrett's oesophagus from 15 centres. Neoplasia was delineated by 14 Barrett's oesophagus experts for all datasets. We tested the performance of the CADe system on two independent test sets. The all-comers test set comprised 327 non-dysplastic Barrett's oesophagus images (73 patients), 82 neoplastic images (46 patients), 180 non-dysplastic Barrett's oesophagus videos (66 of the same patients), and 71 neoplastic videos (45 of the same patients). The benchmarking test set comprised 100 neoplastic images (50 patients), 300 non-dysplastic images (125 patients), 47 neoplastic videos (47 of the same patients), and 141 non-dysplastic videos (82 of the same patients), and was enriched with subtle neoplasia cases. The benchmarking test set was evaluated by 112 endoscopists from six countries (first without CADe and, after 6 weeks, with CADe) and by 28 external international Barrett's oesophagus experts. The primary outcome was the sensitivity of Barrett's neoplasia detection by general endoscopists without CADe assistance versus with CADe assistance on the benchmarking test set. We compared sensitivity using a mixed-effects logistic regression model with conditional odds ratios (ORs; likelihood profile 95% CIs). FINDINGS: Sensitivity for neoplasia detection among endoscopists increased from 74% to 88% with CADe assistance (OR 2·04 [95% CI 1·73-2·42]; p<0·0001 for images, and from 67% to 79% [2·35; 1·90-2·94; p<0·0001] for video) without compromising specificity (from 89% to 90% [1·07; 0·96-1·19; p=0·20] for images and from 96% to 94% [0·94; 0·79-1·11; p=0·46] for video). In the all-comers test set, CADe detected neoplastic lesions in 95% (88-98) of images and 97% (90-99) of videos. In the benchmarking test set, the CADe system was superior to endoscopists in detecting neoplasia (90% vs 74% [OR 3·75; 95% CI 1·93-8·05; p=0·0002] for images and 91% vs 67% [11·68; 3·85-47·53; p<0·0001] for video) and non-inferior to Barrett's oesophagus experts (90% vs 87% [OR 1·74; 95% CI 0·83-3·65] for images and 91% vs 86% [2·94; 0·99-11·40] for video). INTERPRETATION: CADe outperformed endoscopists in detecting Barrett's oesophagus neoplasia and, when used as an assistive tool, improved their detection rate. CADe detected virtually all neoplasia in a test set of consecutive cases. FUNDING: Olympus.
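As a reading aid for the reported odds ratios, the sketch below computes a crude (unadjusted) odds ratio from the two image sensitivities. Note that the paper reports conditional ORs from a mixed-effects logistic regression, which account for clustering by reader and case, so they need not equal this marginal value.

```python
def odds(p: float) -> float:
    """Convert a probability to odds."""
    return p / (1.0 - p)

def crude_odds_ratio(p_with: float, p_without: float) -> float:
    """Unadjusted odds ratio comparing two detection rates."""
    return odds(p_with) / odds(p_without)

# Image sensitivity with vs. without CADe assistance (from the abstract):
print(round(crude_odds_ratio(0.88, 0.74), 2))
# ~2.58 crude, versus the reported conditional OR of 2.04
```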