A deep learning framework for quality assessment and restoration in video endoscopy
Endoscopy is a routine imaging technique used for both diagnosis and
minimally invasive surgical treatment. Artifacts such as motion blur, bubbles,
specular reflections, floating objects and pixel saturation impede the visual
interpretation and the automated analysis of endoscopy videos. Given the
widespread use of endoscopy in different clinical applications, we contend that
the robust and reliable identification of such artifacts and the automated
restoration of corrupted video frames is a fundamental medical imaging problem.
Existing state-of-the-art methods deal only with the detection and restoration
of selected artifacts. However, endoscopy videos typically contain numerous
artifacts, which motivates establishing a comprehensive solution.
We propose a fully automatic framework that can: 1) detect and classify six
different primary artifacts, 2) provide a quality score for each frame and 3)
restore mildly corrupted frames. To detect the different artifacts, our
framework exploits a fast multi-scale, single-stage convolutional neural
network detector.
We introduce a quality metric to assess frame quality and predict image
restoration success. Generative adversarial networks with carefully chosen
regularization are finally used to restore corrupted frames.
Our detector yields the highest mean average precision (mAP at 5% threshold)
of 49.0 and the lowest computational time of 88 ms allowing for accurate
real-time processing. Our restoration models for blind deblurring, saturation
correction and inpainting demonstrate significant improvements over previous
methods. On a set of 10 test videos we show that our approach preserves an
average of 68.7% of frames, 25% more than are retained from the raw videos.
Comment: 14 pages
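The detector above is evaluated with mean average precision at a 5% IoU threshold. As an illustration only (not the authors' evaluation code), a minimal VOC-style greedy-matching AP computation for one class might look like this, where `iou_thr=0.05` mirrors the 5% threshold:

```python
def iou(a, b):
    # a, b: axis-aligned boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def average_precision(scored_preds, gts, iou_thr=0.05):
    """scored_preds: list of (confidence, box); gts: list of ground-truth
    boxes. Predictions are matched greedily in descending confidence order,
    and AP is the area under the resulting precision-recall curve."""
    if not gts:
        return 0.0
    preds = sorted(scored_preds, key=lambda p: -p[0])
    matched = [False] * len(gts)
    tp, fp, precisions, recalls = 0, 0, [], []
    for _, box in preds:
        best, best_i = 0.0, -1
        for i, g in enumerate(gts):
            o = iou(box, g)
            if o > best and not matched[i]:
                best, best_i = o, i
        if best >= iou_thr and best_i >= 0:
            matched[best_i] = True  # each ground truth matched at most once
            tp += 1
        else:
            fp += 1
        precisions.append(tp / (tp + fp))
        recalls.append(tp / len(gts))
    # step-wise area under the precision-recall curve
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += p * (r - prev_r)
        prev_r = r
    return ap
```

Averaging this quantity over the six artifact classes would yield the mAP figure the abstract reports.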
Design and Development of an Automatic Blood Detection System for Capsule Endoscopy Images
Wireless Capsule Endoscopy is a technique that allows for
observation of the entire gastrointestinal tract in an easy and non-invasive
way. However, its greatest limitation lies in the time required to analyze
the large number of images generated in each examination, which is about
2 hours per diagnosis. This entails not only a high cost but also a high
probability of a wrong diagnosis due to physician fatigue, since the
variable appearance of abnormalities demands continuous concentration.
In this work, we designed and developed a system capable of automatically
detecting blood based on the classification of extracted regions, following
two different classification approaches. The first method consisted of
extracting hand-crafted features that were used to train machine learning
algorithms, specifically Support Vector Machines and Random
Forests, to create models for classifying images as healthy tissue or blood.
The second method consisted of applying deep learning techniques,
specifically convolutional neural networks, capable of extracting the relevant
features of the image by themselves. The best results (95.7% sensitivity
and 92.3% specificity) were obtained for a Random Forest model trained
with features extracted from the histograms of the three HSV color space
channels. For both methods we extracted square patches of several sizes
using a sliding window, while for the first approach we also implemented
the waterpixels technique in order to improve the classification results.

This work was funded by the European Union's H2020 MSCA-ITN program for the
"Wireless In-body Environment Communication (WiBEC)" project under grant
agreement no. 675353. Additionally, we gratefully acknowledge the support of
NVIDIA Corporation with the donation of the Titan V GPU used for this
research.

Pons Suñer, P.; Noorda, R.; Nevárez, A.; Colomer, A.; Pons Beltrán, V.;
Naranjo, V. (2019). Design and Development of an Automatic Blood Detection
System for Capsule Endoscopy Images. In Lecture Notes in Artificial
Intelligence. Springer. 105-113. https://doi.org/10.1007/978-3-030-33617-2_12
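The best-performing feature set above is the concatenation of histograms of the three HSV channels. A minimal, stdlib-only sketch of such feature extraction (a hypothetical helper, not the authors' implementation; the resulting vectors would then be fed to a Random Forest) could be:

```python
import colorsys

def hsv_histogram_features(pixels, bins=16):
    """pixels: list of (r, g, b) tuples in [0, 255], e.g. one square patch
    flattened. Returns the concatenated H, S and V histograms, each
    normalised to sum to 1, as a single feature vector of length 3 * bins."""
    hists = [[0] * bins for _ in range(3)]  # one histogram per HSV channel
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        for channel, value in enumerate((h, s, v)):
            # values are in [0, 1]; clamp value == 1.0 into the last bin
            hists[channel][min(int(value * bins), bins - 1)] += 1
    n = float(len(pixels))
    return [count / n for hist in hists for count in hist]
```

With `bins=16` each patch becomes a 48-dimensional vector, a compact representation on which a classifier such as a Random Forest can separate blood from healthy tissue by colour distribution alone.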
Learning-based classification of informative laryngoscopic frames
Background and Objective: Early-stage diagnosis of laryngeal cancer is of primary importance to reduce patient morbidity. Narrow-band imaging (NBI) endoscopy is commonly used for screening purposes, reducing the risks linked to a biopsy but at the cost of some drawbacks, such as the large amount of data to review to make the diagnosis. The purpose of this paper is to present a strategy to perform automatic selection of informative endoscopic video frames, which can reduce the amount of data to process and potentially increase diagnosis performance.
Methods: A new method to classify NBI endoscopic frames based on intensity, keypoint and image spatial content features is proposed. Support vector machines with the radial basis function kernel and the one-versus-one scheme are used to classify frames as informative, blurred, with saliva or specular reflections, or underexposed.
Results: When tested on a balanced set of 720 images from 18 different laryngoscopic videos, a classification recall of 91% was achieved for informative frames, significantly outperforming three state-of-the-art methods (Wilcoxon signed-rank test, significance level = 0.05).
Conclusions: Due to its high performance in identifying informative frames, the approach is a valuable tool for informative frame selection, which can potentially be applied in different fields, such as computer-assisted diagnosis and endoscopic view expansion.
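In the one-versus-one scheme used here, a binary SVM is trained for every pair of the four frame classes and the final label is decided by majority vote. A toy sketch of the voting step (with dummy pairwise classifiers standing in for the paper's SVMs, purely for illustration):

```python
from itertools import combinations

CLASSES = ["informative", "blurred", "saliva_or_reflection", "underexposed"]

def one_vs_one_predict(pairwise_clf, x):
    """pairwise_clf maps each (class_a, class_b) pair to a callable that
    returns the winning class name for sample x. With 4 classes there are
    6 pairwise classifiers; the predicted label is the class collecting
    the most pairwise wins (majority vote, as in scikit-learn's 'ovo')."""
    votes = {c: 0 for c in CLASSES}
    for a, b in combinations(CLASSES, 2):
        votes[pairwise_clf[(a, b)](x)] += 1
    return max(CLASSES, key=lambda c: votes[c])
```

One advantage of this decomposition is that each binary SVM sees only a balanced two-class subset of the data, which suits the balanced 720-image evaluation set described above.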
Unsupervised Odometry and Depth Learning for Endoscopic Capsule Robots
In the last decade, many medical companies and research groups have tried to
convert passive capsule endoscopes as an emerging and minimally invasive
diagnostic technology into actively steerable endoscopic capsule robots which
will provide more intuitive disease detection, targeted drug delivery and
biopsy-like operations in the gastrointestinal (GI) tract. In this study, we
introduce a fully unsupervised, real-time odometry and depth learner for
monocular endoscopic capsule robots. We establish supervision by warping
view sequences and using re-projection error minimization as the loss
function, which we adopt in our multi-view pose estimation and single-view
depth estimation networks. Detailed quantitative and qualitative analyses of
the proposed framework, performed on non-rigidly deformable ex-vivo porcine
stomach datasets, prove the effectiveness of the method in terms of motion
estimation and depth recovery.
Comment: submitted to IROS 201
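The warping-based supervision can be illustrated with a deliberately simplified toy example (a 1-D image row and a pure sideways camera translation; the paper's networks handle full 6-DoF pose and 2-D images). The predicted depth determines where each target pixel re-projects into the source view, and the photometric L1 difference becomes the self-supervised loss:

```python
def reprojection_l1_loss(target, source, depth, f, baseline):
    """Hypothetical 1-D sketch of photometric re-projection loss. The
    camera translates by `baseline` along the image axis, so a pixel at
    x with depth d re-projects to x - f * baseline / d in the source
    view. The loss is the mean L1 intensity difference over all pixels
    whose re-projection lands inside the source row."""
    total, count = 0.0, 0
    for x, (t_val, d) in enumerate(zip(target, depth)):
        xs = round(x - f * baseline / d)  # nearest-neighbour sampling
        if 0 <= xs < len(source):
            total += abs(t_val - source[xs])
            count += 1
    return total / count if count else 0.0
```

When the depth estimates are correct, the warped source reproduces the target and the loss vanishes; wrong depth or pose shifts the sampling positions and the photometric error grows, which is the gradient signal that trains both networks without ground-truth labels.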