Challenges of 3D Surface Reconstruction in Capsule Endoscopy
There are currently many challenges specific to three-dimensional (3D)
surface reconstruction using capsule endoscopy (CE) images. There are also
challenges specific to viewing the content of CE reconstructed 3D surfaces for
bowel disease diagnosis purposes. In this preliminary work, the author focuses
on the latter and discusses the effects such challenges have on the content of
reconstructed 3D surfaces from CE images. Discussions are divided into two
parts. The first part focuses on the comparison of the content of 3D surfaces
reconstructed using both preprocessed and non-preprocessed CE images. The
second part focuses on the comparison of the content of 3D surfaces viewed at
the same azimuth angles and different elevation angles of the line of sight.
A conclusion based on the experiments suggests 3D printing as a solution to the line-of-sight and 2D-screen visual restrictions.
Comment: 5 pages, 3 figures
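The viewing comparison above fixes the azimuth and varies the elevation of the line of sight. As a hypothetical illustration of that geometry (the paper itself gives no code, and the parameterisation below is our own), a unit viewing-direction vector can be derived from the two angles:

```python
import math

def view_direction(azimuth_deg, elevation_deg):
    """Unit line-of-sight vector for a given azimuth/elevation (degrees).

    Illustrative helper only: azimuth rotates in the x-y plane and
    elevation tilts the direction toward +z.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    return (math.cos(el) * math.cos(az),
            math.cos(el) * math.sin(az),
            math.sin(el))

# Same azimuth, different elevations -> different lines of sight
d0 = view_direction(45, 0)
d1 = view_direction(45, 60)
```

Varying only the elevation while keeping the azimuth fixed, as in the second comparison, changes which parts of the reconstructed surface fall inside the viewing frustum on a 2D screen.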
2D Reconstruction of Small Intestine's Interior Wall
Examining and interpreting a large number of wireless endoscopic images
from the gastrointestinal tract is a tiresome task for physicians. A practical
solution is to automatically construct a two dimensional representation of the
gastrointestinal tract for easy inspection. However, little has been done on
wireless endoscopic image stitching, let alone systematic investigation. The
proposed new wireless endoscopic image stitching method consists of two main
steps to improve the accuracy and efficiency of image registration. First,
keypoints are extracted by the Principal Component Analysis Scale-Invariant
Feature Transform (PCA-SIFT) algorithm and refined with Maximum Likelihood
Estimation SAmple Consensus (MLESAC) outlier removal to keep only the most
reliable keypoints. Second, the optimal transformation parameters obtained from
the first step are fed to the Normalised Mutual Information (NMI) algorithm as
an initial solution. With a modified Levenberg-Marquardt search strategy in a
multiscale framework, the NMI can find the optimal transformation parameters in the
shortest time. The proposed methodology has been tested on two different
datasets - one with real wireless endoscopic images and another with images
obtained from Micro-Ball (a new wireless cubic endoscopy system with six image
sensors). The results have demonstrated the accuracy and robustness of the
proposed methodology both visually and quantitatively.
Comment: Journal draft
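The refinement step above maximises normalised mutual information between overlapping image regions. A minimal sketch of the NMI similarity measure itself, assuming the common definition NMI = (H(a) + H(b)) / H(a, b); the multiscale Levenberg-Marquardt optimisation wrapped around it in the paper is not reproduced, and the function name is illustrative:

```python
import numpy as np

def normalised_mutual_information(a, b, bins=32):
    """NMI(a, b) = (H(a) + H(b)) / H(a, b) from a joint intensity histogram.

    Sketch only: a real registration loop would evaluate this while
    searching over transformation parameters.
    """
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    # Entropies, skipping empty histogram cells
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
    return (hx + hy) / hxy

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, 5, axis=1)   # a misaligned copy
nmi_same = normalised_mutual_information(img, img)
nmi_off = normalised_mutual_information(img, shifted)
```

A perfectly aligned pair scores the maximum value of 2, while a misaligned pair scores lower, which is what makes the measure usable as a registration objective.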
Automatic Small Bowel Tumor Diagnosis by Using Multi-Scale Wavelet-Based Analysis in Wireless Capsule Endoscopy Images
BACKGROUND: Wireless capsule endoscopy has been introduced as an innovative, non-invasive diagnostic technique for evaluating the gastrointestinal tract, reaching places that conventional endoscopy cannot. However, the output of this technique is an 8-hour video, whose analysis by an expert physician is very time consuming. Thus, a computer-assisted diagnosis tool to help physicians evaluate CE exams faster and more accurately is an important technical challenge and an excellent economic opportunity.
METHOD: The set of features proposed in this paper to encode textural information is based on statistical modeling of second-order textural measures extracted from co-occurrence matrices. To cope with both joint and marginal non-Gaussianity of the second-order textural measures, higher-order moments are used. These statistical moments are taken from a two-dimensional color-scale feature space in which two different scales are considered. Second- and higher-order moments of the textural measures are computed from co-occurrence matrices of images synthesized by the inverse wavelet transform, retaining only the selected scales for the three color channels. The dimensionality of the data is reduced by Principal Component Analysis.
RESULTS: The proposed textural features are then used as the input of a classifier based on artificial neural networks. Classification performance of 93.1% specificity and 93.9% sensitivity is achieved on real data. These promising results open the path towards a deeper study of the applicability of this algorithm in computer-aided diagnosis systems to assist physicians in their clinical practice.
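For readers unfamiliar with second-order textural measures, the sketch below builds a single-channel, single-displacement co-occurrence matrix and three classic Haralick-style measures. The paper's wavelet synthesis, higher-order moments, color channels, and PCA stages are omitted, and all names are illustrative:

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Grey-level co-occurrence matrix for one displacement (dx, dy).

    Simplified sketch: intensities are quantised to `levels` grey levels
    and co-occurrences of horizontally adjacent pixels are counted.
    """
    q = (image * levels / (image.max() + 1e-9)).astype(int).clip(0, levels - 1)
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def second_order_measures(p):
    """Classic second-order textural measures from a normalised GLCM."""
    i, j = np.indices(p.shape)
    return {
        "contrast": float(np.sum(p * (i - j) ** 2)),
        "energy": float(np.sum(p ** 2)),
        "homogeneity": float(np.sum(p / (1.0 + np.abs(i - j)))),
    }

flat = second_order_measures(glcm(np.ones((8, 8))))
tex = second_order_measures(glcm((np.indices((8, 8)).sum(axis=0) % 2).astype(float)))
```

A flat region yields zero contrast and maximal energy, while a checkerboard texture yields high contrast; it is statistics of measures like these, taken over wavelet-selected scales, that feed the neural-network classifier described above.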
Computer Analysis of Wireless Endoscope Images for Medical Diagnosis
Wireless capsule endoscopy is one of the medical tests used in the diagnosis of gastrointestinal disorders. Its result is a video of the internal lumen of the gastrointestinal tract whose interpretation, carried out by an expert gastroenterologist, requires close attention and is time consuming and tiring. The final diagnosis is rarely reproducible: it depends on the knowledge and diagnostic experience of the individual expert. The subject of this monograph is the presentation and validation of novel algorithms, developed by the author, for wireless endoscope video analysis, whose purpose is to improve the reproducibility, reliability and objectivity of medical diagnosis.
Deep Learning Methods for Improving Clinical Performance: Applications to Colonoscopy Diagnosis and Robotic Surgical Skill Assessment
Doctoral thesis, Seoul National University Graduate School, College of Engineering, August 2020.
This thesis presents deep learning-based methods for improving the performance of clinicians. Novel methods were applied to the following two clinical cases and the results were evaluated.
In the first study, a deep learning-based polyp classification algorithm was developed to improve the clinical performance of endoscopists during colonoscopy diagnosis. Colonoscopy is the main method for distinguishing adenomatous polyps, which can develop into colorectal cancer, from hyperplastic polyps. The classification algorithm was developed using a convolutional neural network (CNN) trained with colorectal polyp images taken by narrow-band imaging colonoscopy. The proposed method is built around automatic machine learning (AutoML), which searches for the optimal CNN architecture for colorectal polyp image classification and trains the weights of that architecture. In addition, the gradient-weighted class activation mapping technique was used to overlay the probabilistic basis of the prediction on the polyp location to aid endoscopists visually. To verify the improvement in diagnostic performance, the efficacy of endoscopists with varying proficiency levels was compared with and without the aid of the proposed polyp classification algorithm. The results confirmed that, on average, diagnostic accuracy was significantly improved and diagnosis time was shortened in all proficiency groups.
In the second study, a surgical instrument tracking algorithm for robotic surgery video was developed, and a model for quantitatively evaluating a surgeon's skill based on the acquired motion information of the surgical instruments was proposed. The movement of surgical instruments is the main component of surgical skill evaluation. Therefore, the focus of this study was to develop an automatic surgical instrument tracking algorithm and to overcome the limitations of previous methods. An instance segmentation framework was developed to solve the instrument occlusion issue, and a tracking framework composed of a tracker and a re-identification algorithm was developed to maintain the identity of the surgical instruments being tracked in the video. In addition, algorithms for detecting the tip position of instruments and the arm-indicator were developed to acquire movement information specific to robotic surgery video. The performance of the proposed method was evaluated by measuring the difference between the predicted tip position and the ground-truth position of the instruments using root mean square error, area under the curve, and Pearson's correlation analysis. Furthermore, motion metrics were calculated from the movement of the surgical instruments, and a machine learning-based robotic surgical skill evaluation model was developed from these metrics. When clinicians were evaluated with these models, the results were similar across the developed models, the Objective Structured Assessment of Technical Skill (OSATS), and the Global Evaluative Assessment of Robotic Surgery (GEARS).
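The trajectory evaluation described above compares predicted and ground-truth tip positions with RMSE and Pearson's correlation, and derives motion metrics from the tracked movement. A minimal sketch with illustrative names (not the thesis code); path length is one standard motion metric assumed here as an example:

```python
import numpy as np

def rmse(pred, gt):
    """Root mean square error between predicted and ground-truth 2-D tip positions."""
    return float(np.sqrt(np.mean(np.sum((pred - gt) ** 2, axis=1))))

def pearson_r(x, y):
    """Pearson correlation between two 1-D coordinate series."""
    x = x - x.mean()
    y = y - y.mean()
    return float(np.sum(x * y) / np.sqrt(np.sum(x ** 2) * np.sum(y ** 2)))

def path_length(traj):
    """Total distance travelled by the instrument tip (a simple motion metric)."""
    return float(np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1)))

t = np.linspace(0.0, 1.0, 50)
gt = np.stack([t, np.sin(t)], axis=1)   # ground-truth tip trajectory
pred = gt + 0.01                        # predictions with a small constant offset
```

Metrics of this kind, computed per instrument over a whole procedure, are the inputs from which a skill-evaluation model can be trained.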
In this study, deep learning technology was applied to colorectal polyp images for polyp classification and to robotic surgery videos for surgical instrument tracking. The improvement in clinical performance with the aid of these methods was evaluated and verified.
Chapter 1 General Introduction
1.1 Deep Learning for Medical Image Analysis
1.2 Deep Learning for Colonoscopic Diagnosis
1.3 Deep Learning for Robotic Surgical Skill Assessment
1.4 Thesis Objectives
Chapter 2 Optical Diagnosis of Colorectal Polyps using Deep Learning with Visual Explanations
2.1 Introduction
2.1.1 Background
2.1.2 Needs
2.1.3 Related Work
2.2 Methods
2.2.1 Study Design
2.2.2 Dataset
2.2.3 Preprocessing
2.2.4 Convolutional Neural Networks (CNN)
2.2.4.1 Standard CNN
2.2.4.2 Search for CNN Architecture
2.2.4.3 Searched CNN Training
2.2.4.4 Visual Explanation
2.2.5 Evaluation of CNN and Endoscopist Performances
2.3 Experiments and Results
2.3.1 CNN Performance
2.3.2 Results of Visual Explanation
2.3.3 Endoscopist with CNN Performance
2.4 Discussion
2.4.1 Research Significance
2.4.2 Limitations
2.5 Conclusion
Chapter 3 Surgical Skill Assessment during Robotic Surgery by Deep Learning-based Surgical Instrument Tracking
3.1 Introduction
3.1.1 Background
3.1.2 Needs
3.1.3 Related Work
3.2 Methods
3.2.1 Study Design
3.2.2 Dataset
3.2.3 Instance Segmentation Framework
3.2.4 Tracking Framework
3.2.4.1 Tracker
3.2.4.2 Re-identification
3.2.5 Surgical Instrument Tip Detection
3.2.6 Arm-Indicator Recognition
3.2.7 Surgical Skill Prediction Model
3.3 Experiments and Results
3.3.1 Performance of Instance Segmentation Framework
3.3.2 Performance of Tracking Framework
3.3.3 Evaluation of Surgical Instruments Trajectory
3.3.4 Evaluation of Surgical Skill Prediction Model
3.4 Discussion
3.4.1 Research Significance
3.4.2 Limitations
3.5 Conclusion
Chapter 4 Summary and Future Works
4.1 Thesis Summary
4.2 Limitations and Future Works
Bibliography
Abstract in Korean
Acknowledgement
Recent Developments in Smart Healthcare
Medicine is undergoing a sector-wide transformation thanks to advances in computing and networking technologies. Healthcare is changing from reactive and hospital-centered to preventive and personalized, from disease-focused to well-being-centered. In essence, healthcare systems, as well as fundamental medicine research, are becoming smarter. We anticipate significant improvements in areas ranging from molecular genomics and proteomics, to decision support for healthcare professionals through big data analytics, to support for behavior change through technology-enabled self-management and social and motivational support. Furthermore, with smart technologies, healthcare delivery could also be made more efficient, higher quality, and lower cost. For this special issue, we received a total of 45 submissions and accepted 19 outstanding papers that roughly span several interesting topics in smart healthcare, including public health, health information technology (Health IT), and smart medicine.
Exploring variability in medical imaging
Although recent successes of deep learning and novel machine learning techniques have improved the performance of classification and (anomaly) detection in computer vision problems, applying these methods in medical imaging pipelines remains a very challenging task. One of the main reasons for this is the amount of variability that is encountered and encapsulated in human anatomy and subsequently reflected in medical images. This fundamental factor impacts most stages of modern medical imaging processing pipelines.
The variability of human anatomy makes it virtually impossible to build large datasets with labels and annotations for each disease for fully supervised machine learning. An efficient way to cope with this is to learn only from normal samples, since such data are much easier to collect. A case study of such an automatic anomaly detection system based on normative learning is presented in this work: a framework for detecting fetal cardiac anomalies during ultrasound screening using generative models trained only on normal/healthy subjects.
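The normative-learning idea above, training only on healthy samples and flagging whatever the trained model cannot explain, can be sketched with a simple linear stand-in for the generative model. The work itself uses generative models for ultrasound; the PCA subspace and all names below are illustrative assumptions:

```python
import numpy as np

def fit_normative_pca(normal_data, k=5):
    """Fit a linear 'normative' model on healthy samples only.

    A PCA subspace stands in for the generative model in the text:
    any model trained exclusively on normals fits the same pattern.
    """
    mean = normal_data.mean(axis=0)
    _, _, vt = np.linalg.svd(normal_data - mean, full_matrices=False)
    return mean, vt[:k]

def anomaly_score(x, mean, components):
    """Reconstruction error: large when x lies off the normal subspace."""
    z = (x - mean) @ components.T
    recon = mean + z @ components
    return float(np.linalg.norm(x - recon))

rng = np.random.default_rng(0)
basis = rng.standard_normal((3, 20))                 # hidden "healthy" subspace
normal_data = rng.standard_normal((200, 3)) @ basis  # normal training samples
mean, comps = fit_normative_pca(normal_data, k=3)
healthy = rng.standard_normal(3) @ basis
abnormal = healthy + 5.0                             # pushed off the subspace
```

A sample consistent with the normative model reconstructs almost perfectly, while an off-subspace sample produces a large residual, which is the anomaly signal.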
However, despite significant improvements in automatic abnormality detection systems, clinical routine continues to rely exclusively on overburdened medical experts to diagnose and localise abnormalities. Integrating human expert knowledge into the medical imaging processing pipeline entails uncertainty, which is mainly correlated with inter-observer variability. From the perspective of building an automated medical imaging system, it is still an open issue to what extent this kind of variability, and the resulting uncertainty, is introduced during the training of a model and how it affects the final performance of the task. Consequently, it is very important to explore the effect of inter-observer variability both on the reliable estimation of a model's uncertainty and on the model's performance in a specific machine learning task. A thorough investigation of this issue is presented in this work by leveraging automated estimates of machine learning model uncertainty, inter-observer variability, and segmentation task performance in lung CT scans.
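Inter-observer variability on a segmentation task can be quantified, for example, as mean pairwise Dice overlap between annotators' masks. This is an illustrative sketch under that assumption, not the estimation procedure used in the work, and the model-uncertainty side is not sketched here:

```python
import numpy as np
from itertools import combinations

def dice(a, b):
    """Dice overlap between two binary segmentation masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-9)

def inter_observer_variability(masks):
    """Mean pairwise Dice across annotators; lower overlap = higher variability."""
    scores = [dice(a, b) for a, b in combinations(masks, 2)]
    return float(np.mean(scores))

base = np.zeros((16, 16), dtype=bool)
base[4:12, 4:12] = True                  # one annotator's mask
shifted = np.roll(base, 2, axis=0)       # a second annotator's slightly offset mask
agree = inter_observer_variability([base, base])
disagree = inter_observer_variability([base, shifted])
```

Aggregating such agreement scores per image gives a simple handle on how noisy the "ground truth" labels fed to a model actually are.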
Finally, an overview of existing anomaly detection methods in medical imaging is presented. This state-of-the-art survey includes both conventional pattern recognition methods and deep learning-based methods, and is one of the first literature surveys attempted in this specific research area.
Open Access