31 research outputs found

    Challenges of 3D Surface Reconstruction in Capsule Endoscopy

    There are currently many challenges specific to three-dimensional (3D) surface reconstruction from capsule endoscopy (CE) images. There are also challenges specific to viewing the content of CE-reconstructed 3D surfaces for bowel disease diagnosis. In this preliminary work, the author focuses on the latter and discusses the effects such challenges have on the content of 3D surfaces reconstructed from CE images. The discussion is divided into two parts. The first part compares the content of 3D surfaces reconstructed from preprocessed and non-preprocessed CE images. The second part compares the content of 3D surfaces viewed at the same azimuth angle but different elevation angles of the line of sight. The experiment-based conclusion suggests 3D printing as a solution to the line-of-sight and 2D-screen visual restrictions. Comment: 5 pages, 3 figures
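    To make the viewing comparison concrete, here is a minimal sketch, assuming nothing about the author's actual pipeline, of how a reconstructed surface might be rendered at a fixed azimuth angle and varying elevation angles of the line of sight with matplotlib; the Gaussian bump is a synthetic placeholder, not a real CE reconstruction.

```python
# Render one synthetic 3D surface at a fixed azimuth and several
# elevation angles, mirroring the comparison described above.
# The surface itself is a stand-in; real input would be a CE reconstruction.
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(-2, 2, 80), np.linspace(-2, 2, 80))
z = np.exp(-(x**2 + y**2))  # placeholder "mucosal fold" surface

fig = plt.figure(figsize=(9, 3))
for i, elev in enumerate((15, 45, 75), start=1):
    ax = fig.add_subplot(1, 3, i, projection='3d')
    ax.plot_surface(x, y, z, cmap='viridis')
    ax.view_init(elev=elev, azim=30)  # same azimuth, varying elevation
    ax.set_title(f'elev={elev}, azim=30')
plt.tight_layout()
plt.show()
```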

    2D Reconstruction of Small Intestine's Interior Wall

    Examining and interpreting a large number of wireless endoscopic images from the gastrointestinal tract is a tiresome task for physicians. A practical solution is to automatically construct a two-dimensional representation of the gastrointestinal tract for easy inspection. However, little has been done on wireless endoscopic image stitching, let alone systematic investigation. The proposed wireless endoscopic image stitching method consists of two main steps that improve the accuracy and efficiency of image registration. First, keypoints are extracted by the Principal Component Analysis Scale-Invariant Feature Transform (PCA-SIFT) algorithm and refined with Maximum Likelihood Estimation SAmple Consensus (MLESAC) outlier removal to retain the most reliable keypoints. Second, the transformation parameters obtained in the first step are fed to the Normalised Mutual Information (NMI) algorithm as an initial solution. With a modified Marquardt-Levenberg search strategy in a multiscale framework, the NMI algorithm quickly converges to the optimal transformation parameters. The proposed methodology has been tested on two different datasets: one with real wireless endoscopic images and another with images obtained from Micro-Ball (a new wireless cubic endoscopy system with six image sensors). The results demonstrate the accuracy and robustness of the proposed methodology both visually and quantitatively. Comment: Journal draft
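    A minimal sketch of the two-stage registration idea is given below. It is not the authors' implementation: plain SIFT stands in for PCA-SIFT and RANSAC for MLESAC (neither ships with OpenCV), and the NMI stage is reduced to the similarity score that a Marquardt-Levenberg-style optimiser would maximise.

```python
# Stage 1: keypoint-based initial homography (SIFT + RANSAC stand-ins).
# Stage 2: NMI score for intensity-based refinement of that estimate.
import cv2
import numpy as np

def initial_transform(img_a, img_b):
    """Estimate a homography from img_a to img_b via matched keypoints."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des_a, des_b)
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # outlier rejection
    return H

def nmi(img_a, img_b, bins=32):
    """Normalised mutual information (Studholme) between equal-size images;
    maximised when the overlap is well aligned."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = hist / hist.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    hxy = -np.sum(p[p > 0] * np.log(p[p > 0]))
    return (hx + hy) / hxy
```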

    Automatic Small Bowel Tumor Diagnosis by Using Multi-Scale Wavelet-Based Analysis in Wireless Capsule Endoscopy Images

    BACKGROUND: Wireless capsule endoscopy has been introduced as an innovative, non-invasive diagnostic technique for evaluating the gastrointestinal tract, reaching places that conventional endoscopy cannot. However, the output of this technique is an 8-hour video whose analysis by an expert physician is very time consuming. Thus, a computer-assisted diagnosis tool to help physicians evaluate CE exams faster and more accurately is an important technical challenge and an excellent economic opportunity. METHOD: The set of features proposed in this paper to encode textural information is based on statistical modeling of second-order textural measures extracted from co-occurrence matrices. To cope with both the joint and the marginal non-Gaussianity of the second-order textural measures, higher-order moments are used. These statistical moments are taken from the two-dimensional color-scale feature space, where two different scales are considered. Second- and higher-order moments of the textural measures are computed from co-occurrence matrices of images synthesized by the inverse wavelet transform, retaining only the selected scales for each of the three color channels. The dimensionality of the data is reduced using Principal Component Analysis. RESULTS: The proposed textural features are then used as the input of a classifier based on artificial neural networks. Classification performance of 93.1% specificity and 93.9% sensitivity is achieved on real data. These promising results open the path towards a deeper study of the applicability of this algorithm in computer-aided diagnosis systems to assist physicians in their clinical practice.
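    The following is a compressed, hypothetical sketch of the feature pipeline described above (wavelet scale selection, co-occurrence statistics, PCA, neural-network classifier) using PyWavelets, scikit-image and scikit-learn; the three GLCM properties stand in for the paper's full set of second-order measures and higher-order moments.

```python
# Sketch: wavelet decomposition -> reconstruction from a selected scale
# -> GLCM second-order measures -> PCA -> neural-network classifier.
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def scale_selected_features(channel, keep_level=1):
    """Reconstruct one colour channel from a single wavelet detail level,
    then summarise its texture with co-occurrence statistics."""
    coeffs = pywt.wavedec2(channel, 'db4', level=2)
    # Zero out every detail level except the selected one.
    coeffs = [coeffs[0]] + [
        d if i == keep_level else tuple(np.zeros_like(a) for a in d)
        for i, d in enumerate(coeffs[1:])
    ]
    recon = pywt.waverec2(coeffs, 'db4')
    img = np.uint8(255 * (recon - recon.min())
                   / (recon.max() - recon.min() + 1e-9))
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2], levels=256)
    return np.hstack([graycoprops(glcm, p).ravel()
                      for p in ('contrast', 'homogeneity', 'energy')])

# Usage sketch: stack one feature row per image over the three colour
# channels, reduce with PCA, then train the classifier:
# X_red = PCA(n_components=10).fit_transform(X)
# clf = MLPClassifier(hidden_layer_sizes=(32,)).fit(X_red, y)
```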

    Computer analysis of wireless endoscope images for medical diagnosis

    Wireless capsule endoscopy is one of the medical examinations used in the diagnosis of gastrointestinal disorders. Its result is a video of the internal lumen of the gastrointestinal tract, whose interpretation by an expert gastroenterologist demands close attention and is lengthy and tiring. The final diagnosis is rarely reproducible, as it depends on the knowledge and diagnostic experience of the particular expert. The subject of this monograph is a set of numerical methods developed by the author for the analysis of digital images from a wireless endoscope, whose purpose is to improve the reproducibility, reliability and objectivity of medical diagnosis.

    A study on deep learning methods for improving clinical skills: application to colonoscopy diagnosis and robotic surgery skill assessment

    Doctoral dissertation (Ph.D.), Seoul National University Graduate School, Interdisciplinary Program in Biomedical Engineering, College of Engineering, August 2020 (advisor: Hee Chan Kim). This paper presents deep learning-based methods for improving the performance of clinicians. Novel methods were applied to the following two clinical cases and the results were evaluated. In the first study, a deep learning-based polyp classification algorithm was developed to improve the clinical performance of endoscopists during colonoscopy diagnosis. Colonoscopy is the main method for diagnosing adenomatous polyps, which can develop into colorectal cancer, and hyperplastic polyps. The classification algorithm was developed using a convolutional neural network (CNN) trained on colorectal polyp images taken by narrow-band imaging colonoscopy. The proposed method is built around automatic machine learning (AutoML), which searches for the optimal CNN architecture for colorectal polyp image classification and trains the weights of that architecture. In addition, the gradient-weighted class activation mapping technique was used to overlay the probabilistic basis of the prediction onto the polyp location, to aid the endoscopists visually. To verify the improvement in diagnostic performance, the efficacy of endoscopists with varying proficiency levels was compared with and without the aid of the proposed polyp classification algorithm. The results confirmed that, on average, diagnostic accuracy was improved and diagnosis time was significantly shortened in all proficiency groups. In the second study, a surgical instrument tracking algorithm for robotic surgery video was developed, and a model for quantitatively evaluating a surgeon's surgical skill from the acquired motion information of the surgical instruments was proposed. The movement of surgical instruments is the main component of surgical skill evaluation. The focus of this study was therefore to develop an automatic surgical instrument tracking algorithm and to overcome the limitations of previous methods. An instance segmentation framework was developed to solve the instrument occlusion issue, and a tracking framework composed of a tracker and a re-identification algorithm was developed to maintain the identity of each surgical instrument being tracked in the video. In addition, algorithms for detecting the tip position of the instruments and the arm-indicator were developed to capture motion information specific to robotic surgery video. The performance of the proposed method was evaluated by measuring the difference between the predicted and ground-truth tip positions of the instruments using root mean square error, area under the curve, and Pearson's correlation analysis. Furthermore, motion metrics were calculated from the movement of the surgical instruments, and a machine learning-based robotic surgical skill evaluation model was developed from these metrics. When clinicians were evaluated with these models, the results were similar to those of the Objective Structured Assessment of Technical Skill (OSATS) and the Global Evaluative Assessment of Robotic Surgery (GEARS). In this study, deep learning technology was applied to colorectal polyp images for polyp classification and to robotic surgery videos for surgical instrument tracking.
The improvement in clinical performance with the aid of these methods was evaluated and verified, and the proposed methods are expected to serve as alternatives to the diagnostic and evaluation methods currently used in clinical practice.
Table of contents: Chapter 1 General Introduction (1.1 Deep Learning for Medical Image Analysis; 1.2 Deep Learning for Colonoscopic Diagnosis; 1.3 Deep Learning for Robotic Surgical Skill Assessment; 1.4 Thesis Objectives). Chapter 2 Optical Diagnosis of Colorectal Polyps using Deep Learning with Visual Explanations (2.1 Introduction: Background, Needs, Related Work; 2.2 Methods: Study Design, Dataset, Preprocessing, Convolutional Neural Networks (Standard CNN, Search for CNN Architecture, Searched CNN Training, Visual Explanation), Evaluation of CNN and Endoscopist Performances; 2.3 Experiments and Results: CNN Performance, Results of Visual Explanation, Endoscopist with CNN Performance; 2.4 Discussion: Research Significance, Limitations; 2.5 Conclusion). Chapter 3 Surgical Skill Assessment during Robotic Surgery by Deep Learning-based Surgical Instrument Tracking (3.1 Introduction: Background, Needs, Related Work; 3.2 Methods: Study Design, Dataset, Instance Segmentation Framework, Tracking Framework (Tracker, Re-identification), Surgical Instrument Tip Detection, Arm-Indicator Recognition, Surgical Skill Prediction Model; 3.3 Experiments and Results: Performance of Instance Segmentation Framework, Performance of Tracking Framework, Evaluation of Surgical Instruments Trajectory, Evaluation of Surgical Skill Prediction Model; 3.4 Discussion: Research Significance, Limitations; 3.5 Conclusion). Chapter 4 Summary and Future Works (4.1 Thesis Summary; 4.2 Limitations and Future Works). Bibliography. Abstract in Korean. Acknowledgement.
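    Of the components described in this entry, the visual-explanation step is the most self-contained. Below is a generic gradient-weighted class activation mapping (Grad-CAM) sketch in PyTorch; the stock torchvision ResNet is a placeholder, not the AutoML-searched architecture the thesis trains.

```python
# Generic Grad-CAM: overlay the gradient-weighted activation map of a
# CNN's top prediction onto the input image resolution.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()  # placeholder backbone
feats, grads = {}, {}
# Hooks capture activations and gradients of the last conv block.
model.layer4.register_forward_hook(lambda m, i, o: feats.update(a=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

def grad_cam(x):
    """Return a normalised (H, W) heatmap for the top class of batch x."""
    logits = model(x)
    logits.gather(1, logits.argmax(dim=1, keepdim=True)).sum().backward()
    w = grads['a'].mean(dim=(2, 3), keepdim=True)      # channel weights
    cam = F.relu((w * feats['a']).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:],
                        mode='bilinear', align_corners=False).squeeze()
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-9)

heatmap = grad_cam(torch.randn(1, 3, 224, 224))  # dummy input
```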

    Recent Developments in Smart Healthcare

    Medicine is undergoing a sector-wide transformation thanks to advances in computing and networking technologies. Healthcare is changing from reactive and hospital-centered to preventive and personalized, from disease-focused to well-being-centered. In essence, healthcare systems, as well as fundamental medicine research, are becoming smarter. We anticipate significant improvements in areas ranging from molecular genomics and proteomics to decision support for healthcare professionals through big data analytics, to support for behavior change through technology-enabled self-management and social and motivational support. Furthermore, with smart technologies, healthcare delivery could also be made more efficient, higher quality, and lower cost. For this special issue, we received a total of 45 submissions and accepted 19 outstanding papers that span several topics in smart healthcare, including public health, health information technology (Health IT), and smart medicine.

    Exploring variability in medical imaging

    Although recent successes of deep learning and novel machine learning techniques have improved the performance of classification and (anomaly) detection in computer vision problems, the application of these methods in the medical imaging pipeline remains a very challenging task. One of the main reasons for this is the amount of variability that is encountered and encapsulated in human anatomy and subsequently reflected in medical images. This fundamental factor impacts most stages in modern medical imaging processing pipelines. The variability of human anatomy makes it virtually impossible to build large datasets for each disease with labels and annotations for fully supervised machine learning. An efficient way to cope with this is to learn only from normal samples, since such data is much easier to collect. A case study of such an automatic anomaly detection system based on normative learning is presented in this work: a framework for detecting fetal cardiac anomalies during ultrasound screening using generative models trained only on normal/healthy subjects. However, despite the significant improvement in automatic abnormality detection systems, clinical routine continues to rely exclusively on the contribution of overburdened medical experts to diagnose and localise abnormalities. Integrating human expert knowledge into the medical imaging processing pipeline entails uncertainty which is mainly correlated with inter-observer variability. From the perspective of building an automated medical imaging system, it is still an open issue to what extent this kind of variability, and the resulting uncertainty, is introduced during the training of a model and how it affects the final performance of the task. Consequently, it is very important to explore the effect of inter-observer variability both on the reliable estimation of a model's uncertainty and on the model's performance in a specific machine learning task. A thorough investigation of this issue is presented in this work by leveraging automated estimates of machine learning model uncertainty, inter-observer variability and segmentation task performance on lung CT scan images. Finally, an overview of existing anomaly detection methods in medical imaging is presented, a state-of-the-art survey covering both conventional pattern recognition methods and deep learning-based methods; it is one of the first literature surveys attempted in this specific research area.
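    As an illustration of the normative-learning idea described above (train on normal samples only, flag deviations), here is a generic autoencoder sketch in PyTorch. It is a stand-in for the generative models actually used in the thesis; the anomaly threshold is assumed to be fitted on held-out normal data.

```python
# Normative anomaly detection sketch: an autoencoder trained on normal
# images only; a test image is flagged when its reconstruction error
# exceeds a threshold fitted on held-out normal data.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))

def anomaly_scores(model, x):
    """Per-image mean squared reconstruction error."""
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=(1, 2, 3))

# Usage sketch (after training on normal scans only):
# thresh = torch.quantile(anomaly_scores(model, normal_val), 0.95)
# is_anomalous = anomaly_scores(model, test_batch) > thresh
```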