
    Joint segmentation and classification of retinal arteries/veins from fundus images

    Objective: Automatic artery/vein (A/V) segmentation from fundus images is required to track blood vessel changes occurring with many pathologies, including retinopathy and cardiovascular disease. One clinical measure that quantifies vessel changes is the arterio-venous ratio (AVR), the ratio between artery and vein diameters; it depends significantly on the accuracy of vessel segmentation and classification into arteries and veins. This paper proposes a fast, novel method for semantic A/V segmentation combining deep learning and graph propagation.
    Methods: A convolutional neural network (CNN) is proposed to jointly segment and classify vessels into arteries and veins. The initial CNN labeling is propagated through a graph representation of the retinal vasculature, whose nodes are the vessel branches and whose edges are weighted by the cost of linking pairs of branches. To propagate the labels efficiently, the graph is simplified into its minimum spanning tree.
    Results: The method achieves an accuracy of 94.8% for vessel segmentation. The A/V classification achieves a specificity of 92.9% and a sensitivity of 93.7% on the CT-DRIVE database, compared to the state-of-the-art specificity and sensitivity of 91.7% each.
    Conclusion: The results show that our method outperforms the leading previous works on a public dataset for A/V classification and is by far the fastest.
    Significance: The proposed global AVR, calculated on the whole fundus image using our automatic A/V segmentation method, can better track vessel changes associated with diabetic retinopathy than the standard local AVR calculated only around the optic disc.
    Comment: Preprint accepted in Artificial Intelligence in Medicine.
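The propagation step described above can be illustrated with a toy sketch (not the authors' implementation): branches become graph nodes, edge weights stand in for the branch-linking cost, Kruskal's algorithm extracts the minimum spanning tree, and unlabeled branches then inherit the label of the nearest confidently labeled branch. All branch IDs, costs, and labels below are hypothetical.

```python
from collections import defaultdict, deque

def minimum_spanning_tree(n, edges):
    """Kruskal's algorithm; edges = [(cost, u, v)] over nodes 0..n-1.
    Returns an adjacency list of the resulting tree."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    tree = defaultdict(list)
    for cost, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:  # keep only edges that join two components
            parent[ru] = rv
            tree[u].append(v)
            tree[v].append(u)
    return tree

def propagate_labels(tree, labels):
    """BFS from confidently labeled branches; unlabeled (None) branches
    inherit the label of their nearest labeled neighbour in the tree."""
    labels = dict(labels)
    queue = deque(k for k, v in labels.items() if v is not None)
    while queue:
        u = queue.popleft()
        for v in tree[u]:
            if labels.get(v) is None:
                labels[v] = labels[u]
                queue.append(v)
    return labels

# Toy vasculature: 5 branches; the CNN is confident only about 0 and 4.
edges = [(0.1, 0, 1), (0.9, 1, 2), (0.2, 2, 3), (0.3, 3, 4), (0.8, 0, 4)]
tree = minimum_spanning_tree(5, edges)
labels = propagate_labels(tree, {0: 'artery', 1: None, 2: None, 3: None, 4: 'vein'})
```

Dropping the highest-cost edges via the MST is what keeps propagation cheap: each uncertain branch is reached exactly once along the tree.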

    Machine Learning Techniques, Detection and Prediction of Glaucoma – A Systematic Review

    Globally, glaucoma is the most common cause of both permanent blindness and visual impairment. However, the majority of patients are unaware they have the condition, and clinical practice continues to face difficulties in detecting glaucoma progression with current technology. An expert ophthalmologist examines the retinal portion of the eye to assess how the glaucoma is progressing, a manual process that is quite time-consuming. Deep learning and machine learning techniques can address this problem by diagnosing glaucoma automatically. This systematic review comprises a comprehensive analysis of various automated glaucoma prediction and detection techniques. More than 100 articles on machine learning (ML) techniques, with accompanying graphs and tables, are reviewed with respect to summary, method, objective, performance, advantages, and disadvantages. Among ML techniques, support vector machines (SVM), K-means, and the fuzzy c-means clustering algorithm are widely used in glaucoma detection and prediction. Through the systematic review, the most accurate technique for detecting and predicting glaucoma can be determined and utilized for future improvement.
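As a toy illustration of one of the reviewed technique families, a plain 1-D k-means (Lloyd's algorithm) can separate hypothetical cup-to-disc ratio values into healthy-like and suspect-like clusters. The feature values and initial centers below are invented for the example; real pipelines use richer image-derived features.

```python
def kmeans_1d(values, centers, iters=20):
    """Plain 1-D k-means (Lloyd's algorithm) on scalar features."""
    for _ in range(iters):
        # Assignment step: each value joins its nearest center's cluster.
        clusters = [[] for _ in centers]
        for v in values:
            i = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            clusters[i].append(v)
        # Update step: recompute each center as its cluster mean.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Hypothetical vertical cup-to-disc ratios: low ~ healthy, high ~ glaucoma suspect.
cdr = [0.30, 0.35, 0.32, 0.65, 0.70, 0.72]
centers = sorted(kmeans_1d(cdr, [0.2, 0.8]))
```

With these toy inputs the two centers converge to the means of the low and high groups, giving a simple decision boundary between them.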

    Deep learning model for glaucoma diagnosis and its stages classification based on fundus images

    Dissertation (Ph.D.), Seoul National University Graduate School: Department of Medicine, College of Medicine, February 2019. Advisor: Kim Hong-gi. Abstract. Introduction: This study concerns an ensemble of convolutional neural networks for automated glaucoma screening and classification of glaucoma severity based on fundus photographs. To automate screening and severity grading, we defined and trained 48 convolutional neural network models with different characteristics. The final readings were obtained from the trained models through the ensemble method proposed in this study, and their performance was evaluated. Methods: 4,445 fundus photographs from 2,801 patients were collected to train the convolutional neural network models. The photographs were classified into a normal group (unaffected control class) and a glaucoma group by four glaucoma specialists, and the glaucoma group was further divided into an early-stage and a late-stage class by reference to the mean deviation (MD) of visual field test results; a mean deviation of -6 dB or less was classified as late-stage glaucoma. At most one fundus photograph per eye, left and right, was used for each patient. Of all the photographs, 3,460 photographs of 2,204 people were used for training, excluding photographs with poor image quality and those without 100% agreement on the glaucoma grade among the four specialists. Performance was evaluated using accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve (AUROC), with a single InceptionNet-v3 model as the baseline for comparison with the proposed ensemble method.
The performance results of the two methods were checked for normality with the Shapiro-Wilk test, and the paired t-test was used to test the statistical significance of the performance differences between them. Results: The proposed ensemble method was evaluated separately for the glaucoma screening test and for the classification of glaucoma severity. For glaucoma screening, the ensemble method achieved an accuracy of 96.62% (95% confidence interval [CI], 95.5 ~ 97.8%), versus 93.9% (95% CI, 92.6 ~ 95.2%) for the single InceptionNet-v3 baseline; the difference was statistically significant by paired t-test (p = 0.000425). In terms of AUROC, the ensemble method achieved 0.994 (95% CI, 0.990 ~ 0.997) versus 0.977 (95% CI, 0.969 ~ 0.986) for the baseline; this difference was also statistically significant (p = 0.000966). We confirmed that the proposed ensemble method has higher and more stable accuracy and AUROC for glaucoma screening than the baseline. For severity classification, the ensemble method achieved an accuracy of 87.7% (95% CI, 85.9 ~ 89.7%), versus 82.3% (95% CI, 80.2 ~ 84.1%) for the baseline.
The difference in severity-classification accuracy between the baseline and the ensemble method was statistically significant by paired t-test (p = 0.002902). In terms of average AUROC for severity classification, the ensemble method achieved 0.975 (95% CI, 0.967 ~ 0.983) versus 0.938 (95% CI, 0.926 ~ 0.949) for the baseline, a statistically significant difference (p = 0.000093). We confirmed that the proposed ensemble method has higher and more stable accuracy and AUROC for glaucoma severity classification than the baseline. Conclusions: The proposed ensemble of multiple convolutional neural networks shows superior and more stable performance than conventional methods in automating glaucoma screening and severity classification from fundus photographs. The result of this study is an artificial-intelligence-based clinical decision support system (CDSS) that can be used in various settings by being installed in, or connected to, the fundus cameras already in wide use. Deploying it in fundus cameras at health check-up centers or ophthalmology clinics can improve the efficiency and accuracy of fundus photograph reading and free specialists to focus on the second reading, ultimately yielding more economical and accurate screening results.
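The significance-testing pipeline used throughout this evaluation (a Shapiro-Wilk normality check, then a paired t-test on matched performance measurements) would in practice use `scipy.stats.shapiro` and `scipy.stats.ttest_rel`. A minimal dependency-free sketch of the paired t statistic is below; the per-run accuracies are invented for illustration and are not the thesis data.

```python
from math import sqrt

def paired_t_statistic(a, b):
    """Paired t statistic for two matched samples, e.g. per-run accuracies
    of two models evaluated on the same test splits."""
    assert len(a) == len(b) and len(a) > 1
    d = [x - y for x, y in zip(a, b)]      # per-run differences
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance
    se = sqrt(var_d / n)                   # standard error of the mean difference
    return mean_d / se, n - 1              # t statistic, degrees of freedom

# Hypothetical per-run accuracies (illustrative numbers only):
ensemble = [0.960, 0.970, 0.950, 0.965]
baseline = [0.930, 0.945, 0.925, 0.940]
t, dof = paired_t_statistic(ensemble, baseline)
# The two-sided p-value would come from the t distribution with `dof`
# degrees of freedom, e.g. scipy.stats.t.sf(abs(t), dof) * 2.
```

Pairing matters here: the two models are scored on the same splits, so the test operates on the per-run differences rather than on two independent samples.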
In addition, wider adoption of medical services based on these results could increase the likelihood of early diagnosis for potential glaucoma patients, improve treatment effectiveness, and reduce related medical expenses.
๋‘ ๋ฐฉ๋ฒ•์˜ ์„ฑ๋Šฅํ‰๊ฐ€ ๊ฒฐ๊ณผ๋ฅผ Shapiro-Wilk normality test๋กœ ์ •๊ทœ์„ฑ ๊ฒ€์ •์„ ํ•˜์˜€์œผ๋ฉฐ, paired t-test๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋‘ ๋ฐฉ๋ฒ•์˜ ์„ฑ๋Šฅ ์ฐจ์ด์— ๋Œ€ํ•œ ํ†ต๊ณ„์  ์œ ์˜์„ฑ์„ ๊ฒ€์ •ํ•˜์˜€๋‹ค. ๊ฒฐ๊ณผ: ๋ณธ ์—ฐ๊ตฌ์—์„œ ์ œ์•ˆํ•œ ํ•ฉ์„ฑ๊ณฑ์‹ ๊ฒฝ๋ง ์•™์ƒ๋ธ” ๋ฐฉ๋ฒ•์€ ๋…น๋‚ด์žฅ ์„ ๋ณ„๊ฒ€์‚ฌ์— ๊ด€ํ•œ ๊ฒƒ๊ณผ ๋…น๋‚ด์žฅ ์ค‘์ฆ๋„ ๋ถ„๋ฅ˜์— ๊ด€ํ•œ ๊ฒƒ์œผ๋กœ ๋ถ„๋ฆฌํ•˜์—ฌ ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜์˜€๋‹ค. ๋…น๋‚ด์žฅ ์„ ๋ณ„๊ฒ€์‚ฌ์˜ ์ •ํ™•๋„ ์ธก๋ฉด์—์„œ ์•™์ƒ๋ธ” ๋ฐฉ๋ฒ•์€ 96.6% (95% confidence interval [CI], 95.5 ~ 97.8%)๋ฅผ ๋ณด์˜€๋‹ค. ๋ฐ˜๋ฉด, InceptionNet-v3 ๋ชจ๋ธ ํ•œ ๊ฐœ๋ฅผ ์‚ฌ์šฉํ•œ ๊ธฐ์ค€ ๋ชจ๋ธ์€ 93.9% (95% CI, 92.6 ~ 95.2%)๋ฅผ ๋ณด์˜€๋‹ค. ๊ธฐ์ค€ ๋ชจ๋ธ๊ณผ ์•™์ƒ๋ธ” ๋ฐฉ๋ฒ•์˜ ๋…น๋‚ด์žฅ ์„ ๋ณ„๊ฒ€์‚ฌ ์ •ํ™•๋„์— ๋Œ€ํ•œ ์„ฑ๋Šฅ ์ฐจ์ด๋Š” paired t-test๋ฅผ ํ†ตํ•ด ํ†ต๊ณ„์  ์œ ์˜์„ฑ์„ ๊ฒ€์ •ํ•˜์˜€๊ณ , ๊ทธ ๊ฒฐ๊ณผ๋Š” p-value 0.000425๋กœ ์ •ํ™•๋„์˜ ์ฐจ์ด๊ฐ€ ํ†ต๊ณ„์ ์œผ๋กœ ์œ ์˜ํ•จ์„ ๋ฐํ˜”๋‹ค. AUROC ์ธก๋ฉด์—์„œ ์•™์ƒ๋ธ” ๋ฐฉ๋ฒ•์€ 0.994 (95% CI, 0.990 ~ 0.997)๋ฅผ ๋ณด์˜€์œผ๋ฉฐ, InceptionNet-v3 ๋ชจ๋ธ ํ•œ ๊ฐœ๋ฅผ ์‚ฌ์šฉํ•œ ๊ธฐ์ค€ ๋ชจ๋ธ์€ 0.977 (95% CI, 0.969 ~ 0.986)๋ฅผ ๋ณด์˜€๋‹ค. ๋…น๋‚ด์žฅ ์„ ๋ณ„๊ฒ€์‚ฌ์— ์žˆ์–ด์„œ ๊ธฐ์ค€ ๋ชจ๋ธ๊ณผ ์•™์ƒ๋ธ” ๋ฐฉ๋ฒ•์˜ AUROC์— ๋Œ€ํ•œ ์„ฑ๋Šฅ ์ฐจ์ด๋Š” ์—ญ์‹œ paired t-test๋ฅผ ํ†ตํ•œ ํ†ต๊ณ„์  ์œ ์˜์„ฑ์„ ๊ฒ€์ •ํ•˜์˜€๊ณ , ๊ฒฐ๊ณผ๋Š” p-value 0.000966์œผ๋กœ AUROC์˜ ์ฐจ์ด๊ฐ€ ํ†ต๊ณ„์ ์œผ๋กœ ์œ ์˜ํ•จ์„ ๋ฐํ˜”๋‹ค. ์ด๋กœ์จ ๋…น๋‚ด์žฅ ์„ ๋ณ„๊ฒ€์‚ฌ์—์„œ ์•™์ƒ๋ธ” ๋ฐฉ๋ฒ•์ด ์ •ํ™•๋„์™€ AUROC ์ธก๋ฉด์—์„œ ๋” ๋†’๊ณ  ์•ˆ์ •์ ์ธ ๊ฒƒ์„ ํ™•์ธํ•˜์˜€๋‹ค. ๋…น๋‚ด์žฅ ์ค‘์ฆ๋„ ๋ถ„๋ฅ˜์˜ ์ •ํ™•๋„ ์ธก๋ฉด์—์„œ ์•™์ƒ๋ธ” ๋ฐฉ๋ฒ•์€ 87.7% (95% CI, 85.9 ~ 89.7%)๋ฅผ ๋ณด์˜€๊ณ , InceptionNet-v3 ๋ชจ๋ธ ํ•œ ๊ฐœ๋ฅผ ์‚ฌ์šฉํ•œ ๊ธฐ์ค€ ๋ชจ๋ธ์€ 82.3% (95% CI, 80.2 ~ 84.1%)๋ฅผ ๋ณด์˜€๋‹ค. ๋…น๋‚ด์žฅ ์ค‘์ฆ๋„ ๋ถ„๋ฅ˜์— ์žˆ์–ด์„œ ๊ธฐ์ค€ ๋ชจ๋ธ๊ณผ ์•™์ƒ๋ธ” ๋ฐฉ๋ฒ•์˜ ์ •ํ™•๋„ ์ฐจ์ด๋Š” paired t-test๋ฅผ ํ†ตํ•ด ํ†ต๊ณ„์  ์œ ์˜์„ฑ์„ ๊ฒ€์ •ํ•˜์˜€๊ณ , ๊ทธ ๊ฒฐ๊ณผ๋Š” p-value 0.002902๋กœ ๊ทธ ์ฐจ์ด๊ฐ€ ํ†ต๊ณ„์ ์œผ๋กœ ์œ ์˜ํ•จ์„ ๋ฐํ˜”๋‹ค. 
ํ‰๊ท  AUROC ์ธก๋ฉด์—์„œ ์•™์ƒ๋ธ” ๋ฐฉ๋ฒ•์€ 0.975 (95% CI, 0.967 ~ 0.983)๋ฅผ ๋ณด์˜€์œผ๋ฉฐ, InceptionNet-v3 ๋ชจ๋ธ ํ•œ ๊ฐœ๋ฅผ ์‚ฌ์šฉํ•œ ๊ธฐ์ค€ ๋ชจ๋ธ์€ 0.938 (95% CI, 0.926 ~ 0.949)์„ ๋ณด์˜€๋‹ค. ๋…น๋‚ด์žฅ ์ค‘์ฆ๋„ ๋ถ„๋ฅ˜์— ์žˆ์–ด์„œ ํ‰๊ท  AUROC์— ๋Œ€ํ•œ ๊ธฐ์ค€ ๋ชจ๋ธ๊ณผ ์•™์ƒ๋ธ” ๋ฐฉ๋ฒ•์˜ ์„ฑ๋Šฅ ์ฐจ์ด ์—ญ์‹œ paired t-test๋ฅผ ํ†ตํ•ด ํ†ต๊ณ„์  ์œ ์˜์„ฑ์„ ๊ฒ€์ •ํ•˜์˜€๊ณ , ๊ทธ ๊ฒฐ๊ณผ๋Š” p-value 0.000093์œผ๋กœ ๊ทธ ์ฐจ์ด๊ฐ€ ํ†ต๊ณ„์ ์œผ๋กœ ์œ ์˜ํ•จ์„ ๋ฐํ˜”๋‹ค. ์ด๋กœ์จ ๋…น๋‚ด์žฅ ์ค‘์ฆ๋„ ๋ถ„๋ฅ˜์—์„œ๋„ ์•™์ƒ๋ธ” ๋ฐฉ๋ฒ•์ด ์ •ํ™•๋„์™€ AUROC ์ธก๋ฉด์—์„œ ๋” ๋†’๊ณ  ์•ˆ์ •์ ์ธ ๊ฒƒ์„ ํ™•์ธํ•˜์˜€๋‹ค. ๊ฒฐ๋ก : ๋ณธ ์—ฐ๊ตฌ์—์„œ ์ œ์•ˆํ•˜๋Š” ์—ฌ๋Ÿฌ ๊ฐœ์˜ ํ•ฉ์„ฑ๊ณฑ์‹ ๊ฒฝ๋ง์„ ์•™์ƒ๋ธ” ํ•˜๋Š” ๋ฐฉ๋ฒ•์€ ์•ˆ์ €์˜์ƒ์„ ๋ฐ”ํƒ•์œผ๋กœ ๋…น๋‚ด์žฅ ์„ ๋ณ„๊ฒ€์‚ฌ์™€ ์ค‘์ฆ๋„ ๋ถ„๋ฅ˜๋ฅผ ์ž๋™ํ™”ํ•˜๋Š” ๋ฐ ์žˆ์–ด์„œ ๊ธฐ์กด์˜ ๋ฐฉ๋ฒ•๋ณด๋‹ค ์šฐ์ˆ˜ํ•˜๊ณ  ์•ˆ์ •์ ์ธ ์„ฑ๋Šฅ์„ ๋ฐœํœ˜ํ•œ๋‹ค. ๋ณธ ์—ฐ๊ตฌ๊ฒฐ๊ณผ๋Š” ์ธ๊ณต์ง€๋Šฅ ๊ธฐ์ˆ ์„ ๋ฐ”ํƒ•์œผ๋กœ ํ•˜๋Š” ์ž„์ƒ ์˜์‚ฌ ๊ฒฐ์ • ์ง€์› ์‹œ์Šคํ…œ(Clinical Decision Support System, CDSS) ์†Œํ”„ํŠธ์›จ์–ด๋กœ, ํ˜„์žฌ ๋„๋ฆฌ ๋ณด๊ธ‰๋œ ์•ˆ์ €์ดฌ์˜๊ธฐ์— ํƒ‘์žฌ ๋˜๋Š” ์—ฐ๋™ํ•˜๋Š” ๋ฐฉ์‹์œผ๋กœ ๋‹ค์–‘ํ•œ ๋ถ„์•ผ์—์„œ ํ™œ์šฉํ•  ์ˆ˜ ์žˆ๋‹ค. ์•ˆ์ €์ดฌ์˜๊ธฐ์— ๋ณธ ์—ฐ๊ตฌ๊ฒฐ๊ณผ๋ฅผ ํƒ‘์žฌํ•˜์—ฌ ๊ฑด๊ฐ•๊ฒ€์ง„์„ผํ„ฐ๋‚˜ ์•ˆ๊ณผ ์ง„๋ฃŒํ˜„์žฅ์—์„œ ํ™œ์šฉํ•œ๋‹ค๋ฉด, ์•ˆ์ €์ดฌ์˜ ๊ฒฐ๊ณผ์˜ ํŒ๋… ํšจ์œจ๊ณผ ์ •ํ™•์„ฑ์„ ๋†’์ผ ์ˆ˜ ์žˆ๊ณ , ์ด์— ๋”ฐ๋ฅธ ์‹œ๊ฐ„์  ์ด๋“์„ ์ „๋ฌธ์˜์˜ 2์ฐจ ํŒ๋…์— ํ• ์• ํ•จ์œผ๋กœ์จ ๋ณด๋‹ค ๊ฒฝ์ œ์ ์ด๊ณ  ์ •ํ™•ํ•œ ๊ฒ€์ง„ ๊ฒฐ๊ณผ๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ๋‹ค. 

    Aspects of glaucoma screening aided by automated techniques on lower-quality optic disc images

    Glaucoma is an optic neuropathy whose progression can lead to blindness; it is the leading cause of irreversible visual loss worldwide for both men and women. Early detection through screening programs carried out by specialists is based on the characteristics of the optic papilla, on ophthalmic biomarkers (especially intraocular pressure), and on subsidiary exams, notably the visual field and optical coherence tomography (OCT). Once cases are recognized, treatment aims to halt the progression of the disease and improve patients' quality of life. However, these screening programs have limitations, particularly in places far from the large specialized treatment centers: a lack of essential equipment and technical personnel to offer screening to the entire population, a lack of transport to these centers, a lack of information and awareness of the disease, and the asymptomatic progression of the disease itself. This thesis develops innovative approaches to contribute to the automation of glaucoma screening using portable, cheaper devices, considering the real needs of clinicians during screening. To this end, systematic reviews were carried out on the methods and equipment that support automatic glaucoma screening and on the applicable deep learning methods for segmentation and classification.
A survey of medical issues related to glaucoma screening was carried out and connected to the field of artificial intelligence to make the automated methodologies more effective. In addition, a private dataset of videos and retina images, acquired with a smartphone coupled to a low-cost lens, was created for glaucoma screening and evaluated with state-of-the-art methods. Automatic glaucoma detection using deep learning segmentation of the optic disc and cup was evaluated and analyzed on public retinal image databases. Deep learning classification methods were evaluated on public retina image databases and on the private database of low-cost images. Finally, mosaicking and optic nerve head detection techniques were evaluated on low-quality images as pre-processing for images acquired by smartphones with low-cost lenses.
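Disc and cup segmentations like those evaluated above are commonly reduced to a vertical cup-to-disc ratio (vCDR), a standard glaucoma screening measure. A minimal sketch, assuming the segmentations arrive as binary masks stored as nested lists; the toy masks are invented for the example.

```python
def vertical_extent(mask):
    """Number of rows of a binary mask containing at least one foreground pixel."""
    return sum(1 for row in mask if any(row))

def vertical_cdr(disc_mask, cup_mask):
    """Vertical cup-to-disc ratio from binary segmentation masks."""
    return vertical_extent(cup_mask) / vertical_extent(disc_mask)

# Toy 6x6 masks: the disc spans 5 rows, the cup only the middle 3 rows.
disc = [
    [0, 0, 1, 1, 0, 0],
    [0, 1, 1, 1, 1, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 1, 1, 1, 1, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0, 0],
]
cup = [
    [0, 0, 0, 0, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
]
vcdr = vertical_cdr(disc, cup)  # 3 cup rows / 5 disc rows
```

Higher vCDR values indicate more cupping of the optic nerve head, which is why segmentation quality on lower-quality images matters for screening.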