
    Glaucoma Detection from Raw SD-OCT Volumes: a Novel Approach Focused on Spatial Dependencies

    Background and objective: Glaucoma is the leading cause of blindness worldwide. Many studies based on fundus images and optical coherence tomography (OCT) imaging have been developed in the literature to help ophthalmologists through artificial-intelligence techniques. Currently, 3D spectral-domain optical coherence tomography (SD-OCT) samples have become more important since they may contain promising information for glaucoma detection. To analyse the hidden knowledge of the 3D scans for glaucoma detection, we have proposed, for the first time, a deep-learning methodology based on leveraging the spatial dependencies of the features extracted from the B-scans. Methods: The experiments were performed on a database composed of 176 healthy and 144 glaucomatous SD-OCT volumes centred on the optic nerve head (ONH). The proposed methodology consists of two well-differentiated training stages: a slide-level feature extractor and a volume-based predictive model. The slide-level discriminator is characterised by two new convolutional modules, residual and attention, which are combined via skip connections with other fine-tuned architectures. In the second stage, a data-volume conditioning step was first carried out before extracting the features from the slides of the SD-OCT volumes. Then, Long Short-Term Memory (LSTM) networks were used to combine the recurrent dependencies embedded in the latent space into a holistic feature vector, generated by the proposed sequential-weighting module (SWM). Results: The feature extractor reports AUC values higher than 0.93 in both the primary and external test sets. In addition, the proposed end-to-end system based on a combination of CNN and LSTM networks achieves an AUC of 0.8847 in the prediction stage, which outperforms other state-of-the-art approaches intended for glaucoma detection. Class Activation Maps (CAMs) were also computed to highlight the most relevant regions per B-scan when discerning between healthy and glaucomatous eyes from raw SD-OCT volumes. Conclusions: The proposed model is able to extract the features from the B-scans of the volumes and combine the information of the latent space to perform a volume-level glaucoma prediction. Our model, which combines residual and attention blocks with a sequential-weighting module to refine the LSTM outputs, surpasses the results achieved by current state-of-the-art methods focused on 3D deep-learning architectures.
    The authors gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan V GPU used here. This work has been funded by the GALAHAD project [H2020-ICT-2016-2017, 732613], the SICAP project (DPI2016-77869-C2-1-R) and the GVA through project PROMETEO/2019/109. The work of Gabriel García has been supported by the Spanish State Research Agency (PTA2017-14610-I).
    García-Pardo, J. G.; Colomer, A.; Naranjo Ornedo, V. (2021). Glaucoma Detection from Raw SD-OCT Volumes: a Novel Approach Focused on Spatial Dependencies. Computer Methods and Programs in Biomedicine, 200:1-16. https://doi.org/10.1016/j.cmpb.2020.105855
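    The two-stage design described in this abstract (a per-B-scan CNN feature extractor followed by an LSTM whose hidden states are fused by a sequential-weighting module into a volume-level prediction) can be sketched as below. This is a minimal, hypothetical PyTorch illustration rather than the authors' implementation: the module names, layer sizes, and the toy convolutional backbone are assumptions.

```python
# Minimal sketch (not the authors' exact architecture): a slide-level CNN feature
# extractor followed by an LSTM whose outputs are fused by an attention-style
# sequential-weighting module (SWM) into one volume-level glaucoma score.
# Module names, layer sizes, and the backbone choice are illustrative assumptions.
import torch
import torch.nn as nn

class SlideFeatureExtractor(nn.Module):
    """Encodes one B-scan into a feature vector (stand-in for the fine-tuned CNN)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, x):                         # x: (B, 1, H, W)
        return self.fc(self.conv(x).flatten(1))   # (B, feat_dim)

class SequentialWeightingModule(nn.Module):
    """Learns a weight per B-scan position and returns a weighted sum of LSTM outputs."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, h):                          # h: (B, T, hidden_dim)
        w = torch.softmax(self.score(h), dim=1)    # (B, T, 1)
        return (w * h).sum(dim=1)                  # (B, hidden_dim)

class VolumeClassifier(nn.Module):
    """CNN + LSTM + SWM pipeline producing a volume-level glaucoma probability."""
    def __init__(self, feat_dim=128, hidden_dim=64):
        super().__init__()
        self.extractor = SlideFeatureExtractor(feat_dim)
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.swm = SequentialWeightingModule(hidden_dim)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, volume):                     # volume: (B, T, 1, H, W)
        B, T = volume.shape[:2]
        feats = self.extractor(volume.flatten(0, 1)).view(B, T, -1)
        h, _ = self.lstm(feats)                    # (B, T, hidden_dim)
        pooled = self.swm(h)                       # holistic feature vector
        return torch.sigmoid(self.head(pooled))    # (B, 1)

# Example: a batch of 2 volumes, each with 64 B-scans of 64x64 pixels.
model = VolumeClassifier()
prob = model(torch.randn(2, 64, 1, 64, 64))
print(prob.shape)  # torch.Size([2, 1])
```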

    Supervised machine learning based multi-task artificial intelligence classification of retinopathies

    Artificial intelligence (AI) classification holds promise as a novel and affordable screening tool for clinical management of ocular diseases. Rural and underserved areas, which suffer from a lack of access to experienced ophthalmologists, may particularly benefit from this technology. Quantitative optical coherence tomography angiography (OCTA) imaging provides excellent capability to identify subtle vascular distortions, which are useful for classifying retinovascular diseases. However, application of AI for differentiation and classification of multiple eye diseases is not yet established. In this study, we demonstrate supervised machine-learning-based multi-task OCTA classification. We sought 1) to differentiate normal from diseased ocular conditions, 2) to differentiate different ocular disease conditions from each other, and 3) to stage the severity of each ocular condition. Quantitative OCTA features, including blood vessel tortuosity (BVT), blood vascular caliber (BVC), vessel perimeter index (VPI), blood vessel density (BVD), foveal avascular zone (FAZ) area (FAZ-A), and FAZ contour irregularity (FAZ-CI), were fully automatically extracted from the OCTA images. A stepwise backward elimination approach was employed to identify sensitive OCTA features and optimal feature combinations for the multi-task classification. For proof-of-concept demonstration, diabetic retinopathy (DR) and sickle cell retinopathy (SCR) were used to validate the supervised machine learning classifier. The presented AI classification methodology is applicable and can be readily extended to other ocular diseases, holding promise to enable a mass-screening platform for clinical deployment and telemedicine. Comment: Supplemental material attached at the end.
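    The stepwise backward elimination mentioned in this abstract can be illustrated with a short scikit-learn sketch: starting from all six OCTA features and repeatedly dropping the one whose removal hurts cross-validated accuracy the least. This is an illustrative assumption, not the paper's code; the placeholder data, the linear-SVM classifier, and the choice to keep three features are all hypothetical.

```python
# Minimal sketch (illustrative only): backward feature elimination over the six
# quantitative OCTA features named in the abstract, using scikit-learn. The data
# values, classifier choice, and target subset size are assumptions.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FEATURES = ["BVT", "BVC", "VPI", "BVD", "FAZ-A", "FAZ-CI"]

# Placeholder data: rows are eyes, columns are the six OCTA features;
# labels 0 = control, 1 = DR, 2 = SCR (the multi-task setup reduced to one task).
rng = np.random.default_rng(0)
X = rng.normal(size=(120, len(FEATURES)))
y = rng.integers(0, 3, size=120)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))

# Backward elimination: start from all features and drop the least useful one at
# a time (judged by cross-validated score) until the target subset size remains.
selector = SequentialFeatureSelector(
    clf, n_features_to_select=3, direction="backward", cv=5
)
selector.fit(X, y)

kept = [f for f, keep in zip(FEATURES, selector.get_support()) if keep]
print("Selected OCTA features:", kept)
```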

    A Deep Learning Approach to Denoise Optical Coherence Tomography Images of the Optic Nerve Head

    Purpose: To develop a deep learning approach to denoise optical coherence tomography (OCT) B-scans of the optic nerve head (ONH). Methods: Volume scans consisting of 97 horizontal B-scans were acquired through the center of the ONH using a commercial OCT device (Spectralis) for both eyes of 20 subjects. For each eye, single-frame (without signal averaging) and multi-frame (75x signal averaging) volume scans were obtained. A custom deep learning network was then designed and trained with 2,328 "clean B-scans" (multi-frame B-scans) and their corresponding "noisy B-scans" (clean B-scans + Gaussian noise) to denoise the single-frame B-scans. The performance of the denoising algorithm was assessed qualitatively and quantitatively on 1,552 B-scans using the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and mean structural similarity index metric (MSSIM). Results: The proposed algorithm successfully denoised unseen single-frame OCT B-scans. The denoised B-scans were qualitatively similar to their corresponding multi-frame B-scans, with enhanced visibility of the ONH tissues. The mean SNR increased from 4.02 ± 0.68 dB (single-frame) to 8.14 ± 1.03 dB (denoised). For all the ONH tissues, the mean CNR increased from 3.50 ± 0.56 (single-frame) to 7.63 ± 1.81 (denoised). The MSSIM increased from 0.13 ± 0.02 (single-frame) to 0.65 ± 0.03 (denoised) when compared with the corresponding multi-frame B-scans. Conclusions: Our deep learning algorithm can denoise a single-frame OCT B-scan of the ONH in under 20 ms, thus offering a framework to obtain superior quality OCT B-scans with reduced scanning times and minimal patient discomfort.
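    As a companion to the evaluation described here, the sketch below shows one way to compute SNR, CNR, and SSIM for a denoised B-scan against its multi-frame reference. The SNR/CNR formulas are common textbook definitions rather than necessarily the paper's exact ones, and the placeholder images and region crops are hypothetical.

```python
# Minimal sketch (illustrative only): evaluating a denoised B-scan against its
# multi-frame ("clean") reference with SNR, CNR, and SSIM. The SNR/CNR formulas
# are generic definitions, and the signal/background regions are hypothetical crops.
import numpy as np
from skimage.metrics import structural_similarity

def snr_db(signal_region, background_region):
    """SNR in dB: mean signal intensity over the background standard deviation."""
    return 20 * np.log10(signal_region.mean() / background_region.std())

def cnr(tissue_region, background_region):
    """CNR: tissue-background intensity difference, normalised by pooled noise."""
    return abs(tissue_region.mean() - background_region.mean()) / np.sqrt(
        0.5 * (tissue_region.var() + background_region.var())
    )

# Placeholder images standing in for a denoised single-frame B-scan and its
# corresponding 75x signal-averaged reference (values in [0, 1]).
rng = np.random.default_rng(0)
reference = rng.random((496, 768))
denoised = np.clip(reference + 0.05 * rng.standard_normal(reference.shape), 0.0, 1.0)

# Hypothetical regions: an ONH tissue band and a vitreous (background) patch.
tissue = denoised[200:260, 300:500]
background = denoised[0:40, 300:500]

print(f"SNR  = {snr_db(tissue, background):.2f} dB")
print(f"CNR  = {cnr(tissue, background):.2f}")
print(f"SSIM = {structural_similarity(denoised, reference, data_range=1.0):.3f}")
```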