8 research outputs found

    Multi-dataset Training for Medical Image Segmentation as a Service

    Deep learning tools are widely used for medical image segmentation. The results produced by these techniques depend to a great extent on the data sets used to train the network. Nowadays, many cloud service providers offer the resources required to train and deploy deep learning networks, which makes segmentation as a cloud-based service attractive. In this paper we study the feasibility of training a generalized, configurable Keras U-Net on images acquired with specific instruments and using it to perform predictions on data from other instruments. As our application example, we use segmentation of the optic disc and cup, which can be applied to glaucoma detection. We use two publicly available data sets (RIM-ONE v3 and DRISHTI) and train either on each independently or on their combined data. Ministerio de Economía y Competitividad TEC2016-77785-
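Pooling two data sets acquired with different instruments requires bringing them to a common resolution and intensity range before training. The sketch below is a minimal numpy illustration of that preprocessing step, not the authors' code; the array shapes, the nearest-neighbour resize, and the per-dataset min-max normalisation are all illustrative assumptions.

```python
import numpy as np

def to_common_shape(images, size=128):
    """Nearest-neighbour resize of a stack of images to a (size, size) grid."""
    n, h, w = images.shape
    rows = np.arange(size) * h // size          # source row for each target row
    cols = np.arange(size) * w // size          # source column for each target column
    return images[:, rows[:, None], cols[None, :]]

def combine_datasets(ds_a, ds_b, size=128):
    """Normalise each dataset to [0, 1] independently, then pool the samples."""
    pooled = []
    for ds in (ds_a, ds_b):
        x = to_common_shape(ds.astype(np.float32), size)
        x = (x - x.min()) / (x.max() - x.min() + 1e-8)
        pooled.append(x)
    return np.concatenate(pooled, axis=0)

# Stand-ins for two acquisitions at different resolutions (e.g. RIM-ONE v3 vs. DRISHTI)
rim_one = np.random.randint(0, 256, (4, 96, 144))
drishti = np.random.randint(0, 256, (3, 160, 160))
train_x = combine_datasets(rim_one, drishti)
print(train_x.shape)  # (7, 128, 128)
```

The combined array can then be fed to any segmentation network; training on the pooled samples is what lets a single model serve images from either instrument.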

    Energy efficiency in Edge TPU vs. embedded GPU for computer-aided medical imaging segmentation and classification

    Manuscript submitted for review to the journal "Engineering Applications of Artificial Intelligence" (Elsevier) on 25 November 2022. The revised version was submitted on 26 July 2023. The manuscript was accepted on 11 October 2023 and has been published on ScienceDirect since 28 October (https://doi.org/10.1016/j.engappai.2023.107298). In this work, we evaluate the energy usage of fully embedded medical diagnosis aids based on both segmentation and classification of medical images, implemented on Edge TPU and embedded GPU processors. We use glaucoma diagnosis based on colour fundus images as an example to show that segmentation and classification can be performed in real time on embedded boards, and to highlight the different energy requirements of the studied implementations. Several other works use segmentation and feature extraction techniques with deep neural networks to detect glaucoma, among many other pathologies. Memory limitations and low processing capabilities of embedded accelerated systems (EAS) limit their use for training deep network-based systems. However, including specific acceleration hardware, such as NVIDIA's Maxwell GPU or Google's Edge TPU, enables them to perform inference with complex pre-trained networks in very reasonable times. In this study, we evaluate the timing and energy performance of two EAS equipped with Machine Learning (ML) accelerators executing an example diagnostic tool developed in a previous work. For optic disc (OD) and cup (OC) segmentation, the obtained prediction times per image are under 29 and 43 ms using Edge TPUs and Maxwell GPUs respectively. Prediction times for the classification subsystem are under 10 and 14 ms for Edge TPUs and Maxwell GPUs respectively. Regarding energy usage, in approximate terms, for OD segmentation Edge TPUs and Maxwell GPUs use 38 and 190 mJ per image respectively.
For fundus classification, Edge TPUs and Maxwell GPUs use 45 and 70 mJ respectively. 33-page manuscript.
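Energy per image and latency per image together imply an average power draw during inference, since P = E / t (and mJ divided by ms is directly watts). A quick check on the OD-segmentation figures reported above:

```python
# Average power implied by the reported per-image energy and latency: P = E / t.
# With energy in mJ and time in ms, the quotient is directly in watts.
def avg_power_w(energy_mj, time_ms):
    return energy_mj / time_ms

# OD-segmentation figures from the abstract (per image).
edge_tpu = avg_power_w(38, 29)    # ≈ 1.31 W
maxwell = avg_power_w(190, 43)    # ≈ 4.42 W
print(round(edge_tpu, 2), round(maxwell, 2))
```

So even though the Maxwell GPU's latency is only about 1.5x higher, its implied average power is roughly 3.4x that of the Edge TPU, which is what drives the 5x gap in energy per image.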

    Dense Fully Convolutional Segmentation of the Optic Disc and Cup in Colour Fundus for Glaucoma Diagnosis

    Glaucoma is a group of eye diseases that can cause vision loss by damaging the optic nerve. Early glaucoma detection is key to preventing vision loss, yet noticeable early symptoms are lacking. Colour fundus photography allows the optic disc (OD) to be examined to diagnose glaucoma. Typically, this is done by measuring the vertical cup-to-disc ratio (CDR); however, glaucoma is characterised by asymmetric thinning of the rim in the inferior-superior-temporal-nasal regions, in increasing order. Automatic delineation of OD features has the potential to improve glaucoma management by allowing this asymmetry to be considered in the measurements. Here, we propose a new deep-learning-based method to segment the OD and optic cup (OC). The core of the proposed method is DenseNet with a fully convolutional network, whose symmetric U-shaped architecture allows pixel-wise classification. The predicted OD and OC boundaries are then used to estimate the CDR on two axes for glaucoma diagnosis. We assess the proposed method's performance using a large retinal colour fundus dataset, outperforming state-of-the-art segmentation methods. Furthermore, we generalise our method to segment four fundus datasets from different devices without further training, outperforming the state of the art on two and achieving comparable results on the remaining two.
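Once the OD and OC masks are predicted, the vertical CDR reduces to a ratio of vertical extents of the two binary masks. The following is a minimal numpy sketch of that measurement, assuming idealised toy masks; it is not the paper's implementation, which additionally estimates the ratio on two axes.

```python
import numpy as np

def vertical_cdr(disc_mask, cup_mask):
    """Vertical cup-to-disc ratio: ratio of the vertical extents of two binary masks."""
    def vert_extent(mask):
        rows = np.any(mask, axis=1)      # rows that contain the structure
        idx = np.where(rows)[0]
        return idx[-1] - idx[0] + 1      # height in pixels
    return vert_extent(cup_mask) / vert_extent(disc_mask)

# Toy masks: a 40-pixel-tall disc containing a 16-pixel-tall cup.
disc = np.zeros((64, 64), bool); disc[10:50, 20:50] = True
cup = np.zeros((64, 64), bool); cup[22:38, 28:42] = True
print(round(vertical_cdr(disc, cup), 2))  # 0.4
```

A CDR well above the normal range on either axis would then flag the image for referral, which is the diagnostic use described in the abstract.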

    Multitask learning system for segmenting dark and bright retinal lesions in fundus images

    This master's thesis focuses on automatic diagnosis from retinal imaging, specifically colour fundus images, which provide a two-dimensional colour representation of the retinal surface. These images can exhibit symptoms of disease, in the form of lesions or deformations of the retina's anatomical structures. The thesis proposes a methodology for simultaneously segmenting these lesions in fundus images, grouped into two categories: bright and dark (red). Performing this double segmentation simultaneously is novel: the vast majority of previous work focuses on a single lesion type. Yet, given time constraints and the tedious nature of the task in a clinical environment, clinicians cannot test the multitude of existing algorithms. Moreover, when a patient arrives for screening, the clinician has no a priori knowledge of the pathology involved and therefore of which algorithm to use. For clinical use, a solution must therefore be versatile, fast, and easily deployable. In parallel, deep learning has demonstrated its ability to adapt to many computer vision problems and to generalize across varied data despite sometimes limited training sets, with new training strategies regularly proposed to extract ever more information from the training data. We therefore set out to develop a neural network architecture capable of detecting all lesions in a fundus image.
To reach this goal, the methodology relies on a new multitask convolutional neural network architecture trained with a hybrid approach combining supervised and weakly supervised learning. The architecture uses hard parameter sharing: a single encoder is shared by two decoders, each specialized in one lesion type, so the same features extracted by the encoder feed both decoders. In other words, the encoder learns an abstract representation of the input image that discriminates pathological tissue from normal tissue. Training proceeds in two steps. First, the whole network is trained on image patches with pixel-level ground truth indicating the lesions (supervised learning), the typical way of training a segmentation network. Second, only the encoder is retrained on full images whose ground truth is a single boolean label indicating whether the image is pathological or healthy, without specifying lesion location or type (weakly supervised learning). This second step lets the network see a much larger number of images, since image-level labels are considerably easier to acquire and already available in large public databases. It relies on the hypothesis that an annotation at the image level (globally) can be used to improve performance at the pixel level (locally), an intuitive idea given that pathological status is directly correlated with the presence of lesions.
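The hard-parameter-sharing pattern described above, one encoder feeding two lesion-specific decoders, can be sketched structurally as follows. This is a toy numpy illustration of the wiring only (linear layers stand in for the convolutional stacks); all layer sizes and names are assumptions, not the thesis's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

class Encoder:
    """Shared feature extractor; a single linear layer + ReLU stands in for the conv stack."""
    def __init__(self, d_in, d_feat):
        self.w = rng.normal(size=(d_in, d_feat))
    def __call__(self, x):
        return np.maximum(x @ self.w, 0)

class Decoder:
    """Lesion-specific head; one instance per lesion family (bright / red)."""
    def __init__(self, d_feat, d_out):
        self.w = rng.normal(size=(d_feat, d_out))
    def __call__(self, feats):
        return feats @ self.w

encoder = Encoder(32, 16)            # hard sharing: both tasks update this one module
bright_head = Decoder(16, 32)
red_head = Decoder(16, 32)

x = rng.normal(size=(4, 32))         # a batch of flattened patches (illustrative)
feats = encoder(x)                   # computed once, consumed by both heads
bright_out, red_out = bright_head(feats), red_head(feats)
print(bright_out.shape, red_out.shape)
```

The weakly supervised second stage would then back-propagate an image-level loss through `encoder` alone, leaving both decoders frozen.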

    Aspects of glaucoma screening assisted by automated techniques on lower-quality optic disc images

    Glaucoma is an optic neuropathy whose progression can lead to blindness; it is the leading cause of irreversible visual loss worldwide for both men and women. Early detection through screening programmes run by specialists is based on the characteristics of the optic nerve, on ophthalmic biomarkers (notably intraocular pressure), and on subsidiary exams, especially visual field testing and optical coherence tomography (OCT). Once cases are recognised, treatment aims to halt the progression of the disease and improve patients' quality of life. These screening programmes have limitations, however, particularly in places far from large specialised treatment centres: a shortage of basic equipment and specialised personnel to offer screening to the entire population, a lack of transport to those centres, misinformation and ignorance of the disease, and the asymptomatic progression of the disease itself. This thesis develops innovative approaches that can contribute to automating glaucoma screening with portable, cheaper devices, taking into account the real needs of clinicians during screening. To this end, systematic reviews were carried out on the methods and equipment supporting automatic glaucoma screening and on the applicable deep learning methods for segmentation and classification.
A survey of medical issues related to glaucoma screening was also carried out and linked to the field of artificial intelligence, to give more meaning to the automated methodologies. In addition, a private dataset of retinal videos and images, acquired with a smartphone coupled to a low-cost lens for glaucoma screening, was created and evaluated with state-of-the-art methods. Deep learning methods for automatic glaucoma detection based on optic disc and cup segmentation were evaluated and analysed on public retinal image databases; deep learning classification methods were evaluated both on public databases and on the private database of low-cost images. Finally, mosaicking and optic-nerve-head detection techniques were evaluated on low-quality images as pre-processing for images acquired with smartphones coupled to low-cost lenses.

    REFUGE Challenge: A unified framework for evaluating automated methods for glaucoma assessment from fundus photographs

    [EN] Glaucoma is one of the leading causes of irreversible but preventable blindness in working-age populations. Colour fundus photography (CFP) is the most cost-effective imaging modality to screen for retinal disorders. However, its application to glaucoma has been limited to the computation of a few related biomarkers such as the vertical cup-to-disc ratio. Deep learning approaches, although widely applied for medical image analysis, have not been extensively used for glaucoma assessment due to the limited size of the available data sets. Furthermore, the lack of a standardized benchmarking strategy makes it difficult to compare existing methods in a uniform way. In order to overcome these issues, we set up the Retinal Fundus Glaucoma Challenge, REFUGE (https://refuge.grand-challenge.org), held in conjunction with MICCAI 2018. The challenge consisted of two primary tasks, namely optic disc/cup segmentation and glaucoma classification. As part of REFUGE, we have publicly released a data set of 1200 fundus images with ground truth segmentations and clinical glaucoma labels, currently the largest existing one. We have also built an evaluation framework to ease and ensure fairness in the comparison of different models, encouraging the development of novel techniques in the field. Twelve teams qualified and participated in the online challenge. This paper summarizes their methods and analyzes their corresponding results. In particular, we observed that two of the top-ranked teams outperformed two human experts in the glaucoma classification task.
Furthermore, the segmentation results were in general consistent with the ground truth annotations, with complementary outcomes that can be further exploited by ensembling the results. This work was supported by the Christian Doppler Research Association, the Austrian Federal Ministry for Digital and Economic Affairs and the National Foundation for Research, Technology and Development; J.I.O. is supported by WWTF (Medical University of Vienna: AugUniWien/FA7464A0249, University of Vienna: VRG12-009). Team Masker is supported by the Natural Science Foundation of Guangdong Province of China (Grant 2017A030310647); Team BUCT is partially supported by the National Natural Science Foundation of China (Grant 11571031). The authors would also like to thank the REFUGE study group for collaborating with this challenge.
Orlando, J. I.; Fu, H.; Breda, J. B.; Van Keer, K.; Bathula, D. R.; Diaz-Pinto, A.; Fang, R., et al. (2020). REFUGE Challenge: A unified framework for evaluating automated methods for glaucoma assessment from fundus photographs. Medical Image Analysis, 59:1-21. https://doi.org/10.1016/j.media.2019.101570
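A benchmark like REFUGE needs a common segmentation metric so that submitted masks can be ranked fairly against the ground truth. A standard choice for comparing binary masks is the Dice similarity coefficient; the snippet below is a generic numpy sketch of that metric on toy masks, not the challenge's official evaluation code.

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks (1 = identical)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

# Toy masks: two 4x4 squares overlapping in a 3x3 region (overlap = 9 px).
a = np.zeros((8, 8), bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), bool); b[3:7, 3:7] = True
print(round(dice(a, b), 4))  # 0.5625
```

Computing such a metric per image, per structure (disc and cup), and averaging over the test set gives a single comparable score per team, which is how segmentation leaderboards of this kind are typically built.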