16 research outputs found

    RVD: A Handheld Device-Based Fundus Video Dataset for Retinal Vessel Segmentation

    Retinal vessel segmentation is generally grounded in image-based datasets collected with bench-top devices. Static images inevitably lose the dynamic characteristics of retinal fluctuation, reducing dataset richness, and the limited accessibility of bench-top devices further restricts dataset scalability. To address these limitations, we introduce the first video-based retinal dataset, acquired with handheld devices. The dataset comprises 635 smartphone-based fundus videos collected at four different clinics from 415 patients aged 50 to 75. It delivers comprehensive and precise annotations of retinal structures in both the spatial and temporal dimensions, aiming to advance the landscape of vasculature segmentation. Specifically, the dataset provides three levels of spatial annotation: binary vessel masks delineating the overall retinal structure, general vein-artery masks distinguishing veins from arteries, and fine-grained vein-artery masks further characterizing the granularity of each artery and vein. In addition, it offers temporal annotations that capture vessel pulsation characteristics, assisting in the detection of ocular diseases that require fine-grained recognition of hemodynamic fluctuation. In application, our dataset exhibits a significant domain shift relative to data captured by bench-top devices, posing great challenges to existing methods. In the experiments, we provide evaluation metrics and benchmark results on our dataset, reflecting both the potential and the challenges it offers for vessel segmentation. We hope this challenging dataset will contribute significantly to the development of eye disease diagnosis and early prevention.
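    As a rough illustration of how segmentation on such a dataset might be scored, the Python sketch below computes the Dice coefficient between predicted and ground-truth binary vessel masks, averaged over the annotated frames of one video. This is a generic metric sketch, not the paper's benchmark protocol; the frame and mask conventions (HxW boolean arrays, one mask per annotated frame) are assumptions for illustration.

    import numpy as np

    def dice_score(pred: np.ndarray, mask: np.ndarray, eps: float = 1e-7) -> float:
        """Dice overlap between a predicted and a ground-truth binary vessel mask."""
        pred, mask = pred.astype(bool), mask.astype(bool)
        intersection = np.logical_and(pred, mask).sum()
        return float((2.0 * intersection + eps) / (pred.sum() + mask.sum() + eps))

    def video_dice(pred_frames, mask_frames) -> float:
        """Mean per-frame Dice over the annotated frames of a single fundus video."""
        return float(np.mean([dice_score(p, m) for p, m in zip(pred_frames, mask_frames)]))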

    Deep Feature Fusion Network for Computer-aided Diagnosis of Glaucoma using Optical Coherence Tomography

    Dissertation (Ph.D.), Interdisciplinary Program in Bioengineering, College of Engineering, Seoul National University, August 2017. Advisor: Hee Chan Kim.

    With the development of optical coherence tomography (OCT), glaucoma can now be diagnosed noninvasively by analyzing optic disc thickness. Because maintaining proper intraocular pressure depends on early diagnosis, a computer-aided diagnosis system that analyzes early-stage glaucoma accurately and objectively is needed. This thesis proposes a deep feature fusion network for such a system and verifies its clinical efficacy through performance evaluation on patient images. The network is a deep neural network built from heterogeneous features: the area and depth of glaucoma-related optic nerve defects, extracted by traditional image processing, and discriminative features taken from the middle-layer output of a deep neural network; the fusion network combines both.

    To extract area features, we analyzed thickness maps and deviation maps of the retinal nerve fiber layer (RNFL) and the ganglion cell-inner plexiform layer (GCIPL). Optic nerve defects were segmented in each deviation map under three criteria, and defect area was computed for 69 glaucoma patients and 79 normal subjects. The resulting severity indices, evaluated by the area under the ROC curve (AUC), all differed significantly between glaucoma patients and normal subjects (p < 0.0001) and discriminated the two groups well (AUC = 0.91 to 0.95), suggesting that defect-area features can serve as an objective indicator for glaucoma diagnosis.

    To extract depth features, we analyzed RNFL thickness and deviation maps and developed a depth index from the ratio of normal optic nerve thickness to the thickness within the defects identified on the deviation map. Applied to 108 early, 96 moderate, and 111 severe glaucoma patients and evaluated by AUC, the index differed significantly between groups (p < 0.001) and discriminated moderate from severe glaucoma (AUC = 0.97) as well as early from moderate glaucoma (AUC = 0.98), showing that defect depth is a significant feature for grading glaucoma severity. A sketch of both handcrafted indices follows.
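    The two indices can be sketched in Python as below. The defect threshold, pixel area, and the exact form of the depth ratio are illustrative assumptions (the code reads the depth index as mean percentage thickness loss inside the defect), not the thesis's precise definitions; deviation_map, thickness_map, and normative_map are hypothetical 2-D arrays.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def defect_area_index(deviation_map: np.ndarray, threshold: float = -20.0,
                          pixel_area_mm2: float = 0.0025) -> float:
        """Area (mm^2) of pixels whose deviation from normative thickness falls
        below a defect threshold; threshold and pixel size are illustrative."""
        defect = deviation_map < threshold
        return float(defect.sum() * pixel_area_mm2)

    def defect_depth_index(thickness_map: np.ndarray, normative_map: np.ndarray,
                           defect_mask: np.ndarray) -> float:
        """Mean percentage thickness loss inside the defect region; one plausible
        reading of the normal-to-defect thickness ratio described above."""
        loss = 1.0 - thickness_map[defect_mask] / normative_map[defect_mask]
        return float(100.0 * loss.mean())

    # An index's discriminative power is then scored by AUC, for example:
    # auc = roc_auc_score(labels, [defect_area_index(m) for m in deviation_maps])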
    Two deep-learning approaches were applied to the thickness maps: LeNet trained from randomly initialized weights, and VGGNet initialized with weights pre-trained on a large external image dataset. Both were evaluated on 316 normal subjects, 226 early-stage glaucoma patients, and 246 moderate-to-severe glaucoma patients, with AUC computed for each group. The networks trained with LeNet and VGGNet distinguished normal subjects not only from all glaucoma patients (AUC = 0.94 and 0.94) but also from early-stage patients (AUC = 0.88 and 0.89), confirming that both deep-learning methods extract glaucoma-related features.

    Finally, the deep feature fusion network was built by fusing the image-processing features with the deep-learning features, and its performance was compared with previous studies through AUC. The fusion network using VGGNet features correctly distinguished normal subjects not only from all glaucoma patients (AUC = 0.96) but also from early-stage patients (AUC = 0.92), outperforming a previous study (AUC = 0.91 and 0.82), with particularly strong performance in separating early glaucoma from normal subjects. These results show that the proposed deep feature fusion network diagnoses glaucoma, including early glaucoma, more accurately than previous methods. Accuracy is expected to improve further if demographic information and additional glaucoma test results are added to the network. The proposed network is expected to be applicable not only to early diagnosis of glaucoma but also to analysis of glaucoma progression. A sketch of the fusion architecture follows the table of contents below.

    Table of contents:
    Chapter 1: General Introduction
      1.1. Glaucoma
      1.2. Optical Coherence Tomography
      1.3. Thesis Objectives
    Chapter 2: Feature Extraction for Glaucoma Diagnosis 1: Severity Index of Macular GCIPL and Peripapillary RNFL Deviation Maps
      2.1. Introduction
      2.2. Methods
        2.2.1. Study subjects
        2.2.2. Red-free RNFL photography
        2.2.3. Cirrus OCT imaging
        2.2.4. Deviation map analysis protocol
        2.2.5. Statistical analysis
      2.3. Results
      2.4. Discussion
    Chapter 3: Feature Extraction for Glaucoma Diagnosis 2: RNFL Defect Depth Percentage Index of Thickness Deviation Maps
      3.1. Introduction
      3.2. Methods
        3.2.1. Subjects
        3.2.2. Red-free fundus photography imaging
        3.2.3. Optical coherence tomography retinal nerve fiber layer imaging
        3.2.4. Measuring depth of retinal nerve fiber layer defects on Cirrus high-definition optical coherence tomography derived deviation map
        3.2.5. Data analysis
      3.3. Results
      3.4. Discussion
    Chapter 4: Glaucoma Classification using Deep Feature Fusion Network
      4.1. Introduction
      4.2. Methods
        4.2.1. Study subjects
        4.2.2. OCT imaging
        4.2.3. Deep Feature Fusion Network
        4.2.4. Statistical analysis
      4.3. Results
      4.4. Discussion
    Chapter 5: Thesis Summary and Future Work
      5.1. Thesis Summary and Contribution
      5.2. Future Work
    Bibliography
    Abstract in Korean
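    As a sketch of the fusion architecture of Chapter 4, the PyTorch code below concatenates handcrafted indices with mid-layer CNN features computed over a thickness map and classifies them jointly. The framework choice, layer sizes, and the small stand-in CNN are assumptions for illustration; in the thesis the deep branch is LeNet trained from scratch or VGGNet initialized from pre-trained weights, and the handcrafted inputs correspond to the area and depth indices above.

    import torch
    import torch.nn as nn

    class DeepFeatureFusionNet(nn.Module):
        def __init__(self, n_handcrafted: int = 4, n_deep: int = 128):
            super().__init__()
            # Deep branch over the 1-channel thickness map; a small generic CNN
            # stands in for the thesis's LeNet/VGGNet branches.
            self.cnn = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                nn.Linear(32 * 4 * 4, n_deep), nn.ReLU(),
            )
            # Joint classifier over the concatenated deep and handcrafted features.
            self.classifier = nn.Sequential(
                nn.Linear(n_deep + n_handcrafted, 64), nn.ReLU(),
                nn.Linear(64, 2),  # normal vs. glaucoma logits
            )

        def forward(self, thickness_map: torch.Tensor, handcrafted: torch.Tensor):
            deep = self.cnn(thickness_map)                 # (B, n_deep)
            fused = torch.cat([deep, handcrafted], dim=1)  # the feature fusion step
            return self.classifier(fused)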