ARIAS: Automated Retinal Image Analysis System
In this paper, a system for automated analysis of retinal images is proposed. The system segments blood vessels and recognizes the main features of the fundus in digital color images; the recognized features are the blood vessels, optic disc, and fovea. An algorithm based on the 2D matched filter response is proposed for the detection of blood vessels. Automatic recognition and localization methods for the optic disc and fovea are also introduced and discussed. Moreover, a method for distinguishing left from right retinal fundus images is presented.
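The 2D matched filter response mentioned above can be sketched as follows: a zero-mean kernel with a Gaussian cross-section is correlated with the image at several orientations, and the maximum response is kept. The kernel size, sigma, and angle count below are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np
from scipy.ndimage import convolve, rotate

def matched_filter_kernel(sigma=2.0, length=9):
    """Zero-mean kernel with a (negated) Gaussian cross-section,
    approximating a dark vessel on a brighter background."""
    half = int(3 * sigma)
    x = np.arange(-half, half + 1)
    profile = -np.exp(-x**2 / (2 * sigma**2))  # vessel cross-section
    kernel = np.tile(profile, (length, 1))     # extend along the vessel direction
    kernel -= kernel.mean()                    # flat regions then respond with ~0
    return kernel

def vessel_response(green_channel, n_angles=12):
    """Maximum matched-filter response over rotated kernels."""
    responses = []
    for angle in np.linspace(0.0, 180.0, n_angles, endpoint=False):
        k = rotate(matched_filter_kernel(), angle, reshape=True, order=1)
        responses.append(convolve(green_channel.astype(float), k))
    return np.max(responses, axis=0)
```

Thresholding the response map then yields a binary vessel segmentation; the green channel is used because it typically shows the highest vessel contrast.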
Automated Fovea Detection Based on Unsupervised Retinal Vessel Segmentation Method
Computer-assisted diagnosis systems can reduce workload and provide ophthalmologists with objective diagnostic support. Feature extraction is the fundamental first step of automated screening systems, and one of these retinal features is the fovea: a small fossa on the fundus, which appears deep red or red-brown in color retinal images. Observation of retinal images shows that the main vessels diverge from the optic nerve head and follow a course that can be geometrically modeled as a parabola, with a common vertex inside the optic nerve head and the fovea located along the axis of this parabola. Based on this assumption, the main retinal blood vessels are segmented and fitted to a parabolic model, and the fovea is then detected relative to this core vascular structure. For vessel segmentation, our algorithm operates on the image locally, where homogeneity of features is more likely to occur. The algorithm is composed of four steps: multi-overlapping windows, local Radon transform, vessel validation, and parabolic fitting. To extract the blood vessels, sub-vessels are first extracted in local windows. The high contrast between blood vessels and the image background causes the vessels to appear as peaks in Radon space. The largest vessels, obtained with a high threshold on the Radon transform, determine the main course of the vasculature, which, when fitted to a parabola, leads to the localization of the fovea. With an accurate fit, the fovea normally lies along the line joining the vertex and the focus, and the darkest region along this line is indicative of the fovea. To evaluate our method, we used 220 fundus images from a rural database (MUMS-DB) and a public one (DRIVE). Among the 20 images of the public database (DRIVE) we detected the fovea in 85% of them; for the MUMS-DB database, we detected the fovea correctly in 83% of the 200 images.
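The parabolic-fitting and fovea-search steps can be illustrated with a minimal sketch. The vessel centreline points, search distance, and axis direction passed in below are hypothetical inputs, not outputs of the full pipeline.

```python
import numpy as np

def fit_vessel_parabola(rows, cols):
    """Least-squares fit of col = a*row**2 + b*row + c to main-vessel
    centreline points; the vertex approximates the optic nerve head."""
    a, b, c = np.polyfit(rows, cols, 2)
    vertex_row = -b / (2 * a)
    vertex_col = np.polyval([a, b, c], vertex_row)
    return (a, b, c), (vertex_row, vertex_col)

def fovea_candidate(green, vertex, direction, distance):
    """Darkest pixel along the parabola's axis, searched around a nominal
    distance from the vertex (`direction` is a unit vector along the axis)."""
    vr, vc = vertex
    candidates = []
    for t in np.linspace(0.7 * distance, 1.3 * distance, 60):
        r = int(round(vr + t * direction[0]))
        c = int(round(vc + t * direction[1]))
        if 0 <= r < green.shape[0] and 0 <= c < green.shape[1]:
            candidates.append((r, c))
    # the fovea appears as the darkest region along the axis
    return min(candidates, key=lambda rc: green[rc])
```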
Accurate and reliable segmentation of the optic disc in digital fundus images
We describe a complete pipeline for the detection and accurate automatic segmentation of the optic disc in digital fundus images. The procedure comprises separation of vascular information and accurate inpainting of vessel-removed images, symmetry-based optic disc localization, and fitting of incrementally complex contour models at increasing resolutions using information from the inpainted images and vessel masks. Validation experiments, performed on a large dataset of images of healthy and pathological eyes, annotated by experts and partially graded with a quality label, demonstrate the good performance of the proposed approach. The method detects the optic disc and traces its contours better than the other systems presented in the literature and tested on the same data. The average error in the obtained contour masks is reasonably close to the inter-operator error and suitable for practical applications. The optic disc segmentation pipeline is currently integrated into a complete software suite for the semi-automatic quantification of retinal vessel properties from fundus camera images (VAMPIRE).
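The vessel-removal-and-inpainting step can be approximated by a simple iterative diffusion fill, in which masked pixels relax toward the mean of their neighbourhood while unmasked pixels stay fixed. This is a generic stand-in, not the paper's inpainting method.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def inpaint_vessels(gray, vessel_mask, iters=60):
    """Fill vessel pixels by repeatedly relaxing them toward the local
    mean of their 3x3 neighbourhood; non-vessel pixels stay fixed."""
    out = gray.astype(float).copy()
    out[vessel_mask] = out[~vessel_mask].mean()   # rough initial fill
    for _ in range(iters):
        smoothed = uniform_filter(out, size=3)
        out[vessel_mask] = smoothed[vessel_mask]  # update masked pixels only
    return out
```

Running the disc localization on the inpainted image avoids vessel edges distracting the contour model.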
UOLO - automatic object detection and segmentation in biomedical images
We propose UOLO, a novel framework for the simultaneous detection and segmentation of structures of interest in medical images. UOLO consists of an object segmentation module whose intermediate abstract representations are processed and used as input for object detection. The resulting system is optimized simultaneously for detecting a class of objects and segmenting an optionally different class of structures. UOLO is trained on a set of bounding boxes enclosing the objects to detect, as well as pixel-wise segmentation information when available. A new loss function is devised that takes into account whether a reference segmentation is accessible for each training image, in order to suitably backpropagate the error. We validate UOLO on the task of simultaneous optic disc (OD) detection, fovea detection, and OD segmentation from retinal images, achieving state-of-the-art performance on public datasets.
Comment: Published at DLMIA 2018. Licensed under the Creative Commons CC-BY-NC-ND 4.0 license: http://creativecommons.org/licenses/by-nc-nd/4.0
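The availability-aware loss described in the abstract can be sketched as follows: the segmentation term is masked out for images without a reference segmentation, so it neither contributes error nor dilutes the average. The function name and averaging scheme are illustrative, not UOLO's exact formulation.

```python
import numpy as np

def uolo_style_loss(det_losses, seg_losses, seg_available):
    """Combine per-image detection and segmentation losses so that the
    segmentation term only backpropagates for images that actually have
    a reference segmentation (`seg_available` holds 0/1 flags)."""
    det_losses = np.asarray(det_losses, dtype=float)
    seg_losses = np.asarray(seg_losses, dtype=float)
    mask = np.asarray(seg_available, dtype=float)
    # Average the segmentation loss over labelled images only, so
    # unlabelled images neither contribute to nor dilute that term.
    seg_term = (mask * seg_losses).sum() / max(mask.sum(), 1.0)
    return det_losses.mean() + seg_term
```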
Optic nerve head segmentation
Reliable and efficient optic disc localization and segmentation are important tasks in automated retinal screening. General-purpose edge detection algorithms often fail to segment the optic disc due to fuzzy boundaries, inconsistent image contrast, or missing edge features. This paper presents an algorithm for the localization and segmentation of the optic nerve head boundary in low-resolution images (about 20 μm/pixel). Optic disc localization is achieved using specialized template matching, and segmentation by a deformable contour model. The latter combines a global elliptical model with a local deformable model whose stiffness varies with edge strength. The algorithm is evaluated against a randomly selected database of 100 images from a diabetic screening programme. Ten images were classified as unusable; the others were of variable quality. The localization algorithm succeeded on all but one usable image; the contour estimation algorithm was qualitatively assessed by an ophthalmologist as having Excellent-to-Fair performance in 83% of cases, and performs well even on blurred images.
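Template-matching localization of this general kind can be illustrated with brute-force normalized cross-correlation: slide a template over the image and take the position of the highest correlation as the disc centre. The template itself (here, a bright disc-like patch) is an assumption, not the paper's specialized template.

```python
import numpy as np

def normalized_xcorr(image, template):
    """Brute-force normalized cross-correlation; the peak gives the most
    likely template location (a stand-in for specialized template matching)."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    H, W = image.shape
    for r in range(H - th + 1):
        for c in range(W - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * tn
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best
```

Normalizing both patches makes the score invariant to local brightness and contrast, which matters given the inconsistent image contrast the paper mentions.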
A method for quantifying sectoral optic disc pallor in fundus photographs and its association with peripapillary RNFL thickness
Purpose: To develop an automatic method of quantifying optic disc pallor in
fundus photographs and determine associations with peripapillary retinal nerve
fibre layer (pRNFL) thickness.
Methods: We used deep learning to segment the optic disc, fovea, and vessels
in fundus photographs, and measured pallor. We assessed the relationship
between pallor and pRNFL thickness derived from optical coherence tomography
scans in 118 participants. Separately, we used images diagnosed by clinical
inspection as pale (N=45) and assessed how measurements compared to healthy
controls (N=46). We also developed automatic rejection thresholds, and tested
the software for robustness to camera type, image format, and resolution.
Results: We developed software that automatically quantified disc pallor
across several zones in fundus photographs. Pallor was associated with pRNFL
thickness globally (β = -9.81, SE = 3.16, p < 0.05), in the temporal inferior zone (β = -29.78, SE = 8.32, p < 0.01), with the nasal/temporal ratio (β = 0.88, SE = 0.34, p < 0.05), and in the whole disc (β = -8.22, SE = 2.92, p < 0.05). Furthermore, pallor was significantly higher in
the patient group. Lastly, we demonstrate the analysis to be robust to camera
type, image format, and resolution.
Conclusions: We developed software that automatically locates and quantifies
disc pallor in fundus photographs and found associations between pallor
measurements and pRNFL thickness.
Translational relevance: We think our method will be useful for the
identification, monitoring and progression of diseases characterized by disc
pallor/optic atrophy, including glaucoma, compression, and potentially in
neurodegenerative disorders.Comment: 44 pages, 20 figures, 7 tables, submitte