
    Automated Fovea Detection Based on Unsupervised Retinal Vessel Segmentation Method

    Computer Assisted Diagnosis systems can reduce workload and provide objective diagnoses to ophthalmologists. Feature extraction is the fundamental step at the first level of automated screening systems. One of these retinal features is the fovea, a small depression in the fundus that appears as a deep-red or red-brown region in colour retinal images. Observation of retinal images shows that the main vessels diverge from the optic nerve head and follow a specific course that can be geometrically modelled as a parabola, with a common vertex inside the optic nerve head and the fovea located along the axis of this parabolic curve. Based on this assumption, the main retinal blood vessels are segmented and fitted to a parabolic model, and the fovea is then detected in the fundus image relative to this core vascular structure. For vessel segmentation, our algorithm operates on the image locally, where homogeneity of features is more likely to occur. The algorithm is composed of four steps: multi-overlapping windows, local Radon transform, vessel validation, and parabolic fitting. To extract the blood vessels, sub-vessels are first extracted in local windows. The high contrast between blood vessels and the image background causes the vessels to appear as peaks in Radon space. The largest vessels, obtained with a high threshold on the Radon transform, determine the main course or overall configuration of the blood vessels, which, when fitted to a parabola, leads to the subsequent localization of the fovea. In effect, with an accurate fit, the fovea normally lies along the line joining the vertex and the focus, and the darkest region along this line is indicative of the fovea. To evaluate our method, we used 220 fundus images from a rural database (MUMS-DB) and one public database (DRIVE). Among the 20 images of the public DRIVE database, the fovea was detected correctly in 85% of them; for the 200 images of the MUMS-DB database, it was detected correctly in 83%.
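    To make the local-Radon-plus-parabola idea concrete, here is a minimal Python sketch (not the authors' implementation), assuming a grayscale fundus image as a NumPy array and scikit-image; the window size, angular sampling, threshold ratio, and the offset of the fovea from the vertex along the parabola's axis are illustrative assumptions.

```python
# Minimal sketch: local Radon transform over overlapping windows, then a
# parabolic fit to the strongest vessel responses.  `img`, the window size,
# the angular step, the threshold ratio, and `fovea_offset` are assumptions.
import numpy as np
from skimage.transform import radon

def main_vessel_points(img, win=64, step=32, thresh_ratio=0.6):
    """Return centres of windows whose local Radon transform has a strong peak."""
    responses = []
    for y in range(0, img.shape[0] - win, step):
        for x in range(0, img.shape[1] - win, step):
            patch = img[y:y + win, x:x + win].astype(float)
            patch -= patch.mean()                        # remove local background
            sino = radon(patch, theta=np.arange(0.0, 180.0, 10.0), circle=False)
            responses.append((np.abs(sino).max(), x + win // 2, y + win // 2))
    peak_max = max(r[0] for r in responses)
    # keep only the strongest peaks: these follow the largest (main) vessels
    return np.array([(cx, cy) for peak, cx, cy in responses
                     if peak > thresh_ratio * peak_max])

def fit_parabola_and_fovea(points, fovea_offset=2.5):
    """Fit x = a*y**2 + b*y + c to the main-vessel points; the vertex lies inside
    the optic disc and the fovea is sought along the axis toward the focus."""
    x, y = points[:, 0], points[:, 1]
    a, b, c = np.polyfit(y, x, 2)
    y_v = -b / (2.0 * a)                                 # vertex of the parabola
    x_v = a * y_v ** 2 + b * y_v + c
    focal = 1.0 / (4.0 * abs(a))                         # vertex-to-focus distance
    x_f = x_v + np.sign(a) * fovea_offset * focal        # illustrative axial offset
    return (x_v, y_v), (x_f, y_v)
```

    In the described method, the darkest region along the vertex-to-focus line would then be searched to refine this candidate fovea position.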

    Retinal vessel segmentation using Gabor Filter and Textons

    This paper presents a retinal vessel segmentation method that is inspired by the human visual system and uses a Gabor filter bank. Machine learning is used to optimize the filter parameters for retinal vessel extraction. The filter responses are represented as textons, which allows the corresponding membership functions to be used as the framework for learning vessel and non-vessel classes. Vessel texton memberships are then used to generate the segmentation results. We evaluate our method on the publicly available DRIVE database. It achieves competitive performance (sensitivity = 0.7673, specificity = 0.9602, accuracy = 0.9430) compared with other recently published work. These figures are particularly interesting because our filter bank is quite generic and includes only Gabor responses. Our experimental results also show that the performance, in terms of sensitivity, is superior to that of other methods.
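    A hedged sketch of the Gabor-filter-bank-plus-textons idea is shown below, assuming the green channel of a fundus image, scikit-image's gabor filter, and scikit-learn's KMeans; the filter frequencies, orientations, and number of textons are placeholders rather than the machine-learned values reported above.

```python
# Hedged sketch: a small Gabor filter bank, texton learning with k-means, and
# per-pixel texton assignment.  Frequencies, orientations, and the number of
# textons are placeholder choices.
import numpy as np
from skimage.filters import gabor
from sklearn.cluster import KMeans

def gabor_features(green, frequencies=(0.1, 0.2, 0.3), n_orient=6):
    """Stack magnitude responses of the Gabor bank into an H x W x F array."""
    green = np.asarray(green, dtype=float)
    responses = []
    for f in frequencies:
        for theta in np.linspace(0.0, np.pi, n_orient, endpoint=False):
            real, imag = gabor(green, frequency=f, theta=theta)
            responses.append(np.hypot(real, imag))       # magnitude response
    return np.stack(responses, axis=-1)

def learn_textons(features, n_textons=20, n_samples=50000, seed=0):
    """Cluster a random sample of per-pixel response vectors into textons."""
    flat = features.reshape(-1, features.shape[-1])
    rng = np.random.default_rng(seed)
    idx = rng.choice(flat.shape[0], size=min(n_samples, flat.shape[0]), replace=False)
    return KMeans(n_clusters=n_textons, n_init=10, random_state=seed).fit(flat[idx])

def texton_map(features, kmeans):
    """Assign every pixel to its nearest texton (the texton 'label image')."""
    flat = features.reshape(-1, features.shape[-1])
    return kmeans.predict(flat).reshape(features.shape[:2])
```

    In the published method, textons learned from labelled vessel and non-vessel pixels define the membership functions used for classification; the sketch above only covers vocabulary learning and per-pixel texton assignment.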

    Joint segmentation and classification of retinal arteries/veins from fundus images

    Objective: Automatic artery/vein (A/V) segmentation from fundus images is required to track blood vessel changes occurring with many pathologies, including retinopathy and cardiovascular pathologies. One of the clinical measures that quantifies vessel changes is the arterio-venous ratio (AVR), which represents the ratio between artery and vein diameters. This measure depends significantly on the accuracy of vessel segmentation and classification into arteries and veins. This paper proposes a fast, novel method for semantic A/V segmentation combining deep learning and graph propagation. Methods: A convolutional neural network (CNN) is proposed to jointly segment and classify vessels into arteries and veins. The initial CNN labeling is propagated through a graph representation of the retinal vasculature, whose nodes are defined as the vessel branches and whose edges are weighted by the cost of linking pairs of branches. To propagate the labels efficiently, the graph is simplified into its minimum spanning tree. Results: The method achieves an accuracy of 94.8% for vessel segmentation. The A/V classification achieves a specificity of 92.9% and a sensitivity of 93.7% on the CT-DRIVE database, compared with state-of-the-art specificity and sensitivity of 91.7% each. Conclusion: The results show that our method outperforms the leading previous works on a public dataset for A/V classification and is by far the fastest. Significance: The proposed global AVR, calculated on the whole fundus image using our automatic A/V segmentation method, can track vessel changes associated with diabetic retinopathy better than the standard local AVR calculated only around the optic disc. Comment: Preprint accepted in Artificial Intelligence in Medicine.
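    The graph-propagation step can be illustrated as follows, assuming networkx and treating the vessel branches, linking costs, and initial CNN labels as given inputs; this is a generic sketch of minimum-spanning-tree label propagation, not the paper's exact propagation rule.

```python
# Generic sketch of minimum-spanning-tree label propagation over a vessel
# graph.  The graph construction, edge costs, and initial CNN labels are
# assumed to be given; node ids are hypothetical branch identifiers.
import networkx as nx

def propagate_av_labels(graph, initial_labels):
    """graph: weighted nx.Graph whose nodes are vessel branches and whose
    'weight' edge attribute is the cost of linking two branches.
    initial_labels: dict {branch_id: 'artery' or 'vein'} for branches the CNN
    labelled confidently.  Every remaining branch inherits the label of the
    labelled branch that is cheapest to reach along the minimum spanning tree."""
    mst = nx.minimum_spanning_tree(graph, weight='weight')
    labels = dict(initial_labels)
    for node in mst.nodes:
        if node in labels:
            continue
        best_cost, best_label = float('inf'), None
        for seed, seed_label in initial_labels.items():
            try:
                cost = nx.shortest_path_length(mst, node, seed, weight='weight')
            except nx.NetworkXNoPath:
                continue
            if cost < best_cost:
                best_cost, best_label = cost, seed_label
        labels[node] = best_label
    return labels
```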

    Retinal Blood Vessel Extraction from Fundus Images Using Enhancement Filtering and Clustering

    Screening for vision-threatening eye diseases by segmenting fundus images reduces the risk of sight loss. Computer-assisted analysis can play an important role in future health care systems worldwide. This paper therefore presents a clustering-based method for extracting the retinal vasculature from ophthalmoscope images. The method starts with image enhancement by contrast limited adaptive histogram equalization (CLAHE), from which features are extracted using a Gabor filter and then enhanced with Hessian-based enhancement filters. It then extracts the vessels using K-means clustering. Finally, a morphological cleaning operation is applied to obtain the final vessel-segmented image. The performance of the proposed method is evaluated on two publicly available databases, Digital Retinal Images for Vessel Extraction (DRIVE) and the Child Heart and Health Study in England (CHASE_DB1), using nine different performance metrics. It achieves average accuracies of 0.952 and 0.951 on the DRIVE and CHASE_DB1 databases, respectively.
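    A rough sketch of an enhancement-plus-clustering pipeline of this kind is given below, assuming OpenCV, scikit-image, and scikit-learn; for brevity, the Gabor and Hessian stages are collapsed into a single Frangi vesselness measure, and the parameter values are illustrative rather than the paper's.

```python
# Hedged sketch: CLAHE enhancement, Frangi vesselness (standing in for the
# Gabor + Hessian stages), k-means pixel clustering, and morphological cleanup.
import numpy as np
import cv2
from skimage.filters import frangi
from skimage.morphology import remove_small_objects
from sklearn.cluster import KMeans

def segment_vessels(rgb, n_clusters=3, min_size=60):
    """Return a boolean vessel mask for an 8-bit colour fundus image."""
    green = rgb[:, :, 1]                                   # green channel: best vessel contrast
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(green)                          # contrast-limited equalization
    vesselness = frangi(enhanced.astype(float) / 255.0)    # Hessian-based tubular response
    feats = np.stack([enhanced.ravel() / 255.0, vesselness.ravel()], axis=1)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(feats)
    vessel_cluster = int(np.argmax(km.cluster_centers_[:, 1]))  # highest mean vesselness
    mask = (km.labels_ == vessel_cluster).reshape(green.shape)
    return remove_small_objects(mask, min_size=min_size)   # drop small spurious blobs
```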