
    Joint segmentation and classification of retinal arteries/veins from fundus images

    Objective: Automatic artery/vein (A/V) segmentation from fundus images is required to track blood vessel changes occurring with many pathologies, including retinopathy and cardiovascular pathologies. One of the clinical measures that quantifies vessel changes is the arterio-venous ratio (AVR), which represents the ratio between artery and vein diameters. This measure depends significantly on the accuracy of vessel segmentation and classification into arteries and veins. This paper proposes a fast, novel method for semantic A/V segmentation combining deep learning and graph propagation. Methods: A convolutional neural network (CNN) is proposed to jointly segment and classify vessels into arteries and veins. The initial CNN labeling is propagated through a graph representation of the retinal vasculature, whose nodes are defined as the vessel branches and whose edges are weighted by the cost of linking pairs of branches. To propagate the labels efficiently, the graph is simplified into its minimum spanning tree. Results: The method achieves an accuracy of 94.8% for vessel segmentation. The A/V classification achieves a specificity of 92.9% and a sensitivity of 93.7% on the CT-DRIVE database, compared to the state-of-the-art specificity and sensitivity of 91.7% each. Conclusion: The results show that our method outperforms the leading previous works on a public dataset for A/V classification and is by far the fastest. Significance: The proposed global AVR, calculated on the whole fundus image using our automatic A/V segmentation method, can better track vessel changes associated with diabetic retinopathy than the standard local AVR calculated only around the optic disc. (Comment: Preprint accepted in Artificial Intelligence in Medicine)
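
    A minimal sketch of the graph-propagation step described above, assuming per-branch artery/vein probabilities from the CNN are already available; the branch extraction, edge costs, and the confidence-seeded spreading rule are illustrative assumptions, not the authors' implementation.

```python
# Sketch: propagate CNN artery/vein labels over the minimum spanning tree of a
# vessel-branch graph. Branch extraction, edge costs and the CNN probabilities
# are assumed inputs; the spreading rule below is a simplification.
import networkx as nx

def propagate_av_labels(branches, edges, cnn_prob, confidence=0.9):
    """branches: list of branch ids.
    edges: iterable of (u, v, cost) tuples linking pairs of branches.
    cnn_prob: dict branch_id -> P(artery) from the CNN.
    Returns dict branch_id -> 'artery' or 'vein'."""
    g = nx.Graph()
    g.add_nodes_from(branches)
    g.add_weighted_edges_from(edges)        # weight = cost of linking branches
    mst = nx.minimum_spanning_tree(g)       # simplify the graph to its MST

    # Seed labels where the CNN is confident, then spread them along the tree.
    labels = {b: 'artery' if cnn_prob[b] >= 0.5 else 'vein'
              for b in branches
              if cnn_prob[b] >= confidence or cnn_prob[b] <= 1 - confidence}
    frontier = list(labels)
    while frontier:
        b = frontier.pop()
        for nb in mst.neighbors(b):
            if nb not in labels:
                labels[nb] = labels[b]      # inherit the label of the tree neighbor
                frontier.append(nb)
    # Fall back to the raw CNN decision for any branch the tree did not reach.
    for b in branches:
        labels.setdefault(b, 'artery' if cnn_prob[b] >= 0.5 else 'vein')
    return labels
```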

    Measurement of retinal vessel widths from fundus images based on 2-D modeling

    Changes in retinal vessel diameter are an important sign of diseases such as hypertension, arteriosclerosis and diabetes mellitus. Obtaining precise measurements of vascular widths is a critical and demanding process in automated retinal image analysis, as the typical vessel is only a few pixels wide. This paper presents an algorithm to measure the vessel diameter to subpixel accuracy. The diameter measurement is based on a two-dimensional difference-of-Gaussian model, which is optimized to fit a two-dimensional intensity vessel segment. The performance of the method is evaluated against Brinchmann-Hansen's half-height, Gregson's rectangular profile, and Zhou's Gaussian model. Results from 100 sample profiles show that the presented algorithm is over 30% more precise than the compared techniques and is accurate to within a third of a pixel.
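
    The abstract describes fitting a two-dimensional difference-of-Gaussian model to a vessel segment. The sketch below is a much simpler one-dimensional version, fitting a difference-of-Gaussians dip to a single intensity cross-section with SciPy and reading off a sub-pixel width; the profile parameterization and the FWHM-based width estimate are illustrative choices, not the paper's 2-D formulation.

```python
# Sketch: sub-pixel vessel width from a 1-D intensity cross-section by fitting
# a difference-of-Gaussians profile. The paper optimizes a full 2-D model over
# a vessel segment; this 1-D fit is only an illustrative simplification.
import numpy as np
from scipy.optimize import curve_fit

def dog_profile(x, center, sigma, amp_ratio, amplitude, offset):
    """Dark vessel on a bright background modeled as a difference of Gaussians."""
    g_narrow = np.exp(-(x - center) ** 2 / (2 * sigma ** 2))
    g_wide = np.exp(-(x - center) ** 2 / (2 * (2.0 * sigma) ** 2))
    return offset - amplitude * (g_narrow - amp_ratio * g_wide)

def vessel_width(profile, pixels_per_unit=1.0):
    """profile: 1-D array of intensities sampled across the vessel."""
    x = np.arange(len(profile), dtype=float)
    p0 = [len(profile) / 2, 2.0, 0.3, profile.max() - profile.min(), profile.max()]
    params, _ = curve_fit(dog_profile, x, profile, p0=p0, maxfev=5000)
    sigma = abs(params[1])
    # Report the full width at half maximum of the fitted narrow Gaussian.
    return 2.355 * sigma / pixels_per_unit
```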

    Automated Fovea Detection Based on Unsupervised Retinal Vessel Segmentation Method

    Computer-assisted diagnosis systems can reduce workload and provide objective diagnoses to ophthalmologists. Feature extraction is the fundamental step at the first level of automated screening systems. One of these retinal features is the fovea. The fovea is a small fossa on the fundus, which appears as a deep-red or red-brown region in color retinal images. In retinal images, the main vessels diverge from the optic nerve head and follow a specific course that can be geometrically modeled as a parabola, with a common vertex inside the optic nerve head and the fovea located along the axis of this parabolic curve. Therefore, based on this assumption, the main retinal blood vessels are segmented and fitted to a parabolic model. With respect to this core vascular structure, the fovea can then be detected in fundus images. For the vessel segmentation, our algorithm addresses the image locally, where homogeneity of features is more likely to occur. The algorithm is composed of four steps: multi-overlapping windows, local Radon transform, vessel validation, and parabolic fitting. In order to extract blood vessels, sub-vessels are first extracted in local windows. The high contrast between blood vessels and the image background causes the vessels to be associated with peaks in the Radon space. The largest vessels, found using a high threshold of the Radon transform, determine the main course or overall configuration of the blood vessels, which, when fitted to a parabola, leads to the localization of the fovea. In effect, with an accurate fit, the fovea normally lies along the line joining the vertex and the focus. The darkest region along this line is indicative of the fovea. To evaluate our method, we used 220 fundus images from a rural database (MUMS-DB) and a public one (DRIVE). Among the 20 images of the public database (DRIVE), we detected the fovea in 85% of them; for the MUMS-DB database, among 200 images we detected the fovea correctly in 83% of them.
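
    A rough sketch of the final two steps described above (parabolic fitting and searching for the darkest region along the parabola's axis), assuming the main-vessel centreline points have already been extracted by the local Radon transform stage; the search range, window size, and use of the green channel are hypothetical choices, not the authors' settings.

```python
# Sketch: fit a parabola to main-vessel centreline points and scan along the
# axis from the vertex for the darkest region, taken as the fovea candidate.
# Vessel segmentation and centreline extraction are assumed to be done already.
import numpy as np

def fit_vessel_parabola(ys, xs):
    """Fit x = a*(y - y0)**2 + x0 to main-vessel centreline coordinates.
    Returns the curvature a and the vertex (x0, y0), roughly the optic disc."""
    coeffs = np.polyfit(ys, xs, 2)                 # quadratic least squares
    a, b, _ = coeffs
    y0 = -b / (2 * a)                              # vertex ordinate
    x0 = np.polyval(coeffs, y0)
    return a, (x0, y0)

def locate_fovea(green_channel, vertex, axis_dir, search_px=(60, 160), win=15):
    """Scan along the parabola axis and return the centre of the darkest window."""
    x0, y0 = vertex
    best, best_mean = None, np.inf
    for d in range(*search_px):
        x = int(round(x0 + d * axis_dir[0]))
        y = int(round(y0 + d * axis_dir[1]))
        patch = green_channel[y - win:y + win, x - win:x + win]
        if patch.size and patch.mean() < best_mean:
            best_mean, best = patch.mean(), (x, y)
    return best
```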

    Detection of Hard Exudates in Retinal Fundus Images using Deep Learning

    Diabetic Retinopathy (DR) is a retinal disorder that affects people who have had diabetes mellitus for a long time (20 years). DR is one of the main causes of preventable blindness worldwide. If not detected early, the patient may progress to severe stages of irreversible blindness. The shortage of ophthalmologists poses a serious problem for the growing number of diabetes patients, so it is advisable to develop an automated DR screening system to assist ophthalmologists in decision making. Hard exudates develop when DR is present, and detecting them is important for detecting DR at an early stage. Research has been done to detect hard exudates using conventional image processing techniques and machine learning techniques. This paper presents a deep learning algorithm that detects hard exudates in fundus images of the retina. (Comment: 5 pages, 3 figures, 2 tables, International Conference on Systems, Computation, Automation and Networking, http://icscan.in)
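
    The abstract does not specify the network used. As an illustration only, the following small patch-level CNN in PyTorch shows one plausible way to flag exudate-containing patches of a fundus image; the architecture, patch size, and two-class output are assumptions, not the paper's model.

```python
# Sketch: a small patch-level CNN that classifies fundus-image patches as
# containing hard exudates or not. Architecture, patch size and training
# details are hypothetical; the abstract does not describe the authors' network.
import torch
import torch.nn as nn

class ExudatePatchNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)    # exudate vs. background patch

    def forward(self, x):                     # x: (N, 3, 64, 64) RGB patches
        return self.classifier(self.features(x).flatten(1))

# Example: score a batch of 64x64 patches cropped from a fundus image.
model = ExudatePatchNet()
scores = model(torch.rand(8, 3, 64, 64))      # (8, 2) logits
```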

    Deep convolutional neural networks for segmenting 3D in vivo multiphoton images of vasculature in Alzheimer disease mouse models

    The health and function of tissue rely on its vasculature network to provide reliable blood perfusion. Volumetric imaging approaches, such as multiphoton microscopy, are able to generate detailed 3D images of blood vessels that could contribute to our understanding of the role of vascular structure in normal physiology and in disease mechanisms. The segmentation of vessels, a core image analysis problem, is a bottleneck that has prevented the systematic comparison of 3D vascular architecture across experimental populations. We explored the use of convolutional neural networks to segment 3D vessels within volumetric in vivo images acquired by multiphoton microscopy. We evaluated different network architectures and machine learning techniques in the context of this segmentation problem. We show that our optimized convolutional neural network architecture, which we call DeepVess, yielded a segmentation accuracy that was better than both the current state-of-the-art and a trained human annotator, while also being orders of magnitude faster. To explore the effects of aging and Alzheimer's disease on capillaries, we applied DeepVess to 3D images of cortical blood vessels in young and old mouse models of Alzheimer's disease and wild type littermates. We found little difference in the distribution of capillary diameter or tortuosity between these groups, but did note a decrease in the number of longer capillary segments (>75 μm) in aged animals as compared to young, in both wild type and Alzheimer's disease mouse models. (Comment: 34 pages, 9 figures)
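
    A small sketch of the kind of downstream morphology analysis the abstract reports (capillary segment length and tortuosity), assuming the segmentation has already been skeletonized and split into per-branch centreline point lists; that preprocessing, the variable names, and the voxel spacing are assumptions, not the paper's code.

```python
# Sketch: per-capillary length and tortuosity from branch centreline points,
# the kind of analysis run downstream of a 3D vessel segmentation. The
# skeletonization and branch splitting that produce `branches` are not shown.
import numpy as np

def branch_metrics(branches, voxel_size_um=(1.0, 1.0, 1.0)):
    """branches: iterable of (N_i, 3) arrays of skeleton voxel coordinates,
    ordered along each capillary segment. Returns (lengths, tortuosities)."""
    spacing = np.asarray(voxel_size_um, dtype=float)
    lengths, tortuosities = [], []
    for pts in branches:
        pts_um = np.asarray(pts, dtype=float) * spacing
        steps = np.linalg.norm(np.diff(pts_um, axis=0), axis=1)
        path_len = steps.sum()                           # arc length of the segment
        chord = np.linalg.norm(pts_um[-1] - pts_um[0])   # end-to-end distance
        lengths.append(path_len)
        tortuosities.append(path_len / chord if chord > 0 else np.nan)
    return np.asarray(lengths), np.asarray(tortuosities)

# Example: fraction of capillary segments longer than 75 um.
# lengths, tort = branch_metrics(branches, voxel_size_um=(1.0, 0.5, 0.5))
# print((lengths > 75).mean())
```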