A Comparative Study of Different Blood Vessel Detection on Retinal Images
Blood vessel detection is an important stage in several medical areas, such as ophthalmology, oncology, neurosurgery, and laryngology. The importance of vessel analysis has grown with the continuous introduction in clinical studies of new medical technologies intended to improve the visualization of vessels. In this paper, several local segmentation techniques were studied, including Vascular Tree Extraction, the Tyler L. Coye method, Line Tracking, Kirsch's Templates, and Fuzzy C-Means. The main objective is to determine the best approach for detecting blood vessels in degraded retinal input images (DRIVE dataset). Several Image Quality Assessment (IQA) measures were computed to evaluate the effectiveness of each detection method. Overall, the highest sensitivity came from Kirsch's Templates (96.928), while the highest specificity came from Fuzzy C-Means (77.573). However, in terms of average accuracy, the Line Tracking method was more successful than the other methods.
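The sensitivity, specificity, and accuracy figures quoted above come from comparing each binary segmentation against a ground-truth vessel mask. A minimal sketch of those three measures (toy masks, not the paper's data):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Sensitivity, specificity, and accuracy of a binary vessel mask."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.sum(pred & truth)      # vessel pixels correctly detected
    tn = np.sum(~pred & ~truth)    # background pixels correctly rejected
    fp = np.sum(pred & ~truth)     # background marked as vessel
    fn = np.sum(~pred & truth)     # vessel pixels missed
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / pred.size
    return sensitivity, specificity, accuracy

# toy 2x3 masks for illustration only
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
se, sp, acc = segmentation_metrics(pred, truth)
```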
Trainable COSFIRE filters for vessel delineation with application to retinal images
Retinal imaging provides a non-invasive opportunity for the diagnosis of several medical pathologies. The automatic segmentation of the vessel tree is an important pre-processing step which facilitates subsequent automatic processes that contribute to such diagnosis. We introduce a novel method for the automatic segmentation of vessel trees in retinal fundus images. We propose a filter that selectively responds to vessels and that we call B-COSFIRE with B standing for bar which is an abstraction for a vessel. It is based on the existing COSFIRE (Combination Of Shifted Filter Responses) approach. A B-COSFIRE filter achieves orientation selectivity by computing the weighted geometric mean of the output of a pool of Difference-of-Gaussians filters, whose supports are aligned in a collinear manner. It achieves rotation invariance efficiently by simple shifting operations. The proposed filter is versatile as its selectivity is determined from any given vessel-like prototype pattern in an automatic configuration process. We configure two B-COSFIRE filters, namely symmetric and asymmetric, that are selective for bars and bar-endings, respectively. We achieve vessel segmentation by summing up the responses of the two rotation-invariant B-COSFIRE filters followed by thresholding. The results that we achieve on three publicly available data sets (DRIVE: Se = 0.7655, Sp = 0.9704; STARE: Se = 0.7716, Sp = 0.9701; CHASE_DB1: Se = 0.7585, Sp = 0.9587) are higher than many of the state-of-the-art methods. The proposed segmentation approach is also very efficient with a time complexity that is significantly lower than existing methods.
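The core operation described above, a weighted geometric mean of Difference-of-Gaussians responses whose supports lie along a line, can be sketched for a single (vertical) orientation. Kernel sizes, offsets, and weights below are illustrative assumptions; the real B-COSFIRE filter configures them automatically from a prototype pattern and adds rotation invariance:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur from a 1D kernel (numpy only, toy version)."""
    radius = int(3 * sigma) + 1
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    rows = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, k, mode="same")

def dog(img, sigma):
    """Difference-of-Gaussians, half-wave rectified: responds on bright bars."""
    r = gaussian_blur(img, 0.5 * sigma) - gaussian_blur(img, sigma)
    return np.maximum(r, 0.0)

def bcosfire_like(img, sigma=2.0, offsets=(-4, -2, 0, 2, 4)):
    """Weighted geometric mean of DoG responses shifted along a vertical line.

    One orientation only; offsets and weights are hand-picked assumptions.
    """
    dogr = dog(img, sigma)
    w = np.exp(-np.array(offsets, float) ** 2 / (2 * 3.0 ** 2))
    w /= w.sum()
    eps = 1e-8
    log_mean = np.zeros_like(dogr)
    for wi, dy in zip(w, offsets):
        # collinear support: shift responses along the bar axis onto the center
        # (np.roll wraps at the border, acceptable for this sketch)
        log_mean += wi * np.log(np.roll(dogr, dy, axis=0) + eps)
    return np.exp(log_mean)

img = np.zeros((21, 21))
img[:, 10] = 1.0            # a vertical bright bar as the vessel prototype
resp = bcosfire_like(img)   # strong on the bar, near zero elsewhere
```

The geometric mean (rather than a sum) makes the response collapse whenever any point of the collinear support is off the vessel, which is what gives the filter its selectivity.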
Joint segmentation and classification of retinal arteries/veins from fundus images
Objective Automatic artery/vein (A/V) segmentation from fundus images is
required to track blood vessel changes occurring with many pathologies
including retinopathy and cardiovascular pathologies. One of the clinical
measures that quantifies vessel changes is the arterio-venous ratio (AVR) which
represents the ratio between artery and vein diameters. This measure
significantly depends on the accuracy of vessel segmentation and classification
into arteries and veins. This paper proposes a fast, novel method for semantic
A/V segmentation combining deep learning and graph propagation.
Methods A convolutional neural network (CNN) is proposed to jointly segment
and classify vessels into arteries and veins. The initial CNN labeling is
propagated through a graph representation of the retinal vasculature, whose
nodes are defined as the vessel branches and edges are weighted by the cost of
linking pairs of branches. To efficiently propagate the labels, the graph is
simplified into its minimum spanning tree.
Results The method achieves an accuracy of 94.8% for vessel segmentation.
The A/V classification achieves a specificity of 92.9% with a sensitivity of
93.7% on the CT-DRIVE database, compared to the state-of-the-art specificity
and sensitivity, both at 91.7%.
Conclusion The results show that our method outperforms the leading previous
works on a public dataset for A/V classification and is by far the fastest.
Significance The proposed global AVR calculated on the whole fundus image
using our automatic A/V segmentation method can better track vessel changes
associated to diabetic retinopathy than the standard local AVR calculated only
around the optic disc.
Comment: Preprint accepted in Artificial Intelligence in Medicine
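The graph-propagation step described in the Methods, simplifying the branch graph to its minimum spanning tree and spreading the CNN's confident labels along it, can be sketched as follows. The branch graph, edge costs, and seed labels below are made up for illustration:

```python
def propagate_labels(n, edges, labels):
    """Propagate artery/vein labels over the minimum spanning tree of a
    branch graph (toy sketch of the paper's graph-propagation idea).

    n      -- number of vessel branches (graph nodes)
    edges  -- list of (cost, u, v): cost of linking branches u and v
    labels -- dict {node: 'A' or 'V'} of confident seed labels (hypothetical)
    """
    # Kruskal's algorithm: keep the cheapest edges that do not form a cycle
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    mst = [[] for _ in range(n)]
    for cost, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst[u].append(v)
            mst[v].append(u)

    # breadth-first propagation of the seed labels along tree edges
    out = dict(labels)
    queue = list(labels)
    while queue:
        u = queue.pop(0)
        for v in mst[u]:
            if v not in out:
                out[v] = out[u]
                queue.append(v)
    return out

# four branches; branch 0 is a confident artery, branch 2 a confident vein
labels = propagate_labels(
    4,
    [(1, 0, 1), (5, 1, 2), (2, 2, 3), (9, 0, 3)],  # (cost, u, v)
    {0: "A", 2: "V"},
)
```

Restricting propagation to the minimum spanning tree removes cycles, so each unlabeled branch inherits its label along the single cheapest path to a seed.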
Automated Fovea Detection Based on Unsupervised Retinal Vessel Segmentation Method
Computer Assisted Diagnosis systems can save workload and give objective diagnoses to ophthalmologists. At the first level of automated screening systems, feature extraction is the fundamental step. One of these retinal features is the fovea. The fovea is a small fossa on the fundus, which appears as a deep-red or red-brown region in color retinal images. Observing retinal images, the main vessels diverge from the optic nerve head and follow a specific course that can be geometrically modeled as a parabola, with a common vertex inside the optic nerve head and the fovea located along the axis of this parabolic curve. Therefore, based on this assumption, the main retinal blood vessels are segmented and fitted to a parabolic model. With respect to the core vascular structure, we can thus detect the fovea in fundus images. For vessel segmentation, our algorithm addresses the image locally, where homogeneity of features is more likely to occur. The algorithm is composed of four steps: multi-overlapping windows, local Radon transform, vessel validation, and parabolic fitting. To extract blood vessels, sub-vessels are first extracted in local windows. The high contrast between blood vessels and the image background causes the vessels to be associated with peaks in the Radon space. The largest vessels, found using a high threshold on the Radon transform, determine the main course or overall configuration of the blood vessels, which, when fitted to a parabola, leads to the localization of the fovea. In effect, with an accurate fit, the fovea normally lies along the line joining the vertex and the focus. The darkest region along this line is indicative of the fovea. To evaluate our method, we used 220 fundus images from a rural database (MUMS-DB) and one public database (DRIVE). The results show that among the 20 images of the public database (DRIVE), we detected the fovea in 85% of them.
Also, for the MUMS-DB database, among 200 images we detected the fovea correctly in 83% of them.
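The parabolic-fitting step can be illustrated with a toy fit: the main-vessel centreline points are fitted to a parabola, and a candidate fovea location is then taken along the symmetry axis on the opening side of the vertex. The data and the search distance below are hypothetical, not the paper's values:

```python
import numpy as np

def parabola_axis_point(xs, ys, dist):
    """Fit y = a*x^2 + b*x + c to vessel centreline points and return a point
    a given distance from the vertex along the symmetry axis, on the side
    the parabola opens toward (where the fovea would be searched for)."""
    a, b, c = np.polyfit(xs, ys, 2)
    xv = -b / (2 * a)              # vertex x
    yv = a * xv ** 2 + b * xv + c  # vertex y
    direction = np.sign(a)         # the parabola opens toward its focus
    return xv, yv + direction * dist

# synthetic, noise-free vessel points lying on y = 0.5 * x^2
xs = np.array([-4.0, -2.0, 0.0, 2.0, 4.0])
ys = 0.5 * xs ** 2
x0, y0 = parabola_axis_point(xs, ys, dist=3.0)
```

In the actual method the darkest image region along this axis line is then taken as the fovea, which this sketch omits.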
Deep convolutional neural networks for segmenting 3D in vivo multiphoton images of vasculature in Alzheimer disease mouse models
The health and function of tissue rely on its vasculature network to provide
reliable blood perfusion. Volumetric imaging approaches, such as multiphoton
microscopy, are able to generate detailed 3D images of blood vessels that could
contribute to our understanding of the role of vascular structure in normal
physiology and in disease mechanisms. The segmentation of vessels, a core image
analysis problem, is a bottleneck that has prevented the systematic comparison
of 3D vascular architecture across experimental populations. We explored the
use of convolutional neural networks to segment 3D vessels within volumetric in
vivo images acquired by multiphoton microscopy. We evaluated different network
architectures and machine learning techniques in the context of this
segmentation problem. We show that our optimized convolutional neural network
architecture, which we call DeepVess, yielded a segmentation accuracy that was
better than both the current state-of-the-art and a trained human annotator,
while also being orders of magnitude faster. To explore the effects of aging
and Alzheimer's disease on capillaries, we applied DeepVess to 3D images of
cortical blood vessels in young and old mouse models of Alzheimer's disease and
wild type littermates. We found little difference in the distribution of
capillary diameter or tortuosity between these groups, but did note a decrease
in the number of longer capillary segments in aged animals as compared to
young, in both wild type and Alzheimer's disease mouse models.
Comment: 34 pages, 9 figures
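One of the morphological statistics compared across groups above, tortuosity, is conventionally measured as the arc length of a centreline divided by the chord length between its endpoints; a minimal sketch of that conventional metric (the abstract does not state which definition DeepVess's analysis used, so this is an assumption):

```python
import numpy as np

def tortuosity(points):
    """Arc-length over chord-length tortuosity of a 3D capillary centreline.
    Equals 1.0 for a straight vessel and grows as the vessel winds."""
    points = np.asarray(points, dtype=float)
    steps = np.diff(points, axis=0)                 # segment vectors
    arc = np.sum(np.linalg.norm(steps, axis=1))     # path length
    chord = np.linalg.norm(points[-1] - points[0])  # endpoint distance
    return arc / chord

straight = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]  # tortuosity exactly 1
bent = [(0, 0, 0), (1, 1, 0), (2, 0, 0)]      # longer path, same endpoints
```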
A ribbon of twins for extracting vessel boundaries
This paper presents an efficient model for the automatic detection and extraction of blood vessels in ocular fundus images. The model is formed by combining the concepts of ribbon snakes and twin snakes. On each edge, the twin concept is introduced by using two snakes, one inside and one outside the boundary. The ribbon concept integrates the pair of twins on the two vessel edges into a single ribbon. The twins maintain the consistency of the vessel width, particularly on very blurred, thin, and noisy vessels. The model exhibits excellent performance in extracting the boundaries of vessels, with improved robustness compared to alternative models in the presence of occlusion, poor contrast, or noise. Results are presented which demonstrate the performance of the discussed edge extraction method and show a significant improvement compared to classical snake formulations.
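The coupling described above, two edge snakes tied into one ribbon so that vessel width stays consistent, can be caricatured as an energy with an external edge-attraction term and an internal width-smoothness term. The weights and the per-column discretisation below are illustrative, not the paper's exact formulation:

```python
import numpy as np

def ribbon_energy(left, right, edge_map, alpha=1.0):
    """Toy energy of a 'ribbon of twins': lower is better.

    left, right -- integer y-positions of the two edge snakes, one per column
    edge_map    -- 2D edge-strength image (rows = y, columns = x)
    alpha       -- hypothetical weight of the width-consistency term
    """
    left = np.asarray(left)
    right = np.asarray(right)
    cols = np.arange(left.size)
    # external term: both contours are rewarded for sitting on strong edges
    external = -np.sum(edge_map[left, cols] + edge_map[right, cols])
    # internal term: the twins penalise variation of the local vessel width
    width = right - left
    internal = alpha * np.sum(np.diff(width) ** 2)
    return external + internal

# synthetic vessel with its two edges on rows 2 and 4
edge_map = np.zeros((7, 4))
edge_map[2, :] = 1.0
edge_map[4, :] = 1.0
good = ribbon_energy([2, 2, 2, 2], [4, 4, 4, 4], edge_map)  # constant width
bad = ribbon_energy([2, 2, 2, 2], [4, 5, 4, 4], edge_map)   # width wobbles
```

In an actual snake model this energy would be minimised iteratively over the contour positions; the sketch only evaluates it, showing that the width-consistent ribbon scores lower than a wobbling one.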
Extracting Tree-structures in CT data by Tracking Multiple Statistically Ranked Hypotheses
In this work, we adapt a method based on multiple hypothesis tracking (MHT)
that has been shown to give state-of-the-art vessel segmentation results in
interactive settings, for the purpose of extracting trees. Regularly spaced
tubular templates are fit to image data forming local hypotheses. These local
hypotheses are used to construct the MHT tree, which is then traversed to make
segmentation decisions. However, some critical parameters in this method are
scale-dependent and have an adverse effect when tracking structures of varying
dimensions. We propose to use statistical ranking of local hypotheses in
constructing the MHT tree, which yields a probabilistic interpretation of
scores across scales and helps alleviate the scale-dependence of MHT
parameters. This enables our method to track trees starting from a single seed
point. Our method is evaluated on chest CT data to extract airway trees and
coronary arteries. In both cases, we show that our method performs
significantly better than the original MHT method.
Comment: Accepted for publication in the International Journal of Medical
Physics and Practice
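The statistical-ranking idea above, mapping raw tube-fit scores to ranks against a scale-specific reference distribution so that hypotheses from different scales become comparable, can be sketched as empirical percentiles. The reference samples below are made up for illustration:

```python
import numpy as np

def rank_scores(scores, reference):
    """Empirical percentile of candidate scores within a reference sample
    collected at the same scale (sketch of scale-invariant hypothesis
    ranking; a probabilistic interpretation of raw fit scores)."""
    reference = np.sort(np.asarray(reference, dtype=float))
    # fraction of reference scores that the candidate beats
    return np.searchsorted(reference, scores, side="right") / reference.size

# scale A yields large raw fit scores, scale B small ones; after ranking,
# equally good hypotheses from either scale receive the same value
ref_a = [10.0, 20.0, 30.0, 40.0]
ref_b = [0.1, 0.2, 0.3, 0.4]
```

Comparing percentiles instead of raw scores is what lets the tracker choose among hypotheses of very different tube radii with a single set of parameters.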