23 research outputs found

    Retinal vessel segmentation using Gabor Filter and Textons

    This paper presents a retinal vessel segmentation method that is inspired by the human visual system and uses a Gabor filter bank. Machine learning is used to optimize the filter parameters for retinal vessel extraction. The filter responses are represented as textons, and the corresponding membership functions serve as the framework for learning vessel and non-vessel classes. Vessel texton memberships are then used to generate the segmentation results. We evaluate our method using the publicly available DRIVE database. It achieves competitive performance (sensitivity = 0.7673, specificity = 0.9602, accuracy = 0.9430) compared to other recently published work. These figures are particularly interesting as our filter bank is quite generic and only includes Gabor responses. Our experimental results also show that the performance, in terms of sensitivity, is superior to that of other methods.
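    To make the texton pipeline concrete, the sketch below builds a small Gabor filter bank on the green channel and clusters the per-pixel responses into textons. The filter parameters, the number of textons, and the use of plain k-means are illustrative assumptions; the paper learns the filter parameters and the vessel/non-vessel texton memberships from training data.

```python
# Minimal sketch: Gabor filter-bank responses clustered into textons for
# pixel-wise vessel analysis. Parameters and k-means clustering are
# illustrative stand-ins for the learned components described in the paper.
import numpy as np
from scipy.ndimage import convolve
from sklearn.cluster import KMeans

def gabor_kernel(sigma, theta, lambd, gamma=0.5, size=21):
    """Real part of a 2-D Gabor kernel with orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr) ** 2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lambd)

def texton_map(green_channel, n_textons=8):
    """Assign each pixel a texton index based on its Gabor filter responses."""
    thetas = np.linspace(0, np.pi, 8, endpoint=False)   # 8 orientations
    scales = [(2.0, 8.0), (3.0, 12.0)]                  # (sigma, lambda) pairs
    responses = [convolve(green_channel.astype(float), gabor_kernel(s, t, l))
                 for t in thetas for (s, l) in scales]
    feats = np.stack(responses, axis=-1).reshape(-1, len(responses))
    labels = KMeans(n_clusters=n_textons, n_init=10).fit_predict(feats)
    return labels.reshape(green_channel.shape)
```

    A vessel map would then be obtained by retaining the texton clusters whose memberships correspond to the learned vessel class, which is the step the paper drives with its trained membership functions.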

    Retina vessel width estimation using bifurcation points to track vessels


    Study of the retinal vascular changes in the transition from diabetic to diabetic retinopathy eye

    This article investigates the vascular changes at the transition from R0 to R1 (non-retinopathy to the first stage of retinopathy). Thirty images from the right eyes of fifteen patients were used (one from the year before retinopathy and one after), and width measurements were taken from six large vessel segments at junctions (three from arteries and three from veins).

    The Fourth Biometric - Vein Recognition


    Robust methodology for fractal analysis of the retinal vasculature

    We have developed a robust method to perform retinal vascular fractal analysis from digital retina images. The technique preprocesses the green-channel retina images with Gabor wavelet transforms to enhance the retinal images. The Fourier fractal dimension is computed on these preprocessed images and does not require any segmentation of the vessels. This novel technique requires human input at only a single step: the allocation of the optic disc center. We have tested this technique on 380 retina images from healthy individuals aged 50+ years, randomly selected from the Blue Mountains Eye Study population. To assess its reliability under different allocations of the optic disc center, we performed pair-wise Pearson correlation between the fractal dimension estimates from 100 simulated regions of interest for each of the 380 images, where the optic disc center allocation was varied according to a Gaussian distribution in each simulation. The resulting mean correlation coefficient (standard deviation) was 0.93 (0.005). The repeatability of this method was found to be better than that of the earlier box-counting method. Using this method to assess retinal vascular fractals, we have also confirmed a reduction in retinal vasculature complexity with aging, consistent with observations from other human organ systems.
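    The central computation can be sketched as follows: take the 2-D power spectrum of the Gabor-enhanced green channel, radially average it, fit a line to the log-log spectrum, and map the spectral slope to a fractal dimension. The D = (8 - beta)/2 mapping assumes a fractional Brownian surface model, and both it and the frequency range used for the fit are illustrative choices rather than the paper's calibrated settings.

```python
# Hedged sketch of a Fourier (power-spectrum) fractal dimension estimate.
import numpy as np

def fourier_fractal_dimension(enhanced):
    """Estimate D from the slope of the radially averaged log power spectrum."""
    spectrum = np.fft.fftshift(np.fft.fft2(enhanced - enhanced.mean()))
    power = np.abs(spectrum) ** 2
    h, w = power.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    # Radially average the power spectrum.
    radial = np.bincount(r.ravel(), power.ravel()) / np.bincount(r.ravel())
    freqs = np.arange(1, min(h, w) // 2)        # skip DC, stay below Nyquist
    slope = np.polyfit(np.log(freqs), np.log(radial[freqs]), 1)[0]
    beta = -slope                               # P(f) ~ f^(-beta)
    return (8.0 - beta) / 2.0                   # fBm-surface convention
```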

    Trainable COSFIRE filters for vessel delineation with application to retinal images

    Retinal imaging provides a non-invasive opportunity for the diagnosis of several medical pathologies. The automatic segmentation of the vessel tree is an important pre-processing step that facilitates subsequent automatic processes contributing to such diagnosis. We introduce a novel method for the automatic segmentation of vessel trees in retinal fundus images. We propose a filter that selectively responds to vessels, which we call B-COSFIRE, with B standing for bar, an abstraction for a vessel. It is based on the existing COSFIRE (Combination Of Shifted Filter Responses) approach. A B-COSFIRE filter achieves orientation selectivity by computing the weighted geometric mean of the output of a pool of Difference-of-Gaussians filters whose supports are aligned in a collinear manner. It achieves rotation invariance efficiently by simple shifting operations. The proposed filter is versatile, as its selectivity is determined from any given vessel-like prototype pattern in an automatic configuration process. We configure two B-COSFIRE filters, namely symmetric and asymmetric, that are selective for bars and bar-endings, respectively. We achieve vessel segmentation by summing up the responses of the two rotation-invariant B-COSFIRE filters followed by thresholding. The results that we achieve on three publicly available data sets (DRIVE: Se = 0.7655, Sp = 0.9704; STARE: Se = 0.7716, Sp = 0.9701; CHASE_DB1: Se = 0.7585, Sp = 0.9587) are higher than those of many state-of-the-art methods. The proposed segmentation approach is also very efficient, with a time complexity significantly lower than that of existing methods.
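    A rough sketch of the B-COSFIRE idea follows: a Difference-of-Gaussians response map is blurred, shifted to a set of collinear support points, combined by a weighted geometric mean, and the maximum over rotated configurations provides rotation invariance. The sigmas, support points, and weights below are illustrative placeholders rather than the automatically configured values of the published filters.

```python
# Hedged sketch of a symmetric B-COSFIRE-style bar detector.
import numpy as np
from scipy.ndimage import gaussian_filter, shift as nd_shift

def dog(image, sigma, k=1.6):
    """Rectified center-surround Difference-of-Gaussians response."""
    response = gaussian_filter(image, sigma) - gaussian_filter(image, k * sigma)
    return np.maximum(response, 0.0)

def b_cosfire_like(image, sigma=2.0, rho_list=(0, 4, 8), n_orient=12):
    dog_map = dog(image.astype(float), sigma)
    best = np.zeros_like(dog_map)
    for theta in np.linspace(0, np.pi, n_orient, endpoint=False):
        product, weight_sum = np.ones_like(dog_map), 0.0
        for rho in rho_list:                    # collinear support points
            for sign in ((1, -1) if rho else (1,)):
                dy = sign * rho * np.sin(theta)
                dx = sign * rho * np.cos(theta)
                blurred = gaussian_filter(dog_map, 1.0 + 0.1 * rho)
                shifted = nd_shift(blurred, (dy, dx), order=1)
                w = np.exp(-rho**2 / (2 * 10.0**2))
                product *= np.maximum(shifted, 1e-9) ** w
                weight_sum += w
        # Weighted geometric mean for this orientation; keep the max over all.
        best = np.maximum(best, product ** (1.0 / weight_sum))
    return best                                  # threshold to obtain a segmentation
```

    An asymmetric variant for bar-endings would place the support points on one side of the center only, matching the two filter types described in the abstract.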

    Vessel labeling in combined confocal scanning laser ophthalmoscopy and optical coherence tomography images: criteria for blood vessel discrimination

    INTRODUCTION: The diagnostic potential of optical coherence tomography (OCT) in neurological diseases is intensively discussed. Besides the sectional view of the retina, modern OCT scanners produce a simultaneous top-view confocal scanning laser ophthalmoscopy (cSLO) image, which allows retinal vessels to be evaluated. A correct discrimination between arteries and veins (labeling) is vital for detecting vascular differences between healthy subjects and patients. Up to now, criteria for labeling cSLO images generated by OCT scanners have not existed. OBJECTIVE: This study reviewed labeling criteria originally developed for color fundus photography (CFP) images. METHODS: The criteria were modified to reflect the cSLO technique, followed by the development of a protocol for labeling blood vessels. These criteria were based on main aspects such as the central light reflex, brightness, and vessel thickness, as well as on additional criteria such as vascular crossing patterns and the context of the vessel tree. RESULTS AND CONCLUSION: The criteria demonstrated excellent inter-rater agreement and validity, which suggests that labeling of images might no longer require more than one rater. This algorithm extends the diagnostic possibilities offered by OCT investigations.
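    As a toy illustration only, the main criteria can be written as a simple rule-based score over hypothetical per-vessel measurements (central light reflex prominence, brightness, caliber). In the study these criteria are applied by trained human raters on cSLO images; the thresholds below are invented for the sketch.

```python
# Toy rule-based artery/vein labeling score; measurements and thresholds are
# hypothetical and only illustrate the direction of each criterion.
from dataclasses import dataclass

@dataclass
class VesselMeasurement:
    central_reflex: float   # prominence of the central light reflex (0..1)
    brightness: float       # mean intensity along the vessel (0..1)
    caliber_um: float       # estimated vessel width in micrometers

def label_vessel(v: VesselMeasurement) -> str:
    """Arteries tend to show a stronger central reflex, appear brighter,
    and are thinner than the paired veins."""
    score = 0
    score += 1 if v.central_reflex > 0.5 else -1
    score += 1 if v.brightness > 0.5 else -1
    score += 1 if v.caliber_um < 100.0 else -1
    return "artery" if score > 0 else "vein"
```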

    Automatic recognition of the symptoms of hypertension in retinal images (A magasvérnyomás-betegség tüneteinek automatikus felismerése retinaképeken)

    This thesis deals with the automatic recognition of the characteristic early symptoms of hypertension in input retinal images. To this end, it first surveys existing methods for segmenting the vascular network and then presents one possible method in more detail. It then discusses possible methods for recognizing the various symptoms on the resulting binary image.

    Retinal Vessel Centerline Extraction Using Multiscale Matched Filters, Confidence and Edge Measures


    Computer Vision Based Early Intraocular Pressure Assessment From Frontal Eye Images

    Intraocular pressure (IOP) refers to the pressure inside the eye. A gradual increase in IOP and high IOP are conditions that may lead to diseases such as glaucoma and must therefore be closely monitored. As the pressure in the eye increases, different parts of the eye may be affected and eventually damaged. Early detection is an effective way to counter rising eye pressure. Existing IOP monitoring tools include eye tests at clinical facilities and computer-aided techniques based on fundus and optic nerve images. In this work, a new computer vision-based smart healthcare framework is presented to assess intraocular pressure risk early on from frontal eye images. The framework determines IOP status by analyzing frontal eye images using image processing and machine learning techniques. A database of images from the Princess Basma Hospital was used in this work; it contains 400 eye images, 200 with normal IOP and 200 with high eye pressure. This study proposes novel features for IOP determination in two experiments. The first experiment extracts the sclera using the circular Hough transform, after which four features are extracted from the whole sclera: mean redness level, red area percentage, contour area, and contour height. The pupil/iris diameter ratio feature is also extracted from the frontal eye image after a series of pre-processing steps. The second experiment extracts the sclera and iris segments using a fully convolutional neural network, after which six features are extracted from only part of the segmented sclera and iris: mean redness level, red area percentage, contour area, contour distance, and contour angle, along with the pupil/iris diameter ratio. Once the features are extracted, classification techniques are applied to train and test on the images and features and to determine each patient's eye pressure status. The first experiment adopted neural network and support vector machine algorithms to detect IOP status; the second adopted support vector machine and decision tree algorithms. In both experiments, the framework detects IOP status (normal or high) with high accuracy. This computer vision-based approach provides evidence of a relationship between the extracted frontal eye image features and IOP, which had not previously been investigated through automated image processing and machine learning techniques applied to frontal eye images.
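    The first experiment's pipeline can be sketched roughly as follows: a circular Hough transform locates the iris, an annulus around it stands in for the sclera, and simple redness features feed an SVM. The radii, thresholds, and the annulus approximation are illustrative assumptions, and the contour-based features and the pupil/iris diameter ratio used in the paper are omitted for brevity.

```python
# Hedged sketch of sclera redness features from a frontal eye image plus an
# SVM classifier; parameters are illustrative, not the paper's settings.
import cv2
import numpy as np
from sklearn.svm import SVC

def extract_features(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    circles = cv2.HoughCircles(cv2.medianBlur(gray, 5), cv2.HOUGH_GRADIENT,
                               dp=1.2, minDist=gray.shape[0],
                               param1=100, param2=30,
                               minRadius=20, maxRadius=gray.shape[0] // 2)
    if circles is None:
        return None
    cx, cy, r = circles[0, 0]                        # strongest circle ~ iris
    mask = np.zeros_like(gray)
    cv2.circle(mask, (int(cx), int(cy)), int(r * 2.5), 255, -1)
    cv2.circle(mask, (int(cx), int(cy)), int(r), 0, -1)   # keep ring around iris
    blue, green, red = (c[mask > 0].astype(float) for c in cv2.split(bgr_image))
    redness = red - (green + blue) / 2.0
    return [float(redness.mean()),                    # mean redness level
            float((redness > 30).mean())]             # red area percentage

def train_iop_classifier(features, labels):
    """Fit an RBF-kernel SVM (labels: 0 = normal IOP, 1 = high IOP)."""
    return SVC(kernel="rbf", gamma="scale").fit(features, labels)
```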