Abstract

This paper presents a retinal vessel segmentation algorithm which uses a texton dictionary to classify vessel/non-vessel pixels. In contrast to previous work, where filter parameters are learnt from manually labelled image pixels, our filter parameters are derived from a smaller set of image features that we call keypoints. A Gabor filter bank, parameterised empirically by ROC analysis, is used to extract keypoints representing significant scale-specific vessel features using an approach inspired by the SIFT algorithm. We first determine keypoints using a validation set and then derive seeds from these points to initialise a k-means clustering algorithm, which builds a texton dictionary from a separate training set. During testing we use a simple 1-NN classifier to identify vessel/non-vessel pixels and evaluate our system using the DRIVE database. We achieve average values of sensitivity, specificity and accuracy of 78.12%, 96.68% and 95.05%, respectively. We find that clusters of filter responses from keypoints are more robust than those derived from hand-labelled pixels. This, in turn, yields textons more representative of the vessel/non-vessel classes and mitigates problems arising from intra- and inter-observer variability.
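The sketch below illustrates the general shape of the pipeline described above (Gabor filter-bank responses per pixel, a texton dictionary built with seeded k-means, and 1-NN pixel classification), assuming a grey-scale fundus image and keypoint-derived seed vectors that are already available; the Gabor parameters, function names, and the SIFT-inspired keypoint extraction step are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.filters import gabor_kernel
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier


def gabor_responses(image, frequencies=(0.1, 0.2), n_orientations=6):
    """Per-pixel responses of a small Gabor filter bank (assumed parameters)."""
    responses = []
    for f in frequencies:
        for theta in np.linspace(0, np.pi, n_orientations, endpoint=False):
            kernel = np.real(gabor_kernel(f, theta=theta))
            responses.append(convolve(image, kernel))
    # One feature vector per pixel: shape (n_pixels, n_filters)
    return np.stack(responses, axis=-1).reshape(-1, len(responses))


def build_texton_dictionary(train_features, keypoint_seeds):
    """k-means initialised from keypoint-derived seeds; cluster centres act as textons."""
    seeds = np.asarray(keypoint_seeds)
    km = KMeans(n_clusters=len(seeds), init=seeds, n_init=1)
    km.fit(train_features)
    return km.cluster_centers_


def classify_pixels(test_features, textons, texton_labels):
    """1-NN assignment of each pixel's filter-response vector to its nearest texton."""
    nn = KNeighborsClassifier(n_neighbors=1).fit(textons, texton_labels)
    return nn.predict(test_features)  # vessel / non-vessel label per pixel
```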
