Automatic extraction of retinal features to assist diagnosis of glaucoma disease

Abstract

Glaucoma is a group of eye diseases with common traits such as elevated eye pressure, damage to the Optic Nerve Head (ONH) and gradual vision loss. It affects peripheral vision and eventually leads to blindness if left untreated. Glaucoma is currently diagnosed manually by clinicians, who perform image operations such as contrast adjustment and zooming in and out to observe glaucoma-related clinical indications. This diagnostic process is time consuming and subjective. With advances in image and vision computing, automating steps of the diagnostic process allows more patients to be screened and treated early, helping to prevent vision loss or its progression. The aim of this work is to develop a system called the Glaucoma Detection Framework (GDF), which can automatically determine changes in retinal structures and image-based patterns associated with glaucoma, so as to assist eye clinicians in diagnosing glaucoma in a timely and effective manner. This work makes several major contributions towards the automatic GDF, which consists of preprocessing, optic disc and cup segmentation, and regional image feature stages for classification between glaucoma and normal images.

Firstly, in the preprocessing step, a retinal area detector based on a superpixel classification model has been developed to automatically determine the true retinal area in a Scanning Laser Ophthalmoscope (SLO) image. The detector automatically removes artefacts from the SLO image while preserving computational efficiency and avoiding over-segmentation of the artefacts. Localization of the ONH is an important step towards glaucoma analysis; a new weighted feature map approach has been proposed which enhances the ONH region for accurate localization. For determining vasculature shift, one of the indications of glaucoma, we propose a vasculature classification model based on the ONH cropped image to segment out the vasculature; it is designed to avoid misidentifying the optic disc boundary and Peripapillary Atrophy (PPA) around the ONH as part of the vasculature area.
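As a rough illustration of the superpixel-based retinal area detection in the preprocessing stage, the sketch below (in Python, assuming scikit-image and scikit-learn are available) partitions an SLO image into superpixels, computes simple per-superpixel statistics, and labels each superpixel with a pre-trained classifier. The superpixel count, the two statistics and the SVM classifier are illustrative assumptions, not the exact configuration used in the GDF.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.segmentation import slic
from sklearn.svm import SVC

def superpixel_features(gray, labels):
    """Mean intensity and variance per superpixel (illustrative feature set)."""
    feats = []
    for lab in np.unique(labels):
        region = gray[labels == lab]
        feats.append([region.mean(), region.var()])
    return np.array(feats)

def detect_retinal_area(slo_rgb, classifier):
    """Return a boolean mask of superpixels classified as true retinal area."""
    labels = slic(slo_rgb, n_segments=400, compactness=10)   # SLIC superpixels
    gray = rgb2gray(slo_rgb)
    feats = superpixel_features(gray, labels)
    pred = classifier.predict(feats)                          # 1 = retina, 0 = artefact
    mask = np.zeros(gray.shape, dtype=bool)
    for lab, keep in zip(np.unique(labels), pred):
        mask[labels == lab] = bool(keep)
    return mask

# Usage (assuming train_feats / train_labels come from annotated SLO images):
# clf = SVC(kernel='rbf').fit(train_feats, train_labels)
# retina_mask = detect_retinal_area(slo_image, clf)
```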
Secondly, for automatic determination of the optic disc and optic cup boundaries, a Point Edge Model (PEM), a Weighted Point Edge Model (WPEM) and a Region Classification Model (RCM) have been proposed. The RCM initially determines the optic disc region using the set of feature maps most suitable for region classification, whereas the PEM updates the contour using the force field of the feature maps with a strong edge profile. The combination of PEM and RCM, entitled the Point Edge and Region Classification Model (PERCM), significantly increases the accuracy of optic disc segmentation with respect to clinical annotations of the optic disc. The WPEM, in turn, determines its force field from the weighted feature maps calculated by the RCM for the optic cup, enhancing the optic cup region relative to the rim area within the ONH. The combination of WPEM and RCM, entitled the Weighted Point Edge and Region Classification Model (WPERCM), significantly improves the accuracy of optic cup segmentation.

Thirdly, this work proposes a Regional Image Features Model (RIFM), which automatically classifies images as normal or glaucomatous on the basis of regional information. Unlike existing methods that rely on global feature information only, our approach, after optic disc localization and segmentation, automatically divides an image into five regions: the optic disc (ONH) area and the inferior (I), superior (S), nasal (N) and temporal (T) regions. These regions are normally assessed by clinicians through visual observation only. The RIFM then extracts image-based information, such as textural, spatial and frequency-based features, to distinguish between normal and glaucoma images. The method provides a new way to identify glaucoma symptoms without computing any geometrical measurement associated with the clinical indications of glaucoma. Finally, we combine the clinical indications of glaucoma, including the Cup-to-Disc Ratio (CDR), vasculature shift and neuroretinal rim loss, with the RIFM classification to perform automatic classification between normal and glaucoma images. Since, according to the clinical literature, no single geometrical measurement is a guaranteed sign of glaucoma, combining the RIFM classification results with these clinical indications can lead to more accurate classification between normal and glaucoma images.

The proposed methods have been tested on retinal image databases of 208 fundus images and 102 SLO images. These databases have been annotated by clinicians around the anatomical structures associated with glaucoma, and each image has been labelled as healthy or glaucomatous. The ONH cropped fundus images have resolutions varying from 300 to 900 pixels, whereas the SLO images have a resolution of 341 x 341 pixels. The accuracy of classification between normal and glaucoma images is 94.93% on the fundus images and 98.03% on the SLO images.
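To make the regional division behind the RIFM concrete, the following sketch (continuing the Python assumption) builds masks for the ONH and the inferior, superior, nasal and temporal quadrants around a localised disc centre and computes one simple statistic per region. The quadrant geometry, the laterality handling and the single intensity-variance feature are illustrative assumptions rather than the actual RIFM design, which uses textural, spatial and frequency-based information.

```python
import numpy as np

def isnt_masks(shape, disc_center, disc_radius, right_eye=True):
    """Boolean masks for the ONH and the I/S/N/T quadrants around the disc centre."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    dy, dx = yy - disc_center[0], xx - disc_center[1]
    onh = dy**2 + dx**2 <= disc_radius**2
    angle = np.degrees(np.arctan2(dy, dx))               # image y-axis points downwards
    superior = (angle >= -135) & (angle < -45) & ~onh    # above the disc centre
    inferior = (angle >= 45) & (angle < 135) & ~onh      # below the disc centre
    right_half = (angle >= -45) & (angle < 45) & ~onh
    left_half = ((angle >= 135) | (angle < -135)) & ~onh
    # Which side is nasal vs. temporal depends on eye laterality (assumed convention).
    nasal, temporal = (right_half, left_half) if right_eye else (left_half, right_half)
    return {"ONH": onh, "I": inferior, "S": superior, "N": nasal, "T": temporal}

def regional_features(gray, masks):
    """One illustrative feature (intensity variance) per region."""
    return np.array([gray[m].var() for m in masks.values()])

# Usage (disc centre and radius taken from the localization/segmentation stages):
# masks = isnt_masks(gray.shape, disc_center=(cy, cx), disc_radius=r)
# feature_vector = regional_features(gray, masks)
```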
