
    Automated retinal analysis

    Diabetes is a chronic disease affecting over 2% of the population in the UK [1]. Long-term complications of diabetes can affect many different systems of the body, including the retina of the eye. In the retina, diabetes can lead to a disease called diabetic retinopathy, one of the leading causes of blindness in the working population of industrialised countries. The risk of visual loss from diabetic retinopathy can be reduced if treatment is given at the onset of sight-threatening retinopathy. To detect early indicators of the disease, the UK National Screening Committee has recommended that diabetic patients should receive annual screening by digital colour fundal photography [2]. Manually grading retinal images is a subjective and costly process requiring highly skilled staff. This thesis describes an automated diagnostic system based on image processing and neural network techniques, which analyses digital fundus images so that early signs of sight-threatening retinopathy can be identified. Within retinal analysis, this research has concentrated on the development of four algorithms: optic nerve head segmentation, lesion segmentation, image quality assessment and vessel width measurement. These four algorithms were amalgamated with two existing techniques to form an integrated diagnostic system. When used as a 'pre-filtering' tool, the diagnostic system successfully reduced the number of images requiring human grading by 74.3%; this was achieved by identifying and excluding images without sight-threatening maculopathy from manual screening.
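    As an illustration of the 'pre-filtering' idea, the sketch below combines hypothetical per-image outputs (an image-quality score and a macular lesion count) into a referral decision, so that only images that are ungradable or show possible sight-threatening maculopathy are passed on for manual grading. The field names and threshold are assumptions for illustration, not values from the thesis.

        # Hypothetical pre-filter combining per-image analysis results.
        # Threshold and field names are illustrative only, not from the thesis.
        from dataclasses import dataclass

        @dataclass
        class ImageAnalysis:
            quality_score: float        # 0..1 from an image-quality assessment stage
            macula_lesion_count: int    # lesions detected within the macular region

        def needs_manual_grading(result: ImageAnalysis,
                                 min_quality: float = 0.6) -> bool:
            """Refer an image to a human grader if it is ungradable or shows
            possible sight-threatening maculopathy; otherwise exclude it."""
            if result.quality_score < min_quality:
                return True                        # ungradable: always refer
            return result.macula_lesion_count > 0  # any macular lesion: refer

        # Example: a clean, good-quality image is excluded from manual screening.
        print(needs_manual_grading(ImageAnalysis(quality_score=0.9,
                                                 macula_lesion_count=0)))  # False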

    Comprehensive retinal image analysis: image processing and feature extraction techniques oriented to the clinical task

    Medical digital imaging has become a key element of modern health care procedures. It provides visual documentation, a permanent record for the patient and, most importantly, the ability to extract information about many diseases. Ophthalmology is a field that is heavily dependent on the analysis of digital images, because they can aid in establishing an early diagnosis even before the first symptoms appear. This dissertation contributes to the digital analysis of such images and to the problems that arise along the imaging pipeline, a field commonly referred to as retinal image analysis. We have dealt with, and proposed solutions to, problems that arise in retinal image acquisition and in the longitudinal monitoring of retinal disease evolution: specifically, non-uniform illumination, poor image quality, automated focusing, and multichannel analysis. However, there are many unavoidable situations in which images of poor quality are acquired, such as retinal images blurred by aberrations in the eye. To address this problem we have proposed two approaches for blind deconvolution of blurred retinal images. In the first approach we consider the blur to be space-invariant; in the second we extend this work and propose a more general space-variant scheme. For the development of the algorithms we have built preprocessing solutions that enable the extraction of retinal features of medical relevance, such as the segmentation of the optic disc and the detection and visualization of longitudinal structural changes in the retina. Encouraging experimental results on real retinal images from the clinical setting demonstrate the applicability of our proposed solutions.
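    The space-invariant case mentioned above can be illustrated with the classical alternating Richardson-Lucy blind deconvolution scheme, sketched below. This is a generic textbook formulation with hypothetical parameters (psf_size, iteration counts), not the methods proposed in the dissertation, and it ignores the space-variant extension entirely.

        # Alternating Richardson-Lucy blind deconvolution (space-invariant blur).
        # Generic sketch: alternately update the latent image and the PSF.
        import numpy as np
        from scipy.signal import fftconvolve

        def rl_blind_deconvolve(blurred, psf_size=15, n_outer=10, n_inner=5, eps=1e-7):
            blurred = blurred.astype(float)
            img = np.full_like(blurred, blurred.mean())      # flat initial estimate
            psf = np.zeros_like(blurred)                     # full-size PSF array
            cy, cx = blurred.shape[0] // 2, blurred.shape[1] // 2
            h = psf_size // 2
            psf[cy - h:cy + h + 1, cx - h:cx + h + 1] = 1.0  # flat initial PSF
            psf /= psf.sum()
            for _ in range(n_outer):
                for _ in range(n_inner):                     # PSF update, image fixed
                    ratio = blurred / (fftconvolve(img, psf, mode='same') + eps)
                    psf *= fftconvolve(ratio, img[::-1, ::-1], mode='same')
                    psf = np.clip(psf, 0, None)
                    psf /= psf.sum() + eps
                for _ in range(n_inner):                     # image update, PSF fixed
                    ratio = blurred / (fftconvolve(img, psf, mode='same') + eps)
                    img *= fftconvolve(ratio, psf[::-1, ::-1], mode='same')
                    img = np.clip(img, 0, None)
            return img, psf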

    Recognition of Nonideal Iris Images Using Shape Guided Approach and Game Theory

    Most state-of-the-art iris recognition algorithms claim to perform with very high recognition accuracy in a strictly controlled environment. However, their recognition accuracy decreases significantly when the acquired images are affected by different noise factors, including motion blur, camera diffusion, head movement, gaze direction, camera angle, reflections, contrast, luminosity, eyelid and eyelash occlusions, and problems due to contraction and dilation. The main objective of this thesis is to develop a nonideal iris recognition system using active contour methods, Genetic Algorithms (GAs), a shape guided model, Adaptive Asymmetrical Support Vector Machines (AASVMs) and Game Theory (GT). The proposed iris recognition method is divided into two phases: (1) cooperative iris recognition, and (2) noncooperative iris recognition. While most state-of-the-art iris recognition algorithms have focused on the preprocessing of iris images, important new directions have recently been identified in iris biometrics research, including optimal feature selection and iris pattern classification. In the first phase, we propose an iris recognition scheme based on GAs and asymmetrical SVMs. Instead of using the whole iris region, we elicit the iris information between the collarette and the pupil boundary to suppress the effects of eyelid and eyelash occlusions and to minimize the matching error. In the second phase, we process nonideal iris images that are captured in unconstrained situations and affected by several nonideal factors. The proposed noncooperative iris recognition method is further divided into three approaches. In the first approach of the second phase, we apply active contour-based curve evolution to segment the inner and outer boundaries accurately from nonideal iris images. The proposed active contour-based approaches perform reasonably well even when the iris/sclera boundary is blurred. In the second approach, we describe a new iris segmentation scheme that uses GT to elicit the iris/pupil boundary from a nonideal iris image. We apply a parallel game-theoretic decision-making procedure, modifying Chakraborty and Duncan's algorithm to form a unified approach that is robust to noise and poor localization and less affected by a weak iris/sclera boundary. Finally, to further improve the segmentation performance, we propose a variational model that localizes the iris region belonging to a given shape space using an active contour method, a geometric shape prior and the Mumford-Shah functional. The verification and identification performance of the proposed scheme is validated on four challenging nonideal iris datasets, namely the ICE 2005, the UBIRIS Version 1, the CASIA Version 3 Interval and the WVU Nonideal, plus a non-homogeneous combined dataset. We have conducted several sets of experiments; the proposed approach achieves a Genuine Accept Rate (GAR) of 97.34% on the combined dataset at a fixed False Accept Rate (FAR) of 0.001%, with an Equal Error Rate (EER) of 0.81%. The highest Correct Recognition Rate (CRR) obtained by the proposed iris recognition system is 97.39%.
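    As a rough illustration of curve-evolution segmentation on an eye image, the sketch below uses scikit-image's morphological Chan-Vese as a generic stand-in for the active contour formulations developed in the thesis; the seed position, radius and iteration count are hypothetical.

        # Level-set (curve evolution) segmentation seeded inside the pupil region.
        # Stand-in for the thesis's active contour methods; parameters are illustrative.
        import numpy as np
        from skimage.segmentation import morphological_chan_vese

        def segment_region(gray, seed_rc, radius=20, n_iter=150):
            """Evolve a level set from a small disc seeded near the pupil centre."""
            rr, cc = np.ogrid[:gray.shape[0], :gray.shape[1]]
            init = (rr - seed_rc[0]) ** 2 + (cc - seed_rc[1]) ** 2 <= radius ** 2
            return morphological_chan_vese(gray, n_iter, init_level_set=init, smoothing=2)

        # Synthetic check: a dark disc on a bright background stands in for the pupil.
        rr, cc = np.ogrid[:200, :200]
        gray = np.where((rr - 100) ** 2 + (cc - 100) ** 2 <= 40 ** 2, 0.1, 0.9)
        mask = segment_region(gray, seed_rc=(100, 100), radius=15)
        print(mask.sum())   # area of the segmented region, roughly the disc (~5000 px)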

    Improved facial feature fitting for model based coding and animation

    EThOS - Electronic Theses Online Service, United Kingdom

    Iris Recognition: Robust Processing, Synthesis, Performance Evaluation and Applications

    The popularity of iris biometrics has grown considerably over the past few years, resulting in the development of a large number of new iris processing and encoding algorithms. In this dissertation, we discuss the following aspects of the iris recognition problem: iris image acquisition, iris quality, iris segmentation, iris encoding, performance enhancement and two novel applications. The specific claimed novelties of this dissertation are: (1) a method to generate a large-scale realistic database of iris images; (2) a cross-spectral iris matching method for comparing images in the color range against images in the Near-Infrared (NIR) range; (3) a method to evaluate iris image and video quality; (4) a robust quality-based iris segmentation method; (5) several approaches to enhance the recognition performance and security of traditional iris encoding techniques; (6) a method to increase the iris capture volume for acquisition of iris on the move from a distance; and (7) a method to improve the performance of biometric systems using soft data available in the form of links and connections in a relevant social network.
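    For context, the sketch below shows a generic Daugman-style iris encoding (phase quantization of complex Gabor responses on a normalised iris strip) compared by Hamming distance, which is representative of the "traditional iris encoding techniques" mentioned above; the kernel parameters are hypothetical and this is not the dissertation's encoding.

        # Generic iris-code sketch: quantise Gabor phase to 2 bits per pixel and
        # compare codes with the fractional Hamming distance. Illustrative only.
        import numpy as np
        from scipy.signal import fftconvolve

        def gabor_kernel_1d(length=31, freq=0.15, sigma=5.0):
            x = np.arange(length) - length // 2
            return np.exp(-x**2 / (2 * sigma**2)) * np.exp(2j * np.pi * freq * x)

        def iris_code(strip):
            """strip: 2-D normalised iris image (rows = radius, cols = angle)."""
            k = gabor_kernel_1d()[np.newaxis, :]             # filter along the angular axis
            resp = fftconvolve(strip, k, mode='same')
            return np.stack([resp.real > 0, resp.imag > 0])  # two phase bits per pixel

        def hamming_distance(code_a, code_b):
            return np.mean(code_a != code_b)

        # Codes from the same eye give small distances; different eyes cluster near 0.5.
        strip = np.random.default_rng(0).random((32, 256))
        print(hamming_distance(iris_code(strip), iris_code(strip)))  # 0.0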

    Retinal vessel segmentation using textons

    Segmenting vessels from retinal images, like segmentation in many other medical image domains, is a challenging task, as there is no unified way to extract the vessels accurately. However, it is the most critical stage in the automatic assessment of various diseases (e.g. glaucoma, age-related macular degeneration, diabetic retinopathy and cardiovascular disease). Our research investigates retinal image segmentation approaches based on textons, as they provide a compact description of texture that can be learnt from a training set. This thesis presents a brief review of those diseases, including their current situation, future trends and the techniques used for their automatic diagnosis in routine clinical applications. The importance of retinal vessel segmentation is particularly emphasized in such applications. An extensive review of previous work on retinal vessel segmentation and salient texture analysis methods is presented. Five automatic retinal vessel segmentation methods are proposed in this thesis. The first method addresses the problem of removing pathological anomalies (drusen, exudates) for retinal vessel segmentation, which have been identified by other researchers as a common source of error; the results show some improvement compared to a previously published method. The second, novel supervised segmentation method employs textons. We propose a new filter bank (MR11) that includes bar detectors for vascular feature extraction and other kernels to detect edges and photometric variations in the image. The k-means clustering algorithm is adopted for texton generation based on the vessel and non-vessel elements identified by ground truth. The third, improved supervised method is developed from the second: textons are generated by k-means clustering and texton maps representing vessels are derived by back-projecting pixel clusters onto hand-labelled ground truth. A further step ensures that the best combinations of textons are represented in the map and subsequently used to identify vessels in the test set. Experimental results on two benchmark datasets show that our proposed method performs well compared to other published work and to the results of human experts. A further test of our system on an independent set of optical fundus images verified its consistent performance. Statistical analysis of the experimental results also reveals that it is possible to train unified textons for retinal vessel segmentation. In the fourth method, a novel scheme using a Gabor filter bank for vessel feature extraction is proposed. The method is inspired by the human visual system, and machine learning is used to optimize the Gabor filter parameters. The experimental results demonstrate that our method significantly enhances the true positive rate while maintaining a level of specificity that is comparable with other approaches. Finally, we propose a new unsupervised texton-based retinal vessel segmentation method using the derivative of SIFT and multi-scale Gabor filters. The lack of sufficient quantities of hand-labelled ground truth and the high level of variability in ground truth labels amongst experts motivate this approach. The evaluation results reveal that our unsupervised segmentation method is comparable with the best supervised methods and other state-of-the-art methods.
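    The texton pipeline described above (filter-bank responses clustered with k-means, then pixels mapped to their nearest texton) can be sketched as follows. The simplified Gaussian-derivative bank here stands in for the MR11/Gabor banks of the thesis, and the number of textons is a placeholder.

        # Minimal texton pipeline: per-pixel filter responses -> k-means textons -> texton map.
        # Simplified filter bank and k are illustrative, not the thesis's MR11 configuration.
        import numpy as np
        from scipy import ndimage as ndi
        from sklearn.cluster import KMeans

        def filter_responses(gray):
            """Stack a few multi-scale responses into a per-pixel feature vector."""
            feats = []
            for sigma in (1, 2, 4):
                feats.append(ndi.gaussian_filter(gray, sigma))                # smoothed intensity
                feats.append(ndi.gaussian_laplace(gray, sigma))               # blob/line response
                feats.append(ndi.gaussian_filter(gray, sigma, order=(0, 1)))  # oriented derivative
                feats.append(ndi.gaussian_filter(gray, sigma, order=(1, 0)))
            return np.stack(feats, axis=-1).reshape(-1, len(feats))

        def learn_textons(gray_images, k=20, seed=0):
            X = np.vstack([filter_responses(g) for g in gray_images])
            return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)

        def texton_map(gray, kmeans):
            return kmeans.predict(filter_responses(gray)).reshape(gray.shape)

        # Usage: textons = learn_textons(training_grays); labels = texton_map(test_gray, textons)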

    Deep into the Eyes: Applying Machine Learning to improve Eye-Tracking

    Eye-tracking has been an active research area with applications in personal and behavioral studies, medical diagnosis, virtual reality, and mixed reality applications. Improving the robustness, generalizability, accuracy, and precision of eye-trackers while maintaining privacy is crucial. Unfortunately, many existing low-cost portable commercial eye trackers suffer from signal artifacts and a low signal-to-noise ratio. These trackers are highly dependent on low-level features such as pupil edges or diffused bright spots in order to precisely localize the pupil and corneal reflection. As a result, they are not reliable for studying eye movements that require high precision, such as microsaccades, smooth pursuit, and vergence. Additionally, these methods suffer from reflective artifacts, occlusion of the pupil boundary by the eyelid and often require a manual update of person-dependent parameters to identify the pupil region. In this dissertation, I demonstrate (I) a new method to improve precision while maintaining the accuracy of head-fixed eye trackers by combining velocity information from iris textures across frames with position information, (II) a generalized semantic segmentation framework for identifying eye regions with a further extension to identify ellipse fits on the pupil and iris, (III) a data-driven rendering pipeline to generate a temporally contiguous synthetic dataset for use in many eye-tracking applications, and (IV) a novel strategy to preserve privacy in eye videos captured as part of the eye-tracking process. My work also provides the foundation for future research by addressing critical questions like the suitability of using synthetic datasets to improve eye-tracking performance in real-world applications, and ways to improve the precision of future commercial eye trackers with improved camera specifications.
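    As a small illustration of point (II), the sketch below fits an ellipse to a binary pupil mask such as one produced by a segmentation network, using OpenCV as a generic tool; this is not the fitting procedure described in the dissertation, and the mask shapes are assumed.

        # Fit an ellipse to the largest connected "pupil" blob in a segmentation mask.
        # Generic OpenCV post-processing sketch with hypothetical inputs.
        import cv2
        import numpy as np

        def fit_pupil_ellipse(pupil_mask):
            """pupil_mask: array, nonzero where the network labels 'pupil'.
            Returns ((cx, cy), (major, minor), angle_deg) or None if no fit is possible."""
            mask = (pupil_mask > 0).astype(np.uint8)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
            if not contours:
                return None
            largest = max(contours, key=cv2.contourArea)
            if len(largest) < 5:          # cv2.fitEllipse needs at least 5 contour points
                return None
            return cv2.fitEllipse(largest)

        # Quick check on a synthetic circular "pupil" mask:
        mask = np.zeros((240, 320), np.uint8)
        cv2.circle(mask, (160, 120), 30, 255, -1)
        print(fit_pupil_ellipse(mask))    # centre ~(160, 120), axes ~(60, 60)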