154 research outputs found

    Retinal area detector from Scanning Laser Ophthalmoscope (SLO) images for diagnosing retinal diseases

    © 2014 IEEE. Scanning laser ophthalmoscopes (SLOs) can be used for early detection of retinal diseases. As a recent screening technology, the SLO's main advantage is its wide field of view, which can image a large part of the retina for better diagnosis of retinal diseases. During the imaging process, however, artefacts such as eyelashes and eyelids are imaged along with the retinal area, which raises the challenge of excluding them. In this paper, we propose a novel approach to automatically extract the true retinal area from an SLO image based on image processing and machine learning approaches. To reduce the complexity of the image processing tasks and to provide a convenient primitive image pattern, we group pixels into regions, called superpixels, based on regional size and compactness. The framework then calculates image-based features reflecting textural and structural information and classifies each region as retinal area or artefact. The experimental evaluation shows good performance, with an overall accuracy of 92%.
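The superpixel-plus-classifier pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the label map, the two toy features (mean intensity and intensity spread), and the darkness-based rule standing in for the trained classifier are all assumptions.

```python
import numpy as np

def region_features(image, labels):
    """Per-superpixel features: mean intensity and intensity spread.
    A toy stand-in for the textural/structural features in the paper."""
    return {int(lab): (float(image[labels == lab].mean()),
                       float(image[labels == lab].std()))
            for lab in np.unique(labels)}

def classify_regions(feats, mean_thresh=0.2):
    """Toy rule: very dark regions are labelled artefacts (eyelashes and
    eyelids image darker than the retina); a real system would apply a
    trained classifier to the feature vectors instead."""
    return {lab: ('artefact' if m < mean_thresh else 'retina')
            for lab, (m, _) in feats.items()}

# Tiny example: left half is a bright "retina" region, right half a dark
# "artefact" region, with a precomputed superpixel label map.
img = np.array([[0.8, 0.9, 0.05, 0.02],
                [0.7, 0.8, 0.03, 0.04]])
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1]])
decision = classify_regions(region_features(img, labels))
```

In the real framework the label map would come from a superpixel algorithm driven by regional size and compactness, and the decision rule would be a trained classifier.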

    Retinal Area Segmentation using Adaptive Superpixalation and its Classification using RBFN

    Retinal disease is an important issue in the medical field. To diagnose such diseases, the true retinal area must be detected. Artefacts such as eyelids and eyelashes appear alongside the retinal region, so removing them is a major task for better diagnosis of diseases in the retina. In this paper, we propose a segmentation method and use machine learning approaches to detect the true retinal area. The original image is preprocessed using Gamma Normalization, which enhances the image and brings out its detailed information. Segmentation is then performed on the Gamma-Normalized image using the superpixel method. A superpixel is a group of pixels forming a region based on compactness and regional size; superpixels reduce the complexity of image processing tasks and provide a suitable primitive image pattern. Features are then generated, and a machine learning approach extracts the true retinal area. The experimental evaluation gives good results, with an accuracy of 96%.
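Gamma normalization of the kind used here for preprocessing can be sketched as a simple power-law intensity transform. The gamma value and the pixel range below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gamma_normalize(image, gamma=0.5):
    """Power-law (gamma) transform on intensities in [0, 1].
    gamma < 1 brightens dark regions, bringing out detail there;
    gamma > 1 darkens them."""
    image = np.clip(image, 0.0, 1.0)
    return image ** gamma

dark = np.array([0.04, 0.25, 1.0])
enhanced = gamma_normalize(dark, gamma=0.5)  # square root of each value
```

With gamma = 0.5 the transform is a square root, so dark pixels (0.04, 0.25) are lifted substantially while the bright end of the range is preserved.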

    Personal Identification Based on Live Iris Image Analysis

    EThOS - Electronic Theses Online Service, United Kingdom

    Automatic delimitation of the clinical region of interest in ultra-wide field of view images of the retina

    Retinal ultra-wide field of view images (fundus images) provide visualization of a large part of the retina, though artifacts may appear in these images. Eyelashes and eyelids often cover the clinical region of interest and, worse, eyelashes can be mistaken for arteries and/or veins when the images are put through automatic diagnosis or segmentation software, creating false positive results. By correcting this problem, the first step in the development of qualified automatic disease diagnosis programs can be taken, and in that way an objective tool to assess diseases, eradicating human error from those processes, can also be developed. In this work, a tool that automatically delimits the clinical region of interest is proposed: features are retrieved from the images and analyzed by an automatic classifier, which evaluates the information and decides which part of the image is of interest and which part contains artifacts. The method was implemented as software in the C# language and validated through statistical analysis. The results confirm that the presented methodology is capable of detecting artifacts and selecting the clinical region of interest in fundus images of the retina.

    Techniques for Ocular Biometric Recognition Under Non-ideal Conditions

    The use of the ocular region as a biometric cue has gained considerable traction due to recent advances in automated iris recognition. However, a multitude of factors can negatively impact ocular recognition performance under unconstrained conditions (e.g., non-uniform illumination, occlusions, motion blur, and low image resolution). This dissertation develops techniques to perform iris and ocular recognition under such challenging conditions. The first contribution is an image-level fusion scheme to improve iris recognition performance in low-resolution videos. Information fusion is facilitated by the Principal Components Transform (PCT), thereby requiring modest computational effort. The proposed approach provides improved recognition accuracy when low-resolution iris images are compared against high-resolution iris images. The second contribution is a study demonstrating the effectiveness of the ocular region in improving face recognition after plastic surgery. A score-level fusion approach that combines information from the face and ocular regions is proposed. Unlike previous methods for this application, the proposed approach is not learning-based and has modest computational requirements while achieving better recognition performance. The third contribution is a study on matching ocular regions extracted from RGB face images against those of near-infrared iris images. Face and iris images are typically acquired using sensors operating in visible and near-infrared wavelengths of light, respectively. To this end, a sparse representation approach that generates a joint dictionary from corresponding pairs of face and iris images is designed. The proposed joint dictionary approach is observed to outperform classical ocular recognition techniques. In summary, the techniques presented in this dissertation can be used to improve iris and ocular recognition in practical, unconstrained environments.
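The image-level PCT fusion named in the first contribution can be sketched roughly as follows: registered frames are flattened, decomposed with an SVD (one way to realize a principal components transform), and a fused image is reconstructed from the leading component. The frame data and the single-component choice are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

def pct_fuse(frames, k=1):
    """Fuse a stack of registered low-resolution frames: center the
    flattened frames, keep the top-k principal components, reconstruct
    each frame, and average -- a rough sketch of PCT-based fusion."""
    X = np.stack([f.ravel() for f in frames]).astype(float)  # (n, pixels)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    recon = mean + (U[:, :k] * S[:k]) @ Vt[:k]   # per-frame reconstruction
    return recon.mean(axis=0).reshape(frames[0].shape)

# Five hypothetical 2x2 frames differing by a small brightness offset.
frames = [np.array([[1.0, 2.0], [3.0, 4.0]]) + 0.1 * i for i in range(5)]
fused = pct_fuse(frames)
```

Because the rows are centered before the SVD, the averaged reconstruction coincides with the frame mean here; the value of the transform lies in discarding the noisy minor components before comparing against high-resolution gallery images.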

    Automated Image Quality Assessment for Anterior Segment Optical Coherence Tomography

    Optical Coherence Tomography (OCT) is a technique for diagnosing eye disorders. Image quality assessment (IQA) of OCT images is essential, but manual IQA is time-consuming and subjective. Recently, automated IQA methods based on deep learning (DL) have achieved good performance. However, few of these methods focus on OCT images of the anterior segment of the eye (AS-OCT), and few identify the factors that affect image quality (called "quality factors" in this paper), which could limit the acceptance of their results. In this study, we define, for the first time to the best of our knowledge, the quality level and four quality factors of AS-OCT images for the clinical context of anterior chamber inflammation. We also develop an automated framework based on multi-task learning that assesses the quality and identifies which quality factors are present in an AS-OCT image. The effectiveness of the framework is demonstrated in experiments.
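At inference time, the output of such a multi-task framework could be combined along these lines: one head predicts an overall quality score and the others flag individual quality factors. The factor names, the threshold, and the scores below are all hypothetical, not taken from the paper.

```python
def assess(quality_score, factor_probs, thresh=0.5):
    """Combine a predicted overall quality score with per-factor
    probabilities (hypothetical outputs of a multi-task network):
    the image is usable only if the quality score is high and no
    degrading factor is flagged."""
    flagged = sorted(f for f, p in factor_probs.items() if p >= thresh)
    label = 'usable' if quality_score >= thresh and not flagged else 'degraded'
    return label, flagged

# Hypothetical prediction: good overall score but one flagged factor.
label, flagged = assess(0.9, {'motion': 0.1, 'off-center': 0.7,
                              'defocus': 0.2, 'occlusion': 0.05})
```

Reporting the flagged factors alongside the label is what gives the assessment its explanatory value: an operator learns not just that the scan is degraded but why.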

    Motion Segmentation from Clustering of Sparse Point Features Using Spatially Constrained Mixture Models

    Motion is one of the strongest cues available for segmentation. While motion segmentation finds wide-ranging applications in object detection, tracking, surveillance, robotics, image and video compression, scene reconstruction, video editing, and so on, it faces various challenges: accurate motion recovery from noisy data, the varying complexity of the models required to describe the computed image motion, the dynamic nature of the scene, which may include a large number of independently moving objects undergoing occlusions, and the need to make high-level decisions while dealing with long image sequences. With sparse point features as the pivotal element, this thesis presents three distinct approaches that address some of these challenges.

    The first part deals with the detection and tracking of sparse point features in image sequences. A framework is proposed in which point features are tracked jointly; traditionally, sparse features have been tracked independently of one another. Combining ideas from Lucas-Kanade and Horn-Schunck, this thesis presents a technique in which the estimated motion of a feature is influenced by the motion of neighboring features. The joint feature tracking algorithm improves on standard Lucas-Kanade tracking, especially when tracking features in untextured regions.

    The second part is concerned with motion segmentation using sparse point feature trajectories. The approach utilizes a spatially constrained mixture model framework and a greedy EM algorithm to group point features. In contrast to previous work, the algorithm is incremental and allows an arbitrary number of objects traveling at different relative speeds to be segmented, eliminating the need to explicitly initialize the number of groups. The primary parameter used by the algorithm is the amount of evidence that must be accumulated before features are grouped. A statistical goodness-of-fit test monitors the change in the motion parameters of a group over time in order to automatically update the reference frame. The approach works in real time and is able to segment various challenging sequences, captured from still and moving cameras, that contain multiple independently moving objects and motion blur.

    The third part of this thesis deals with the use of specialized models for motion segmentation. Articulated human motion is chosen as a representative example that requires a complex model to be described accurately. A motion-based approach for segmentation, tracking, and pose estimation of articulated bodies is presented. The human body is represented using the trajectories of a number of sparse points. A novel motion descriptor encodes the spatial relationships of the motion vectors representing various parts of the person and can discriminate between articulated and non-articulated motions, as well as between various poses and view angles. Furthermore, a nearest neighbor search for the closest motion descriptor in labeled training data, consisting of the human gait cycle in multiple views, is performed, and this distance is fed to a Hidden Markov Model defined over multiple poses and viewpoints to obtain temporally consistent pose estimates. Experimental results on various sequences of walking subjects at multiple viewpoints and scales demonstrate the effectiveness of the approach. In particular, the purely motion-based approach is able to track people in night-time sequences, even when appearance-based cues are not available.

    Finally, an application to iris segmentation is presented. The iris is a widely used biometric for recognition and is known to be highly accurate if the segmentation of the iris region is near perfect. Non-ideal situations arise when the iris is occluded by eyelashes or eyelids, or when the quality of the segmented iris is affected by illumination changes or out-of-plane rotation of the eye. The proposed iris segmentation approach combines the appearance and the geometry of the eye to segment iris regions from non-ideal images. The image is modeled as a Markov random field, and a graph cuts based energy minimization algorithm is applied to label each pixel as eyelashes, pupil, iris, or background using texture and image intensity information. The iris shape is modeled as an ellipse and is used to refine the pixel-based segmentation. The results indicate the effectiveness of the segmentation algorithm in handling non-ideal iris images.
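The joint-tracking idea in the first part, a per-feature data term plus a Horn-Schunck-style smoothness pull toward the motion of neighboring features, can be sketched as a simple iterative update. The motion observations, the neighborhood structure, and the coupling weight below are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def joint_smooth(observed, neighbors, lam=0.5, iters=50):
    """Iteratively re-estimate each feature's motion as a blend of its
    own (possibly noisy) observation and the mean motion of its
    neighbors -- the smoothness coupling that distinguishes joint
    tracking from independent Lucas-Kanade tracking."""
    motion = observed.astype(float).copy()
    for _ in range(iters):
        for i, nbrs in neighbors.items():
            nbr_mean = motion[nbrs].mean(axis=0)
            motion[i] = (observed[i] + lam * nbr_mean) / (1.0 + lam)
    return motion

# Three features on a surface moving right; the middle observation is
# an outlier, as would happen in an untextured region.
obs = np.array([[1.0, 0.0], [5.0, 0.0], [1.0, 0.0]])
nbrs = {0: [1], 1: [0, 2], 2: [1]}
smoothed = joint_smooth(obs, nbrs)
```

The outlying middle estimate is pulled toward its neighbors while the well-supported estimates move only slightly, which is exactly the behavior that helps in untextured regions where the data term alone is unreliable.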

    Recognition of Nonideal Iris Images Using Shape Guided Approach and Game Theory

    Most state-of-the-art iris recognition algorithms claim very high recognition accuracy in a strictly controlled environment. However, their accuracy decreases significantly when the acquired images are affected by noise factors such as motion blur, camera diffusion, head movement, gaze direction, camera angle, reflections, contrast, luminosity, eyelid and eyelash occlusions, and contraction and dilation. The main objective of this thesis is to develop a nonideal iris recognition system using active contour methods, Genetic Algorithms (GAs), a shape guided model, Adaptive Asymmetrical Support Vector Machines (AASVMs), and Game Theory (GT). The proposed iris recognition method is divided into two phases: (1) cooperative iris recognition and (2) noncooperative iris recognition. While most state-of-the-art iris recognition algorithms have focused on the preprocessing of iris images, important new directions have recently been identified in iris biometrics research, including optimal feature selection and iris pattern classification.

    In the first phase, we propose an iris recognition scheme based on GAs and asymmetrical SVMs. Instead of using the whole iris region, we elicit the iris information between the collarette and the pupil boundary to suppress the effects of eyelid and eyelash occlusions and to minimize the matching error.

    In the second phase, we process nonideal iris images that are captured in unconstrained situations and affected by several nonideal factors. The proposed noncooperative iris recognition method is further divided into three approaches. In the first approach, we apply active contour-based curve evolution to segment the inner and outer boundaries accurately from nonideal iris images; the proposed active contour-based approaches perform reasonably even when the iris and sclera are separated by a blurred boundary. In the second approach, we describe a new iris segmentation scheme that uses GT to elicit the iris/pupil boundary from a nonideal iris image. We apply a parallel game-theoretic decision making procedure, modifying Chakraborty and Duncan's algorithm to form a unified approach that is robust to noise and poor localization and less affected by a weak iris/sclera boundary. Finally, to further improve the segmentation performance, we propose a variational model that localizes the iris region belonging to a given shape space using an active contour method, a geometric shape prior, and the Mumford-Shah functional.

    The verification and identification performance of the proposed scheme is validated on four challenging nonideal iris datasets, namely the ICE 2005, the UBIRIS Version 1, the CASIA Version 3 Interval, and the WVU Nonideal, plus a non-homogeneous combined dataset. Across several sets of experiments, the proposed approach achieves a Genuine Accept Rate (GAR) of 97.34% on the combined dataset at a fixed False Accept Rate (FAR) of 0.001%, with an Equal Error Rate (EER) of 0.81%. The highest Correct Recognition Rate (CRR) obtained by the proposed iris recognition system is 97.39%.
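The reported operating points (GAR at a fixed FAR, and the EER) are derived from the genuine and impostor match-score distributions. A hedged sketch of how such a figure is computed from raw scores follows; the score lists are made up for illustration.

```python
def gar_at_far(genuine, impostor, target_far):
    """Sweep candidate thresholds from low to high, stop at the first
    one whose false accept rate does not exceed target_far, and report
    the genuine accept rate there. Assumes higher scores mean better
    matches."""
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)
        if far <= target_far:
            gar = sum(s >= t for s in genuine) / len(genuine)
            return gar, t
    return 0.0, None

# Made-up similarity scores for genuine and impostor comparisons.
genuine = [0.9, 0.8, 0.85, 0.7, 0.95]
impostor = [0.2, 0.3, 0.1, 0.75, 0.25]
gar, thr = gar_at_far(genuine, impostor, target_far=0.0)
```

With these toy scores the lowest threshold admitting no impostor is 0.8, where four of the five genuine scores are accepted. At realistic scales (a FAR of 0.001%, as quoted above) the same sweep runs over millions of impostor comparisons.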