
    A robust sclera segmentation algorithm

    Sclera segmentation is shown to be of significant importance for eye and iris biometrics. However, sclera segmentation has not been extensively researched as a separate topic; it has mainly been treated as a component of a broader task. This paper proposes a novel sclera segmentation algorithm for colour images that operates at the pixel level. By exploring various colour spaces, the proposed approach is made robust to image noise and different gaze directions. The algorithm's robustness is further enhanced by a two-stage classifier. At the first stage, a set of simple classifiers is employed, while at the second stage, a neural network classifier operates on the probability space generated by the stage-1 classifiers. The proposed method was ranked first in the Sclera Segmentation Benchmarking Competition 2015, part of BTAS 2015, with a precision of 95.05% at a corresponding recall of 94.56%.
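
    A minimal sketch of the two-stage idea described above (simple per-pixel classifiers over several colour spaces, then a neural network on the stacked probabilities) is shown below. It is an illustrative assumption of how such a pipeline could be wired with scikit-learn and OpenCV, not the authors' implementation; the choice of GaussianNB for stage 1 and an MLP for stage 2 is hypothetical.

```python
# Sketch of a two-stage pixel classifier for sclera segmentation (illustrative only).
# Stage 1: simple per-pixel classifiers, one per colour space, emit sclera probabilities.
# Stage 2: a small neural network classifies pixels from the stacked probabilities.
import numpy as np
import cv2
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

def pixel_features(bgr_img):
    """Stack pixel values from several colour spaces (BGR, HSV, Lab); one row per pixel."""
    hsv = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2HSV)
    lab = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2LAB)
    feats = np.concatenate([bgr_img, hsv, lab], axis=2).astype(np.float32)
    return feats.reshape(-1, feats.shape[2]) / 255.0

def train_two_stage(train_img, train_mask):
    X = pixel_features(train_img)
    y = train_mask.reshape(-1).astype(int)            # 1 = sclera, 0 = background
    # Stage 1: one simple classifier per colour space (feature columns 0-2, 3-5, 6-8).
    stage1 = [GaussianNB().fit(X[:, i:i + 3], y) for i in (0, 3, 6)]
    # Stage 2: MLP over the stage-1 probability space.
    probs = np.hstack([c.predict_proba(X[:, i:i + 3])[:, 1:]
                       for c, i in zip(stage1, (0, 3, 6))])
    stage2 = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300).fit(probs, y)
    return stage1, stage2

def segment(img, stage1, stage2):
    X = pixel_features(img)
    probs = np.hstack([c.predict_proba(X[:, i:i + 3])[:, 1:]
                       for c, i in zip(stage1, (0, 3, 6))])
    return stage2.predict(probs).reshape(img.shape[:2]).astype(np.uint8)
```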

    A Review: Person Identification using Retinal Fundus Images

    This paper reviews biometric person identification using features extracted from retinal fundus images. Retina recognition is claimed to be the best person identification method among biometric recognition systems, as the retina is practically impossible to forge. It is found to be the most stable, reliable and secure of all biometric modalities, since the retina is both unique and stable over time. The features used in the recognition process are either blood vessel features or non-blood vessel features, with the vascular pattern being the most prominent feature used by researchers for retina-based person identification. The stages of this authentication process are pre-processing, feature extraction and feature matching. Bifurcation and crossover points are the most widely used blood vessel features; non-blood vessel features include luminance, contrast and corner points. This paper summarizes and compares the different retina-based authentication systems. Researchers have used publicly available databases such as DRIVE, STARE, VARIA, RIDB, ARIA, AFIO, DRIDB, and SiMES for testing their methods. Quantitative measures such as accuracy, recognition rate, false rejection rate, false acceptance rate, and equal error rate are used to evaluate the performance of the different algorithms. The DRIVE database yields 100% recognition for most of the methods; for the remaining databases, the recognition accuracy exceeds 90%.
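
    The sketch below illustrates the two steps the review highlights: detecting bifurcation/crossover points on a binary vessel skeleton, and a toy nearest-point matching score. It assumes a pre-segmented, skeletonised vessel map (e.g. derived from a DRIVE image) and is not any particular paper's implementation; the distance tolerance is an illustrative parameter.

```python
# Bifurcation/crossover detection on a vessel skeleton, plus a toy matching score.
import numpy as np
from scipy.ndimage import convolve

def branch_points(skeleton):
    """Return (row, col) of bifurcations/crossovers: skeleton pixels with >= 3 neighbours."""
    sk = (skeleton > 0).astype(np.uint8)
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    neighbours = convolve(sk, kernel, mode="constant")
    return np.argwhere((sk == 1) & (neighbours >= 3))

def match_score(points_a, points_b, tol=5.0):
    """Fraction of points in A with a point in B within `tol` pixels (toy matcher)."""
    if len(points_a) == 0 or len(points_b) == 0:
        return 0.0
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) <= tol))
```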

    Iris recognition method based on segmentation

    The development of science and research has produced many modern tools and technologies focused on enhancing security, driven by the growing need for a high degree of protection for individuals and societies. Identification based on a person's vital characteristics is therefore an important privacy topic for governments, businesses and individuals. Many biometric features, such as fingerprint, facial measurements, acid, palm, gait, fingernails and iris, have been studied and used. Among all biometrics, the iris attracts particular attention because of its unique advantages: the iris pattern is unique, does not change over time, and provides the accuracy and stability required in verification systems, and it is practically impossible to modify without risk. When identifying a person by the iris, the recognition system only needs to compare the iris features of the person under test to determine their identity, so the iris alone is extracted from the captured images. Correct iris segmentation is the most important stage in the verification system; it involves determining the limbic boundaries of the iris and pupil, handling the effects of eyelids and shadows, and avoiding excessive centralization that reduces the effectiveness of the iris recognition system. Many techniques exist for extracting the iris from the captured image. This paper presents the architecture of biometric systems that use the iris to distinguish people, together with a survey of iris segmentation methods used in recent research; it discusses the methods and algorithms used for this purpose, presents the datasets and the accuracy of each method, and compares the performance of the methods used in previous studies.
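
    As one concrete example of a classical baseline covered by such surveys, the sketch below locates the pupillary and limbic boundaries with a circular Hough transform in OpenCV. The parameter values are illustrative assumptions and would need tuning per dataset; this is not the method of any specific surveyed paper.

```python
# Classical pupil/iris boundary localisation with a circular Hough transform (illustrative).
import cv2

def locate_iris_boundaries(gray_eye):
    """Return ((x, y, r) pupil, (x, y, r) iris) circles, or None if either is not found."""
    blurred = cv2.medianBlur(gray_eye, 5)
    # Pupil: small dark circle with strong edge contrast.
    pupil = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=200,
                             param1=100, param2=30, minRadius=15, maxRadius=60)
    # Limbic (iris) boundary: larger circle with a weaker edge.
    iris = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=200,
                            param1=100, param2=20, minRadius=60, maxRadius=150)
    if pupil is None or iris is None:
        return None
    return tuple(pupil[0][0].astype(int)), tuple(iris[0][0].astype(int))
```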

    Iris Region and Bayes Classifier for Robust Open or Closed Eye Detection

    This paper presents a robust method to detect sequences of open or closed eye states in low-resolution images, which can ultimately enable efficient eye blink detection for practical use. Eye states and eye blink detection play an important role in human-computer interaction (HCI) systems. Eye blinks can be used as a communication method for people with severe disabilities, providing an alternative input modality for controlling a computer, or as a way to detect driver drowsiness. The proposed approach is based on an analysis of eye and skin regions in the eye image. Evidently, the iris and sclera regions grow as a person opens an eye and shrink while the eye is closing; in particular, the distributions of these eye components during each eye state form a bell-like shape. Using colour tone differences, the iris and sclera regions can be separated from the skin, and a naive Bayes classifier then effectively classifies the eye states. The study also shows that the iris region as a feature gives a better detection rate than the sclera region. The approach works online with low-resolution images and in typical lighting conditions. It was successfully tested on  image sequences (  frames) and achieved high accuracy of over  for open eyes and over  for closed eyes compared to the ground truth. In particular, it improves open-eye state detection by almost  compared to a commonly used recent approach, the template matching algorithm.
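
    A hedged sketch of the general idea follows: measure how much iris and sclera is visible in the eye region via colour-tone thresholds, then let a naive Bayes classifier decide open vs. closed. The HSV thresholds and the two-ratio feature vector are illustrative assumptions, not the paper's exact feature set.

```python
# Open/closed eye classification from iris and sclera area ratios (illustrative sketch).
import cv2
import numpy as np
from sklearn.naive_bayes import GaussianNB

def eye_component_ratios(bgr_eye_region):
    """Return [iris_ratio, sclera_ratio]: fractions of the eye region matching each colour tone."""
    hsv = cv2.cvtColor(bgr_eye_region, cv2.COLOR_BGR2HSV)
    total = hsv.shape[0] * hsv.shape[1]
    iris = cv2.inRange(hsv, (0, 0, 0), (180, 255, 60))      # dark, low-value pixels
    sclera = cv2.inRange(hsv, (0, 0, 170), (180, 60, 255))  # bright, low-saturation pixels
    return [cv2.countNonZero(iris) / total, cv2.countNonZero(sclera) / total]

def train_eye_state(crops, labels):
    """Fit a naive Bayes classifier on labelled eye-region crops (1 = open, 0 = closed)."""
    X = np.array([eye_component_ratios(c) for c in crops])
    return GaussianNB().fit(X, np.asarray(labels))
```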

    Computer Vision Based Early Intraocular Pressure Assessment From Frontal Eye Images

    Intraocular Pressure (IOP) refers, in general, to the pressure inside the eyes. A gradual increase of IOP and high IOP are conditions or symptoms that may lead to diseases such as glaucoma and must therefore be closely monitored. As the pressure in the eye increases, different parts of the eye may become affected and eventually damaged, so early detection is an effective safeguard against rising eye pressure. Existing IOP monitoring tools include eye tests at clinical facilities and computer-aided techniques based on fundus and optic nerve images. In this work, a new computer vision-based smart healthcare framework is presented to evaluate intraocular pressure risk from frontal eye images at an early stage. The framework determines the status of IOP by analyzing frontal eye images with image processing and machine learning techniques. A database of images from the Princess Basma Hospital was used in this work; it contains 400 eye images, 200 with normal IOP and 200 with high eye pressure. This study proposes novel features for IOP determination in two experiments. The first experiment extracts the sclera using a circular Hough transform, after which four features are extracted from the whole sclera: mean redness level, red area percentage, contour area and contour height. The pupil/iris diameter ratio feature is also extracted from the frontal eye image after a series of pre-processing steps. The second experiment extracts the sclera and iris segments using a fully convolutional neural network, after which six features are extracted from only part of the segmented sclera and iris: mean redness level, red area percentage, contour area, contour distance and contour angle, along with the pupil/iris diameter ratio. Once the features are extracted, classification techniques are applied to train and test on the images and features and to obtain each patient's eye-pressure status. The first experiment adopted neural network and support vector machine algorithms to detect the status of intraocular pressure; the second experiment adopted support vector machine and decision tree algorithms. In both experiments, the framework detects the status of IOP (normal or high) with high accuracy. This computer vision-based approach produces evidence of the relationship between the extracted frontal eye image features and IOP, which had not previously been investigated through automated image processing and machine learning on frontal eye images.
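
    The sketch below illustrates the kind of hand-crafted features described for the first experiment (mean redness level, red area percentage, pupil/iris diameter ratio) feeding an SVM classifier. The sclera mask and the pupil/iris radii are assumed to come from an earlier segmentation step (e.g. the circular Hough transform), and the redness threshold is an illustrative assumption rather than the authors' value.

```python
# Hand-crafted frontal-eye features for IOP classification with an SVM (illustrative sketch).
import numpy as np
import cv2
from sklearn.svm import SVC

def iop_features(bgr_eye, sclera_mask, pupil_radius, iris_radius):
    """Return [mean_redness, red_area_pct, pupil_iris_ratio] for one frontal eye image."""
    b, g, r = cv2.split(bgr_eye.astype(np.float32))
    sclera = sclera_mask > 0
    redness = (r - (g + b) / 2.0)[sclera]                  # per-pixel redness within the sclera
    mean_redness = float(redness.mean()) if redness.size else 0.0
    red_area_pct = float(np.mean(redness > 30)) if redness.size else 0.0
    return [mean_redness, red_area_pct, pupil_radius / float(iris_radius)]

def train_iop_classifier(feature_rows, labels):
    """Fit an SVM on feature rows from labelled images (1 = high IOP, 0 = normal)."""
    return SVC(kernel="rbf", C=1.0).fit(np.array(feature_rows), np.array(labels))
```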