
    Image enhancement technique at different distance for Iris recognition

    Capturing eye images under visible-wavelength illumination in a non-cooperative environment leads to low-quality eye images. This study therefore investigates the effectiveness of image enhancement techniques in addressing this issue. A comparative study was conducted in which three image enhancement techniques, namely Histogram Equalization (HE), Adaptive Histogram Equalization (AHE) and Contrast Limited Adaptive Histogram Equalization (CLAHE), were evaluated and analysed. The UBIRIS.v2 eye image database was used as the evaluation dataset, and each enhancement technique was tested against eye images captured at different distances. Results were compared in terms of image quality using Peak Signal-to-Noise Ratio (PSNR), Absolute Mean Brightness Error (AMBE) and Mean Absolute Error (MAE). The effectiveness of the enhancement techniques at the different capture distances was evaluated using the False Acceptance Rate (FAR) and False Rejection Rate (FRR). CLAHE proved to be the most reliable technique for enhancing eye images, improving localization accuracy by 7%. In addition, the results showed that with the CLAHE technique, four meters was the ideal distance for capturing eye images in a non-cooperative environment, providing a high recognition accuracy of 74%
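    The building blocks this abstract names — global histogram equalization and the PSNR/AMBE/MAE quality metrics — can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation (CLAHE adds tiling and clip-limiting on top of the per-tile equalization shown here):

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization (HE) for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize CDF to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)         # intensity lookup table
    return lut[img]

def psnr(ref, out):
    """Peak Signal-to-Noise Ratio in dB between two 8-bit images."""
    mse = np.mean((ref.astype(np.float64) - out.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def ambe(ref, out):
    """Absolute Mean Brightness Error: |mean(ref) - mean(out)|."""
    return abs(float(ref.mean()) - float(out.mean()))

def mae(ref, out):
    """Mean Absolute Error between two images."""
    return float(np.mean(np.abs(ref.astype(np.float64) - out.astype(np.float64))))

# Demo on a synthetic low-contrast image (values squeezed into [100, 155])
rng = np.random.default_rng(0)
low_contrast = rng.integers(100, 156, size=(64, 64)).astype(np.uint8)
enhanced = hist_equalize(low_contrast)
```

    After equalization the intensity spread (standard deviation) of the synthetic image increases, which is exactly the contrast stretch the enhancement techniques in the study exploit.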

    Fusion Iris and Periocular Recognitions in Non-Cooperative Environment

    The performance of iris recognition in a non-cooperative environment can be negatively impacted when the resolution of the iris images is low, which results in failure to determine the eye centre and the limbic and pupillary boundaries during iris segmentation. Hence, a combination with periocular features is suggested to increase the reliability of the recognition system. However, the texture features of the periocular region are easily affected by background complications, while its colour features are still limited by spatial information and quantization effects. This is caused by the varying distance between the sensor and the subject during the iris acquisition stage, as well as by differences in image size and orientation. The proposed periocular feature extraction method combines a rotation-invariant uniform local binary pattern to select the texture features with colour moments to select the colour features. In addition, the hue-saturation-value colour space is used to avoid loss of discriminative information in the eye image. The proposed combination of texture and colour features provides the highest periocular recognition accuracy, exceeding 71.5% on the UBIRIS.v2 dataset and 85.7% on the UBIPr dataset. For the fused recognition, the proposed method achieves the highest accuracy, exceeding 85.9% on the UBIRIS.v2 dataset and 89.7% on the UBIPr dataset
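    The colour half of the proposed feature vector — colour moments per channel — is straightforward to sketch. The snippet below computes the first three moments (mean, standard deviation, skewness) for each channel of an image assumed to be already converted to the hue-saturation-value space; the texture half (rotation-invariant uniform LBP) is omitted here, and the exact moment set used in the paper is an assumption:

```python
import numpy as np

def color_moments(hsv):
    """First three colour moments (mean, std, skewness) per channel of an
    (H, W, 3) array, e.g. an eye image in hue-saturation-value space.
    Returns a 9-element feature vector."""
    feats = []
    for c in range(hsv.shape[2]):
        ch = hsv[..., c].astype(np.float64).ravel()
        mu = ch.mean()
        sigma = ch.std()
        # cube root of the third central moment keeps the sign of the skew
        skew = np.cbrt(np.mean((ch - mu) ** 3))
        feats.extend([mu, sigma, skew])
    return np.array(feats)
```

    The resulting 9-dimensional vector would then be concatenated with the texture descriptor before matching.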

    Intensity Adjustment and Noise Removal for Medical Image Enhancement

    Introduction: Image contrast enhancement is an image processing method whose output image has a high-quality display. Medical images play a prominent role in modern diagnosis; therefore, this study aimed to enhance the quality of medical images in order to help radiologists and surgeons find abnormal areas. Method: The methods used in this study to enhance medical image quality fall into two groups: intensity adjustment and noise removal. Intensity adjustment methods include techniques for mapping image intensity values to a new domain, while the second group includes methods for removing noise from the images. The medical images used in this study include images of the spine, brain, lung and breast. Results: The results were analysed against five criteria: the number of detected edges, PCNR, Image Quality Index, AMBE and visual quality. The numbers of detected edges in the images of the spine, brain, lungs and breast were 6465, 10305, 16266 and 13509, respectively. Conclusion: The results show that the intensity adjustment methods perform better on criteria such as the number of detected edges and visual image assessment, whereas the noise removal methods perform more effectively on the PCNR, Image Quality Index and AMBE measures
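    The first group of methods — mapping intensity values to a new domain — can be sketched as a linear (optionally gamma-corrected) remapping, similar in spirit to MATLAB's imadjust. The study does not specify its exact mapping, so this is an illustrative assumption:

```python
import numpy as np

def adjust_intensity(img, in_low, in_high, out_low=0, out_high=255, gamma=1.0):
    """Map intensities from [in_low, in_high] to [out_low, out_high],
    clipping values outside the input range and applying an optional
    gamma curve. Returns an 8-bit image."""
    x = np.clip((img.astype(np.float64) - in_low) / (in_high - in_low), 0.0, 1.0)
    y = out_low + (out_high - out_low) * x ** gamma
    return np.round(y).astype(np.uint8)
```

    For example, remapping a narrow diagnostic window [50, 100] to the full 8-bit range makes faint structures span the whole display scale, which is what drives the higher edge counts reported for the intensity-adjustment group.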

    Automated Optical Inspection and Image Analysis of Superconducting Radio-Frequency Cavities

    The inner surface of superconducting cavities plays a crucial role in achieving the highest accelerating fields and low losses. To investigate this inner surface on more than 100 cavities during cavity fabrication for the European XFEL and the ILC HiGrade Research Project, an optical inspection robot, OBACHT, was constructed. To analyze up to 2325 images per cavity, an image processing and analysis code was developed, and new variables describing the cavity surface were obtained. The accuracy of this code is up to 97% and its PPV 99% within the resolution of 15.63 μm. The optically obtained surface roughness is in agreement with standard profilometric methods. The image analysis algorithm identified and quantified vendor-specific fabrication properties such as the electron beam welding speed and the different surface roughness due to the different chemical treatments. In addition, a correlation of ρ = -0.93 with a significance of 6σ between an obtained surface variable and the maximal accelerating field was found
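    The headline result is a Pearson correlation of ρ = -0.93 between a surface variable and the maximal accelerating field. As a reminder of what that number is, here is the sample Pearson coefficient in NumPy (a generic sketch, not the analysis code from the paper):

```python
import numpy as np

def pearson_r(x, y):
    """Sample Pearson correlation coefficient between two equal-length
    sequences of measurements."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()          # center both variables
    return float(np.sum(xc * yc) / np.sqrt(np.sum(xc ** 2) * np.sum(yc ** 2)))
```

    A value near -1, as reported here, means the surface variable is an almost perfect inverse predictor of the maximal accelerating field across the inspected cavities.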

    A pilot study on discriminative power of features of superficial venous pattern in the hand

    The goal of this project is to develop an automatic way to identify and represent the superficial vasculature of the back of the hand and to investigate its discriminative power as a biometric feature. A prototype system that extracts the superficial venous pattern from infrared images of the backs of hands is described. Enhancement algorithms are used to compensate for the lack of contrast in the infrared images. To trace the veins, a vessel tracking technique is applied, producing binary masks of the superficial venous tree. Subsequently, a method for estimating blood vessel calibre and length and the locations and angles of vessel junctions is presented. The discriminative power of these features is studied both independently and jointly, considering two feature vectors. Pattern matching of two vasculature maps is performed to investigate the uniqueness of the vessel network
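    One of the features extracted from the binary vessel masks is the set of junction locations. A minimal sketch of junction detection on a skeletonized mask: a skeleton pixel with three or more 4-connected skeleton neighbours is treated as a branch point. This simplified criterion is an assumption; the project's exact junction detector is not described in the abstract:

```python
import numpy as np

def junction_points(skel):
    """Return the (row, col) coordinates of branch points in a binary
    vessel skeleton: pixels with >= 3 of their 4-connected neighbours
    also on the skeleton."""
    skel = np.asarray(skel).astype(np.uint8)
    p = np.pad(skel, 1)  # zero border so edge pixels have full neighbourhoods
    # sum of the up/down/left/right neighbours for every pixel
    neigh = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
    return np.argwhere((skel == 1) & (neigh >= 3))

# A plus-shaped skeleton has exactly one junction at its center
plus = np.zeros((5, 5), dtype=int)
plus[2, :] = 1
plus[:, 2] = 1
```

    The junction angles mentioned in the abstract would then be measured between the skeleton branches leaving each detected point.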

    Deep into the Eyes: Applying Machine Learning to improve Eye-Tracking

    Eye-tracking has been an active research area with applications in personal and behavioral studies, medical diagnosis, virtual reality, and mixed reality applications. Improving the robustness, generalizability, accuracy, and precision of eye-trackers while maintaining privacy is crucial. Unfortunately, many existing low-cost portable commercial eye trackers suffer from signal artifacts and a low signal-to-noise ratio. These trackers are highly dependent on low-level features such as pupil edges or diffused bright spots in order to precisely localize the pupil and corneal reflection. As a result, they are not reliable for studying eye movements that require high precision, such as microsaccades, smooth pursuit, and vergence. Additionally, these methods suffer from reflective artifacts and occlusion of the pupil boundary by the eyelid, and often require a manual update of person-dependent parameters to identify the pupil region. In this dissertation, I demonstrate (I) a new method to improve precision while maintaining the accuracy of head-fixed eye trackers by combining velocity information from iris textures across frames with position information, (II) a generalized semantic segmentation framework for identifying eye regions, with a further extension to identify ellipse fits on the pupil and iris, (III) a data-driven rendering pipeline to generate a temporally contiguous synthetic dataset for use in many eye-tracking applications, and (IV) a novel strategy to preserve privacy in eye videos captured as part of the eye-tracking process. My work also provides the foundation for future research by addressing critical questions such as the suitability of synthetic datasets for improving eye-tracking performance in real-world applications, and ways to improve the precision of future commercial eye trackers with improved camera specifications

    Wrist vascular biometric recognition using a portable contactless system

    Human wrist vein biometric recognition is one of the least used vascular biometric modalities. Nevertheless, it is as usable and as safe as the two most common vascular variants in the commercial and research worlds: the hand palm vein and finger vein modalities. Moreover, the wrist vein variant, with its wider veins, provides a clearer and better-defined visualization of the unique vein patterns. In this paper, a novel contactless wrist vein system has been designed, implemented, and tested. For this purpose, a new contactless database has been collected with the software algorithm TGS-CVBR®. The database, called UC3M-CV1, consists of 1200 near-infrared contactless images of 100 different users, collected in two separate sessions from the wrists of 50 subjects (25 females and 25 males). Environmental light conditions were not controlled across subjects and sessions: different daytimes and different places (outdoor/indoor). The software algorithm created for the recognition task is PIS-CVBR®. The results obtained by combining these three elements, TGS-CVBR®, PIS-CVBR®, and the UC3M-CV1 dataset, are compared against two other wrist contact databases, PUT and UC3M (best Equal Error Rate (EER) = 0.08%), taking the computing time into account and demonstrating the viability of a contactless real-time wrist recognition system
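    The headline figure here is an Equal Error Rate (EER) of 0.08%. The EER is the operating point at which the False Acceptance Rate equals the False Rejection Rate; a generic way to estimate it from genuine and impostor similarity scores (not the paper's evaluation code) is:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Sweep a decision threshold over all observed similarity scores and
    return the point where FAR (impostors accepted) and FRR (genuine
    users rejected) are closest, averaging the two rates there."""
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    best_far, best_frr = 1.0, 0.0
    for t in np.sort(np.concatenate([genuine, impostor])):
        far = float(np.mean(impostor >= t))  # impostor scores above threshold
        frr = float(np.mean(genuine < t))    # genuine scores below threshold
        if abs(far - frr) < abs(best_far - best_frr):
            best_far, best_frr = far, frr
    return (best_far + best_frr) / 2
```

    With perfectly separable score distributions the EER is 0; a reported 0.08% means almost no overlap between genuine and impostor wrist-pattern scores.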

    Unconstrained Iris Recognition

    This research focuses on iris recognition, the most accurate form of biometric identification. The robustness of iris recognition comes from the unique characteristics of the human iris and the permanency of its texture, which is stable over a human lifetime and whose shape cannot easily be altered by environmental effects. Most iris recognition systems assume ideal image acquisition conditions. These include a near-infrared (NIR) light source to reveal the clear iris texture, look-and-stare constraints, and a close distance from the capturing device. However, the recognition accuracy of state-of-the-art systems decreases significantly when these constraints are relaxed. Recent advances have proposed different methods to process iris images captured in unconstrained environments. While these methods improve the accuracy of the original iris recognition system, they still suffer from segmentation and feature selection problems, which result in a high False Rejection Rate (FRR) and False Acceptance Rate (FAR) or in recognition failure. In the first part of this thesis, a novel segmentation algorithm for detecting the limbus and pupillary boundaries of human iris images, with a quality assessment process, is proposed. The algorithm first searches the HSV colour space for the local-maxima sclera region, as it is the most easily distinguishable part of the human eye. The parameters from this stage are then used for eye area detection, upper/lower eyelid isolation and rotation angle correction. The second step is the iris image quality assessment process, as iris images captured under unconstrained conditions have heterogeneous characteristics. In addition, the probability of obtaining a mis-segmented sclera portion around the outer ring of the iris is very high, especially in the presence of reflections caused by a visible-wavelength light source. Therefore, quality assessment procedures are applied to classify the images from the first step into seven categories based on the average of their RGB colour intensities, and an appropriate filter is applied according to the detected quality. In the third step, a binarization process is applied to the eye portion detected in the first step to find the iris outer ring, based on a threshold value defined by the image quality from the second step. Finally, for pupil segmentation, the method searches the HSV colour space for local-minima pixels, as the pupil contains the darkest pixels in the human eye. In the second part, novel discriminative feature extraction and selection based on the Curvelet transform are introduced. Most state-of-the-art iris recognition systems use textural features extracted from the iris images. While these fine, tiny features are very robust when extracted from high-resolution, clear images captured at very close distances, they show major weaknesses when extracted from degraded images captured over long distances. Using the Curvelet transform to extract 2D geometrical features (curves and edges) from degraded iris images addresses the weakness of the 1D texture features extracted by classical methods based on textural-analysis wavelet transforms. Our experiments show significant improvements in segmentation and recognition accuracy compared to state-of-the-art results
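    The second step — classifying images into seven quality categories by average RGB intensity — can be sketched directly. The thesis does not give its bin boundaries, so equal-width bins over the 8-bit range are assumed here:

```python
import numpy as np

def quality_category(rgb, n_bins=7):
    """Assign an eye image to one of `n_bins` quality categories from its
    average RGB intensity. Equal-width bins over [0, 255] are an
    assumption; the thesis's actual boundaries are not specified."""
    mean_intensity = float(np.asarray(rgb, dtype=np.float64).mean())
    return min(int(mean_intensity / (256.0 / n_bins)), n_bins - 1)
```

    The detected category would then select the filter applied before the binarization step that locates the iris outer ring.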

    Eye Detection and Face Recognition Across the Electromagnetic Spectrum

    Biometrics, the science of identifying individuals based on their physiological or behavioral traits, has increasingly been used to replace typical identifying markers such as passwords, PINs, and passports. Different modalities, such as face, fingerprint, iris, and gait, can be used for this purpose. One of the most studied forms of biometrics is face recognition (FR). Due to a number of advantages over typical visible-to-visible FR, recent trends have been pushing the FR community to perform cross-spectral matching of visible images to face images from higher spectra in the electromagnetic spectrum. In this work, the SWIR band of the EM spectrum is the primary focus. Four main contributions relating to automatic eye detection and cross-spectral FR are discussed. First, a novel eye localization algorithm for geometrically normalizing a face across multiple SWIR bands for FR algorithms is introduced. Using a template-based scheme and a novel summation range filter, an extensive experimental analysis shows that this algorithm is fast, robust, and highly accurate compared to other available eye detection methods. The eye locations produced by this algorithm also yield higher FR results than all other tested approaches. This algorithm is then augmented and updated to quickly and accurately detect eyes in more challenging unconstrained datasets spanning the EM spectrum. Additionally, a novel cross-spectral matching algorithm is introduced that attempts to bridge the gap between the visible and SWIR spectra. By fusing multiple photometric normalization combinations, the proposed algorithm is not only more efficient than other visible-SWIR matching algorithms but also more accurate on multiple challenging datasets. Finally, a novel pre-processing algorithm is discussed that bridges the gap between document (passport) and live face images. It is shown that the proposed pre-processing scheme, using inpainting and denoising techniques, significantly increases cross-document face recognition performance
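    The "range filter" family of operators underlying the eye localization step computes, for each pixel, the spread of intensities in its local window. One plausible reading is sketched below as a max-minus-min filter; the dissertation's exact "summation range filter" is an assumption and likely aggregates these local ranges further:

```python
import numpy as np

def range_filter(img, k=3):
    """Local range filter: each output pixel is max - min over a k x k
    window (edges padded by replication). High responses mark strong
    local intensity transitions such as the eye region's dark pupil
    against the sclera."""
    h, w = img.shape
    pad = k // 2
    p = np.pad(img.astype(np.float64), pad, mode="edge")
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            win = p[i:i + k, j:j + k]
            out[i, j] = win.max() - win.min()
    return out
```

    In a template-based scheme, such a response map would be matched against an eye template to localize the eye centers before geometric normalization.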