Data-Driven Segmentation of Post-mortem Iris Images
This paper presents a method for segmenting iris images obtained from
deceased subjects by training a deep convolutional neural network (DCNN)
designed for the purpose of semantic segmentation. Post-mortem iris recognition
has recently emerged as an alternative, or additional, method useful in
forensic analysis. At the same time, it poses many new challenges from the
technological standpoint, one of them being the image segmentation stage, which
has proven difficult to execute reliably with conventional iris recognition
methods. Our approach is based on the SegNet architecture, fine-tuned with
1,300 manually segmented post-mortem iris images taken from the
Warsaw-BioBase-Post-Mortem-Iris v1.0 database. The experiments presented in
this paper show that this data-driven solution is able to learn specific
deformations present in post-mortem samples, which are absent in irises of
living subjects, and that it offers a considerable improvement over the
state-of-the-art conventional segmentation algorithm (OSIRIS): the Intersection
over Union (IoU) metric improved from 73.6% (for OSIRIS) to 83% (for the
DCNN-based method presented in this paper), averaged over multiple
subject-disjoint splits of the data into train and test subsets. To our
knowledge, this paper offers the first method for automatic processing of
post-mortem iris images. We offer source code, together with the trained DCNN,
that performs end-to-end segmentation of post-mortem iris images as described
in this paper. We also offer binary masks corresponding to manual segmentation
of samples from the Warsaw-BioBase-Post-Mortem-Iris v1.0 database to facilitate
the development of alternative methods for post-mortem iris segmentation.
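The IoU figure reported above can be computed for any pair of predicted and ground-truth masks; a minimal sketch (the toy masks below are illustrative, not samples from the database):

```python
import numpy as np

def iou(pred, truth):
    """Intersection over Union of two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:  # both masks empty: define IoU as 1
        return 1.0
    return np.logical_and(pred, truth).sum() / union

# Toy example: two overlapping square "iris" masks on a 10x10 grid.
pred = np.zeros((10, 10), dtype=bool)
truth = np.zeros((10, 10), dtype=bool)
pred[2:8, 2:8] = True     # 36 pixels
truth[4:10, 4:10] = True  # 36 pixels, overlapping pred in 16 pixels
print(round(iou(pred, truth), 4))  # intersection 16, union 56 -> 0.2857
```

Averaging this score over a subject-disjoint test split gives the kind of summary number quoted in the abstract.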
Influence of segmentation on deep iris recognition performance
Despite the rise of deep learning in numerous areas of computer vision and
image processing, iris recognition has not benefited considerably from these
trends so far. Most of the existing research on deep iris recognition is
focused on new models for generating discriminative and robust iris
representations and relies on methodologies akin to traditional iris
recognition pipelines. Hence, the proposed models do not approach iris
recognition in an end-to-end manner, but rather use standard heuristic iris
segmentation (and unwrapping) techniques to produce normalized inputs for the
deep learning models. However, because deep learning is able to model very
complex data distributions and nonlinear data changes, an obvious question
arises: how important are traditional segmentation methods in a deep learning
setting? To answer this question, we present in this paper an empirical
analysis of the impact of iris segmentation on the performance of deep learning
models, using a simple two-stage pipeline consisting of a segmentation and a
recognition step. We evaluate how the accuracy of segmentation influences
recognition performance, and also examine whether segmentation is needed at
all. We use the CASIA Thousand and SBVPI datasets for the experiments and
report several interesting findings.
Comment: 6 pages, 3 figures, 3 tables, submitted to IWBF 201
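The two-stage design discussed above can be sketched as two composable steps, where the segmentation stage may be skipped to test recognition without it. All function names, the threshold-based "segmenter", and the histogram "embedding" below are illustrative placeholders, not the authors' models:

```python
import numpy as np

def segment(image):
    """Stage 1 (placeholder): return a binary mask of iris pixels.
    A crude intensity threshold stands in for a real segmenter here."""
    return image > image.mean()

def recognize(image, mask=None):
    """Stage 2 (placeholder): produce a fixed-length feature vector.
    With mask=None the model sees the raw, unsegmented image."""
    pixels = image[mask] if mask is not None else image.ravel()
    # Tiny stand-in "embedding": a normalized intensity histogram.
    hist, _ = np.histogram(pixels, bins=8, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

rng = np.random.default_rng(0)
img = rng.random((32, 32))
with_seg = recognize(img, segment(img))     # pipeline with segmentation
without_seg = recognize(img)                # pipeline without segmentation
print(with_seg.shape, without_seg.shape)    # (8,) (8,)
```

Comparing matching accuracy between the two branches, over a whole dataset and with a real segmenter and recognizer, is the shape of the experiment the abstract describes.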
An efficient iris image thresholding based on binarization threshold in black hole search method
In an iris recognition system, the segmentation stage is one of the most important stages: the iris is located and then further segmented along the outer and lower boundaries of the iris region. Several algorithms have been proposed to segment the outer and lower boundaries of the iris region. The aim of this research is to identify the suitable threshold value for locating the outer and lower boundaries using the Black Hole Search Method. We chose this method because of the inefficiency of the other methods in image identification and verification. The experiment was conducted using three datasets, UBIRIS, CASIA and MMU, because of their superiority over others. Given that different iris databases have different file formats and quality, the images used for this work are JPEG and BMP. Based on the experimentation, the most suitable threshold values for identifying iris boundaries in the different iris databases have been identified. These values were compared with those used by other researchers, and we found that the values of 0.3, 0.4 and 0.1 for the UBIRIS, CASIA and MMU databases, respectively, are more accurate and comprehensive. The study concludes that threshold values vary depending on the database.
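Threshold-based binarization of the kind evaluated above reduces to a single comparison per pixel; a minimal sketch using the value 0.3 reported for UBIRIS (the "eye" image below is synthetic):

```python
import numpy as np

def binarize(image, threshold):
    """Binarize a grayscale image normalized to [0, 1]: dark pixels
    (e.g. the pupil 'black hole') map to 1, brighter pixels to 0."""
    return (image < threshold).astype(np.uint8)

# Synthetic eye image: a dark pupil disc on a brighter background.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
dist = np.hypot(yy - 32, xx - 32)
image = np.where(dist < 10, 0.1, 0.7)  # pupil intensity 0.1, rest 0.7

mask = binarize(image, threshold=0.3)  # 0.3: the value reported for UBIRIS
print(mask.sum())  # number of pixels classified as pupil
```

The abstract's finding is essentially that the best `threshold` argument differs per database (0.3, 0.4, 0.1), so it should be tuned rather than hard-coded.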
Comparison of Iris Recognition between Active Contour and Hough Transform
Research in iris recognition has been explosive in recent years. A few fundamental issues in iris recognition, such as iris acquisition, iris segmentation, texture analysis and matching analysis, have been raised. In this paper, we focus on a fundamental issue in iris segmentation: segmentation accuracy. The accuracy of iris segmentation can be negatively affected by poor segmentation of the iris boundary, which might have unsmooth, poor and unclear edges. Because of that, a method that can segment this type of boundary needs to be developed. A method based on active contours is proposed, not only to increase segmentation accuracy but also to increase recognition accuracy. The proposed method is compared with the modified Hough Transform method to observe the performance of both methods. Iris images from CASIA v4 are used for our experiment. According to the results, the proposed method is better than the modified Hough Transform method in terms of segmentation accuracy, recognition accuracy and implementation time. This shows that the proposed method is more accurate than the Hough Transform method.
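To make the Hough-based baseline concrete: a circular Hough transform lets each edge pixel vote for all circle centres at a given radius, and the accumulator peak gives the detected boundary. A toy sketch on a synthetic edge map (real iris pipelines run this over a range of radii on an actual edge image):

```python
import numpy as np

def hough_circle(edges, radius):
    """Toy circular Hough transform for a single known radius: every edge
    pixel votes for all centres lying `radius` away from it; the accumulator
    cell with the most votes is the detected circle centre (row, col)."""
    acc = np.zeros(edges.shape, dtype=np.int32)
    ys, xs = np.nonzero(edges)
    thetas = np.linspace(0, 2 * np.pi, 180, endpoint=False)
    for y, x in zip(ys, xs):
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < acc.shape[0]) & (cx >= 0) & (cx < acc.shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)  # unbuffered accumulation
    return np.unravel_index(acc.argmax(), acc.shape)

# Synthetic edge map: a circle of radius 12 centred at (30, 34).
edges = np.zeros((64, 64), dtype=bool)
angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
edges[np.round(30 + 12 * np.sin(angles)).astype(int),
      np.round(34 + 12 * np.cos(angles)).astype(int)] = True

print(hough_circle(edges, radius=12))  # close to (30, 34)
```

An active-contour method, by contrast, iteratively deforms a curve toward the boundary, so it can follow the unsmooth, non-circular edges that a rigid circle model misses.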
Segmentation-level fusion for iris recognition
This paper investigates the potential of fusion at the normalisation/segmentation level, prior to feature extraction. While there are several biometric fusion methods at the data/feature, score, and rank/decision levels, combining raw biometric signals, scores, or ranks/decisions, this type of fusion is still in its infancy. However, the increasing demand for more relaxed and less invasive recording conditions, especially for on-the-move iris recognition, suggests further investigating fusion at this very low level. This paper focuses on multi-segmentation fusion for iris biometric systems, investigating the benefit of combining the segmentation results of multiple normalisation algorithms, using four methods from two different public iris toolkits (USIT, OSIRIS) on the public CASIA and IITD iris datasets. Evaluations based on recognition accuracy and ground-truth segmentation data indicate high sensitivity with regard to the type of errors made by segmentation algorithms.
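One simple instance of segmentation-level fusion is a per-pixel majority vote over the binary masks produced by several segmenters; a minimal sketch with toy masks (not actual USIT/OSIRIS output, and the paper may use a different combination rule):

```python
import numpy as np

def fuse_masks(masks):
    """Fuse binary segmentation masks by per-pixel majority vote: a pixel
    is labelled iris iff more than half of the segmenters said so."""
    stack = np.stack([np.asarray(m).astype(bool) for m in masks])
    return stack.sum(axis=0) > (len(masks) / 2)

# Three toy 4x4 masks from hypothetical segmentation algorithms.
m1 = np.array([[1, 1, 0, 0]] * 4)
m2 = np.array([[1, 0, 0, 0]] * 4)
m3 = np.array([[1, 1, 1, 0]] * 4)
fused = fuse_masks([m1, m2, m3])
print(fused.astype(int))
# column 0: 3 votes -> 1; column 1: 2 votes -> 1; columns 2-3: <2 votes -> 0
```

Fusing before feature extraction, as here, is what distinguishes this level from the more common score- or decision-level fusion mentioned in the abstract.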
Improving Iris Recognition through Quality and Interoperability Metrics
The ability to identify individuals based on their iris is known as iris recognition. Over the past decade, iris recognition has garnered much attention because of its strong performance in comparison with other mainstream biometrics such as fingerprint and face recognition. The performance of iris recognition systems is driven by application-scenario requirements. Standoff distance, subject cooperation, underlying optics, and illumination are a few examples of these requirements, which dictate the nature of the images an iris recognition system has to process. Traditional iris recognition systems, dubbed "stop and stare", operate under highly constrained conditions. This ensures that the captured image is of sufficient quality, so that the success of the subsequent processing stages (segmentation, encoding, and matching) is not compromised. When acquisition constraints are relaxed, such as for surveillance or iris-on-the-move, the fidelity of subsequent processing steps lessens.
In this dissertation we propose a multi-faceted framework for mitigating the difficulties associated with non-ideal iris images. We develop and investigate a comprehensive iris image quality metric that is predictive of iris matching performance. The metric is composed of photometric measures such as defocus, motion blur, and illumination, but also contains domain-specific measures such as occlusion and gaze angle. These measures are then combined through a fusion rule based on Dempster-Shafer theory. Related to iris segmentation, which is arguably one of the most important tasks in iris recognition, we develop metrics which are used to evaluate the precision of the pupil and iris boundaries. Furthermore, we illustrate three methods which take advantage of the proposed segmentation metrics for rectifying incorrect segmentation boundaries.
Finally, we look at the issue of iris image interoperability and demonstrate that techniques from the field of hardware fingerprinting can be utilized to improve iris matching performance when images captured from distinct sensors are involved.
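The Dempster-Shafer fusion step described above combines evidence from individual quality measures; a minimal sketch of Dempster's rule of combination for two mass functions over the frame {good, bad} (the measure names and mass values are illustrative, not taken from the dissertation):

```python
def dempster_combine(m1, m2):
    """Dempster's rule: multiply masses of intersecting focal elements and
    renormalize by 1 - K, where K is the total mass assigned to conflicting
    (disjoint) pairs. Focal elements are frozensets over the frame."""
    combined = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

GOOD, BAD = frozenset({"good"}), frozenset({"bad"})
EITHER = GOOD | BAD  # total ignorance: no commitment either way

# Hypothetical masses from a defocus measure and an occlusion measure.
defocus = {GOOD: 0.6, BAD: 0.1, EITHER: 0.3}
occlusion = {GOOD: 0.5, BAD: 0.2, EITHER: 0.3}
fused = dempster_combine(defocus, occlusion)
print(round(fused[GOOD], 3), round(fused[BAD], 3))  # 0.759 0.133
```

Because both sources lean toward "good", the combined belief in "good" exceeds either input's, which is the behaviour that makes this rule attractive for aggregating per-factor quality scores into one predictive metric.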