A Novel Saliency Based Iris Segmentation Framework
Iris recognition is regarded as the most reliable and accurate biometric identification technology available. In an iris recognition system, iris segmentation is the most time-consuming step and the most critical one for the accuracy of the overall process. This paper presents a new method of iris segmentation based on the saliency map of an image. First, a saliency map, which highlights the visually important regions, is computed to locate the iris region. Then color-based masking is applied to obtain the iris boundary. Because the method is computationally simple, it yields fast and reliable iris segmentation. Its reliability has been verified on a variety of eye images. The results also show that the proposed model is faster than existing methods while giving comparable accuracy.
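The two-stage pipeline described above (saliency map, then masking) can be sketched as follows. The saliency model here is a crude centre-surround approximation and the mean-based threshold is an assumption, since the abstract does not specify the exact formulation:

```python
import numpy as np

def saliency_map(img):
    """Crude centre-surround saliency: 3x3 local mean contrasted against
    the global image mean (a stand-in for the paper's saliency model)."""
    pad = np.pad(img, 1, mode="edge")
    local = sum(pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                for dy in range(3) for dx in range(3)) / 9.0
    return np.abs(local - img.mean())

def iris_mask(img):
    # threshold the saliency map at its own mean (assumed heuristic) to
    # get a rough mask of the visually salient eye region
    s = saliency_map(img)
    return s > s.mean()
```

On a bright image containing a dark disc, the disc dominates the saliency map and survives the threshold, which is the behaviour the masking stage relies on.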
Pupil Detection Based on Color Difference and Circular Hough Transform
Pupil detection is a significant stage in iris segmentation, which is in turn one of the most important steps in iris recognition. In this paper, we present a new method for highly accurate pupil detection, consisting of several steps that locate the boundary of the pupil. First, the eye image is read as (R, G, B) channels and a work area is determined; this area contains many circles larger than the pupil region and is needed to separate the pupil from its neighborhood. Next, the difference in color and intensity between the pupil region and the surrounding area is exploited, since the pupil has lower color values and intensity than its surroundings. After the pupil region is detected, several operations are applied to the resulting image to isolate the pupil and remove the other regions, using dilation, erosion, the Canny edge detector, and the circular Hough transform, together with an optimization step that chooses the best circle representing the pupil area. Applied to images from Palacky University, the proposed method achieves 100% accuracy.
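A minimal version of the circular Hough voting used in the final step can be sketched as follows (single fixed radius, pure NumPy; a real pipeline would first apply dilation/erosion and Canny edge detection, and would scan a range of radii):

```python
import numpy as np

def hough_circle_center(edges, radius, n_theta=120):
    """Vote for circle centres at one fixed radius: every edge pixel
    casts votes on all centres that would place it on such a circle."""
    h, w = edges.shape
    acc = np.zeros((h, w), dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    for y, x in zip(*np.nonzero(edges)):
        cy = np.rint(y - radius * np.sin(thetas)).astype(int)
        cx = np.rint(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(np.argmax(acc), acc.shape)  # (row, col)
```

The accumulator peak marks the most-supported centre; the paper's "optimization to choose the best circle" would correspond to comparing such peaks across candidate radii.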
Uncertainty Theories Based Iris Recognition System
The performance and robustness of iris-based recognition systems still suffer from imperfections in the biometric information. This paper attempts to address these imperfections and deals with an important problem for real systems. We propose a new method for iris recognition based on uncertainty theories to treat imperfect iris features. Several factors cause different types of degradation in iris data, such as the poor quality of the acquired pictures; partial occlusion of the iris region due to light spots, lenses, eyeglasses, hair or eyelids; and adverse illumination and/or contrast. All of these factors are open problems in the field of iris recognition: they affect the performance of iris segmentation, feature extraction and the decision-making process, and they appear as imperfections in the extracted iris features. The aim of our experiments is to model the variability and ambiguity in the iris data with uncertainty theories. This paper illustrates the importance of these theories for modeling and/or treating the encountered imperfections. Several comparative experiments are conducted on two subsets of the CASIA-V4 iris image database, namely Interval and Synthetic. Compared to a typical iris recognition system without uncertainty theories, experimental results show that the proposed model improves iris recognition in terms of Equal Error Rate (EER), Area Under the receiver operating characteristic Curve (AUC) and Accuracy Recognition Rate (ARR) statistics.
Deep Neural Network and Data Augmentation Methodology for off-axis iris segmentation in wearable headsets
A data augmentation methodology is presented and applied to generate a large
dataset of off-axis iris regions and train a low-complexity deep neural
network. Although of low complexity, the resulting network achieves a high level
of accuracy in iris region segmentation for challenging off-axis eye-patches.
Interestingly, this network is also shown to achieve high levels of performance
for regular, frontal, segmentation of iris regions, comparing favorably with
state-of-the-art techniques of significantly higher complexity. Due to its
lower complexity, this network is well suited for deployment in embedded
applications such as augmented and mixed reality headsets.
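One simple way to synthesize off-axis training samples from frontal eye patches, in the spirit of the augmentation described above, is a foreshortening warp along one axis. The abstract does not give the actual warping model, so the cosine-foreshortening approximation and the angle set below are assumptions:

```python
import numpy as np

def offaxis_warp(img, yaw_deg):
    """Approximate an off-axis view by foreshortening the x-axis by
    cos(yaw): a crude stand-in for a full perspective camera model."""
    h, w = img.shape
    scale = np.cos(np.radians(yaw_deg))
    out = np.zeros_like(img)
    xs = np.arange(w)
    src = (xs - w / 2.0) / scale + w / 2.0   # inverse map, centred
    ok = (src >= 0) & (src < w)
    out[:, xs[ok]] = img[:, src[ok].astype(int)]
    return out

def augment(img, yaws=(0, 15, 30, 45)):
    # one warped copy per simulated gaze angle (angles are illustrative)
    return [offaxis_warp(img, a) for a in yaws]
```

A zero-degree yaw leaves the patch unchanged, and larger angles squeeze the visible iris into an increasingly elliptical shape, which is the key difficulty of off-axis segmentation.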
Investigation on advanced image search techniques
Content-based image search for retrieval of images based on the similarity in their visual contents, such as color, texture, and shape, to a query image is an active research area due to its broad applications. Color, for example, provides powerful information for image search and classification. This dissertation investigates advanced image search techniques and presents new color descriptors for image search and classification and robust image enhancement and segmentation methods for iris recognition.
First, several new color descriptors have been developed for color image search. Specifically, a new oRGB-SIFT descriptor, which integrates the oRGB color space and the Scale-Invariant Feature Transform (SIFT), is proposed for image search and classification. The oRGB-SIFT descriptor is further integrated with other color SIFT features to produce the novel Color SIFT Fusion (CSF), Color Grayscale SIFT Fusion (CGSF), and CGSF+PHOG descriptors for image category search with applications to biometrics. Image classification is implemented using a novel EFM-KNN classifier, which combines the Enhanced Fisher Model (EFM) and the K Nearest Neighbor (KNN) decision rule. Experimental results on four large-scale, grand-challenge datasets show that the proposed oRGB-SIFT descriptor improves recognition performance over other color SIFT descriptors, and that the CSF, CGSF, and CGSF+PHOG descriptors perform better still. The significant improvement obtained by fusing the color SIFT descriptors (CSF) with the grayscale SIFT descriptor (CGSF) indicates that the various color-SIFT descriptors and the grayscale-SIFT descriptor are not redundant for image search.
Second, four novel color Local Binary Pattern (LBP) descriptors are presented for scene image and image texture classification. Specifically, the oRGB-LBP descriptor is derived in the oRGB color space. The other three color LBP descriptors, namely, the Color LBP Fusion (CLF), the Color Grayscale LBP Fusion (CGLF), and the CGLF+PHOG descriptors, are obtained by integrating the oRGB-LBP descriptor with some additional image features. Experimental results on three large scale, grand challenge datasets have shown that the proposed descriptors can improve scene image and image texture classification performance.
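The basic grayscale LBP operator underlying these descriptors (before the colour-space extensions described above) can be sketched as:

```python
import numpy as np

def lbp8(img):
    """Basic 8-neighbour Local Binary Pattern on interior pixels: each
    neighbour >= centre contributes one bit of an 8-bit code."""
    c = img[1:-1, 1:-1]
    code = np.zeros(c.shape, dtype=np.int32)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code.astype(np.uint8)

def lbp_histogram(img):
    # 256-bin histogram of LBP codes: the texture descriptor itself
    return np.bincount(lbp8(img).ravel(), minlength=256)
```

The colour variants in the abstract apply this same operator per channel in a chosen colour space (e.g. oRGB) and fuse the resulting histograms.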
Finally, a new iris recognition method based on a robust iris segmentation approach is presented for improving iris recognition performance. The proposed segmentation approach applies power-law transformations for more accurate detection of the pupil region, which significantly reduces the candidate limbic boundary search space, increasing detection accuracy and efficiency. As the limbic circle, whose center lies within a close range of the pupil center, is selectively detected, the eyelid detection approach leads to improved iris recognition performance. Experiments using the Iris Challenge Evaluation (ICE) database show the effectiveness of the proposed method.
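The power-law (gamma) transformation mentioned above is a one-line pointwise operation. The gamma value and threshold below are illustrative assumptions, not the dissertation's tuned parameters:

```python
import numpy as np

def power_law(img, gamma):
    """Pointwise power-law transform s = r**gamma on a [0, 1] image."""
    return np.power(np.clip(img, 0.0, 1.0), gamma)

def pupil_candidates(img, gamma=2.0, thresh=0.05):
    # gamma > 1 pushes mid-tones toward zero much faster than it moves
    # the near-black pupil, so only pupil-like pixels stay below the
    # threshold (gamma and thresh are assumed values for illustration)
    return power_law(img, gamma) < thresh
```

Restricting the limbic boundary search to circles around the resulting candidate region is what shrinks the search space the abstract refers to.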
Advancing iris biometric technology
PhD Thesis
The iris biometric is a well-established technology which is already in use in
several nation-scale applications and it is still an active research area with several
unsolved problems. This work focuses on three key problems in iris biometrics,
namely segmentation, protection and cross-matching. A novel method
in each of these areas is proposed and analyzed thoroughly.
In terms of iris segmentation, a novel iris segmentation method is designed
based on a fusion of an expanding and a shrinking active contour by integrating
a new pressure force within the Gradient Vector Flow (GVF) active
contour model. In addition, a new method for closed eye detection is proposed.
The experimental results on the CASIA V4, MMU2, UBIRIS V1 and
UBIRIS V2 databases show that the proposed method achieves state-of-the-art
results in terms of segmentation accuracy and recognition performance
while being computationally more efficient. In this context, improvements
by 60.5%, 42% and 48.7% are achieved in segmentation accuracy for the
CASIA V4, MMU2 and UBIRIS V1 databases, respectively. For the UBIRIS
V2 database, a superior time reduction is reported (85.7%) while maintaining
a similar accuracy. Similarly, considerable time improvements by 63.8%,
56.6% and 29.3% are achieved for the CASIA V4, MMU2 and UBIRIS V1
databases, respectively.
With respect to iris biometric protection, a novel security architecture is designed
to protect the integrity of iris images and templates using watermarking
and Visual Cryptography (VC). Firstly, for protecting the iris image, text
which carries personal information is embedded in the middle band frequency
region of the iris image using a novel watermarking algorithm that randomly
interchanges multiple middle band pairs of the Discrete Cosine Transform
(DCT). Secondly, for iris template protection, VC is utilized to protect the
iris template. In addition, the integrity of the stored template in the biometric
smart card is guaranteed by using the hash signatures. The proposed method
has a minimal effect on the iris recognition performance of only 3.6% and
4.9% for the CASIA V4 and UBIRIS V1 databases, respectively. In addition,
the VC scheme is designed to be readily applied to protect any biometric binary
template without any degradation to the recognition performance with a
complexity of only O(N).
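The middle-band coefficient-interchange idea behind the watermarking step can be sketched on a single 8x8 block as follows. The specific coefficient pair, the margin, and embedding a single fixed pair (rather than randomly interchanging multiple pairs, as the thesis describes) are simplifying assumptions:

```python
import numpy as np

N = 8
_k = np.arange(N)
D = np.sqrt(2.0 / N) * np.cos(
    np.pi * (2 * _k[None, :] + 1) * _k[:, None] / (2 * N))
D[0] /= np.sqrt(2.0)           # orthonormal 8x8 DCT-II matrix

P1, P2 = (4, 1), (1, 4)        # one middle-band pair (assumed positions)

def embed_bit(block, bit, margin=5.0):
    """Embed one bit in an 8x8 block by ordering a middle-band DCT pair:
    bit=1 => C[P1] > C[P2], bit=0 => C[P2] > C[P1]."""
    C = D @ block @ D.T
    if (C[P1] > C[P2]) != bool(bit):
        C[P1], C[P2] = C[P2], C[P1]        # interchange the pair
    lo, hi = (P2, P1) if bit else (P1, P2)
    C[hi] = max(C[hi], C[lo] + margin)     # enforce a robustness margin
    return D.T @ C @ D                     # inverse DCT

def extract_bit(block):
    C = D @ block @ D.T
    return bool(C[P1] > C[P2])
```

Because the two coefficients have similar perceptual weight, swapping them changes the image very little, while the enforced ordering survives mild distortion; a full scheme would iterate this over all 8x8 blocks to embed the personal-information text.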
As for cross-spectral matching, a framework is designed which is capable of
matching iris images in different lighting conditions. The first method is designed
to work with registered iris images where the key idea is to synthesize
the corresponding Near Infra-Red (NIR) images from the Visible Light (VL)
images using an Artificial Neural Network (ANN) while the second method
is capable of working with unregistered iris images based on integrating the
Gabor filter with different photometric normalization models and descriptors
along with decision level fusion to achieve the cross-spectral matching. A
significant improvement by 79.3% in cross-spectral matching performance is
attained for the UTIRIS database. As for the PolyU database, the proposed
verification method achieved an improvement by 83.9% in terms of NIR vs
Red channel matching which confirms the efficiency of the proposed method.
In summary, the most important open issues in exploiting the iris biometric
are presented and novel methods to address these problems are proposed.
Hence, this work will help to establish a more robust iris recognition system
due to the development of an accurate segmentation method working for iris
images taken under both the VL and NIR. In addition, the proposed protection
scheme paves the way for a secure iris images and templates storage.
Moreover, the proposed framework for cross-spectral matching will help to
employ the iris biometric in several security applications such as surveillance
at-a-distance and automated watch-list identification.
Ministry of Higher Education and Scientific Research in Iraq
Data-Driven Segmentation of Post-mortem Iris Images
This paper presents a method for segmenting iris images obtained from the
deceased subjects, by training a deep convolutional neural network (DCNN)
designed for the purpose of semantic segmentation. Post-mortem iris recognition
has recently emerged as an alternative, or additional, method useful in
forensic analysis. At the same time it poses many new challenges from the
technological standpoint, one of them being the image segmentation stage, which
has proven difficult to be reliably executed by conventional iris recognition
methods. Our approach is based on the SegNet architecture, fine-tuned with
1,300 manually segmented post-mortem iris images taken from the
Warsaw-BioBase-Post-Mortem-Iris v1.0 database. The experiments presented in
this paper show that this data-driven solution is able to learn specific
deformations present in post-mortem samples, which are missing from alive
irises, and offers a considerable improvement over the state-of-the-art,
conventional segmentation algorithm (OSIRIS): the Intersection over Union (IoU)
metric was improved from 73.6% (for OSIRIS) to 83% (for DCNN-based presented in
this paper) averaged over subject-disjoint, multiple splits of the data into
train and test subsets. This paper offers the first known to us method of
automatic processing of post-mortem iris images. We offer source codes with the
trained DCNN that perform end-to-end segmentation of post-mortem iris images,
as described in this paper. Also, we offer binary masks corresponding to manual
segmentation of samples from Warsaw-BioBase-Post-Mortem-Iris v1.0 database to
facilitate development of alternative methods for post-mortem iris
segmentation.
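The Intersection over Union metric reported above is straightforward to compute for a pair of binary masks:

```python
import numpy as np

def iou(pred, gt):
    """Intersection over Union between two binary segmentation masks."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:               # both masks empty: define IoU as 1
        return 1.0
    return np.logical_and(pred, gt).sum() / union
```

Averaging this value over subject-disjoint test splits, as the paper does, avoids inflating the score with per-subject appearance cues.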