Optimal Iris Fuzzy Sketches
Fuzzy sketches, introduced as a link between biometry and cryptography, are a
way of handling biometric data matching as an error correction issue. We focus
here on iris biometrics and look for the best error-correcting code in that
respect. We show that two-dimensional iterative min-sum decoding leads to
results near the theoretical limits. In particular, we evaluate our
techniques on the Iris Challenge Evaluation (ICE) database and validate our
findings.
Comment: 9 pages. Submitted to the IEEE Conference on Biometrics: Theory,
Applications and Systems, 2007, Washington D.C.
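The fuzzy-sketch idea of treating biometric matching as error correction can be illustrated with a minimal fuzzy-commitment sketch. This is a hedged stand-in, not the paper's two-dimensional iterative min-sum decoder: a 3x repetition code plays the role of the error-correcting code, and all bit patterns are toy values.

```python
# Minimal fuzzy-commitment sketch: a secret is encoded, XOR-masked with an
# iris template, and later recovered from a noisy fresh reading by XOR plus
# error correction. A 3x repetition code stands in for the paper's code.

def encode(bits):
    """Encode each bit with a 3x repetition code."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(codeword):
    """Majority-vote decode of the 3x repetition code."""
    return [1 if sum(codeword[i:i + 3]) >= 2 else 0
            for i in range(0, len(codeword), 3)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def make_sketch(iris_bits, secret_bits):
    """Sketch = codeword(secret) XOR iris template; reveals neither alone."""
    return xor(encode(secret_bits), iris_bits)

def recover(sketch, noisy_iris_bits):
    """XOR a fresh (noisy) reading with the sketch, then error-correct."""
    return decode(xor(sketch, noisy_iris_bits))

secret = [1, 0, 1, 1]
iris = [0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0]
sketch = make_sketch(iris, secret)

# A fresh reading with one flipped bit still recovers the secret.
noisy = iris[:]
noisy[4] ^= 1
assert recover(sketch, noisy) == secret
```

The decoder's correction capacity (here, one flip per 3-bit group) is what bounds the tolerable biometric noise; the paper's contribution is choosing a code whose capacity matches the iris error statistics.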
Personal Authentication System Based Iris Recognition with Digital Signature Technology
Authentication based on biometrics is used to control physical access to high-security institutions. With the rapid rise of information-system technologies, biometrics are now also used in applications for accessing databases and commercial workflow systems. These applications must implement measures to counter security threats, and many developers are exploring novel authentication techniques to prevent such attacks. The most difficult problem, however, is how to protect biometric data while maintaining the practical performance of identity-verification systems. This paper presents a biometrics-based personal authentication system that combines a smart card, a Public Key Infrastructure (PKI), and iris-verification technology. A Raspberry Pi 4 Model B+ with an IR camera serves as the core hardware. For application development, we designed an image-processing pipeline in OpenCV/Python, using the Keras and scikit-learn libraries for feature extraction and recognition. After training, the implemented system achieves accuracies of 97% and 100% on the left and right (NTU) iris datasets, respectively. Person verification based on the iris feature is then performed to verify the claimed identity and examine the system's authentication. For the NTU iris dataset, key generation, signing, and verification take 5.17 s, 0.288 s, and 0.056 s, respectively. This work offers a realistic architecture for implementing identity-based cryptography with biometrics using the RSA algorithm.
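The sign-and-verify step described above can be illustrated with a toy example: textbook RSA with tiny primes over a hash of an iris feature code. This is a hedged sketch only; the primes, exponent, and the `feature_code` value are illustrative, and a real deployment would use a vetted cryptographic library with 2048-bit keys and proper padding.

```python
# Toy RSA sign/verify over a hashed iris feature code (illustration only).
import hashlib

# Toy key generation (primes far too small for any real use).
p, q = 61, 53
n = p * q                       # modulus
phi = (p - 1) * (q - 1)
e = 17                          # public exponent
d = pow(e, -1, phi)             # private exponent (Python 3.8+)

def digest(data: bytes) -> int:
    # Reduce the hash mod n so the toy modulus can sign it.
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def sign(data: bytes) -> int:
    """Private-key operation on the message digest."""
    return pow(digest(data), d, n)

def verify(data: bytes, sig: int) -> bool:
    """Public-key check: undoing the signature must recover the digest."""
    return pow(sig, e, n) == digest(data)

feature_code = b"iris-feature-code"   # stand-in for an extracted template
sig = sign(feature_code)
assert verify(feature_code, sig)
# Any altered signature fails, since RSA is a permutation on 0..n-1.
assert not verify(feature_code, (sig + 1) % n)
```

In the paper's architecture the private key would live on the smart card and signing would be gated by a successful iris match, so the biometric never has to leave the device in raw form.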
Pigment Melanin: Pattern for Iris Recognition
Recognition of iris based on Visible Light (VL) imaging is a difficult
problem because of the light reflection from the cornea. Nonetheless, pigment
melanin provides a rich feature source in VL, unavailable in Near-Infrared
(NIR) imaging. This is due to biological spectroscopy of eumelanin, a chemical
not stimulated in NIR. In this case, a plausible solution to observe such
patterns may be provided by an adaptive procedure using a variational technique
on the image histogram. To describe the patterns, a shape analysis method is
used to derive feature-code for each subject. An important question is how much
the melanin patterns, extracted from VL, are independent of iris texture in
NIR. With this question in mind, the present investigation proposes fusion of
features extracted from NIR and VL to boost the recognition performance. We
have collected our own database (UTIRIS) consisting of both NIR and VL images
of 158 eyes of 79 individuals. This investigation demonstrates that the
proposed algorithm is highly sensitive to the patterns of chromophores and
improves the iris recognition rate.
Comment: To be published in the Special Issue on Biometrics, IEEE
Transactions on Instrumentation and Measurement, Volume 59, Issue 4, April
2010.
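The fusion of NIR and VL features proposed above can be sketched at the score level. This is a hedged stand-in for the paper's feature-level fusion: a weighted sum of fractional Hamming distances between binary codes, with all codes and the weight chosen purely for illustration.

```python
# Score-level fusion sketch: combine NIR and VL matching distances.

def hamming(a, b):
    """Fractional Hamming distance between two equal-length binary codes."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def fused_score(nir_probe, nir_gallery, vl_probe, vl_gallery, w_nir=0.6):
    """Weighted sum of per-modality distances; lower means a better match."""
    return (w_nir * hamming(nir_probe, nir_gallery)
            + (1 - w_nir) * hamming(vl_probe, vl_gallery))

# Enrolled (gallery) codes for one eye, per modality.
nir_g = [1, 0, 1, 1, 0, 0, 1, 0]
vl_g  = [0, 0, 1, 0, 1, 1, 0, 1]

# Genuine probe: one noisy bit in each modality.
nir_p = [1, 0, 1, 1, 0, 1, 1, 0]
vl_p  = [0, 0, 1, 0, 1, 1, 1, 1]

genuine = fused_score(nir_p, nir_g, vl_p, vl_g)
impostor = fused_score([0] * 8, nir_g, [1] * 8, vl_g)
assert genuine < impostor   # fusion separates genuine from impostor scores
```

If the melanin patterns in VL are indeed largely independent of the NIR texture, the two distances carry complementary evidence, which is why fusing them can push the recognition rate above either modality alone.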
Deep Neural Network and Data Augmentation Methodology for off-axis iris segmentation in wearable headsets
A data augmentation methodology is presented and applied to generate a large
dataset of off-axis iris regions and train a low-complexity deep neural
network. Although of low complexity, the resulting network achieves a high
level
of accuracy in iris region segmentation for challenging off-axis eye-patches.
Interestingly, this network is also shown to achieve high levels of performance
for regular, frontal, segmentation of iris regions, comparing favorably with
state-of-the-art techniques of significantly higher complexity. Due to its
lower complexity, this network is well suited for deployment in embedded
applications such as augmented and mixed reality headsets
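The augmentation idea can be sketched as follows. This is a hedged, simplified stand-in for the paper's methodology: synthesizing off-axis-looking eye patches from a frontal one via horizontal compression, a crude approximation of the foreshortening a tilted camera introduces; the patch size and compression factors are arbitrary.

```python
# Crude off-axis augmentation: horizontally compress a frontal eye patch
# and zero-pad back to the original width.
import numpy as np

def squash_horizontal(patch, factor):
    """Compress `patch` horizontally by `factor` (0 < factor <= 1),
    centering the result and zero-padding to the original width."""
    h, w = patch.shape
    new_w = max(1, int(w * factor))
    # Nearest-neighbor column resampling.
    cols = (np.arange(new_w) / factor).astype(int).clip(0, w - 1)
    squashed = patch[:, cols]
    out = np.zeros_like(patch)
    left = (w - new_w) // 2
    out[:, left:left + new_w] = squashed
    return out

rng = np.random.default_rng(0)
frontal = rng.random((32, 32))          # stand-in for a real eye patch
augmented = [squash_horizontal(frontal, f) for f in (1.0, 0.8, 0.6)]

assert np.allclose(augmented[0], frontal)   # factor 1.0 is the identity
assert (augmented[2] == 0).any()            # padding appears at the edges
```

Training on many such synthesized viewpoints is what lets a low-complexity segmentation network generalize to off-axis eye patches it never saw frontally.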
Data granulation by the principles of uncertainty
Research in granular modeling has produced a variety of mathematical models,
such as intervals, (higher-order) fuzzy sets, rough sets, and shadowed sets,
which are all suitable to characterize the so-called information granules.
Modeling of the input data uncertainty is recognized as a crucial aspect in
information granulation. Moreover, the uncertainty is a well-studied concept in
many mathematical settings, such as those of probability theory, fuzzy set
theory, and possibility theory. This fact suggests that an appropriate
quantification of the uncertainty expressed by the information granule model
could be used to define an invariant property, to be exploited in practical
situations of information granulation. In this perspective, a procedure of
information granulation is effective if the uncertainty conveyed by the
synthesized information granule is in a monotonically increasing relation with
the uncertainty of the input data. In this paper, we present a data granulation
framework that elaborates over the principles of uncertainty introduced by
Klir. Since uncertainty is a mesoscopic descriptor of systems and data, it is
possible to apply such principles regardless of the input data type and the
specific mathematical setting adopted for the information granules. The
proposed framework is conceived (i) to offer a guideline for the synthesis of
information granules and (ii) to build a groundwork for comparing and
quantitatively judging different data granulation procedures. To provide a
suitable case study, we introduce a new data granulation technique based on the
minimum sum of distances, which is designed to generate type-2 fuzzy sets. We
analyze the procedure by performing different experiments on two distinct data
types: feature vectors and labeled graphs. Results show that the uncertainty of
the input data is suitably conveyed by the generated type-2 fuzzy set models.
Comment: 16 pages, 9 figures, 52 references.
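The framework's monotonicity requirement can be illustrated with a much simpler granule than the paper's type-2 fuzzy sets. In this hedged sketch, 1-D samples are granulated into an interval built around the minimum-sum-of-distances point (the medoid), and the granule's uncertainty, taken as interval width, is checked to grow with the spread of the input data; the coverage fraction and sample values are arbitrary.

```python
# Interval granulation around the min-sum-of-distances point, with
# interval width as a simple nonspecificity (uncertainty) measure.

def granulate(samples, coverage=0.8):
    """Tightest symmetric interval around the medoid covering
    `coverage` of the samples."""
    medoid = min(samples, key=lambda c: sum(abs(c - x) for x in samples))
    dists = sorted(abs(x - medoid) for x in samples)
    r = dists[int(coverage * (len(samples) - 1))]
    return (medoid - r, medoid + r)

def uncertainty(granule):
    """Interval width: wider granule, less specific description."""
    lo, hi = granule
    return hi - lo

tight = [4.9, 5.0, 5.0, 5.1, 5.2]   # low-spread input data
loose = [3.0, 4.0, 5.0, 6.0, 7.0]   # high-spread input data

# The granule's uncertainty increases with the input data's uncertainty,
# which is the monotonicity property the framework asks for.
assert uncertainty(granulate(tight)) < uncertainty(granulate(loose))
```

The paper performs the analogous check with type-2 fuzzy-set granules and suitable uncertainty measures, on both feature vectors and labeled graphs.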