Combining 3D and 2D for less constrained periocular recognition
Periocular recognition has recently become an active
topic in biometrics. Typically it uses 2D image data of
the periocular region. This paper is the first description of combining 3D shape structure with 2D texture. A simple and effective technique using iterative closest point (ICP) was applied for 3D periocular region matching. It proved its strength for relatively unconstrained eye-region capture and does not require any training. Local binary patterns (LBP) were applied for 2D image-based periocular matching. The two modalities were combined at the score level. This approach was evaluated using the Bosphorus 3D face database, which contains large variations in facial expressions, head poses and occlusions. The rank-1 accuracy achieved from the 3D data (80%) was better than that for 2D (58%), and the best accuracy (83%) was achieved by fusing the two types of data. This suggests that significant improvements to periocular recognition systems could be achieved using the 3D structure information that is now available from small and inexpensive sensors.
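The abstract highlights that point-to-point ICP needs no training: it alternates nearest-neighbour matching with a closed-form rigid alignment, and the residual error after convergence can serve as a dissimilarity score between two 3D periocular scans. The following is a minimal sketch of that idea, not the paper's implementation; the function name and parameters are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_score(src, dst, iters=50):
    """Toy point-to-point ICP: repeatedly match each source point to its
    nearest destination point, then solve the best rigid transform via the
    Kabsch/SVD method. Returns the final RMSE, usable as a dissimilarity
    score between two 3D point clouds (lower = more similar)."""
    src = np.asarray(src, dtype=float).copy()
    dst = np.asarray(dst, dtype=float)
    tree = cKDTree(dst)
    for _ in range(iters):
        _, idx = tree.query(src)          # nearest-neighbour correspondences
        matched = dst[idx]
        mu_s, mu_d = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T                    # optimal rotation (Kabsch)
        if np.linalg.det(R) < 0:          # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        src = src @ R.T + t               # apply rigid transform to all points
    _, idx = tree.query(src)
    return float(np.sqrt(((src - dst[idx]) ** 2).sum(1).mean()))
```

Because only a rigid transform is estimated, the method is robust to pose variation in unconstrained capture; expression changes that deform the eye region would instead inflate the residual score.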
Periocular Biometrics: A Modality for Unconstrained Scenarios
Periocular refers to the region of the face that surrounds the eye socket.
This is a feature-rich area that can be used by itself to determine the
identity of an individual. It is especially useful when the iris or the face
cannot be reliably acquired. This can be the case in unconstrained or
uncooperative scenarios, where the face may appear partially occluded or the
subject-to-camera distance may be large. The modality has also received revived
attention during the pandemic due to masked faces, which leave the ocular
region as the only visible facial area, even in controlled scenarios. This paper
discusses the state-of-the-art of periocular biometrics, giving an overall
framework of its most significant research aspects.
Log-Likelihood Score Level Fusion for Improved Cross-Sensor Smartphone Periocular Recognition
The proliferation of cameras and personal devices results in a wide
variability of imaging conditions, producing large intra-class variations and a
significant performance drop when images from heterogeneous environments are
compared. However, many applications routinely need to handle data from
different sources and must therefore overcome these interoperability problems.
Here, we employ fusion of several comparators to improve periocular performance
when images from different smartphones are compared. We use a probabilistic
fusion framework based on linear logistic regression, in which fused scores
tend to be log-likelihood ratios, obtaining a reduction in cross-sensor EER of
up to 40% due to the fusion. Our framework also provides an elegant and simple
solution to handle signals from different devices, since same-sensor and
cross-sensor score distributions are aligned and mapped to a common
probabilistic domain. This allows the use of Bayes thresholds for optimal
decision-making and eliminating the need for sensor-specific thresholds, which
is essential in operational conditions because the threshold setting critically
determines the accuracy of the authentication process in many applications.
Comment: Published at Proc. 25th European Signal Processing Conference,
EUSIPCO 2017. arXiv admin note: text overlap with arXiv:1902.0812
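The fusion scheme described above can be sketched as follows: train a linear logistic regression on stacked comparator scores, then subtract the prior log-odds from the logit of the posterior so that the fused score behaves as a log-likelihood ratio, letting a single Bayes threshold (0 at equal priors and costs) replace sensor-specific thresholds. This is a toy illustration with synthetic score distributions, not the paper's data or exact calibration pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic scores from two hypothetical periocular comparators:
# genuine pairs score higher on average than impostor pairs.
rng = np.random.default_rng(1)
n = 500
genuine  = np.column_stack([rng.normal(2.0, 1.0, n), rng.normal(1.5, 1.0, n)])
impostor = np.column_stack([rng.normal(0.0, 1.0, n), rng.normal(0.0, 1.0, n)])
X = np.vstack([genuine, impostor])
y = np.r_[np.ones(n), np.zeros(n)]

# Linear logistic regression over the stacked comparator scores.
clf = LogisticRegression().fit(X, y)
prior_logodds = np.log(y.mean() / (1.0 - y.mean()))  # 0 for balanced classes

def fused_llr(scores):
    """Map raw comparator scores to a fused log-likelihood ratio:
    logit of the posterior minus the prior log-odds."""
    p = clf.predict_proba(np.atleast_2d(scores))[:, 1]
    return np.log(p / (1.0 - p)) - prior_logodds

# Bayes-optimal decision at equal priors and costs: accept if LLR > 0.
```

Because the calibrated outputs from same-sensor and cross-sensor comparisons live in the same probabilistic domain, the same LLR threshold applies regardless of which devices produced the images.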