From Gabor Magnitude to Gabor Phase Features: Tackling the Problem of Face Recognition under Severe Illumination Changes
Among the numerous biometric systems presented in the literature, face recognition systems have received a great deal of attention in recent years. The main driving force in the development of these systems can be found in the enormous potential face recognition technology has in various application domains ranging from access control to human-machine…
UG^2: a Video Benchmark for Assessing the Impact of Image Restoration and Enhancement on Automatic Visual Recognition
Advances in image restoration and enhancement techniques have led to
discussion about how such algorithms can be applied as a pre-processing step to
improve automatic visual recognition. In principle, techniques like deblurring
and super-resolution should yield improvements by de-emphasizing noise and
increasing signal in an input image. But the historically divergent goals of
the computational photography and visual recognition communities have created a
significant need for more work in this direction. To facilitate new research,
we introduce a new benchmark dataset called UG^2, which contains three
difficult real-world scenarios: uncontrolled videos taken by UAVs and manned
gliders, as well as controlled videos taken on the ground. Over 160,000
annotated frames for hundreds of ImageNet classes are available, which are used
for baseline experiments that assess the impact of known and unknown image
artifacts and other conditions on common deep learning-based object
classification approaches. Further, current image restoration and enhancement
techniques are evaluated by determining whether or not they improve baseline
classification performance. Results show that there is plenty of room for
algorithmic innovation, making this dataset a useful tool going forward.
Comment: Supplemental material: https://goo.gl/vVM1xe, Dataset: https://goo.gl/AjA6En, CVPR 2018 Prize Challenge: ug2challenge.or
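The benchmark's core question — does a restoration or enhancement step actually help a downstream recognizer? — can be phrased as a simple before/after accuracy comparison. A minimal sketch follows; `restore` and `classify` are toy stand-ins (not part of UG^2) for an arbitrary enhancement algorithm and a pretrained classifier.

```python
# Hypothetical sketch: measure whether a pre-processing (restoration) step
# helps a classifier. `restore` and `classify` are illustrative placeholders.

def restore(frame):
    # placeholder enhancement: identity transform
    return frame

def classify(frame):
    # placeholder classifier: predicts the label stored with the frame
    return frame["label"]

def accuracy(frames, preprocess=None):
    correct = 0
    for frame in frames:
        x = preprocess(frame) if preprocess else frame
        if classify(x) == frame["label"]:
            correct += 1
    return correct / len(frames)

frames = [{"label": i % 3} for i in range(30)]
baseline = accuracy(frames)
restored = accuracy(frames, preprocess=restore)
improved = restored >= baseline  # the question UG^2 baselines ask per method
```

In the real benchmark, `classify` would be a deep network and `accuracy` would be computed separately per artifact type, so a method's gain or loss can be attributed to specific conditions.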
Synthetic Data for Face Recognition: Current State and Future Prospects
Over the past years, deep learning capabilities and the availability of
large-scale training datasets advanced rapidly, leading to breakthroughs in
face recognition accuracy. However, these technologies are expected to face a
major challenge in the coming years due to legal and ethical concerns about
using authentic biometric data in AI model training and evaluation, along with
the increasing use of data-hungry state-of-the-art deep learning models. With
the recent advances in deep generative models and their success in generating
realistic and high-resolution synthetic image data, privacy-friendly synthetic
data has been recently proposed as an alternative to privacy-sensitive
authentic data to overcome the challenges of using authentic data in face
recognition development. This work aims at providing a clear and structured
picture of the use-cases taxonomy of synthetic face data in face recognition
along with the recent emerging advances of face recognition models developed on
the bases of synthetic data. We also discuss the challenges facing the use of
synthetic data in face recognition development and several future prospects of
synthetic data in the domain of face recognition.
Comment: Accepted at Image and Vision Computing 2023 (IVC 2023)
Fairness in Face Presentation Attack Detection
Face presentation attack detection (PAD) is critical to secure face
recognition (FR) applications from presentation attacks. FR performance has
been shown to be unfair to certain demographic and non-demographic groups.
However, the fairness of face PAD is an understudied issue, mainly due to the
lack of appropriately annotated data. To address this issue, this work first
presents a Combined Attribute Annotated PAD Dataset (CAAD-PAD) by combining
several well-known PAD datasets where we provide seven human-annotated
attribute labels. This work then comprehensively analyses the fairness of a set
of face PADs and its relation to the nature of training data and the
Operational Decision Threshold Assignment (ODTA) on different data groups by
studying four face PAD approaches on our CAAD-PAD. To simultaneously represent
both the PAD fairness and the absolute PAD performance, we introduce a novel
metric, namely the Accuracy Balanced Fairness (ABF). Extensive experiments on
CAAD-PAD show that the training data and ODTA induce unfairness on gender,
occlusion, and other attribute groups. Based on these analyses, we propose a
data augmentation method, FairSWAP, which aims to disrupt the identity/semantic
information and guide models to mine attack cues rather than attribute-related
information. Detailed experimental results demonstrate that FairSWAP generally
enhances both the PAD performance and the fairness of face PAD.
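The abstract describes FairSWAP as disrupting identity/semantic information so models mine attack cues rather than attribute-related information. One plausible mechanism for such disruption — exchanging the same spatial region between two training images — can be sketched as follows; the patch coordinates and the swap granularity here are illustrative assumptions, not the paper's actual recipe.

```python
import numpy as np

# Hypothetical sketch of an identity-disrupting swap in the spirit of FairSWAP:
# exchange the same spatial patch between two images so identity/semantic cues
# are broken while low-level attack artifacts remain visible to the model.

def patch_swap(img_a, img_b, top, left, size):
    """Swap a (size x size) patch between two HxWxC images (returns copies)."""
    a, b = img_a.copy(), img_b.copy()
    patch_a = a[top:top + size, left:left + size].copy()
    a[top:top + size, left:left + size] = b[top:top + size, left:left + size]
    b[top:top + size, left:left + size] = patch_a
    return a, b

rng = np.random.default_rng(0)
x1 = rng.random((64, 64, 3))
x2 = rng.random((64, 64, 3))
y1, y2 = patch_swap(x1, x2, top=16, left=16, size=32)
# content is moved between images, not created or destroyed
conserved = np.isclose((x1 + x2).sum(), (y1 + y2).sum())
```

Because the swap only rearranges existing pixels, labels for attack vs. bona fide samples must be handled carefully when mixing classes; the paper's exact pairing strategy is not reproduced here.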
SeeABLE: Soft Discrepancies and Bounded Contrastive Learning for Exposing Deepfakes
Modern deepfake detectors have achieved encouraging results when training
and test images are drawn from the same collection. However, when applying
these detectors to faces manipulated using an unknown technique, considerable
performance drops are typically observed. In this work, we propose a novel
deepfake detector, called SeeABLE, that formalizes the detection problem as a
(one-class) out-of-distribution detection task and generalizes better to unseen
deepfakes. Specifically, SeeABLE uses a novel data augmentation strategy to
synthesize fine-grained local image anomalies (referred to as
soft-discrepancies) and pushes those pristine disrupted faces towards
predefined prototypes using a novel regression-based bounded contrastive loss.
To strengthen the generalization performance of SeeABLE to unknown deepfake
types, we generate a rich set of soft discrepancies and train the detector: (i)
to localize, which part of the face was modified, and (ii) to identify the
alteration type. In extensive experiments on widely used datasets, SeeABLE
considerably outperforms existing detectors, with gains of up to +10% in
detection accuracy over SoTA methods on the DFDC-preview dataset, while using a
simpler model. Code will be made publicly available.
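The abstract's idea of pushing faces towards predefined prototypes with a regression-based loss can be illustrated with a minimal stand-in: a mean-squared regression of each embedding onto its assigned prototype. This is only one plausible reading; SeeABLE's actual bounded contrastive loss differs in its exact form.

```python
import numpy as np

# Hypothetical sketch of regressing embeddings toward fixed prototypes,
# a simplified stand-in for SeeABLE's bounded contrastive objective.

def prototype_regression_loss(embeddings, prototype_ids, prototypes):
    """Mean squared distance between each embedding and its assigned prototype."""
    diffs = embeddings - prototypes[prototype_ids]
    return float((diffs ** 2).sum(axis=1).mean())

rng = np.random.default_rng(1)
prototypes = np.eye(4)              # 4 fixed prototypes in a 4-d space
ids = np.array([0, 1, 2, 3])        # each sample's target prototype
perfect = prototypes[ids]           # embeddings already at their prototypes
noisy = perfect + 0.1 * rng.standard_normal(perfect.shape)

loss_perfect = prototype_regression_loss(perfect, ids, prototypes)
loss_noisy = prototype_regression_loss(noisy, ids, prototypes)
```

Assigning different discrepancy types to different prototypes is what lets the detector both localize the modified region and identify the alteration type, as the abstract describes.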
Facial landmark localization in depth images using supervised ridge descent
Berk Gökberk (MEF Author)
The Supervised Descent Method (SDM) has proven successful in many computer vision applications such as face alignment, tracking, and camera calibration. Recent studies using SDM achieved state-of-the-art performance on facial landmark localization in depth images [4]. In this study, we propose to use ridge regression instead of least squares regression for learning the SDM, and to change feature sizes in each iteration, effectively turning the landmark search into a coarse-to-fine process. We apply the proposed method to facial landmark localization on the Bosphorus 3D Face Database, using frontal depth images with no occlusion. Experimental results confirm that both ridge regression and adaptive feature sizes improve the localization accuracy considerably.
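In SDM, each cascade stage learns a linear map from image features around the current landmark estimates to the landmark update. The abstract's change — ridge regression instead of ordinary least squares — amounts to adding a regularization term to the normal equations. A sketch of one such training step, with illustrative shapes and regularization strength:

```python
import numpy as np

# Sketch of learning one SDM descent map with ridge regression, as the
# abstract proposes. Phi: per-sample features extracted around the current
# landmark estimates; dX: target landmark updates. Shapes and lam are
# illustrative, not the paper's settings.

def ridge_descent_map(Phi, dX, lam=1.0):
    """Solve (Phi^T Phi + lam I) R = Phi^T dX for the descent map R."""
    d = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(d), Phi.T @ dX)

rng = np.random.default_rng(2)
Phi = rng.standard_normal((200, 16))       # 200 samples, 16-d features
R_true = rng.standard_normal((16, 10))     # 10 landmark-coordinate updates
dX = Phi @ R_true                          # noiseless synthetic targets
R = ridge_descent_map(Phi, dX, lam=1e-6)
dx_pred = Phi @ R                          # predicted updates at test time
```

With `lam = 0` this reduces to ordinary least squares; a positive `lam` stabilizes the solve when features are correlated or high-dimensional, which is the practical motivation for ridge regression here. The coarse-to-fine aspect would correspond to shrinking the feature extraction window at each iteration.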