Polarimetric Thermal to Visible Face Verification via Self-Attention Guided Synthesis
Polarimetric thermal to visible face verification entails matching two images
that contain significant domain differences. Several recent approaches have
attempted to synthesize visible faces from thermal images for cross-modal
matching. In this paper, we take a different approach in which rather than
focusing only on synthesizing visible faces from thermal faces, we also propose
to synthesize thermal faces from visible faces. Our intuition is based on the
fact that thermal images also contain some discriminative information about the
person for verification. Deep features from a pre-trained Convolutional Neural
Network (CNN) are extracted from the original as well as the synthesized
images. These features are then fused to generate a template which is then used
for verification. The proposed synthesis network is based on the self-attention
generative adversarial network (SAGAN) which essentially allows efficient
attention-guided image synthesis. Extensive experiments on the ARL polarimetric
thermal face dataset demonstrate that the proposed method achieves
state-of-the-art performance.

Comment: This work is accepted at the 12th IAPR International Conference on
Biometrics (ICB 2019).
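The fusion-and-verification step described above can be sketched in a few lines. This is an illustrative sketch only: the paper extracts deep CNN features from the original and synthesized faces; here the feature vectors, their dimension, the averaging fusion rule, and the cosine threshold are all assumptions, not the authors' exact pipeline.

```python
import numpy as np

def l2_normalize(v):
    return v / (np.linalg.norm(v) + 1e-12)

def build_template(features):
    """Fuse per-image deep features into one template by normalising
    each vector and averaging (one common fusion choice)."""
    fused = np.mean([l2_normalize(f) for f in features], axis=0)
    return l2_normalize(fused)

def verify(template_a, template_b, threshold=0.5):
    """Cosine-similarity verification between two fused templates."""
    score = float(np.dot(template_a, template_b))
    return score, score >= threshold

# Toy demo: four correlated vectors stand in for the deep features of one
# subject's visible, thermal, and synthesized images; a second subject is random.
rng = np.random.default_rng(0)
base = rng.normal(size=128)
subj = [base + 0.1 * rng.normal(size=128) for _ in range(4)]
other = [rng.normal(size=128) for _ in range(4)]

t1, t2 = build_template(subj[:2]), build_template(subj[2:])
t3 = build_template(other)
score_same, _ = verify(t1, t2)
score_diff, _ = verify(t1, t3)
print(round(score_same, 3), round(score_diff, 3))
```

A genuine pair built from correlated features scores far higher than an impostor pair, which is the property the fused template needs for verification.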
TV-GAN: Generative Adversarial Network Based Thermal to Visible Face Recognition
This work tackles the face recognition task on images captured using thermal
camera sensors which can operate in the non-light environment. While it can
greatly increase the scope and benefits of the current security surveillance
systems, performing such a task using thermal images is a challenging problem
compared to face recognition task in the Visible Light Domain (VLD). This is
partly due to the much smaller amount of thermal imagery data collected
compared to the VLD data. Unfortunately, direct application of the existing
very strong face recognition models trained using VLD data into the thermal
imagery data will not produce a satisfactory performance. This is due to the
existence of the domain gap between the thermal and VLD images. To this end, we
propose a Thermal-to-Visible Generative Adversarial Network (TV-GAN) that is
able to transform thermal face images into their corresponding VLD images
whilst maintaining identity information sufficient for existing VLD face
recognition models to perform recognition. Some examples are
presented in Figure 1. Unlike the previous methods, our proposed TV-GAN uses an
explicit closed-set face recognition loss to regularize the discriminator
network training. This information is then conveyed to the generator
network in the form of gradients. In the experiment, we show that by using
this additional explicit regularization for the discriminator network, the
TV-GAN is able to preserve more identity information when translating a thermal
image of a person not previously seen by the TV-GAN.
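The discriminator regularization described above can be sketched as a combined objective. This is a minimal numpy sketch under stated assumptions: the closed-set identity term is a plain softmax cross-entropy, the adversarial term is the standard sigmoid form, and the weighting factor `lam` is hypothetical; none of these is necessarily the paper's exact formulation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def identity_ce_loss(logits, labels):
    """Closed-set cross-entropy over the known training identities."""
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def adversarial_loss(real_scores, fake_scores):
    """Standard GAN discriminator loss (sigmoid cross-entropy form)."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    return (-np.mean(np.log(sigmoid(real_scores) + 1e-12))
            - np.mean(np.log(1.0 - sigmoid(fake_scores) + 1e-12)))

def discriminator_loss(real_scores, fake_scores, id_logits, id_labels, lam=1.0):
    # The identity term regularises D; its gradients reach the generator
    # through the fake samples during the generator's update.
    return (adversarial_loss(real_scores, fake_scores)
            + lam * identity_ce_loss(id_logits, id_labels))

# Toy demo with random scores/logits for a batch of 8 over 10 identities.
rng = np.random.default_rng(1)
loss = discriminator_loss(rng.normal(size=8), rng.normal(size=8),
                          rng.normal(size=(8, 10)),
                          rng.integers(0, 10, size=8))
print(loss)
```

The point of the extra term is that the discriminator must stay identity-discriminative, so the gradients it passes back to the generator penalize identity-destroying translations.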
Removal of Spectro-Polarimetric Fringes by 2D Pattern Recognition
We present a pattern-recognition based approach to the problem of removal of
polarized fringes from spectro-polarimetric data. We demonstrate that 2D
Principal Component Analysis can be trained on a given spectro-polarimetric map
in order to identify and isolate fringe structures from the spectra. This
allows us in principle to reconstruct the data without the fringe component,
providing an effective and clean solution to the problem. The results presented
in this paper point in the direction of revising the way that science and
calibration data should be planned for a typical spectro-polarimetric observing
run.

Comment: ApJ, in press.
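The core idea above, identifying fringe structure as dominant low-rank components of the 2D map and reconstructing the data without them, can be sketched with an SVD. This is illustrative only: which modes count as "fringes" is an assumption here (the paper trains 2D PCA on the map itself to identify them), and the synthetic data below stands in for a real spectro-polarimetric map.

```python
import numpy as np

def remove_fringe_components(data, n_fringe):
    """Reconstruct the 2D map with the n_fringe leading SVD modes removed."""
    U, s, Vt = np.linalg.svd(data, full_matrices=False)
    s_clean = s.copy()
    s_clean[:n_fringe] = 0.0        # zero out the assumed fringe modes
    return (U * s_clean) @ Vt

# Synthetic map: a weak smooth spectral profile plus a strong quasi-periodic
# fringe pattern that is coherent along the spatial axis.
ny, nx = 64, 128
y = np.linspace(0, 1, ny)
x = np.linspace(0, 1, nx)
profile = np.exp(-((y - 0.5) ** 2) / 0.02)
signal = 0.1 * np.outer(profile, np.ones(nx))
fringe = 2.0 * np.outer(np.ones(ny), np.sin(2 * np.pi * 15 * x))

cleaned = remove_fringe_components(signal + fringe, n_fringe=1)
residual = np.abs(cleaned - signal).mean()
print(residual)
```

Because the fringe is spatially coherent, it concentrates into the leading mode, and subtracting that mode leaves the underlying signal largely intact.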
Cross-Domain Identification for Thermal-to-Visible Face Recognition
Recent advances in domain adaptation, especially those applied to
heterogeneous facial recognition, typically rely upon restrictive Euclidean
loss functions (e.g., the L2 norm), which perform best when images from two
different domains (e.g., visible and thermal) are co-registered and temporally
synchronized. This paper proposes a novel domain adaptation framework that
combines a new feature mapping sub-network with existing deep feature models,
which are based on modified network architectures (e.g., VGG16 or Resnet50).
This framework is optimized by introducing new cross-domain identity and domain
invariance loss functions for thermal-to-visible face recognition, which
alleviates the requirement for precisely co-registered and synchronized
imagery. We provide extensive analysis of both features and loss functions
used, and compare the proposed domain adaptation framework with
state-of-the-art feature based domain adaptation models on a difficult dataset
containing facial imagery collected at varying ranges, poses, and expressions.
Moreover, we analyze the viability of the proposed framework for more
challenging tasks, such as non-frontal thermal-to-visible face recognition.
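The two loss terms named above can be sketched on feature embeddings rather than co-registered pixels, which is exactly why the registration requirement relaxes. The function names and exact forms below are illustrative assumptions, not the paper's definitions: the identity term pulls each thermal embedding toward the visible centroid of its identity, and the invariance term matches the two domains' mean embeddings.

```python
import numpy as np

def cross_domain_identity_loss(thermal_feats, visible_feats, labels):
    """Pull each thermal feature toward the visible centroid of its
    identity; only identity labels, not pixel alignment, are shared."""
    ids = np.unique(labels)
    loss = 0.0
    for lbl in ids:
        centroid = visible_feats[labels == lbl].mean(axis=0)
        diffs = thermal_feats[labels == lbl] - centroid
        loss += np.mean(np.sum(diffs ** 2, axis=1))
    return loss / len(ids)

def domain_invariance_loss(thermal_feats, visible_feats):
    """Penalise the gap between domain mean embeddings
    (a simple first-moment matching term)."""
    gap = thermal_feats.mean(axis=0) - visible_feats.mean(axis=0)
    return float(np.sum(gap ** 2))

# Toy demo: 4 identities, 5 images each, 64-dim embeddings. A well-mapped
# thermal embedding sits near its visible counterpart; an unmapped one is random.
rng = np.random.default_rng(2)
labels = np.repeat(np.arange(4), 5)
vis = rng.normal(size=(20, 64)) + labels[:, None]
th_mapped = vis + 0.05 * rng.normal(size=(20, 64))
th_random = rng.normal(size=(20, 64))

good = cross_domain_identity_loss(th_mapped, vis, labels)
bad = cross_domain_identity_loss(th_random, vis, labels)
print(good < bad)
```

A feature mapping trained under such terms is rewarded for identity-consistent, domain-aligned embeddings regardless of whether the image pairs were ever registered or synchronized.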