Octuplet Loss: Make Face Recognition Robust to Image Resolution
Image resolution, and image quality in general, plays an essential role in
the performance of today's face recognition systems. To address the degradation
caused by low-resolution inputs, we propose the octuplet loss, a novel
extension of the popular triplet loss that improves robustness against image
resolution via fine-tuning of existing face recognition models. With the
octuplet loss, we leverage the relationship between
high-resolution images and their synthetically down-sampled variants jointly
with their identity labels. Fine-tuning several state-of-the-art approaches
with our method proves that we can significantly boost performance for
cross-resolution (high-to-low resolution) face verification on various datasets
without meaningfully exacerbating the performance on high-to-high resolution
images. Our method applied on the FaceTransformer network achieves 95.12% face
verification accuracy on the challenging XQLFW dataset while reaching 99.73% on
the LFW database. Moreover, the low-to-low face verification accuracy benefits
from our method. We release our code to allow seamless integration of the
octuplet loss into existing frameworks.
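To make the idea concrete, here is a minimal pure-Python sketch of a triplet loss extended across high- and low-resolution embeddings. The pairing scheme, the margin value, and all function names are assumptions for illustration, not the authors' implementation:

```python
import math

def dist(a, b):
    # Euclidean distance between two embedding vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet(anchor, pos, neg, margin=0.5):
    # standard triplet loss: the positive should be closer to the
    # anchor than the negative by at least `margin`
    return max(0.0, dist(anchor, pos) - dist(anchor, neg) + margin)

def octuplet_loss(hr_a, hr_p, hr_n, lr_a, lr_p, lr_n, margin=0.5):
    # Hypothetical sketch: sum triplet terms over the four HR/LR
    # pairings of anchor vs. positive/negative, so embeddings of the
    # same identity are pulled together across resolutions while
    # different identities are pushed apart.
    return (triplet(hr_a, hr_p, hr_n, margin)    # HR anchor, HR pair
            + triplet(hr_a, lr_p, lr_n, margin)  # HR anchor, LR pair
            + triplet(lr_a, hr_p, hr_n, margin)  # LR anchor, HR pair
            + triplet(lr_a, lr_p, lr_n, margin)) # LR anchor, LR pair
```

With well-separated identities all four terms vanish, so fine-tuning only receives gradient where resolution mixing breaks the margin.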
Joint-SRVDNet: Joint Super Resolution and Vehicle Detection Network
In many domestic and military applications, aerial vehicle detection and
super-resolution algorithms are frequently developed and applied independently.
However, aerial vehicle detection on super-resolved images remains a
challenging task due to the lack of discriminative information in the
super-resolved images. To address this problem, we propose a Joint
Super-Resolution and Vehicle Detection Network (Joint-SRVDNet) that tries to
generate discriminative, high-resolution images of vehicles from low-resolution
aerial images. First, aerial images are up-scaled by a factor of 4x using a
Multi-scale Generative Adversarial Network (MsGAN), which has multiple
intermediate outputs with increasing resolutions. Second, a detector is trained
on super-resolved images that are upscaled by a factor of 4x using the MsGAN
architecture, and finally, the detection loss is minimized jointly with the
super-resolution loss to encourage the target detector to be sensitive to the
subsequent super-resolution training. The network jointly learns hierarchical
and discriminative features of targets and produces optimal super-resolution
results. We perform both quantitative and qualitative evaluation of our
proposed network on the VEDAI, xView and DOTA datasets. The experimental
results show that our proposed framework achieves better visual quality than
the state-of-the-art methods for aerial super-resolution with a 4x up-scaling
factor and improves the accuracy of aerial vehicle detection.
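The joint objective described above can be sketched as a weighted sum of the multi-scale super-resolution losses and the detection loss. The function name and the weighting hyperparameter below are assumptions for illustration, not the paper's exact formulation:

```python
def joint_srvd_loss(sr_losses, det_loss, det_weight=1.0):
    """Hypothetical joint objective: the sum of super-resolution losses
    from the MsGAN's intermediate multi-scale outputs, plus a weighted
    detection loss so the SR generator is steered toward reconstructions
    that the detector finds discriminative. `det_weight` is an assumed
    balancing hyperparameter."""
    return sum(sr_losses) + det_weight * det_loss
```

Because both terms are minimized together, gradients from the detector flow back into the super-resolution network rather than treating the two stages independently.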
Multi Scale Identity-Preserving Image-to-Image Translation Network for Low-Resolution Face Recognition
State-of-the-art deep neural network models have reached near perfect face
recognition accuracy rates on controlled high-resolution face images. However,
their performance is drastically degraded when they are tested with very
low-resolution face images. This is particularly critical in surveillance
systems, where a low-resolution probe image is to be matched with
high-resolution gallery images. Super-resolution techniques aim at producing
high-resolution face images from low-resolution counterparts. While they are
capable of reconstructing images that are visually appealing, the
identity-related information is not preserved. Here, we propose an
identity-preserving end-to-end image-to-image translation deep neural network
which is capable of super-resolving very low-resolution faces to their
high-resolution counterparts while preserving identity-related information. We
achieved this by training a very deep convolutional encoder-decoder network
with a symmetric contracting path between corresponding layers. This network
was trained with a combination of a reconstruction and an identity-preserving
loss, on multi-scale low-resolution conditions. Extensive quantitative
evaluations of our proposed model demonstrated that it outperforms competing
super-resolution and low-resolution face recognition methods on natural and
artificial low-resolution face datasets, and even on unseen identities.
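The combined training objective (reconstruction plus identity preservation) can be illustrated with a minimal pure-Python sketch. The function names, the use of mean squared error, and the weighting factor are assumptions; the embedding network that would produce `emb_pred`/`emb_target` is not shown:

```python
def reconstruction_loss(pred, target):
    # mean squared error between super-resolved and ground-truth pixels
    n = len(pred)
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / n

def identity_loss(emb_pred, emb_target):
    # squared distance between face embeddings of the two images;
    # a pretrained recognition network would supply these embeddings
    return sum((p - t) ** 2 for p, t in zip(emb_pred, emb_target))

def total_loss(pred, target, emb_pred, emb_target, id_weight=0.1):
    # Hypothetical combination: the reconstruction term keeps the output
    # visually faithful, the identity term keeps it recognizable.
    return (reconstruction_loss(pred, target)
            + id_weight * identity_loss(emb_pred, emb_target))
```

The identity term is what distinguishes this setup from plain super-resolution: a visually plausible face that drifts to a different identity is still penalized.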
A Survey of Deep Face Restoration: Denoise, Super-Resolution, Deblur, Artifact Removal
Face Restoration (FR) aims to restore High-Quality (HQ) faces from
Low-Quality (LQ) input images, which is a domain-specific image restoration
problem in the low-level computer vision area. Early face restoration
methods mainly relied on statistical priors and degradation models, which
struggle to meet the requirements of real-world applications. In recent
years, face restoration has witnessed great progress after stepping into the
deep learning era. However, few works have systematically studied deep
learning-based face restoration methods. Thus, this paper comprehensively
surveys recent advances in deep learning techniques for face restoration.
Specifically, we first summarize different problem formulations and analyze the
characteristics of face images. Second, we discuss the challenges of face
restoration. Concerning these challenges, we present a comprehensive review of
existing FR methods, including prior-based methods and deep learning-based
methods. Then, we explore developed techniques in the task of FR covering
network architectures, loss functions, and benchmark datasets. We also conduct
a systematic benchmark evaluation on representative methods. Finally, we
discuss future directions, including network designs, metrics, benchmark
datasets, applications, etc. We also provide an open-source repository for all
the discussed methods, which is available at
https://github.com/TaoWangzj/Awesome-Face-Restoration.
Comment: 21 pages, 19 figures
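The degradation models that early restoration methods relied on typically compose blur, downsampling, and noise. A toy 1D sketch of such a pipeline (the kernel, the factor, and all names here are illustrative assumptions, not any specific paper's model):

```python
def blur_1d(signal, k=3):
    # simple moving-average blur, a stand-in for a real blur kernel
    half = k // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

def downsample(signal, factor=2):
    # keep every `factor`-th sample
    return signal[::factor]

def degrade(hq, noise=None):
    # classical degradation pipeline: LQ = downsample(blur(HQ)) + noise;
    # noise is supplied as an explicit list so the sketch is deterministic
    lq = downsample(blur_1d(hq))
    if noise:
        lq = [x + n for x, n in zip(lq, noise)]
    return lq
```

Restoration then amounts to inverting this pipeline; the survey's point is that hand-crafted inverses of such simple models rarely match real-world degradations, which motivates the learned methods it reviews.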