
    Octuplet Loss: Make Face Recognition Robust to Image Resolution

    Image resolution, or more generally image quality, plays an essential role in the performance of today's face recognition systems. To address this problem, we propose a novel combination of the popular triplet loss to improve robustness against image resolution via fine-tuning of existing face recognition models. With the octuplet loss, we leverage the relationship between high-resolution images and their synthetically down-sampled variants jointly with their identity labels. Fine-tuning several state-of-the-art approaches with our method shows that we can significantly boost performance for cross-resolution (high-to-low resolution) face verification on various datasets without meaningfully degrading performance on high-to-high resolution images. Applied to the FaceTransformer network, our method achieves 95.12% face verification accuracy on the challenging XQLFW dataset while reaching 99.73% on the LFW database. Moreover, low-to-low resolution face verification accuracy also benefits from our method. We release our code to allow seamless integration of the octuplet loss into existing frameworks.
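    The abstract does not spell out the loss formulation, but one plausible reading is a sum of triplet terms over combinations of high- and low-resolution embeddings of the same identities. A minimal sketch under that assumption (all function names and the exact choice of terms are illustrative, not the paper's definition):

```python
import numpy as np

def triplet_loss(a, p, n, margin=0.5):
    # Standard triplet margin loss on L2 distances:
    # pull anchor toward positive, push it away from negative.
    d_ap = np.linalg.norm(a - p)
    d_an = np.linalg.norm(a - n)
    return max(d_ap - d_an + margin, 0.0)

def octuplet_style_loss(a_hr, a_lr, p_hr, p_lr, n_hr, n_lr, margin=0.5):
    # Hypothetical octuplet-style loss: sum of triplet terms that mix
    # high-resolution (hr) and low-resolution (lr) embeddings, so that
    # LR embeddings are pulled toward same-identity HR embeddings.
    terms = [
        triplet_loss(a_hr, p_hr, n_hr, margin),  # HR anchor vs. HR pair
        triplet_loss(a_lr, p_lr, n_lr, margin),  # LR anchor vs. LR pair
        triplet_loss(a_hr, p_lr, n_lr, margin),  # HR anchor vs. LR pair
        triplet_loss(a_lr, p_hr, n_hr, margin),  # LR anchor vs. HR pair
    ]
    return sum(terms)
```

    With well-separated identities every term vanishes, so fine-tuning with such a loss only perturbs a pretrained model where cross-resolution embeddings disagree.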

    Facial Texture Super-Resolution by Fitting 3D Face Models

    This work proposes to solve the low-resolution (LR) facial analysis problem with 3D face super-resolution (FSR). A complete processing chain is presented for effective 3D FSR in real-world conditions. To deal with the extreme challenges of incorporating 3D modeling under the ill-posed LR condition, a novel workflow coupling automatic localization of 2D facial feature points with 3D shape reconstruction is developed, leading to a robust pipeline for pose-invariant hallucination of the 3D facial texture.

    A Comprehensive Review of Deep Learning-based Single Image Super-resolution

    Image super-resolution (SR) is one of the vital image processing methods for improving the resolution of an image in the field of computer vision. In the last two decades, significant progress has been made in super-resolution, especially through deep learning methods. This survey provides a detailed review of recent progress in single-image super-resolution from the perspective of deep learning, while also covering the classical methods initially used for image super-resolution. The survey classifies image SR methods into four categories, i.e., classical methods, supervised learning-based methods, unsupervised learning-based methods, and domain-specific SR methods. We also introduce the SR problem to provide intuition about image quality metrics, available reference datasets, and SR challenges. Deep learning-based SR approaches are evaluated using a reference dataset. Some of the reviewed state-of-the-art image SR methods include the enhanced deep SR network (EDSR), cycle-in-cycle GAN (CinCGAN), multiscale residual network (MSRN), meta residual dense network (Meta-RDN), recurrent back-projection network (RBPN), second-order attention network (SAN), SR feedback network (SRFBN), and the wavelet-based residual attention network (WRAN). Finally, this survey concludes with future directions and trends in SR and open problems to be addressed by researchers.

    Comment: 56 pages, 11 figures, 5 tables
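    The survey mentions image quality metrics for comparing SR methods; the most common one is PSNR, which can be computed directly from the mean squared error between a ground-truth image and its reconstruction. A minimal sketch (the function name and 8-bit default peak value are assumptions, not taken from the survey):

```python
import numpy as np

def psnr(reference, reconstruction, max_val=255.0):
    # Peak signal-to-noise ratio in dB between a ground-truth image
    # and a super-resolved reconstruction; higher is better.
    mse = np.mean((reference.astype(np.float64)
                   - reconstruction.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

    Because PSNR depends only on pixel-wise error, surveys typically pair it with perceptual metrics such as SSIM when ranking SR methods.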

    Methods for data-related problems in person re-ID

    In recent years, the ever-increasing need for public security has drawn wide attention to person re-ID. State-of-the-art techniques have achieved impressive results on academic datasets, which are nearly saturated. However, when it comes to deploying a re-ID system in a practical surveillance scenario, several challenges arise. 1) Full person views are often unavailable, and missing body parts make the comparison very challenging due to significant misalignment of the views. 2) Low diversity in training data introduces bias in re-ID systems. 3) The available data might come from different modalities, e.g., text and images. This thesis proposes the Partial Matching Net (PMN), which detects body joints, aligns partial views, and hallucinates the missing parts based on the information present in the frame and a learned model of a person. The aligned and reconstructed views are then combined into a joint representation and used for matching images. The thesis also investigates different types of bias that typically occur in re-ID scenarios when the similarity between two persons is due to the same pose, body part, or camera view, rather than to ID-related cues. It proposes a general approach to mitigate these effects, named the Bias-Control (BC) framework, with two training streams leveraging adversarial and multitask learning to reduce bias-related features. Finally, the thesis investigates a novel mechanism for matching data across visual and text modalities. It proposes a framework, TAVD, with two complementary modules: Text attribute feature aggregation (TA), which aggregates multiple semantic attributes in a bimodal space for globally matching text descriptions with images, and Visual feature decomposition (VD), which performs feature embedding for locally matching image regions with text attributes. The results and comparison to the state of the art on different benchmarks show that the proposed solutions are effective strategies for person re-ID.
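    All three contributions ultimately reduce to comparing learned embeddings of a query person against a gallery. A minimal sketch of that final matching step, using cosine similarity (the function name and ranking scheme are generic assumptions, not the thesis's specific PMN/BC/TAVD machinery):

```python
import numpy as np

def rank_gallery(query, gallery):
    # Rank gallery identities by cosine similarity to a query embedding.
    # query: (d,) vector; gallery: (n, d) matrix of embeddings.
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q                      # cosine similarity per gallery entry
    return np.argsort(-sims)          # indices from best to worst match
```

    Re-ID benchmarks then score such rankings with metrics like rank-1 accuracy and mean average precision (mAP) over many queries.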