
    Context-Patch Face Hallucination Based on Thresholding Locality-Constrained Representation and Reproducing Learning

    Face hallucination is a technique that reconstructs high-resolution (HR) faces from low-resolution (LR) faces using prior knowledge learned from HR/LR face pairs. Most state-of-the-art methods leverage position-patch prior knowledge of the human face to estimate the optimal representation coefficients for each image patch. However, they focus only on position information and usually ignore the contextual information of the image patch. In addition, their hallucination performance degrades sharply under misalignment or the Small Sample Size (SSS) problem. To this end, this study incorporates the contextual information of the image patch and proposes a powerful and efficient context-patch based face hallucination approach, namely Thresholding Locality-constrained Representation and Reproducing learning (TLcR-RL). Under the context-patch framework, we advance a thresholding-based representation method to enhance reconstruction accuracy and reduce computational complexity. To further improve performance, we propose a promotion strategy called reproducing learning: by adding the estimated HR face to the training set, we simulate the case in which the HR version of the input LR face is present in the training set, thereby iteratively enhancing the final hallucination result. Experiments demonstrate that the proposed TLcR-RL method achieves a substantial improvement in hallucination results, both subjectively and objectively. The proposed framework is also more robust to face misalignment and the SSS problem, and it still produces good HR faces when the LR test face comes from the real world. The MATLAB source code is available at https://github.com/junjun-jiang/TLcR-RL
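    The per-patch core of the abstract above — a locality-constrained representation solved over a thresholded (nearest-neighbour) subset of training patches — can be sketched as follows. This is an illustrative Python/NumPy sketch of the general idea, not the authors' MATLAB implementation; the function name and the values of `K` and `tau` are assumptions, and the closed-form solve follows the standard LLC-style formulation.

    ```python
    import numpy as np

    def tlcr_patch(x_lr, D_lr, D_hr, K=10, tau=0.04):
        """Hallucinate one HR patch from one LR patch (illustrative sketch).

        x_lr : (d,)   LR input patch features
        D_lr : (N, d) LR training patches for the same position/context
        D_hr : (N, m) corresponding HR training patches
        """
        # Thresholding step: keep only the K nearest training patches,
        # which improves reconstruction accuracy and shrinks the solve.
        dist = np.linalg.norm(D_lr - x_lr, axis=1)
        idx = np.argsort(dist)[:K]
        B_lr, B_hr, d = D_lr[idx], D_hr[idx], dist[idx]

        # Locality-constrained least squares:
        #   min ||x - B_lr^T w||^2 + tau * ||d * w||^2   s.t. sum(w) = 1
        Z = B_lr - x_lr                       # patches shifted to the input
        C = Z @ Z.T + tau * np.diag(d ** 2)   # locality-regularized Gram matrix
        w = np.linalg.solve(C + 1e-8 * np.eye(K), np.ones(K))
        w /= w.sum()                          # enforce sum-to-one constraint

        # Reconstruct the HR patch with the same coefficients.
        return w @ B_hr
    ```

    The reproducing-learning step would then downsample the full hallucinated face and append the resulting LR/HR pair to `D_lr`/`D_hr` before re-running, iterating until the result stabilizes.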

    Face Hallucination Using Linear Models of Coupled Sparse Support

    Most face super-resolution methods assume that low- and high-resolution manifolds have a similar local geometrical structure, and hence learn local models on the low-resolution manifold (e.g. sparse or locally linear embedding models) that are then applied on the high-resolution manifold. However, the low-resolution manifold is distorted by the one-to-many relationship between low- and high-resolution patches. This paper presents the Linear Model of Coupled Sparse Support (LM-CSS) method, which learns linear models based on the local geometrical structure of the high-resolution manifold rather than the low-resolution manifold. For this, in a first step, the low-resolution patch is used to derive a globally optimal estimate of the high-resolution patch. This approximate solution is shown to be close in Euclidean space to the ground truth, but it is generally smooth and lacks the texture details needed by state-of-the-art face recognizers. Unlike existing methods, the sparse support that best estimates this first approximation is found on the high-resolution manifold. The derived support is then used to extract the atoms from the coupled low- and high-resolution dictionaries that are most suitable for learning an up-scaling function for every facial region. The proposed solution was also extended to compute face super-resolution of non-frontal images.
    Extensive experimental results on a total of 1,830 facial images show that the proposed method outperforms seven face super-resolution methods and a state-of-the-art cross-resolution face recognition method in terms of both quality and recognition. The best recognition performance was achieved by LM-CSS followed by the Local Binary Pattern (LBP) face recognizer, which outperformed the state-of-the-art Discriminant Face Descriptor (DFD) very-low-resolution face recognition system, achieving rank-1 recognition gains between 34% and 60% at very low resolutions. Moreover, subjective results show that the proposed solution is able to super-resolve more accurate facial images from the challenging IARPA Janus Benchmark A (IJB-A) dataset, which covers a wide range of poses and orientations, at magnification factors as high as five.
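    The two-step structure described above — a smooth globally optimal estimate first, then support selection on the high-resolution manifold — can be sketched per patch as follows. This is an illustrative Python/NumPy sketch, not the paper's exact formulation: the ridge-regression solver, the nearest-atom stand-in for sparse-support selection, and the parameters `k` and `lam` are all assumptions.

    ```python
    import numpy as np

    def lm_css_patch(x_lr, D_lr, D_hr, k=20, lam=1e-6):
        """Two-step HR patch estimate in the spirit of LM-CSS (sketch).

        x_lr : (d,)   LR input patch
        D_lr : (N, d) coupled LR dictionary (rows are atoms)
        D_hr : (N, m) coupled HR dictionary (rows are atoms)
        """
        # Step 1: globally optimal (ridge) estimate over the full coupled
        # dictionaries -- close to the ground truth but over-smooth.
        A = D_lr.T                                        # (d, N) LR atoms as columns
        w = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ x_lr)
        y0 = D_hr.T @ w                                   # first, smooth HR estimate

        # Step 2: find the support on the HR manifold, i.e. the k HR atoms
        # closest to the smooth estimate (a simple stand-in for the paper's
        # sparse-support selection).
        dist = np.linalg.norm(D_hr - y0, axis=1)
        idx = np.argsort(dist)[:k]

        # Learn a local up-scaling map from the coupled atoms on that
        # support, then apply it to the input LR patch.
        B_lr, B_hr = D_lr[idx], D_hr[idx]
        M = np.linalg.solve(B_lr.T @ B_lr + lam * np.eye(B_lr.shape[1]),
                            B_lr.T @ B_hr)
        return x_lr @ M
    ```

    The key design point the sketch preserves is that the neighbourhood used for the final up-scaling map is chosen in HR space, where the one-to-many LR-to-HR distortion does not apply.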
