
    Interpolation of Low Resolution Images for Improved Accuracy in Human Face Recognition

    In a wide range of face recognition applications, such as surveillance cameras in law enforcement, the captured images cannot provide sufficient facial resolution for recognition. The first part of this research demonstrates the impact of image resolution on the performance of a face recognition system. The performance of several holistic face recognition algorithms is evaluated on low-resolution face images. For classification, this research uses the k-nearest neighbor (k-NN) classifier and an Extreme Learning Machine-based neural network (ELM). The recognition rate of these systems is a function of the image resolution. In the second part of this research, nearest neighbor, bilinear, and bicubic interpolation techniques are applied as a preprocessing step to increase the resolution of the input image and obtain better results. The results show that increasing the image resolution with these interpolation methods considerably improves the performance of the recognition systems.
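
    As an illustration of the preprocessing step described above, the sketch below upscales a low-resolution face crop with the three interpolation methods mentioned in the abstract. The use of OpenCV (cv2), the 64x64 target size, and the image path are assumptions for illustration; the abstract does not name a specific library.

        # Sketch: upscale a low-resolution face with the three interpolation
        # methods named in the abstract before passing it to a classifier.
        import cv2
        import numpy as np

        def upscale_face(low_res_face: np.ndarray, target_size=(64, 64)):
            """Return the face upscaled with nearest-neighbor, bilinear, and
            bicubic interpolation; target_size is (width, height)."""
            methods = {
                "nearest": cv2.INTER_NEAREST,
                "bilinear": cv2.INTER_LINEAR,
                "bicubic": cv2.INTER_CUBIC,
            }
            return {name: cv2.resize(low_res_face, target_size, interpolation=flag)
                    for name, flag in methods.items()}

        # Hypothetical usage: upscale an 8x8 face crop to 64x64 before feeding
        # it to the k-NN or ELM classifier.
        face = cv2.imread("face_8x8.png", cv2.IMREAD_GRAYSCALE)
        upscaled = upscale_face(face, target_size=(64, 64))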

    Human face recognition under degraded conditions

    This work presents comparative studies on state-of-the-art feature extraction and classification techniques for human face recognition under the low-resolution problem, and also evaluates the effect of resolution enhancement using interpolation techniques. A gradient-based illumination-insensitive preprocessing technique is proposed that uses the ratio between the gradient magnitude and the current intensity level of the image, which remains insensitive to severe lighting effects. A combination of multi-scale Weber analysis and enhanced DD-DT-CWT is also shown to be notably stable under illumination variation. Moreover, applying illumination-insensitive image descriptors to the preprocessed image leads to further robustness against lighting effects. The proposed block-based face analysis reduces the effect of occlusion by assigning different weights to the image sub-blocks, according to their discrimination power, in score- or decision-level fusion. In addition, a hierarchical structure of global and block-based techniques is proposed to improve recognition accuracy when different image degradation conditions occur. The complementary performance of global and local techniques leads to considerable improvement in face recognition accuracy. The effectiveness of the proposed algorithms is evaluated on the Extended Yale B, AR, CMU Multi-PIE, LFW, FERET, and FRGC databases, with a large number of images under different degradation conditions. The experimental results show improved performance under poor illumination, facial expression variation, and occlusion.
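
    The gradient-based preprocessing described above takes the ratio between the gradient magnitude and the current intensity level, so that a multiplicative illumination change largely cancels out. The sketch below is a minimal reading of that idea; the Sobel kernels and the epsilon guard are assumptions, not details taken from the work.

        # Sketch: illumination-insensitive representation as the ratio of
        # gradient magnitude to local intensity (assumed Sobel gradients).
        import numpy as np
        from scipy import ndimage

        def gradient_intensity_ratio(image: np.ndarray, eps: float = 1e-3) -> np.ndarray:
            """Divide the gradient magnitude by the pixel intensity; a global or
            slowly varying illumination factor scales both terms and largely
            cancels in the ratio."""
            img = image.astype(np.float64)
            gx = ndimage.sobel(img, axis=1)  # horizontal gradient
            gy = ndimage.sobel(img, axis=0)  # vertical gradient
            magnitude = np.hypot(gx, gy)
            return magnitude / (img + eps)   # eps avoids division by zero in dark pixels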

    Video Synthesis from the StyleGAN Latent Space

    Generative models have shown impressive results in generating synthetic images. However, video synthesis is still difficult to achieve, even for these generative models. The best videos that generative models can currently create are a few seconds long, distorted, and low resolution. For this project, I propose and implement a model that synthesizes videos at 1024x1024x32 resolution containing human facial expressions, using static images generated by a Generative Adversarial Network trained on human facial images. To the best of my knowledge, this is the first work that generates realistic videos larger than 256x256 resolution from single starting images. This model improves video synthesis both quantitatively and qualitatively compared with two state-of-the-art models, TGAN and MoCoGAN. In a quantitative comparison, this project reaches a best Average Content Distance (ACD) score of 0.167, compared to 0.305 and 0.201 for TGAN and MoCoGAN, respectively.
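
    The model above builds videos from static images produced by a GAN generator; a common way to obtain a smooth frame sequence from such a generator is to walk through its latent space. The sketch below shows only that general idea; the 512-dimensional latent code, the 32-frame length, and the generator call G(z) are hypothetical and are not taken from the described model.

        # Sketch: produce one latent code per video frame by linearly
        # interpolating between two latent codes of a pretrained generator.
        import numpy as np

        def latent_walk(z_start: np.ndarray, z_end: np.ndarray, num_frames: int = 32):
            """Return num_frames latent codes evenly spaced on the straight
            line between z_start and z_end."""
            alphas = np.linspace(0.0, 1.0, num_frames)
            return [(1.0 - a) * z_start + a * z_end for a in alphas]

        # Hypothetical usage with a StyleGAN-like generator G mapping a
        # 512-dimensional latent code to a 1024x1024 face image:
        #   z0, z1 = np.random.randn(512), np.random.randn(512)
        #   frames = [G(z) for z in latent_walk(z0, z1, num_frames=32)]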