
    A Novel Approach for Face Spoof Detection using Color-Texture, Distortion and Quality Parameters

    Face spoof detection is used in many applications to check whether a given face is genuine or spoofed, distinguishing fake faces from real ones. The proposed method for face spoofing detection is based on color-texture, image distortion, and image quality parameters. Faces are detected from a compressed-format image, and color-texture information is extracted from the luminance and chrominance channels using the Local Binary Pattern (LBP) descriptor. The image distortion and image quality parameters are extracted from the same color space. The aim of the method is to combine the advantages of these cues in order to improve the accuracy of face spoofing detection. A multiclass SVM classifier is trained on each feature set to detect different face spoof attacks. This paper describes a novel and appealing approach for distinguishing fake faces from genuine ones by combining color-texture features with image distortion and image quality parameters. More importantly, the proposed method achieves higher accuracy than the methods described in the literature: it clearly separates genuine from fake faces and identifies the type of attack.
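    The feature-extraction step described above — LBP histograms computed from the luminance and chrominance channels — can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names, the YCbCr conversion constants, and the basic 8-neighbour LBP variant are assumptions, and the distortion/quality features and SVM stage are omitted.

    ```python
    import numpy as np

    def lbp_histogram(channel):
        """Basic 8-neighbour Local Binary Pattern histogram for one channel.

        Each interior pixel is compared against its 8 neighbours; a neighbour
        >= centre contributes a 1-bit, giving a code in [0, 255]. The
        normalised histogram of codes is the texture descriptor.
        """
        c = channel.astype(np.int32)
        center = c[1:-1, 1:-1]
        # Neighbour offsets in clockwise order starting at top-left.
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        codes = np.zeros_like(center)
        for bit, (dy, dx) in enumerate(offsets):
            neigh = c[1 + dy:c.shape[0] - 1 + dy, 1 + dx:c.shape[1] - 1 + dx]
            codes |= (neigh >= center).astype(np.int32) << bit
        hist = np.bincount(codes.ravel(), minlength=256).astype(np.float64)
        return hist / hist.sum()

    def ycbcr_lbp_features(rgb):
        """Concatenate LBP histograms of luminance (Y) and chrominance (Cb, Cr)."""
        rgb = rgb.astype(np.float64)
        y  = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
        cb = 128 - 0.168736 * rgb[..., 0] - 0.331264 * rgb[..., 1] + 0.5 * rgb[..., 2]
        cr = 128 + 0.5 * rgb[..., 0] - 0.418688 * rgb[..., 1] - 0.081312 * rgb[..., 2]
        return np.concatenate([lbp_histogram(ch) for ch in (y, cb, cr)])
    ```

    The resulting 768-dimensional vector (256 bins per channel) would then be concatenated with the distortion and quality features and passed to the multiclass SVM.
    
    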

    Evaluation of Deep Learning and Conventional Approaches for Image Recaptured Detection in Multimedia Forensics

    An image recaptured from a high-resolution LED screen or a good-quality printer is difficult to distinguish from its original counterpart. The forensic community has paid less attention to this type of forgery than to other image alterations such as splicing, copy-move, removal, or retouching. It is therefore important to develop secure, automatic techniques to distinguish real from recaptured images without prior knowledge: recapturing can hide image manipulation traces, and an attacker can recapture manipulated images to fool an image forensic system. For this reason, detecting recaptured images has become a hot research topic for forensic analysts. As far as we know, no prior research has examined the pros and cons of up-to-date image recapture detection techniques. The main objective of this survey is to succinctly review recent outcomes in the field of recaptured image detection and to investigate the limitations of existing approaches and datasets. We also discuss the existing recaptured-image datasets, their limitations, and the challenges of dataset collection. Finally, we outline several promising directions for further research on recaptured image detection, showing how the current difficulties might be turned into avenues for future work.

    Attacking and Defending Printer Source Attribution Classifiers in the Physical Domain

    The security of machine learning classifiers has received increasing attention in recent years. In forensic applications, guaranteeing the security of the tools investigators rely on is crucial, since the gathered evidence may be used to decide on the innocence or guilt of a suspect. Several adversarial attacks have been proposed to assess such security, with a few works focusing on transferring these attacks from the digital to the physical domain. In this work, we focus on physical-domain attacks against source attribution of printed documents. We first show how a simple reprinting attack may be sufficient to fool a model trained on images that were printed and scanned only once. Then, we propose a hardened version of the classifier trained on the reprinted attacked images. Finally, we attack the hardened classifier with several attacks, including a new attack based on the Expectation Over Transformation (EOT) approach, which finds adversarial perturbations by simulating the physical transformations that occur when an image attacked in the digital domain is printed again. The results demonstrate a good capability of the hardened classifier to resist attacks carried out in the physical domain.
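    The core idea of Expectation Over Transformation — averaging the loss gradient over random samples of the physical channel before taking an attack step — can be illustrated with a toy example. This is a hedged sketch, not the paper's attack: the linear "classifier", the crude gain-plus-noise print-scan model, and all parameter values are illustrative assumptions standing in for a trained CNN and a realistic transformation model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for a printer-attribution classifier: a linear score
    # w @ x + b, where a high score means "attributed to printer A".
    w = rng.normal(size=64)
    b = 0.1

    def score(x):
        return w @ x + b

    def print_scan_sim(x, rng):
        """Crude stand-in for the print-and-scan channel: a random gain
        (brightness change) plus additive sensor noise."""
        gain = rng.uniform(0.9, 1.1)
        noise = rng.normal(scale=0.05, size=x.shape)
        return gain * x + noise

    def eot_step(x, n_samples=32, eps=0.01):
        """One EOT step: average the gradient of the score over sampled
        transformations, then take a signed step to push the score down.
        For this linear model, d score(t(x)) / d x is simply gain * w."""
        grad = np.zeros_like(x)
        for _ in range(n_samples):
            gain = rng.uniform(0.9, 1.1)
            grad += gain * w  # gradient under this sampled transformation
        grad /= n_samples
        return x - eps * np.sign(grad)

    # Run the attack: the adversarial example lowers the EXPECTED score
    # under the transformation distribution, not just the digital score.
    x = rng.normal(size=64)
    x_adv = x
    for _ in range(50):
        x_adv = eot_step(x_adv)
    ```

    The point of averaging over transformations is robustness: a perturbation tuned to a single digital image is usually destroyed by printing and scanning, whereas one optimised in expectation over the simulated channel tends to survive it.
    
    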