27 research outputs found

    AFMB-Net: DeepFake Detection Network Using Heart Rate Analysis

    With advances in deepfake generation technology, it is becoming increasingly difficult to detect deepfakes. Deepfakes can be used for many malpractices, such as blackmail, political manipulation, and social media disinformation, and can spread misinformation that harms an individual's or an institution's reputation. It has become important to identify deepfakes effectively; while many machine learning techniques exist to identify them, these methods cannot keep pace with the rapidly improving GAN technology used to generate deepfakes. Our project aims to identify deepfakes using machine learning together with heart rate analysis. The heart rate identified by our model is unique to each individual and cannot be spoofed or imitated by a GAN, and is thus resilient to improving GAN technology. To solve the deepfake detection problem, we employ various machine learning models along with heart rate analysis to detect deepfakes.
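    The abstract does not specify how the heart rate is extracted; a common basis for such work is remote photoplethysmography (rPPG), which recovers the pulse from subtle per-frame color changes in the face region. The following is a minimal, self-contained sketch of that idea (the function name and the synthetic demo are illustrative, not the paper's method):

    ```python
    import numpy as np

    def estimate_heart_rate(frames, fps=30.0):
        """Estimate heart rate (BPM) from the mean green-channel signal of
        face-region frames -- a basic rPPG-style approach (illustrative)."""
        # Average green channel per frame -> a 1-D pulse-like signal
        signal = np.array([f[..., 1].mean() for f in frames], dtype=float)
        signal -= signal.mean()
        # Find the dominant frequency inside a plausible heart-rate band (0.7-4 Hz)
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
        power = np.abs(np.fft.rfft(signal)) ** 2
        band = (freqs >= 0.7) & (freqs <= 4.0)
        peak_freq = freqs[band][np.argmax(power[band])]
        return peak_freq * 60.0  # Hz -> beats per minute

    # Synthetic demo: 10 s of tiny frames whose pixels pulse at 1.2 Hz (72 BPM)
    fps, seconds = 30.0, 10
    t = np.arange(int(fps * seconds)) / fps
    frames = [np.full((8, 8, 3), 128.0) + np.sin(2 * np.pi * 1.2 * ti) for ti in t]
    print(round(estimate_heart_rate(frames, fps)))  # -> 72
    ```

    A detector in this vein would compare the recovered pulse signal's consistency between real and generated video, since GANs do not reproduce the physiological color variation.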

    Do you know if I'm real? An experiment to benchmark human recognition of AI-generated faces

    With the development of advanced machine learning techniques, it is now possible to generate fake images that may appear authentic to the naked eye. Realistic faces generated using Generative Adversarial Networks have been the focus of discussion in the media for exactly this reason. This study examined how well people can distinguish between real and generated images. 30 real and 60 generated face images were gathered and put into a survey. Subjects were shown a random 30 of these faces in random sequence and asked to specify whether they thought each face was real. Based on a statistical analysis, participants were not able to reliably distinguish between real and generated images: real images were correctly identified in 81% of cases, while generated images were correctly identified in 61% of cases. Some generated images received very high scores, with one generated image being classified as real in 100% of cases.

    Global Texture Enhancement for Fake Face Detection in the Wild

    Generative Adversarial Networks (GANs) can generate realistic fake face images that can easily fool human beings. On the contrary, a common Convolutional Neural Network (CNN) discriminator can achieve more than 99.9% accuracy in discerning fake/real images. In this paper, we conduct an empirical study on fake/real faces and make two important observations: firstly, the texture of fake faces is substantially different from that of real ones; secondly, global texture statistics are more robust to image editing and transferable to fake faces from different GANs and datasets. Motivated by the above observations, we propose a new architecture, coined Gram-Net, which leverages global image texture representations for robust fake image detection. Experimental results on several datasets demonstrate that our Gram-Net outperforms existing approaches. In particular, our Gram-Net is more robust to image edits, e.g. down-sampling, JPEG compression, blur, and noise. More importantly, our Gram-Net generalizes significantly better in detecting fake faces from GAN models not seen in the training phase and can perform decently in detecting fake natural images.
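    The "global image texture representation" alluded to by the name Gram-Net is, presumably, built on Gram matrices of CNN feature maps, the standard texture statistic. A minimal sketch of that statistic (the architecture around it is not specified in the abstract and is not reproduced here):

    ```python
    import numpy as np

    def gram_matrix(features):
        """Gram matrix of a CNN feature map of shape (C, H, W): the matrix of
        channel-wise correlations, which summarizes global texture while
        discarding spatial layout."""
        c, h, w = features.shape
        f = features.reshape(c, h * w)
        return f @ f.T / (h * w)  # (C, C), normalized by spatial size

    # Toy demo on a random "feature map"; a real detector would compute these
    # from intermediate CNN layers and feed them to a classifier head.
    rng = np.random.default_rng(0)
    feat = rng.standard_normal((4, 8, 8))
    g = gram_matrix(feat)
    print(g.shape)  # (4, 4)
    ```

    Because the Gram matrix pools over all spatial positions, it is largely unaffected by local edits such as blur or compression artifacts, which is consistent with the robustness the abstract reports.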