
    Exploiting Human Social Cognition for the Detection of Fake and Fraudulent Faces via Memory Networks

    Advances in computer vision have brought us to the point where we can synthesise realistic fake content. Such approaches are seen as a source of disinformation and mistrust, and pose serious concerns to governments around the world. Convolutional Neural Networks (CNNs) demonstrate encouraging results when detecting fake images that arise from the specific type of manipulation they are trained on. However, this success has not transferred to unseen manipulation types, leaving a significant gap in the line of defence. We propose a Hierarchical Memory Network (HMN) architecture that successfully detects faked faces by utilising knowledge stored in neural memories as well as visual cues to reason about the perceived face and anticipate its future semantic embeddings. This yields a generalisable face-tampering detection framework. Experimental results demonstrate that the proposed approach achieves superior performance for fake and fraudulent face detection compared to the state of the art.
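    The core operation behind memory-network architectures like the HMN described above is a soft, attention-weighted read from an external memory. The sketch below is illustrative only — the shapes, names, and dot-product scoring are assumptions for exposition, not the authors' implementation:

    ```python
    import numpy as np

    def softmax(x):
        """Numerically stable softmax over a 1-D score vector."""
        e = np.exp(x - x.max())
        return e / e.sum()

    def memory_read(query, memory):
        """Attend over memory slots with a dot-product query and return
        a weighted summary vector (a single soft memory read)."""
        scores = memory @ query      # (slots,) similarity of query to each slot
        weights = softmax(scores)    # attention distribution over slots
        return weights @ memory      # (dim,) blended read-out vector

    rng = np.random.default_rng(0)
    memory = rng.standard_normal((8, 16))  # 8 stored embeddings, dimension 16
    query = rng.standard_normal(16)        # embedding of the perceived face
    readout = memory_read(query, memory)
    print(readout.shape)  # (16,)
    ```

    A detector in this style would then combine such read-outs with visual features from the input face before classification; the hierarchical structure and future-embedding prediction of the actual HMN are beyond this sketch.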

    A Comprehensive Survey on Deepfake Methods: Generation, Detection, and Applications

    Due to recent advancements in AI and deep learning, several methods and tools for multimedia transformation, known as deepfakes, have emerged. A deepfake is synthetic media in which a person's likeness is used to substitute their presence in an existing image or video. Deepfakes have both positive and negative implications. They can be used in politics to simulate events or speeches, in translation to provide natural-sounding output, in education for virtual experiences, and in entertainment for realistic special effects. The emergence of deepfake face forgery on the internet has raised significant societal concerns. As a result, detecting these forgeries has become an emerging field of research, and many deepfake detection methods have been proposed. This paper introduces deepfakes and explains the different types that exist. It also summarises the various deepfake generation techniques and detection techniques, both traditional and AI-based. Freely accessible datasets used for deepfake generation are highlighted. To further advance the deepfake research field, we aim to provide relevant research findings, identify existing gaps, and propose emerging trends for future study.

    Media Forensics and DeepFakes: an overview

    With the rapid progress of recent years, techniques that generate and manipulate multimedia content can now guarantee a very advanced level of realism. The boundary between real and synthetic media has become very thin. On the one hand, this opens the door to a series of exciting applications in different fields such as creative arts, advertising, film production, and video games. On the other hand, it poses enormous security threats. Software packages freely available on the web allow any individual, without special skills, to create very realistic fake images and videos. So-called deepfakes can be used to manipulate public opinion during elections, commit fraud, and discredit or blackmail people. Potential abuses are limited only by human imagination. There is therefore an urgent need for automated tools capable of detecting false multimedia content and preventing the spread of dangerous false information. This review paper presents an analysis of methods for visual media integrity verification, that is, the detection of manipulated images and videos. Special emphasis is placed on the emerging phenomenon of deepfakes and, from the point of view of the forensic analyst, on modern data-driven forensic methods. The analysis helps to highlight the limits of current forensic tools, the most relevant issues, and the upcoming challenges, and suggests future directions for research.