
    Deep Fakes: The Algorithms That Create and Detect Them and the National Security Risks They Pose

    The dissemination of deep fakes for nefarious purposes poses significant national security risks to the United States, requiring the urgent development of technologies to detect their use and strategies to mitigate their effects. Deep fakes are images and videos created by or with the assistance of AI algorithms in which a person’s likeness, actions, or words have been replaced by someone else’s to deceive an audience. Often created with the help of generative adversarial networks, deep fakes can be used to blackmail, harass, exploit, and intimidate individuals and businesses; in large-scale disinformation campaigns, they can incite political tensions around the world and within the U.S. Their broader implication is a deepening challenge to truth in public discourse. The U.S. government, independent researchers, and private companies must collaborate to improve the effectiveness and generalizability of detection methods that can stop the spread of deep fakes.
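    The abstract attributes most deep fakes to generative adversarial networks. As a minimal, illustrative sketch of that adversarial setup only (not any specific deepfake system), the Python/PyTorch snippet below shows one training step in which a generator learns to produce samples that a discriminator cannot distinguish from real ones; the network shapes and the random "real" batch are placeholders.

```python
# Minimal GAN training step (illustrative sketch; shapes and data are placeholders).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())   # noise -> fake image
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))        # image -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(64, 784)      # stand-in for a batch of real face images
z = torch.randn(64, 100)        # latent noise

# Discriminator step: push D(real) toward 1 and D(G(z)) toward 0.
d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(G(z).detach()), torch.zeros(64, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: push D(G(z)) toward 1, i.e. produce fakes that fool the discriminator.
g_loss = bce(D(G(z)), torch.ones(64, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```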

    From deepfake to deep useful: risks and opportunities through a systematic literature review

    Deepfake videos are media synthesized from the images and videos of different people, mostly faces, which replace a real person’s likeness. The easy spread of such videos fuels misinformation and represents a threat to society and democracy today. The present study collects and analyzes the relevant literature through a systematic procedure. We review 27 articles from scientific databases that reveal threats to society, democracies, and political life, but also present advantages of this technology in entertainment, gaming, education, and public life. The research indicates high scientific interest in deepfake detection algorithms as well as in the ethical aspects of such technology. This article addresses a gap in the literature since, to the best of our knowledge, it is the first systematic literature review in the field. A discussion has already started among academics and practitioners concerning the spread of fake news; its next stage involves artificial intelligence and machine learning algorithms that create hyper-realistic videos, called deepfakes. Deepfake technology has attracted growing attention from scholars over the last three years. The importance of conducting research in this field derives from the need to understand the underlying theory: the first contextual approach concerns the epistemological view of the concept, and the second concerns the phenomenological disadvantages of the field. Nonetheless, the authors focus not only on the disadvantages of the field but also on the positive aspects of the technology. Comment: 7 pages, IADIS International Conference e-Society (2022).

    ResViT: A Framework for Deepfake Videos Detection

    Deepfakes make it quite easy to synthesize videos or images using deep learning techniques, which poses substantial danger and worry for many of the world's renowned people. Spreading false news or synthesizing someone's video or image can harm people and erode their trust in social and electronic media. To efficiently identify deepfake images, we propose ResViT, which uses a ResNet model for feature extraction and a vision transformer for classification. The ResViT architecture uses the feature extractor to extract features from the frames of the videos, which are then used to classify the input as fake or real. Moreover, ResViT places equal emphasis on data pre-processing, as it improves performance. We conducted extensive experiments comparing our results with the baseline model on five widely used datasets: Celeb-DF, Celeb-DFv2, FaceForensics++, FF-Deepfake Detection, and DFDC2. Our analysis revealed that ResViT outperformed the baseline, achieving prediction accuracies of 80.48%, 87.23%, 75.62%, 78.45%, and 84.55% on the Celeb-DF, Celeb-DFv2, FaceForensics++, FF-Deepfake Detection, and DFDC2 datasets, respectively.
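    To make the described pipeline concrete, the Python/PyTorch sketch below pairs a ResNet feature extractor with a transformer-encoder classifier in the spirit of the abstract. It assumes the general ResNet-then-ViT layout only; the backbone choice, token dimensions, omitted positional encodings, and layer counts are illustrative, not the authors' exact ResViT configuration.

```python
# Sketch of a ResNet-feature-extractor + transformer classifier for real/fake frames.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class ResViTSketch(nn.Module):
    def __init__(self, num_classes=2, d_model=256, nhead=8, num_layers=4):
        super().__init__()
        backbone = resnet50(weights=None)                                # feature extractor only
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # -> (B, 2048, 7, 7) for 224x224 input
        self.proj = nn.Conv2d(2048, d_model, kernel_size=1)             # project feature maps to token dim
        self.cls_token = nn.Parameter(torch.zeros(1, 1, d_model))       # learnable class token
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, num_classes)                     # real vs. fake logits

    def forward(self, x):
        f = self.proj(self.features(x))                  # (B, d_model, 7, 7)
        tokens = f.flatten(2).transpose(1, 2)            # (B, 49, d_model) spatial tokens
        cls = self.cls_token.expand(x.size(0), -1, -1)   # prepend class token to the sequence
        z = self.encoder(torch.cat([cls, tokens], dim=1))
        return self.head(z[:, 0])                        # classify from the class token

# Usage: a batch of 224x224 RGB face crops taken from video frames.
model = ResViTSketch()
logits = model(torch.randn(2, 3, 224, 224))              # -> shape (2, 2)
```

    Feeding ResNet feature maps into the transformer as tokens (rather than raw image patches) is one common way to combine convolutional feature extraction with attention-based classification, which matches the division of labor the abstract describes.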