
    DeepFakes and Beyond: A Survey of Face Manipulation and Fake Detection

    The free access to large-scale public databases, together with the fast progress of deep learning techniques, in particular Generative Adversarial Networks, has led to the generation of very realistic fake content, with corresponding implications for society in this era of fake news. This survey provides a thorough review of techniques for manipulating face images, including DeepFake methods, and of methods to detect such manipulations. In particular, four types of facial manipulation are reviewed: i) entire face synthesis, ii) identity swap (DeepFakes), iii) attribute manipulation, and iv) expression swap. For each manipulation group, we provide details regarding manipulation techniques, existing public databases, and key benchmarks for technology evaluation of fake detection methods, including a summary of results from those evaluations. Among all the aspects discussed in the survey, we pay special attention to the latest generation of DeepFakes, highlighting its improvements and the challenges it poses for fake detection. In addition to the survey information, we also discuss open issues and future trends that should be considered to advance the field.

    DF-TransFusion: Multimodal Deepfake Detection via Lip-Audio Cross-Attention and Facial Self-Attention

    With the rise in manipulated media, deepfake detection has become an imperative task for preserving the authenticity of digital content. In this paper, we present a novel multi-modal audio-video framework designed to process audio and video inputs concurrently for deepfake detection. Our model capitalizes on lip synchronization with the input audio through a cross-attention mechanism while extracting visual cues via a fine-tuned VGG-16 network. Subsequently, a transformer encoder network is employed to perform facial self-attention. We conduct multiple ablation studies highlighting different strengths of our approach. Our multi-modal methodology outperforms state-of-the-art multi-modal deepfake detection techniques in terms of F-1 and per-video AUC scores.
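    The components named in this abstract (a fine-tuned VGG-16 visual backbone, lip-audio cross-attention, and facial self-attention via a transformer encoder) can be sketched at a high level. The PyTorch sketch below is a hypothetical illustration of such a fusion architecture, not the authors' implementation; all dimensions, the lip/audio feature front-ends, and the fusion head are assumptions made for the example.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16


class MultimodalDeepfakeDetector(nn.Module):
    def __init__(self, d_model: int = 512, n_heads: int = 8, n_layers: int = 4):
        super().__init__()
        # VGG-16 convolutional backbone for per-frame facial features (classifier head dropped).
        self.visual_encoder = vgg16(weights="IMAGENET1K_V1").features   # (N, 512, 7, 7) for 224x224 input
        self.visual_proj = nn.Linear(512 * 7 * 7, d_model)

        # Hypothetical projections for precomputed lip-crop and audio (mel-spectrogram) features.
        self.lip_proj = nn.Linear(512, d_model)
        self.audio_proj = nn.Linear(80, d_model)

        # Cross-attention: lip features attend to the audio sequence (lip-sync consistency).
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

        # Transformer encoder performs self-attention over the facial frame sequence.
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.self_attn = nn.TransformerEncoder(enc_layer, num_layers=n_layers)

        self.classifier = nn.Linear(2 * d_model, 1)   # single real/fake logit

    def forward(self, frames, lip_feats, audio_feats):
        # frames: (B, T, 3, 224, 224); lip_feats: (B, T, 512); audio_feats: (B, S, 80)
        B, T = frames.shape[:2]
        v = self.visual_encoder(frames.flatten(0, 1)).flatten(1)        # (B*T, 512*7*7)
        v = self.visual_proj(v).view(B, T, -1)                          # (B, T, d_model)
        v = self.self_attn(v).mean(dim=1)                               # facial self-attention, pooled

        lips = self.lip_proj(lip_feats)
        audio = self.audio_proj(audio_feats)
        sync, _ = self.cross_attn(query=lips, key=audio, value=audio)   # lip-audio cross-attention
        sync = sync.mean(dim=1)

        return self.classifier(torch.cat([v, sync], dim=-1))            # (B, 1) logit
```

    Concatenating the pooled self-attention and cross-attention streams before a single linear head is one simple fusion choice; the paper's actual fusion and training details may differ.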

    Deep Insights of Deepfake Technology: A Review

    Under the aegis of computer vision and deep learning technology, a new technique has emerged that lets anyone create highly realistic but fake videos and images, and even manipulate voices. This technology is widely known as Deepfake technology. Although it may seem an interesting way to make fake videos or images of something or someone, it can spread misinformation via the internet. Deepfake content can be dangerous for individuals as well as for communities, organizations, countries, religions, etc. Because Deepfake creation involves a high level of expertise and a combination of several deep learning algorithms, the output looks almost real and genuine and is difficult to distinguish from authentic content. In this paper, a wide range of articles has been examined to understand Deepfake technology more extensively. We examined several articles to find insights such as what Deepfake is, who is responsible for it, whether the technology has any benefits, and what its challenges are. We also examined several creation and detection techniques. Our study revealed that although Deepfake is a threat to our societies, proper measures and strict regulations could prevent its misuse.

    Impact of Deepfake Technology on Digital World Authenticity: A Review

    Deepfake technology is an emerging technology that creates fake videos by using artificial intelligence (AI) to manipulate facial expressions and lip-syncing. Deepfake technology is used in different scenarios with different objectives. It can produce a highly realistic fake video that spreads wrong information or fake news about a celebrity or political leader who never actually created it. Due to the high impact of social media, these fake videos can reach millions of views within an hour and create a negative impact on our society. Criminals can also use this technology to threaten society by making such deepfake (AI) videos. The results suggest that deepfakes are a threat to our celebrities, political system, religious beliefs, and businesses, and that they can be controlled by rules and regulations, strict corporate policy, and awareness, education, and training for common internet users. We need to develop technology that can examine such videos and differentiate between real and fake content. Government agencies also need to create policies to regulate this technology so that its use can be monitored and controlled.

    Assessment Framework for Deepfake Detection in Real-world Situations

    Detecting digital face manipulation in images and video has attracted extensive attention due to the potential risk to public trust. To counteract the malicious usage of such techniques, deep learning-based deepfake detection methods have been employed and have exhibited remarkable performance. However, the performance of such detectors is often assessed on benchmarks that hardly reflect real-world situations. For example, the impact of various image and video processing operations and typical workflow distortions on detection accuracy has not been systematically measured. In this paper, a more reliable assessment framework is proposed to evaluate the performance of learning-based deepfake detectors in more realistic settings. To the best of our knowledge, it is the first systematic assessment approach for deepfake detectors that not only reports the general performance under real-world conditions but also quantitatively measures their robustness toward different processing operations. To demonstrate the effectiveness and usage of the framework, extensive experiments and detailed analysis of three popular deepfake detection methods are further presented in this paper. In addition, a stochastic degradation-based data augmentation method driven by realistic processing operations is designed, which significantly improves the robustness of deepfake detectors.
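    The stochastic degradation-based augmentation described in this abstract lends itself to a short illustration. The Python/Pillow sketch below shows the general idea, applying a random subset of realistic processing operations (JPEG re-compression, rescaling, noise, blur) to each training image; the specific operations, parameter ranges, and probabilities are assumptions for illustration and do not come from the paper.

```python
import io
import random

import numpy as np
from PIL import Image, ImageFilter


def jpeg_compress(img: Image.Image, quality: int) -> Image.Image:
    """Re-encode as JPEG at the given quality to simulate upload/sharing compression."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")


def rescale(img: Image.Image, scale: float) -> Image.Image:
    """Down-sample and restore the original size to simulate resolution loss."""
    w, h = img.size
    small = img.resize((max(1, int(w * scale)), max(1, int(h * scale))), Image.BILINEAR)
    return small.resize((w, h), Image.BILINEAR)


def gaussian_noise(img: Image.Image, sigma: float) -> Image.Image:
    """Add pixel-wise Gaussian noise, e.g. to mimic low-light sensor noise."""
    arr = np.asarray(img).astype(np.float32)
    arr += np.random.normal(0.0, sigma, arr.shape)
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))


def stochastic_degrade(img: Image.Image, p: float = 0.5) -> Image.Image:
    """Apply a random subset of processing operations, in random order, each with probability p."""
    ops = [
        lambda x: jpeg_compress(x, quality=random.randint(30, 90)),
        lambda x: rescale(x, scale=random.uniform(0.25, 1.0)),
        lambda x: gaussian_noise(x, sigma=random.uniform(1.0, 10.0)),
        lambda x: x.filter(ImageFilter.GaussianBlur(radius=random.uniform(0.5, 2.0))),
    ]
    random.shuffle(ops)  # unknown processing order, as in a real sharing pipeline
    for op in ops:
        if random.random() < p:
            img = op(img)
    return img
```

    In training, each image or frame would pass through stochastic_degrade before the usual normalization, so the detector is exposed to compression, rescaling, noise, and blur levels comparable to real distribution pipelines.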