    Impact of Deepfake Technology on Digital World Authenticity: A Review

    Deepfake technology is an emerging technique that uses artificial intelligence (AI) to create fake videos with realistic facial expressions and lip-sync effects. It is used in many scenarios with different objectives; in particular, it can produce highly realistic fake videos that spread misinformation or fake news about a celebrity or political leader who never made them. Because of the reach of social media, such videos can attract millions of views within an hour and have a damaging impact on society, and criminals can use the same technology to threaten it. The results suggest that deepfakes are a threat to celebrities, the political system, religious beliefs, and business, but that they can be controlled through rules and regulations, strict corporate policy, and awareness, education, and training for ordinary internet users. Technology is needed that can examine such videos and distinguish real footage from fake, and government agencies need to create policies that regulate, monitor, and control the use of this AI technology.

    Analysis of Deep-Fake Technology Impacting Digital World Credibility: A Comprehensive Literature Review

    Deep-fake technology is a recent method that uses artificial intelligence to make fake videos with convincing facial expressions and coordinated lip movement. The technology is employed in a variety of contexts with various goals; in particular, it can generate extremely realistic fake videos that are widely distributed to promote false information or fake news about a celebrity or leader who never made them. Because of the widespread use of social media, these fraudulent videos can garner billions of views in under an hour and significantly affect our culture. According to the findings, deep-fakes threaten celebrities, democracy, religious views, and commerce, but they can be managed through rules and regulations, strong company policy, and awareness and education for general internet users. A process is needed for examining such videos and distinguishing genuine from fraudulent footage.

    Source Anonymization of Digital Images: A Counter-Forensic Attack on PRNU based Source Identification Techniques

    Many photographers and human rights advocates need to hide their identity when sharing images on the internet, so source anonymization of digital images has become a critical issue in the present digital age. The current literature contains a number of digital forensic techniques for source identification of digital images, one of the most efficient being source detection based on the Photo-Response Non-Uniformity (PRNU) sensor noise pattern. Because the PRNU noise pattern is unique to every digital camera, such techniques are a highly robust means of source identification. In this paper, we propose a counter-forensic technique that misleads PRNU-based source identification by iteratively applying a median filter to suppress the PRNU noise in an image. Our experimental results show that the proposed method achieves a considerably higher degree of source anonymity, measured as the inverse of the Peak-to-Correlation Energy (PCE) ratio, than the state of the art.
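The iterative median-filtering attack the abstract describes can be sketched as follows. This is a hedged illustration, not the paper's exact pipeline: the function names, kernel size, iteration count, and the crude residual extractor are all illustrative assumptions (real PRNU systems use stronger denoisers, such as wavelet filters).

```python
import numpy as np
from scipy.ndimage import median_filter

def noise_residual(image, kernel_size=3):
    """Crude noise residual: the image minus a denoised version of itself.
    PRNU-based source identification correlates such a residual with a
    camera's reference noise pattern."""
    image = image.astype(np.float64)
    return image - median_filter(image, size=kernel_size)

def anonymize_prnu(image, iterations=3, kernel_size=3):
    """Iteratively median-filter the image to suppress the PRNU sensor
    noise pattern, trading a little fine detail for source anonymity."""
    out = image.astype(np.float64)
    for _ in range(iterations):
        out = median_filter(out, size=kernel_size)
    return out
```

Each filtering pass removes more of the high-frequency sensor noise that PRNU correlation relies on; in the paper, anonymity is then scored as the inverse of the PCE ratio between the residual and the camera's reference pattern.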

    Identification and Exploitation of Inadvertent Spectral Artifacts in Digital Audio

    We show that modulation products from local oscillators in a variety of commercial camcorders are coupled into the recorded audio track, creating narrowband, time-invariant spectral features. These spectral features, left largely intact by transcoding, compression, and other forms of audiovisual post-processing, can encode characteristics of the specific camcorders used to capture the audio files, including the make and model. Using data sets both downloaded from YouTube and collected under controlled laboratory conditions, we demonstrate an average probability of detection (Pd) approaching 0.95 for identification of a specific camcorder in a population of thousands of similar recordings, with a probability of false alarm (Pfa) of about 0.11. We also demonstrate an average Pd of about 0.93 for correct association of the make and model of a camcorder, based on comparing audio spectral features extracted from random YouTube downloads against a reference library of spectral features captured from known makes and models, with a Pfa of 0.06. The method described can be used independently or synergistically with image-plane-based techniques such as those based upon Photo Response Non-Uniformity.
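One way to make the "narrowband, time-invariant spectral features" concrete is a long-term power-spectrum estimate with simple baseline subtraction, keeping only frequencies that stand well above the local noise floor. This is a rough sketch under stated assumptions, not the authors' method: the Welch parameters, smoothing window, and 6 dB threshold are all illustrative choices.

```python
import numpy as np
from scipy.signal import welch

def spectral_signature(audio, fs=48000, nperseg=4096):
    """Estimate the long-term power spectrum of an audio track and return
    the frequencies of narrowband tones standing out above a smoothed
    baseline (a sketch of the 'inadvertent spectral artifact' idea)."""
    freqs, psd = welch(audio, fs=fs, nperseg=nperseg)
    log_psd = 10 * np.log10(psd + 1e-20)
    # moving-average baseline (~51 bins wide); tones rise above it
    baseline = np.convolve(log_psd, np.ones(51) / 51, mode="same")
    prominence = log_psd - baseline
    return freqs[prominence > 6.0]   # keep tones >6 dB above baseline
```

A recorded signature like this could then be compared against a reference library of per-model tone sets, which is the spirit of the make/model association experiment the abstract reports.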

    DIPPAS: A Deep Image Prior PRNU Anonymization Scheme

    Source device identification is an important topic in image forensics, since it makes it possible to trace back the origin of an image. Its forensic counterpart is source device anonymization, that is, masking any trace on the image that could be useful for identifying the source device. A typical trace exploited for source device identification is the Photo Response Non-Uniformity (PRNU), a noise pattern left by the device on the acquired images. In this paper, we devise a methodology for suppressing such a trace from natural images without significant impact on image quality. Specifically, we turn PRNU anonymization into an optimization problem in a Deep Image Prior (DIP) framework. In a nutshell, a Convolutional Neural Network (CNN) acts as a generator and returns an image that is anonymized with respect to the source PRNU while maintaining high visual quality. Unlike widely adopted deep learning paradigms, our proposed CNN is not trained on a set of input-target pairs of images. Instead, it is optimized to reconstruct the PRNU-free image from the original image under analysis itself. This makes the approach particularly suitable in scenarios where large heterogeneous databases are analyzed and prevents any problem due to lack of generalization. Through numerical examples on publicly available datasets, we show our methodology to be effective compared to state-of-the-art techniques.
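The optimization view of PRNU anonymization can be illustrated without the network: the sketch below minimizes a fidelity term plus a penalty on correlation with a known PRNU pattern by plain gradient descent on the pixels. Note this deliberately omits the Deep Image Prior CNN that is DIPPAS's key ingredient, so it only illustrates the loss, not the paper's method; the function name and the `lam`/`lr` values are assumptions.

```python
import numpy as np

def anonymize_by_optimization(image, prnu, steps=200, lr=0.1, lam=5.0):
    """Gradient-descent sketch of the anonymization objective:
    keep the output close to the input image while decorrelating it
    from a known PRNU pattern (pixel-space stand-in for the CNN)."""
    x = image.astype(np.float64).copy()
    p = (prnu - prnu.mean()) / (prnu.std() + 1e-12)  # standardized pattern
    n = p.size
    for _ in range(steps):
        s = np.sum(x * p)                 # raw correlation with the PRNU
        # loss = ||x - image||^2 + lam * s^2 / n ; descend its gradient
        grad = 2.0 * (x - image) + lam * 2.0 * s * p / n
        x -= lr * grad
    return x
```

In DIPPAS the output is instead parameterized by a CNN optimized per image, which acts as an implicit natural-image prior and keeps visual quality high while the PRNU correlation is driven down.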