    Exposing Digital Forgeries in Ballistic Motion

    Forensic analysis of video file formats

    Video file format standards define only a limited number of mandatory features and leave room for interpretation. Design decisions of device manufacturers and software vendors are thus a fruitful resource for forensic video authentication. This paper explores AVI and MP4-like video streams of mobile phones and digital cameras in detail. We use customized parsers to extract all file format structures of videos from overall 19 digital camera models, 14 mobile phone models, and 6 video editing toolboxes. We report considerable differences in the choice of container formats, audio and video compression algorithms, acquisition parameters, and internal file structure. In combination, such characteristics can help to authenticate digital video files in forensic settings by distinguishing between original and post-processed videos, verifying the purported source of a file, or identifying the true acquisition device model or the processing software used.
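The container-level analysis described above relies on walking the file's internal structure. As an illustration (not the authors' actual parser), MP4-like files are built from "boxes", each headed by a 4-byte big-endian size and a 4-byte type code; a minimal sketch of enumerating top-level boxes:

```python
import io
import struct

def parse_boxes(stream):
    """Enumerate top-level ISO-BMFF/MP4 boxes: 4-byte big-endian size + 4-byte type."""
    boxes = []
    while True:
        header = stream.read(8)
        if len(header) < 8:
            break
        size, box_type = struct.unpack(">I4s", header)
        if size == 1:
            # size == 1 signals a 64-bit extended size in the next 8 bytes
            size = struct.unpack(">Q", stream.read(8))[0]
            payload = size - 16
        else:
            payload = size - 8
        boxes.append((box_type.decode("latin-1"), size))
        stream.seek(payload, 1)  # skip the payload to reach the next box
    return boxes

# Synthetic example: an 'ftyp' box (8-byte payload) followed by an empty 'free' box
data = struct.pack(">I4s", 16, b"ftyp") + b"isom\x00\x00\x02\x00" \
     + struct.pack(">I4s", 8, b"free")
print(parse_boxes(io.BytesIO(data)))  # [('ftyp', 16), ('free', 8)]
```

The order, choice, and nesting of such boxes is exactly the kind of vendor-specific detail the paper exploits as a forensic fingerprint.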

    FrameProv: Towards End-To-End Video Provenance

    Video feeds are often deliberately used as evidence, as in the case of CCTV footage; but more often than not, the existence of footage of a supposed event is perceived as proof of fact in the eyes of the public at large. This reliance represents a societal vulnerability given the existence of easy-to-use editing tools and means to fabricate entire video feeds using machine learning. And, as the recent barrage of fake news and fake porn videos has shown, this isn't merely an academic concern, it is actively being exploited. I posit that this exploitation is only going to get more insidious. In this position paper, I introduce a long-term project that aims to mitigate some of the most egregious forms of manipulation by embedding trustworthy components in the video transmission chain. Unlike earlier works, I am not aiming to do tamper detection or other forms of forensics -- approaches I think are bound to fail in the face of the reality of necessary editing and compression -- instead, the aim here is to provide a way for the video publisher to prove the integrity of the video feed as well as make explicit any edits they may have performed. To do this, I present a novel data structure, a video-edit specification language and supporting infrastructure that provides end-to-end video provenance, from the camera sensor to the viewer. I have implemented a prototype of this system and am in talks with journalists and video editors to discuss the best ways forward with introducing this idea to the mainstream.
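The abstract does not specify the provenance data structure, but the core idea of binding each frame to everything that preceded it can be illustrated with a simple hash chain (a hypothetical sketch, not the FrameProv design): each digest commits to the previous digest plus the current frame, so altering any frame invalidates every digest after it.

```python
import hashlib

def chain_frames(frames, seed=b"camera-id"):
    """Hash-chain frames: h_i = SHA-256(h_{i-1} || frame_i), anchored at a seed."""
    h = hashlib.sha256(seed).digest()
    chain = []
    for frame in frames:
        h = hashlib.sha256(h + frame).digest()
        chain.append(h.hex())
    return chain

original = chain_frames([b"frame0", b"frame1", b"frame2"])
tampered = chain_frames([b"frame0", b"FORGED", b"frame2"])
# Tampering with frame 1 changes that digest and every digest after it
print(original[0] == tampered[0], original[1] == tampered[1])  # True False
```

A real system would additionally sign the chain head in trusted hardware and record edits in the specification language the paper proposes.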

    An Overview on Image Forensics

    The aim of this survey is to provide a comprehensive overview of the state of the art in the area of image forensics. These techniques have been designed to identify the source of a digital image or to determine whether the content is authentic or modified, without knowledge of any prior information about the image under analysis (and thus are defined as passive). All these tools work by detecting the presence, the absence, or the incongruence of some traces intrinsically tied to the digital image by the acquisition device and by any other operation after its creation. The paper has been organized by classifying the tools according to the position in the history of the digital image in which the relative footprint is left: acquisition-based methods, coding-based methods, and editing-based schemes.

    Video copy-move forgery detection scheme based on displacement paths

    Sophisticated digital video editing tools have made it easier to tamper with real videos and create perceptually indistinguishable fake ones. Even worse, some post-processing effects, which include object insertion and deletion in order to mimic or hide a specific event in the video frames, are also prevalent. Many attempts have been made to date to detect such video copy-move forgery; however, accuracy rates are still inadequate, room for improvement is wide open, and effectiveness is often confined to the detection of frame tampering rather than localization of the tampered regions. Thus, a new detection scheme was developed to detect forgery and improve accuracy. The scheme involves seven main steps. First, it converts the red, green and blue (RGB) video into greyscale frames and treats them as images. Second, it partitions each frame into non-overlapping blocks of 8x8 pixels each. Third, for each two successive frames (S2F), it tracks every block's duplicate using the proposed two-tier detection technique involving Diamond search and Slantlet transform to locate the duplicated blocks. Fourth, for each pair of the duplicated blocks of the S2F, it calculates a displacement using the optical flow concept. Fifth, based on the displacement values and an empirically calculated threshold, the scheme detects the existence of any deleted objects found in the frames. Once completed, it then extracts the moving object using the same threshold-based approach. Sixth, a frame-by-frame displacement tracking is performed to trace the object movement and find a displacement path of the moving object. The process is repeated for another group of frames to find the next displacement path of the second moving object until all the frames are exhausted. Finally, the displacement paths are compared with each other using the Dynamic Time Warping (DTW) matching algorithm to detect the cloned object: if any pair of displacement paths matches perfectly, a clone is found.
    To validate the process, a series of experiments based on datasets from the Surrey University Library for Forensic Analysis (SULFA) and the Video Tampering Dataset (VTD) were performed to gauge the performance of the proposed scheme. The experimental results of the detection scheme were very encouraging, with an accuracy rate of 96.86%, which markedly outperformed the state-of-the-art methods by as much as 3.14%.
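The final matching step above compares displacement paths with DTW. A minimal sketch of that comparison (standard DTW over 1-D displacement sequences; the sample paths and the zero-distance "clone" criterion are illustrative assumptions, not the paper's data or threshold):

```python
import math

def dtw_distance(p, q):
    """Classic dynamic time warping distance between two 1-D sequences."""
    n, m = len(p), len(q)
    # D[i][j] = minimal cumulative cost of aligning p[:i] with q[:j]
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(p[i - 1] - q[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

path_a = [0.0, 1.2, 2.5, 2.5, 3.1]  # displacement path of the original object
path_b = [0.0, 1.2, 2.5, 3.1]       # candidate clone, slightly time-warped
print(dtw_distance(path_a, path_b))  # 0.0 -> paths align perfectly, clone suspected
```

DTW tolerates small temporal shifts between the two paths, which is why it suits duplicated objects that reappear at a different point in the timeline.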

    A review of digital video tampering: from simple editing to full synthesis.

    Video tampering methods have witnessed considerable progress in recent years. This is partly due to the rapid development of advanced deep learning methods, and also due to the large volume of video footage that is now in the public domain. Historically, convincing video tampering has been too labour intensive to achieve on a large scale. However, recent developments in deep learning-based methods have made it possible not only to produce convincing forged video but also to fully synthesize video content. Such advancements provide new means to improve visual content itself, but at the same time, they raise new challenges for state-of-the-art tampering detection methods. Video tampering detection has been an active field of research for some time, with periodic reviews of the subject. However, little attention has been paid to video tampering techniques themselves. This paper provides an objective and in-depth examination of current techniques related to digital video manipulation. We thoroughly examine their development, and show how current evaluation techniques provide opportunities for the advancement of video tampering detection. A critical and extensive review of photo-realistic video synthesis is provided with emphasis on deep learning-based methods. Existing tampered video datasets are also qualitatively reviewed and critically discussed. Finally, conclusions are drawn from an exhaustive and thorough review of tampering methods, with discussion of future research directions aimed at improving detection methods.

    A new technique for video copy-move forgery detection

    This thesis describes an algorithm for detecting copy-move falsifications in digital video. The thesis is composed of five chapters. The first chapter introduces forgery detection for digital images and videos. Chapters 2, 3 and 4 describe in detail the techniques used for the implementation of the detection algorithm. The experimental results are presented in the fifth and last chapter.