
    Video Inter-frame Forgery Detection Approach for Surveillance and Mobile Recorded Videos

    We are living in an age where the use of multimedia technologies such as digital recorders and mobile phones is increasing rapidly. At the same time, digital content manipulation software is proliferating, making it easy for an individual to doctor recorded content with a trivial investment of time and money. Digital multimedia forensics is therefore gaining importance as a means of restricting the unethical use of such readily available tampering techniques. These days it is common for people to record videos using their smartphones, and we have also witnessed sudden growth in the use of surveillance cameras, which now inhabit almost every public location. Videos recorded by these devices usually contain crucial evidence of an event's occurrence and are thereby most susceptible to inter-frame forgery, which can easily be performed by the insertion, removal, or replication of frames. The proposed forensic technique enables detection of inter-frame forgery in H.264 and MPEG-2 encoded videos, especially mobile-recorded and surveillance videos. This novel method introduces objectivity for the automatic detection and localization of tampering by utilizing the prediction residual gradient and the optical flow gradient. Experimental results showed that the technique can detect tampering with a 90% true positive rate, regardless of the video codec, the recording device, and the number of frames tampered.
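The optical-flow-gradient cue described in the abstract can be sketched in a few lines: a frame insertion or removal produces an abrupt jump in motion statistics between consecutive frames. The per-frame flow magnitudes, the MAD-based threshold, and the constant `k` below are illustrative assumptions, not the paper's actual detector.

```python
# Illustrative sketch: flag candidate tampering points from per-frame
# mean optical-flow magnitudes. The threshold (median + k * MAD) is an
# assumption for demonstration, not the paper's exact criterion.

def flow_gradient_anomalies(flow_mags, k=3.0):
    """Return frame indices where the optical-flow gradient spikes."""
    # First-order gradient between consecutive frames.
    grads = [abs(b - a) for a, b in zip(flow_mags, flow_mags[1:])]
    med = sorted(grads)[len(grads) // 2]
    # Median absolute deviation as a robust spread estimate.
    mad = sorted(abs(g - med) for g in grads)[len(grads) // 2]
    thresh = med + k * (mad if mad > 0 else 1e-9)
    return [i + 1 for i, g in enumerate(grads) if g > thresh]

# Smooth motion with a discontinuity around frame 5
# (e.g. frames removed there).
mags = [1.0, 1.1, 1.05, 1.2, 1.15, 9.0, 1.1, 1.0]
print(flow_gradient_anomalies(mags))
```

A real detector would derive `flow_mags` from dense optical flow between decoded frames; the robust threshold keeps smooth camera motion from triggering false alarms.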

    Video and Imaging, 2013-2016


    Energy efficient hardware acceleration of multimedia processing tools

    The world of mobile devices is experiencing an ongoing trend of feature enhancement and general-purpose multimedia platform convergence. This trend poses many grand challenges, the most pressing being limited battery life as a consequence of delivering computationally demanding features. The envisaged mobile application features can be considered to be accelerated by a set of underpinning hardware blocks. Based on the survey that this thesis presents on modern video compression standards and their associated enabling technologies, it is concluded that tight energy and throughput constraints can still be effectively tackled at the algorithmic level in order to design re-usable optimised hardware acceleration cores. To prove these conclusions, the work in this thesis focuses on two of the basic enabling technologies that support mobile video applications, namely the Shape Adaptive Discrete Cosine Transform (SA-DCT) and its inverse, the SA-IDCT. The hardware architectures presented in this work have been designed with energy efficiency in mind. This goal is achieved by employing high-level techniques such as redundant computation elimination, parallelism, and low-switching computation structures. Both architectures compare favourably against the relevant prior art in the literature. The SA-DCT/IDCT technologies are instances of a more general computation: both are Constant Matrix Multiplication (CMM) operations. Thus, this thesis also proposes an algorithm for the efficient hardware design of any general CMM-based enabling technology. The proposed algorithm leverages the effective solution-search capability of genetic programming. A bonus feature of the proposed modelling approach is that it is itself amenable to hardware acceleration. Another bonus feature is an early-exit mechanism that achieves large search-space reductions. Results show an improvement on state-of-the-art algorithms, with future potential for even greater savings.
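The CMM view mentioned in the abstract can be made concrete with a small example: an N-point DCT is the product of a constant coefficient matrix and the input vector, so any optimisation that exploits the fixed matrix (constant folding, shared subexpressions across rows) applies. The 4-point size and the plain-Python model below are illustrative choices, not the thesis's hardware design.

```python
import math

# A 4-point DCT-II expressed as a Constant Matrix Multiplication (CMM):
# y = C @ x, where C is fixed at design time. Hardware CMM optimisers
# exploit the constancy of C; the size N = 4 is an illustrative choice.
N = 4
C = [[math.cos(math.pi * (2 * n + 1) * k / (2 * N)) for n in range(N)]
     for k in range(N)]

def cmm(matrix, x):
    """Multiply a constant matrix by an input vector."""
    return [sum(c * xi for c, xi in zip(row, x)) for row in matrix]

x = [1.0, 2.0, 3.0, 4.0]
y = cmm(C, x)
# Row 0 of C is all ones (cos 0), so y[0] is simply the sum of the inputs.
print(round(y[0], 6))
```

In hardware, each row of `C` becomes a fixed multiplier network, which is exactly the structure a search procedure such as genetic programming can minimise.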

    Recent Advances in Digital Image and Video Forensics, Anti-forensics and Counter Anti-forensics

    Image and video forensics have recently gained increasing attention due to the proliferation of manipulated images and videos, especially on social media platforms such as Twitter and Instagram, which spread disinformation and fake news. This survey explores image and video identification and forgery detection, covering both manipulated digital media and generative media. However, media forgery detection techniques are susceptible to anti-forensics; on the other hand, such anti-forensics techniques can themselves be detected. We therefore further cover both anti-forensics and counter anti-forensics techniques for images and videos. Finally, we conclude this survey by highlighting some open problems in this domain.

    A survey on passive digital video forgery detection techniques

    Digital media devices such as smartphones, cameras, and notebooks are becoming increasingly popular. Through digital platforms such as Facebook, WhatsApp, Twitter, and others, people share digital images, videos, and audio in large quantities. In a crime scene investigation especially, digital evidence plays a crucial role in the courtroom. High-quality software tools have made manipulating video content easier, which in turn makes fabricating video content more efficient. It is therefore necessary to develop authentication methods for detecting and verifying manipulated videos. The objective of this paper is to provide a comprehensive review of passive methods for detecting video forgeries. The primary goal of this survey is to study and analyze the existing passive techniques for detecting video forgeries. First, an overview of the basic information needed to understand video forgery detection is presented. Next, the techniques used in the spatial, temporal, and spatio-temporal domain analysis of videos are examined in depth, and the datasets used and their limitations are reviewed. In the following sections, standard benchmark video forgery datasets and the generalized architecture of passive video forgery detection techniques are discussed in more detail. Finally, loopholes in existing surveys are identified so that forged videos can be detected much more effectively in the future.

    Detecting Tampered Videos with Multimedia Forensics and Deep Learning

    © 2019, Springer Nature Switzerland AG. User-Generated Content (UGC) has become an integral part of the news reporting cycle. As a result, the need to verify videos collected from social media and Web sources is becoming increasingly important for news organisations. While video verification is attracting a lot of attention, there has so far been limited effort in applying video forensics to real-world data. In this work we present an approach for automatic video manipulation detection inspired by manual verification approaches. In a typical manual verification setting, video filter outputs are visually interpreted by human experts. We use two such forensics filters designed for manual verification, one based on Discrete Cosine Transform (DCT) coefficients and a second based on video requantization errors, and combine them with Deep Convolutional Neural Networks (CNN) designed for image classification. We compare the performance of the proposed approach to other state-of-the-art works and find that, while competing approaches perform better when trained with videos from the same dataset, one of the proposed filters demonstrates superior performance in cross-dataset settings. We discuss the implications of our work and the limitations of the current experimental setup, and propose directions for future research in this area.
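A requantization-error filter of the kind the abstract mentions can be sketched as follows: re-quantize values with a candidate step size and inspect what is lost, since regions that were already encoded at that step leave a distinctive residual. The 1-D block and step size below are illustrative assumptions; real filters operate on DCT coefficients of decoded frames.

```python
# Sketch of a requantization-error map: quantize values with step q and
# measure the per-sample error introduced. The sample block and q = 8
# are illustrative, not the paper's configuration.

def requant_error(values, q):
    """Per-sample error introduced by quantizing with step q."""
    return [v - q * round(v / q) for v in values]

block = [12.0, 17.5, 31.0, 44.2]
errors = requant_error(block, q=8)
print([round(e, 1) for e in errors])
```

Values that are already exact multiples of `q` produce zero error, which is what makes the error map informative about a video's (re-)compression history.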

    Space-variant picture coding

    Space-variant picture coding techniques exploit the strong spatial non-uniformity of the human visual system in order to increase coding efficiency in terms of perceived quality per bit. This thesis extends space-variant coding research in two directions. The first of these is foveated coding. Past foveated coding research has been dominated by the single-viewer, gaze-contingent scenario. For research into multi-viewer and probability-based scenarios, however, this thesis presents a missing piece: an algorithm for computing an additive multi-viewer sensitivity function based on an established eye resolution model and, from this, a blur map that is optimal in the sense of discarding frequencies in least-noticeable-first order. Furthermore, for the application of a blur map, a novel algorithm is presented for the efficient computation of high-accuracy, smoothly space-variant Gaussian blurring, using a specialised filter bank which approximates perfect space-variant Gaussian blurring to arbitrarily high accuracy and at greatly reduced cost compared to the brute-force approach of employing a separate low-pass filter at each image location. The second direction is that of artificially increasing the depth of field of an image, an idea borrowed from photography, with the advantage of allowing an image to be reduced in bitrate while retaining or increasing overall aesthetic quality. Two synthetic depth-of-field algorithms are presented herein, with the desirable properties of aiming to mimic occlusion effects as they occur in natural blurring, and of handling any number of blurring and occlusion levels at the same level of computational complexity. The merits of this coding approach have been investigated in subjective experiments comparing it with single-viewer foveated image coding. The results found the depth-based preblurring to be generally and significantly preferable to the same level of foveation blurring.
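The brute-force space-variant blur that the thesis's filter bank is designed to replace can be sketched directly: every output sample gets its own Gaussian kernel. The 1-D signal, the per-sample sigma map, and the kernel radius below are illustrative assumptions, not the thesis's parameters.

```python
import math

# Brute-force 1-D space-variant Gaussian blur: each output sample uses
# its own sigma, i.e. a separate low-pass filter per location. This is
# the costly reference that a filter-bank approximation avoids.

def sv_gaussian_blur(signal, sigmas, radius=4):
    out = []
    for i, sigma in enumerate(sigmas):
        acc, norm = 0.0, 0.0
        for d in range(-radius, radius + 1):
            j = min(max(i + d, 0), len(signal) - 1)  # clamp at borders
            w = math.exp(-d * d / (2.0 * sigma * sigma))
            acc += w * signal[j]
            norm += w
        out.append(acc / norm)  # normalise so weights sum to one
    return out

signal = [0.0] * 4 + [1.0] * 4                      # a step edge
sigmas = [0.3] * 4 + [2.0] * 4                      # blur only the right half
blurred = sv_gaussian_blur(signal, sigmas)
print([round(v, 3) for v in blurred])
```

The left half, with tiny sigma, is passed through almost unchanged, while the right half is smoothed; in 2-D the per-pixel cost of this direct approach is what motivates the filter-bank approximation.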

    A study on the impact of AL-FEC techniques on TV over IP Quality of Experience

    In this contribution, an evaluation of the effectiveness of Application Layer-Forward Error Correction (AL-FEC) schemes in video communications over unreliable channels is presented. In the literature, several AL-FEC techniques for reducing the effect of noisy transmission on multimedia communication have been adopted, and their use has recently been proposed for inclusion in international standards for TV over IP broadcasting. The objective of the analysis performed in this paper is to verify the effectiveness of AL-FEC techniques in terms of perceived Quality of Service (QoS) and, more generally, Quality of Experience (QoE), and to evaluate the trade-off between AL-FEC redundancy and video quality degradation for a given packet loss ratio. To this end, several channel error models are investigated (random i.i.d. losses, burst losses, and network congestion) on test sequences encoded at 2 and 4 Mbps. The perceived quality is evaluated by means of three quality metrics: the full-reference objective quality metric NTIA-VQM combined with ITU-T Rec. G.1070, the full-reference DMOS-KPN metric, and pixel-wise error comparison using the PSNR distortion measure. A post-processing synchronization between the original and the reconstructed stream has also been designed to improve the fidelity of the quality measures. The experimental results show both the effectiveness and the limits of the application-layer protection schemes.
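The redundancy-versus-recovery trade-off discussed above is visible even in the simplest AL-FEC scheme: one XOR parity packet per source block lets the receiver rebuild any single lost packet at the cost of one extra packet of overhead. The packet contents and loss pattern below are illustrative, not a standardised FEC code.

```python
# Minimal AL-FEC sketch: a single XOR parity packet per source block.
# Any one lost packet can be recovered from the survivors plus parity.

def xor_parity(packets):
    """XOR all equal-length packets byte-wise into one parity packet."""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return parity

def recover(received, parity):
    """Rebuild the single missing packet from survivors and parity."""
    return xor_parity(received + [parity])

block = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(block)
rebuilt = recover([block[0], block[2]], parity)  # packet 1 lost in transit
print(rebuilt == block[1])
```

Stronger codes (e.g. Raptor-style codes used in broadcast standards) generalise this idea to recover multiple losses, at proportionally higher redundancy, which is precisely the trade-off the paper quantifies against perceived quality.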