Mitigation of H.264 and H.265 Video Compression for Reliable PRNU Estimation
The photo-response non-uniformity (PRNU) is a distinctive image sensor
characteristic, and an imaging device inadvertently introduces its sensor's
PRNU into all media it captures. Therefore, the PRNU can be regarded as a
camera fingerprint and used for source attribution. The imaging pipeline in a
camera, however, involves various processing steps that are detrimental to PRNU
estimation. In the context of photographic images, these challenges are
successfully addressed and the method for estimating a sensor's PRNU pattern is
well established. However, various additional challenges related to the
generation of videos remain largely unaddressed. With this perspective, this work introduces
methods to mitigate disruptive effects of widely deployed H.264 and H.265 video
compression standards on PRNU estimation. Our approach involves an intervention
in the decoding process to eliminate a filtering procedure applied at the
decoder to reduce blockiness. It also utilizes decoding parameters to develop a
weighting scheme that adjusts the contribution of video frames to the PRNU
estimation process at the macroblock level. Results obtained on videos captured by 28
cameras show that our approach increases the PRNU matching metric, in some cases
by more than a factor of five, over the conventional estimation method tailored for photos.
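The macroblock-level weighting described above might be sketched as follows. This is a minimal illustration, assuming the decoder exposes per-macroblock quantisation parameters (QP); the exponential weight function and its decay rate are placeholders, not the paper's actual scheme:

```python
import numpy as np

MB = 16  # macroblock size in H.264 (an assumption for this sketch)

def weighted_prnu_accumulate(residuals, qp_maps, alpha=0.1):
    """Accumulate noise residuals with per-macroblock weights.

    residuals: list of HxW noise-residual arrays, one per frame
    qp_maps:   list of (H//MB)x(W//MB) arrays of quantisation parameters
    alpha:     placeholder decay rate; heavier quantisation -> lower weight
    """
    num = np.zeros_like(residuals[0], dtype=np.float64)
    den = np.zeros_like(residuals[0], dtype=np.float64)
    for res, qp in zip(residuals, qp_maps):
        # Per-macroblock weight, expanded to pixel resolution
        w = np.exp(-alpha * qp.astype(np.float64))
        w = np.kron(w, np.ones((MB, MB)))
        num += w * res
        den += w
    # Weighted average; the guard avoids division by zero
    return num / np.maximum(den, 1e-12)
```

With uniform QP maps this reduces to plain averaging, which is consistent with the conventional photo-based estimator the abstract compares against.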
A Modified Fourier-Mellin Approach for Source Device Identification on Stabilized Videos
To decide whether a digital video has been captured by a given device,
multimedia forensic tools usually exploit characteristic noise traces left by
the camera sensor on the acquired frames. This analysis requires that the noise
pattern characterizing the camera and the noise pattern extracted from video
frames under analysis are geometrically aligned. However, in many practical
scenarios this does not occur, thus a re-alignment or synchronization has to be
performed. Current solutions often require time consuming search of the
realignment transformation parameters. In this paper, we propose to overcome
this limitation by searching scaling and rotation parameters in the frequency
domain. The proposed algorithm tested on real videos from a well-known
state-of-the-art dataset shows promising results
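The frequency-domain search can be sketched with the classical Fourier–Mellin idea (the paper's specific modification is not reproduced here): the FFT magnitude spectrum is resampled onto a log-polar grid, so rotation and scaling become translations that phase correlation can recover. Grid sizes and the scale-sign convention below are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def log_polar_spectrum(img, n_ang=180, n_rad=128):
    """Magnitude spectrum of img resampled onto a log-polar grid."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    cy, cx = np.array(F.shape) / 2.0
    r_max = min(cy, cx)
    theta = np.linspace(0, np.pi, n_ang, endpoint=False)  # spectrum is symmetric
    r = np.exp(np.linspace(0, np.log(r_max), n_rad))      # log-spaced radii
    ys = cy + r[None, :] * np.sin(theta[:, None])
    xs = cx + r[None, :] * np.cos(theta[:, None])
    return map_coordinates(F, [ys, xs], order=1)

def estimate_rot_scale(ref, tgt):
    """Phase correlation on log-polar spectra -> (rotation deg, scale)."""
    A, B = log_polar_spectrum(ref), log_polar_spectrum(tgt)
    FA, FB = np.fft.fft2(A), np.fft.fft2(B)
    R = FA * np.conj(FB)
    R /= np.maximum(np.abs(R), 1e-12)          # normalised cross-power spectrum
    corr = np.real(np.fft.ifft2(R))
    i, j = np.unravel_index(np.argmax(corr), corr.shape)
    n_ang, n_rad = A.shape
    if i > n_ang // 2: i -= n_ang              # wrap shifts to signed values
    if j > n_rad // 2: j -= n_rad
    rot = i * 180.0 / n_ang
    scale = np.exp(j * np.log(min(ref.shape) / 2.0) / (n_rad - 1))
    return rot, scale
```

The peak location in `corr` gives the angular and log-radial shifts directly, avoiding an exhaustive search over candidate transformations.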
PRNU Estimation based on Weighted Averaging for Source Smartphone Video Identification
Photo response non-uniformity (PRNU) noise is a sensor pattern noise characterizing imperfections in the imaging device. The PRNU is unique to each sensor, and it has been widely utilized in the literature for source camera identification and image authentication. In video forensics, the traditional approach estimates the PRNU by averaging a set of residual signals obtained from multiple video frames. However, because lossy compression and other non-unique, content-dependent noise components interfere with the video data, constant averaging does not take into account the intensity of these undesirable components. Different from the traditional approach, we propose a video PRNU estimation method based on weighted averaging. The noise residual is first extracted for each video. Then, the estimated noise residuals are fed into a weighted averaging method to optimize the PRNU estimate. Experimental results on two video datasets captured by various smartphone devices show a significant gain of the proposed approach over the conventional state-of-the-art one.
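The weighted-averaging idea can be sketched as follows. The paper's optimized weighting is not reproduced; here each residual is weighted inversely to its variance, a plausible stand-in for down-weighting residuals dominated by content-dependent noise:

```python
import numpy as np

def weighted_prnu(residuals, eps=1e-12):
    """Weighted average of noise residuals.

    Residuals with high variance (likely dominated by content-dependent
    noise and compression artifacts) contribute less. Inverse-variance
    weighting is an illustrative assumption, not the paper's exact scheme.
    """
    residuals = [np.asarray(r, dtype=np.float64) for r in residuals]
    weights = np.array([1.0 / (r.var() + eps) for r in residuals])
    weights /= weights.sum()  # normalise so weights sum to 1
    return sum(w * r for w, r in zip(weights, residuals))
```

Setting all weights equal recovers the constant averaging of the traditional approach.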
Video Source Forensics for IoT Devices Based on Convolutional Neural Networks
With the wide application of Internet of Things (IoT) devices and the rapid development of multimedia technology, digital video has become one of the important information dissemination carriers among IoT devices, and it is widely used in many fields such as news media and digital forensics. However, video editing technology is constantly developing and improving, which seriously threatens the integrity and authenticity of digital video. Research on digital video forensics is therefore of great significance. In this paper, a new passive video source forensics algorithm based on Convolutional Neural Networks (CNN) is proposed. A CNN is used to classify the maximum information block of a specified size in each video I frame, and the classification results are then fused to determine the camera to which the video belongs. Experimental results show that the proposed recognition algorithm outperforms other methods in terms of accuracy and ROC curve. Moreover, our method maintains a good recognition effect even if only a small number of I frames are used.
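One concrete ingredient above is selecting the "maximum information block" from an I frame before CNN classification. A plausible reading, sketched here, uses histogram entropy as the information measure; the paper's exact criterion is an assumption:

```python
import numpy as np

def block_entropy(block):
    """Shannon entropy of an 8-bit block's intensity histogram."""
    hist = np.bincount(block.ravel().astype(np.uint8), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def max_information_block(frame, size=64):
    """Return the size x size block of `frame` with maximum entropy."""
    best, best_h = None, -1.0
    h, w = frame.shape
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            blk = frame[y:y + size, x:x + size]
            e = block_entropy(blk)
            if e > best_h:
                best, best_h = blk, e
    return best
```

The selected block would then be fed to the CNN classifier; per-block predictions across I frames are fused (e.g. by majority vote) into a camera decision.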
Sensor Pattern Noise Estimation using Non-textured Video Frames For Efficient Source Smartphone Identification and Verification
Photo response non-uniformity (PRNU) noise is a sensor pattern noise characterizing the imaging device. It has been broadly used in the literature for image authentication and source camera identification. The abundant frequency content that the PRNU carries makes it unique, and therefore suitable for identifying the source camera and detecting forgeries in digital images. However, PRNU estimation from smartphone videos is a challenging process due to the presence of frame-dependent content (very dark or highly textured frames), as well as other non-unique noise components and distortions due to lossy compression. In this paper, we propose an approach that considers only non-textured frames when estimating the PRNU, because its estimation from highly textured images has been shown to be inaccurate in image forensics. Furthermore, lossy compression distortions tend to affect mainly textured, high-activity regions and consequently weaken the presence of the PRNU in such areas. The proposed technique computes a number of texture measures from the Grey Level Co-occurrence Matrix (GLCM) prior to an unsupervised learning process that splits the feature space of training video frames into two sub-spaces: a textured space and a non-textured space. Non-textured video frames are then selected and used for estimating the PRNU. Experimental results on a public video dataset captured by various smartphone devices show a significant gain of the proposed approach over the conventional state-of-the-art approach.
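The texture-screening step might look like the following sketch: a single GLCM contrast measure per frame, followed by a simple one-dimensional 2-means split that keeps only the low-contrast group. The paper's full GLCM feature set and clustering setup are not reproduced; quantisation to 16 grey levels and horizontal distance-1 pairs are assumptions:

```python
import numpy as np

def glcm_contrast(frame, levels=16):
    """GLCM contrast for horizontal neighbour pairs at distance 1."""
    q = (frame.astype(np.float64) / 256.0 * levels).astype(int).clip(0, levels - 1)
    a, b = q[:, :-1].ravel(), q[:, 1:].ravel()
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a, b), 1)          # co-occurrence counts
    glcm /= glcm.sum()                  # normalise to a joint distribution
    i, j = np.indices((levels, levels))
    return np.sum(glcm * (i - j) ** 2)  # contrast: weight by squared level gap

def non_textured_frames(frames, iters=10):
    """Split frames into low/high-contrast groups with 1-D 2-means."""
    c = np.array([glcm_contrast(f) for f in frames])
    lo, hi = c.min(), c.max()           # initial centroids
    for _ in range(iters):
        labels = np.abs(c - lo) > np.abs(c - hi)   # True -> textured
        lo = c[~labels].mean()
        hi = c[labels].mean() if labels.any() else hi
    return [f for f, textured in zip(frames, labels) if not textured]
```

The surviving frames would then feed the residual averaging step; textured frames, where compression has most weakened the PRNU, are simply discarded.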