Sensor Pattern Noise Estimation Based on Improved Locally Adaptive DCT Filtering and Weighted Averaging for Source Camera Identification and Verification
Photo Response Non-Uniformity (PRNU) noise is a sensor pattern noise characterizing the imaging device. It has been broadly used in the literature for source camera identification and image authentication. The abundant information that the sensor pattern noise carries in terms of frequency content makes it unique, and hence suitable for identifying the source camera and detecting image forgeries. However, the PRNU extraction process is inevitably affected by image-dependent information as well as other non-unique noise components. To reduce these undesirable effects, researchers have developed a number of techniques at different stages of the process, i.e., the filtering stage, the estimation stage, and the post-estimation stage. In this paper, we present a new PRNU-based source camera identification and verification system and propose enhancements at each of these stages. First, an improved version of the Locally Adaptive Discrete Cosine Transform (LADCT) filter is proposed for the filtering stage. In the estimation stage, a new Weighted Averaging (WA) technique is presented. The post-estimation stage consists of concatenating the PRNUs estimated from the color planes in order to exploit the presence of physical PRNU components in different channels. Experimental results on two image datasets acquired by various camera devices have shown a significant gain from the proposed enhancement at each stage, as well as the superiority of the overall system over related state-of-the-art systems.
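The estimation stage described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: a 3x3 box filter stands in for the improved LADCT filter, and the weights are left as a free parameter standing in for the proposed WA technique.

```python
import numpy as np

def box_denoise(img):
    # 3x3 box filter via shifted copies; a crude stand-in for the
    # paper's improved LADCT denoising filter (illustrative assumption).
    acc = np.zeros_like(img, dtype=np.float64)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / 9.0

def noise_residual(img):
    # Residual W = I - F(I): what the denoiser removes carries the PRNU.
    img = img.astype(np.float64)
    return img - box_denoise(img)

def estimate_prnu(images, weights=None):
    # Weighted-averaging estimate K = sum(w_i*W_i*I_i) / sum(w_i*I_i^2);
    # with uniform weights this reduces to the classical estimator.
    if weights is None:
        weights = np.ones(len(images))
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, w in zip(images, weights):
        img = img.astype(np.float64)
        num += w * noise_residual(img) * img
        den += w * img * img
    return num / np.maximum(den, 1e-12)
```

On flat images carrying a synthetic multiplicative PRNU, the estimate correlates strongly with the true fingerprint even with only a handful of frames.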
Perceptual Video Hashing for Content Identification and Authentication
Perceptual hashing has been broadly used in the literature to identify similar contents for video copy detection. It has also been adopted to detect malicious manipulations for video authentication. However, targeting both applications with a single system using the same hash would be highly desirable, as this saves storage space and reduces computational complexity. This paper proposes a perceptual video hashing system for content identification and authentication. The objective is to design a hash extraction technique that can withstand signal processing operations on one hand and detect malicious attacks on the other. The proposed system relies on a new signal calibration technique for extracting the hash using the discrete cosine transform (DCT) and the discrete sine transform (DST). This consists of determining the number of samples, called the normalizing shift, that is required for shifting a digital signal so that the shifted version matches a certain pattern according to its DCT/DST coefficients. The rationale for the calibration idea is that the normalizing shift resists signal processing operations while it exhibits sensitivity to local tampering (i.e., replacing a small portion of the signal with a different one). While the same hash serves both applications, two different similarity measures have been proposed for video identification and authentication, respectively. Through intensive experiments with various types of video distortions and manipulations, the proposed system has been shown to outperform related state-of-the-art video hashing techniques in terms of identification and authentication, with the advantageous ability to locate tampered regions.
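The calibration idea can be illustrated on a 1-D signal. The sketch below is an assumption-laden stand-in: maximizing a single DCT-II coefficient plays the role of the paper's DCT/DST coefficient pattern, which is not specified here. The key property survives even in this simplified form: the normalizing shift tracks cyclic shifts of the input, so the calibrated (canonical) signal is identical for a signal and any shifted copy, while local tampering changes it.

```python
import numpy as np

def dct2_coeff(x, k=1):
    # k-th DCT-II coefficient, computed directly from the definition.
    n = np.arange(len(x))
    return float(np.dot(x, np.cos(np.pi * k * (2 * n + 1) / (2 * len(x)))))

def normalizing_shift(x):
    # Normalizing shift: the cyclic shift whose first DCT-II coefficient
    # is largest. (Illustrative criterion, not the paper's exact pattern.)
    scores = [dct2_coeff(np.roll(x, s)) for s in range(len(x))]
    return int(np.argmax(scores))

def calibrated(x):
    # Canonical form: the signal rolled by its normalizing shift.
    return np.roll(x, normalizing_shift(x))
```

A hash computed from the calibrated signal is therefore shift-robust by construction, yet sensitive to replacing a small portion of the signal.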
On the Sensor Pattern Noise Estimation in Image Forensics: A Systematic Empirical Evaluation
Extracting a fingerprint of a digital camera has fertile applications in image forensics, such as source camera identification and image authentication. In the last decade, Photo Response Non-Uniformity (PRNU) has been well established as a reliable unique fingerprint of digital imaging devices. The PRNU noise appears in every image as a very weak signal, and its reliable estimation is crucial for the success rate of the forensic application. In this paper, we present a novel methodical evaluation of 21 state-of-the-art PRNU estimation/enhancement techniques that have been proposed in the literature in various frameworks. The techniques are classified and systematically compared based on their role/stage in the PRNU estimation procedure, manifesting their intrinsic impacts. The performance of each technique is extensively demonstrated over a large-scale experiment that concludes this case-by-case study. The experiments have been conducted on our own created database and a public image database, the 'Dresden Image Database'.
Dissimilarity Gaussian Mixture Models for Efficient Offline Handwritten Text-Independent Identification using SIFT and RootSIFT Descriptors
Handwriting biometrics is the science of identifying the behavioural aspect of an individual's writing style and exploiting it to develop automated writer identification and verification systems. This paper presents an efficient handwriting identification system which combines Scale Invariant Feature Transform (SIFT) and RootSIFT descriptors in a set of Gaussian mixture models (GMM). In particular, a new concept of similarity and dissimilarity Gaussian mixture models (SGMM and DGMM) is introduced. While an SGMM is constructed for every writer to describe the intra-class similarity exhibited between the handwritten texts of the same writer, a DGMM represents the contrast or dissimilarity that exists between the writer's style on one hand and other handwriting styles on the other. Furthermore, because the handwritten text is described by a number of keypoint descriptors, where each descriptor generates an SGMM/DGMM score, a new weighted histogram method is proposed to derive the intermediate prediction score for each writer's GMM. The idea of the weighted histogram exploits the fact that handwritings from the same writer should exhibit more similar textural patterns than dissimilar ones; hence, by penalizing the bad scores with a cost function, the identification rate can be significantly enhanced. Our proposed system has been extensively assessed using six different public datasets (including three English, two Arabic and one hybrid language) and the results have shown the superiority of the proposed system over state-of-the-art techniques.
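The SGMM/DGMM decision rule can be sketched as below for already-fitted diagonal-covariance models. Everything here is a simplified assumption: single-component models, and a fixed penalty factor on negative scores standing in for the paper's weighted-histogram aggregation with a cost function.

```python
import numpy as np

def diag_gauss_logpdf(x, mean, var):
    # Log-density of a diagonal-covariance Gaussian.
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var, axis=-1)

def gmm_loglik(x, weights, means, variances):
    # Log-likelihood under a diagonal GMM (log-sum-exp over components).
    comp = np.log(weights) + np.array(
        [diag_gauss_logpdf(x, m, v) for m, v in zip(means, variances)])
    mx = comp.max()
    return mx + np.log(np.exp(comp - mx).sum())

def writer_score(descriptors, sgmm, dgmm, penalty=2.0):
    # Per-descriptor score: similarity log-likelihood minus dissimilarity
    # log-likelihood. Negative ("bad") scores are amplified by a cost
    # factor before averaging -- a stand-in for the weighted histogram.
    scores = np.array([gmm_loglik(d, *sgmm) - gmm_loglik(d, *dgmm)
                       for d in descriptors])
    scores = np.where(scores < 0, penalty * scores, scores)
    return float(scores.mean())
```

With well-separated models, descriptors drawn from the writer's own style score positively while foreign descriptors are driven sharply negative, which is what the penalization exploits.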
Content Fragile Watermarking for H.264/AVC Video Authentication
The advances in multimedia technologies and digital processing tools have brought with them new challenges for source and content authentication. To ensure the integrity of the H.264/AVC video stream, we introduce an approach based on a content-fragile video watermarking method with independent authentication of each Group of Pictures (GOP) within the video. This technique uses robust visual features extracted from the video, pertaining to the set of selected macroblocks (MBs) which hold the best partition mode in a tree-structured motion compensation process. The Discrete Cosine Transform (DCT) is applied to these features to generate the authentication data, which are treated as a fragile watermark and embedded in the motion vectors (MVs). An additional degree of security is offered by the proposed method through the use of the keyed hash function HMAC-SHA-256 and the random choice of candidates from the already selected MBs. Here, the watermark detection and verification processes are blind, whereas tampered-frame detection is not, since it needs the original frames within the tampered GOPs. The proposed scheme achieves accurate authentication with high fragility and fidelity whilst maintaining the original bitrate and perceptual quality. Furthermore, its ability to detect the tampered frames in case of spatial, temporal and colour manipulations is confirmed.
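The authentication core can be sketched with the standard library's `hmac` module. The feature extraction and the MV selection are abstracted away: `features` stands in for the DCT-based visual features of a GOP, and the LSB embedding into a flat array of MV components is an illustrative simplification of the actual embedding.

```python
import hashlib
import hmac

import numpy as np

def gop_watermark(features, key):
    # Authentication data: HMAC-SHA-256 over the GOP's feature bytes,
    # unpacked into a 256-bit watermark.
    digest = hmac.new(key, features.tobytes(), hashlib.sha256).digest()
    return np.unpackbits(np.frombuffer(digest, dtype=np.uint8))

def embed_in_mvs(mvs, bits):
    # Embed the watermark bits into the LSBs of motion-vector components
    # (assumes at least len(bits) MV components are available).
    flat = mvs.flatten().copy()
    flat[:len(bits)] = (flat[:len(bits)] & ~1) | bits
    return flat.reshape(mvs.shape)

def verify_gop(mvs, features, key):
    # Blind verification: recompute the HMAC from the received features
    # and compare against the bits extracted from the MV LSBs.
    bits = gop_watermark(features, key)
    extracted = mvs.flatten()[:len(bits)] & 1
    return bool(np.array_equal(extracted, bits))
```

Because the watermark is a keyed MAC of the content features, any change to those features breaks verification, while an attacker without the key cannot forge a valid watermark.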
Robust off-line text independent writer identification using bagged discrete cosine transform features
Efficient writer identification systems identify the authorship of an unknown sample of text with high confidence. This has made automatic writer identification a very important topic of research for forensic document analysis. In this paper, we propose a robust system for offline text-independent writer identification using bagged discrete cosine transform (BDCT) descriptors. Universal codebooks are first used to generate multiple predictor models. A final decision is then obtained by applying the majority voting rule to these predictor models. The BDCT approach allows DCT features to be effectively exploited for robust writer identification. The proposed system has first been assessed on the original version of the handwritten documents of various datasets, and results have shown comparable performance with state-of-the-art systems. Next, blurry and noisy documents of two different datasets have been considered through intensive experiments, where the system has been shown to perform significantly better than its competitors. To the best of our knowledge, this is the first work that addresses the robustness aspect in automatic writer identification. This is particularly relevant in digital forensics, as the documents acquired by the analyst may not be in ideal condition.
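The bagging-plus-majority-voting decision stage can be sketched as follows. The nearest-centroid predictors over random subsets of the DCT feature dimensions are an illustrative stand-in for the paper's codebook-based predictor models; only the voting rule is taken directly from the text.

```python
from collections import Counter

import numpy as np

def bagged_predict(query, writer_feats, n_models=15, subset=8, seed=0):
    # Each bagged model sees a random subset of the DCT feature
    # dimensions and votes for the nearest writer centroid;
    # the majority of votes decides the final writer.
    rng = np.random.default_rng(seed)
    centroids = {w: f.mean(axis=0) for w, f in writer_feats.items()}
    votes = []
    for _ in range(n_models):
        idx = rng.choice(len(query), size=subset, replace=False)
        dists = {w: np.linalg.norm(query[idx] - c[idx])
                 for w, c in centroids.items()}
        votes.append(min(dists, key=dists.get))
    return Counter(votes).most_common(1)[0][0]
```

The intuition behind the robustness claim is that blur or noise corrupts individual feature dimensions, but a majority over many models trained on different feature subsets tolerates a minority of corrupted votes.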
A novel image enhancement method for palm vein images
Palm vein images usually suffer from low contrast, due to the skin surface scattering the radiance of the NIR light and to image sensor limitations, and hence require various techniques to enhance the contrast of the image prior to feature extraction. This paper presents a novel image enhancement method, referred to as Multiple Overlapping Tiles (MOT), which adaptively stretches the local contrast of palm vein images using multiple layers of overlapping image tiles. The experiments conducted on the CASIA palm vein image dataset demonstrate that the MOT method retains the finer details of vein images, which allows excellent feature detection and matching with SIFT and RootSIFT features. Results on existing palm vein recognition systems demonstrate that the proposed MOT method delivers lower EER values, outperforming other existing palm vein image enhancement methods.
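The tiling idea can be sketched as below. This is a simplified reading of the method: each tile is min-max stretched independently and the overlapping stretched layers are averaged; the actual MOT layering and stretching rules may differ.

```python
import numpy as np

def mot_enhance(img, tile=32, step=16):
    # Multiple Overlapping Tiles (simplified sketch): stretch the local
    # contrast of each overlapping tile to [0, 255], then average the
    # overlapping stretched layers pixel-wise.
    out = np.zeros(img.shape, dtype=np.float64)
    cnt = np.zeros(img.shape, dtype=np.float64)
    h, w = img.shape
    for y in range(0, max(h - tile, 0) + 1, step):
        for x in range(0, max(w - tile, 0) + 1, step):
            t = img[y:y + tile, x:x + tile].astype(np.float64)
            lo, hi = t.min(), t.max()
            stretched = (t - lo) / (hi - lo + 1e-8) * 255.0
            out[y:y + tile, x:x + tile] += stretched
            cnt[y:y + tile, x:x + tile] += 1
    return out / np.maximum(cnt, 1)
```

Because every tile is stretched from its own local range, faint vein structure in a low-contrast region is amplified, while the averaging of overlapping layers suppresses the blocking artefacts a single non-overlapping tiling would produce.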