
    Print-Scan Resilient Text Image Watermarking Based on Stroke Direction Modulation for Chinese Document Authentication

    Print-scan resilient watermarking has emerged as an attractive approach to document security. This paper proposes a stroke direction modulation technique for watermarking in Chinese text images. The resulting watermark is robust to print-photocopy-scan operations, yet provides relatively high embedding capacity without sacrificing transparency. During the embedding phase, the angles of rotatable strokes are quantized to embed the bits. This requires several stages of preprocessing, including stroke generation, junction searching, rotatable stroke decision, and character partition. Moreover, shuffling is applied to equalize the uneven embedding capacity. For data detection, denoising and deskewing mechanisms are used to compensate for the distortions induced by hardcopy. Experimental results show that our technique attains high detection accuracy against distortions resulting from print-scan operations, good-quality photocopies, and benign attacks, in accord with the future goal of soft authentication.
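The angle quantization described above can be sketched as quantization index modulation (QIM), the generic mechanism behind such angle-based embedding; the step size `DELTA` and the use of plain scalar quantizers are illustrative assumptions, not the paper's actual parameters.

```python
# Minimal QIM sketch on stroke angles (degrees). DELTA is an assumed
# quantization step, not taken from the paper.
DELTA = 10.0

def embed_bit(angle, bit):
    """Quantize an angle onto the lattice associated with `bit` (0 or 1)."""
    offset = bit * DELTA / 2.0
    return round((angle - offset) / DELTA) * DELTA + offset

def detect_bit(angle):
    """Recover the bit whose lattice lies closest to the observed angle."""
    d0 = abs(angle - embed_bit(angle, 0))
    d1 = abs(angle - embed_bit(angle, 1))
    return 0 if d0 <= d1 else 1

marked = embed_bit(47.3, 1)   # quantized to 45.0 on the bit-1 lattice
assert detect_bit(marked) == 1
```

Because detection only needs the nearer of the two lattices, moderate angle noise from the print-scan channel (up to DELTA/4 here) leaves the bit recoverable, which is the robustness property the abstract relies on.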

    Fingerprint Growth Prediction, Image Preprocessing, and Multi-level Judgment Aggregation

    Finger growth is studied in the first part of the thesis and a method for growth prediction is presented. The effectiveness of the method is validated in several tests. Fingerprint image preprocessing is discussed in the second part, and novel methods for orientation field estimation, ridge frequency estimation, and image enhancement are proposed: the line sensor method for orientation estimation provides more robustness to noise than state-of-the-art methods; curved regions are proposed for improving ridge frequency estimation, and curved Gabor filters for image enhancement. The notion of multi-level judgment aggregation is introduced as a design principle for combining different methods at all levels of fingerprint image processing. Lastly, score revaluation is proposed for incorporating information obtained during preprocessing into the score, thus improving the quality of the similarity measure at the final stage. A sample application combines all proposed methods of the second part and demonstrates the validity of the approach by achieving substantial verification performance improvements over state-of-the-art software on all available databases of the Fingerprint Verification Competitions (FVC).
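The oriented filtering that underlies such enhancement can be illustrated with a standard (straight, not curved) Gabor kernel tuned to a local ridge orientation and frequency; the parameter values below are illustrative assumptions, and the thesis's curved variant bends this kernel along the ridge flow.

```python
import numpy as np

def gabor_kernel(theta, freq, sigma=4.0, size=17):
    """Oriented Gabor kernel tuned to ridge angle `theta` (radians)
    and ridge frequency `freq` (cycles per pixel)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the cosine carrier runs across the ridges.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * freq * xr)
    return envelope * carrier

k = gabor_kernel(theta=np.pi / 4, freq=0.1)
assert k.shape == (17, 17)
```

Convolving a fingerprint patch with the kernel matching its estimated orientation and frequency amplifies the ridge pattern and suppresses noise, which is why accurate orientation and frequency estimation (the thesis's first two contributions) directly determines enhancement quality.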

    Image statistical frameworks for digital image forensics

    The advances of digital cameras, scanners, printers, image editing tools, smartphones, tablet personal computers, and high-speed networks have made the digital image a conventional medium for visual information. Creation, duplication, distribution, and tampering of such a medium can be done easily, which makes it necessary to trace the authenticity or history of the medium. Digital image forensics is an emerging research area that aims to address this problem and has grown in popularity over the past decade. On the other hand, anti-forensics has emerged over the past few years as a relatively new branch of research, aiming at revealing the weaknesses of forensic technology. These two sides of research push digital image forensic technologies to the next level. Three major contributions are presented in this dissertation. First, an effective multi-resolution image statistical framework for passive-blind digital image forensics is presented in the frequency domain. The image statistical framework is generated by applying the Markovian rake transform to the image luminance component. The Markovian rake transform applies Markov processes to difference arrays derived from quantized block discrete cosine transform (DCT) 2-D arrays with multiple block sizes. The efficacy and universality of the framework are then evaluated in two major applications of digital image forensics: 1) digital image tampering detection; 2) classification of computer graphics and photographic images. Second, a simple yet effective anti-forensic scheme is proposed, capable of obfuscating double JPEG compression artifacts, which may carry information vital to image forensics, for instance, digital image tampering detection. The proposed scheme, the shrink-and-zoom (SAZ) attack, is based simply on image resizing and bilinear interpolation. The effectiveness of SAZ has been evaluated on two promising double JPEG compression detection schemes, and the outcome shows that the proposed scheme is effective, especially in cases where the first quality factor is lower than the second quality factor. Third, an advanced textural image statistical framework in the spatial domain is proposed, utilizing local binary pattern (LBP) schemes to model local image statistics on various kinds of residual images, including higher-order ones. The proposed framework can be implemented in either a single- or multi-resolution setting, depending on the application of interest. The efficacy of the proposed framework is evaluated on two forensic applications: 1) steganalysis, with emphasis on HUGO (Highly Undetectable Steganography), an advanced steganographic scheme that embeds hidden data in a content-adaptive manner locally into image regions that are difficult to model statistically; 2) image recapture detection (IRD). The outcomes of the evaluations suggest that the proposed framework is effective, not only for detecting local changes (in line with the nature of HUGO) but also for detecting global differences (the nature of IRD).
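The Markov-modeling step can be sketched as a transition-probability matrix computed over a thresholded difference array; the threshold `T` and the restriction to horizontal differences are simplifying assumptions, not the dissertation's full feature set.

```python
import numpy as np

def markov_features(dct_array, T=3):
    """Transition-probability matrix of the horizontal difference array,
    with values clipped to [-T, T] as is common for Markov-based features."""
    d = dct_array[:, :-1] - dct_array[:, 1:]   # horizontal differences
    d = np.clip(d, -T, T)
    cur, nxt = d[:, :-1].ravel(), d[:, 1:].ravel()
    M = np.zeros((2 * T + 1, 2 * T + 1))
    for a, b in zip(cur, nxt):
        M[int(a) + T, int(b) + T] += 1           # count transitions a -> b
    row_sums = M.sum(axis=1, keepdims=True)
    return M / np.where(row_sums == 0, 1, row_sums)   # row-normalize

F = markov_features(np.random.randint(-8, 9, size=(16, 16)))
assert F.shape == (7, 7)
```

The "rake" aspect of the transform repeats this computation over difference arrays from block-DCT coefficients at multiple block sizes, concatenating the resulting matrices into one feature vector.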

    Machine learning based digital image forensics and steganalysis

    The security and trustworthiness of digital images have become crucial issues due to the ease of malicious processing. Therefore, research on image steganalysis (determining whether a given image has secret information hidden inside) and image forensics (determining the origin and authenticity of a given image and revealing the processing history it has gone through) has become essential to the digital society. In this dissertation, the steganalysis and forensics of digital images are treated as pattern classification problems so as to make advanced machine learning (ML) methods applicable. Three topics are covered: (1) architectural design of convolutional neural networks (CNNs) for steganalysis, (2) statistical feature extraction for camera model classification, and (3) real-world tampering detection and localization. For covert communication, steganography embeds secret messages into images by altering pixel values slightly. Since advanced steganography alters pixel values in image regions that are hard to detect, traditional ML-based steganalytic methods, which rely heavily on sophisticated manual feature design, have been pushed to their limits. To overcome this difficulty, in-depth studies are conducted and reported in this dissertation to transfer the success of CNNs in computer vision to steganalysis. The outcomes reported in this dissertation are: (1) a proposed CNN architecture incorporating the domain knowledge of steganography and steganalysis, and (2) ensemble methods of CNNs for steganalysis. The proposed CNN is currently one of the best classifiers against steganography. Camera model classification aims at assigning a given image to its source capturing camera model based on the statistics of image pixel values. For this, two types of statistical features are designed to capture the traces left by in-camera image processing algorithms. The first is Markov transition probabilities modeling block-DCT coefficients for JPEG images; the second is based on histograms of local binary patterns obtained in both the spatial and wavelet domains. The designed features serve as input to train support vector machines, which achieved the best classification performance at the time the features were proposed. The last part of this dissertation documents the solutions delivered by the author's team to The First Image Forensics Challenge, organized by the Information Forensics and Security Technical Committee of the IEEE Signal Processing Society. In the competition, all the fake images involved were doctored by popular image-editing software to simulate the real-world scenario of tampering detection (determining whether a given image has been tampered with) and localization (determining which pixels have been tampered with). In Phase-1 of the Challenge, advanced steganalysis features were successfully migrated to tampering detection. In Phase-2, an efficient copy-move detector equipped with PatchMatch as a fast approximate nearest-neighbor search method was developed to identify duplicated regions within images. With these tools, the author's team won the runner-up prizes in both phases of the Challenge.
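The spatial-domain LBP features can be illustrated with the basic 8-neighbour LBP operator over a grayscale array; the exact neighbourhoods, residual images, and wavelet-domain variants used in the dissertation may differ from this minimal sketch.

```python
import numpy as np

def lbp_histogram(img):
    """Normalized 256-bin histogram of basic 8-neighbour local binary
    patterns: each pixel's code sets one bit per neighbour >= center."""
    c = img[1:-1, 1:-1]                      # interior pixels (centers)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

Because the code depends only on the sign of neighbour-center differences, the histogram is insensitive to monotonic intensity changes but sensitive to the micro-texture that in-camera processing pipelines imprint, which is what makes it usable as a camera-model feature.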

    Resiliency Assessment and Enhancement of Intrinsic Fingerprinting

    Intrinsic fingerprinting is a class of digital forensic technology that can detect traces left in digital multimedia data in order to reveal data processing history and determine data integrity. Many existing intrinsic fingerprinting schemes have implicitly assumed favorable operating conditions whose validity may become uncertain in reality. In order to establish intrinsic fingerprinting as a credible approach to digital multimedia authentication, it is important to understand and enhance its resiliency under unfavorable scenarios. This dissertation addresses various resiliency aspects that can appear in a broad range of intrinsic fingerprints. The first aspect concerns intrinsic fingerprints that are designed to identify a particular component in the processing chain. Such fingerprints are potentially subject to changes due to input content variations and/or post-processing, and it is desirable to ensure their identifiability in such situations. Taking an image-based intrinsic fingerprinting technique for source camera model identification as a representative example, our investigations reveal that the fingerprints have a substantial dependency on image content. Such dependency limits the achievable identification accuracy, which is penalized by a mismatch between training and testing image content. To mitigate such a mismatch, we propose schemes to incorporate image content into training image selection and significantly improve the identification performance. We also consider the effect of post-processing against intrinsic fingerprinting, and study source camera identification based on imaging noise extracted from low-bit-rate compressed videos. While such compression reduces the fingerprint quality, we exploit different compression levels within the same video to achieve more efficient and accurate identification. The second aspect of resiliency addresses anti-forensics, namely, adversarial actions that intentionally manipulate intrinsic fingerprints. 
We investigate the cost-effectiveness of anti-forensic operations that counteract color interpolation identification. Our analysis pinpoints the inherent vulnerabilities of color interpolation identification, and motivates countermeasures and refined anti-forensic strategies. We also study the anti-forensics of an emerging space-time localization technique for digital recordings based on electrical network frequency analysis. Detection schemes against anti-forensic operations are devised under a mathematical framework. For both problems, game-theoretic approaches are employed to characterize the interplay between forensic analysts and adversaries and to derive optimal strategies. The third aspect concerns the resilient and robust representation of intrinsic fingerprints for multiple forensic identification tasks. We propose to use the empirical frequency response as a generic type of intrinsic fingerprint that can facilitate the identification of various linear and shift-invariant (LSI) and non-LSI operations.
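The idea of an empirical frequency response as a fingerprint can be illustrated by probing an unknown LSI operation and taking the spectral ratio of output to input; the 3-tap moving average below is a purely illustrative stand-in for an in-device processing step.

```python
import numpy as np

def empirical_freq_response(x_in, x_out, eps=1e-8):
    """Magnitude of the empirical frequency response |Y/X| estimated
    from one input/output pair of an unknown LSI operation."""
    X = np.fft.rfft(x_in)
    Y = np.fft.rfft(x_out)
    return np.abs(Y) / (np.abs(X) + eps)

# Probe a hypothetical operation (a 3-tap moving average) with a unit
# impulse; the spectral ratio then reveals its low-pass signature.
x = np.zeros(256)
x[128] = 1.0
y = np.convolve(x, np.ones(3) / 3.0, mode="same")
H = empirical_freq_response(x, y)
assert abs(H[0] - 1.0) < 1e-6   # unity gain at DC
assert H[-1] < 0.5              # attenuation near Nyquist
```

The shape of H characterizes the operation independently of the probe content, which is why such a response can serve as a generic fingerprint across different LSI filters; for non-LSI operations the measured response is only an empirical approximation.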

    Automated dental identification: A micro-macro decision-making approach

    Identification of deceased individuals based on dental characteristics is receiving increased attention, especially with the large volume of victims encountered in mass disasters. In this work we consider three important problems in automated dental identification beyond the basic approach of tooth-to-tooth matching. The first problem is the automatic classification of teeth into incisors, canines, premolars, and molars as part of creating a data structure that guides tooth-to-tooth matching, thus avoiding illogical comparisons that inefficiently consume the limited computational resources and may also mislead the decision-making. We tackle this problem using principal component analysis and string matching techniques. We reconstruct the segmented teeth using the eigenvectors of the image subspaces of the four tooth classes, and then assign the tooth classes that achieve the least energy discrepancy between the novel teeth and their approximations. We exploit tooth-neighborhood rules to validate tooth classes and hence assign each tooth a number corresponding to its location in a dental chart. Our approach achieves 82% tooth-labeling accuracy on a large test dataset of bitewing films. Because dental radiographic films capture projections of distinct teeth, and often multiple views of each tooth, in the second problem we seek a scheme that exploits tooth multiplicity to achieve more reliable match decisions when comparing the dental records of a subject and a candidate match. Hence, we propose a hierarchical fusion scheme that utilizes both aspects of tooth multiplicity for improving tooth-level (micro) and case-level (macro) decision-making. We achieve a genuine accept rate in excess of 85%. In the third problem we study the performance limits of dental identification imposed by the capabilities of the features. We consider two types of features used in dental identification, namely tooth contours and appearance features. We propose a methodology for determining the number of degrees of freedom possessed by a feature set, as a figure of merit, based on modeling joint distributions using copulas under less stringent assumptions on the dependence between feature dimensions. We also offer workable approximations of this approach.
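The minimum-energy-discrepancy classification rule can be sketched as follows; the class names and the toy subspaces in the example are hypothetical, standing in for eigenvector bases learned from real tooth images.

```python
import numpy as np

def classify(sample, class_bases, class_means):
    """Assign `sample` to the class whose PCA subspace reconstructs it
    with the least energy discrepancy (reconstruction error)."""
    errors = {}
    for name, basis in class_bases.items():
        centred = sample - class_means[name]
        proj = basis @ (basis.T @ centred)   # projection onto the subspace
        errors[name] = np.linalg.norm(centred - proj)
    return min(errors, key=errors.get)

# Hypothetical 4-D toy example: two classes with orthogonal 2-D subspaces.
bases = {"incisor": np.eye(4)[:, :2], "molar": np.eye(4)[:, 2:]}
means = {"incisor": np.zeros(4), "molar": np.zeros(4)}
assert classify(np.array([1.0, 2.0, 0.0, 0.0]), bases, means) == "incisor"
```

In the described system, this per-tooth decision is then cross-checked against tooth-neighborhood rules (e.g., a molar cannot sit between two incisors), which is where the string-matching step enters.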

    Walk This Way: Footwear Recognition Using Images & Neural Networks

    Footwear prints are among the most commonly recovered types of evidence in criminal investigations. They can be used to discover a criminal's identity and to connect various crimes. Current footwear recognition techniques are time-consuming because of the methods used to extract the shoe-print layout, such as plaster casting, gel lifting, and 3D-imaging techniques. Traditional techniques are prone to human error and waste valuable investigative time, which can be a problem for timely investigations. With 3D-imaging techniques, one issue is that footwear prints can be blurred or partially missing, which makes fully automated recognition and comparison inaccurate. Hence, this research investigates a footwear recognition model based on camera RGB images of the shoe print taken directly at the investigation site, reducing the time and cost of the investigative process. First, the model extracts the layout information of the evidence shoe print using known image processing techniques. The layout information is then sent to a hierarchical network of neural networks. Each layer of this network is examined in an attempt to process and recognize footwear features and narrow down the possible matches until the final result is returned to the investigator.

    Towards Digital Image Anti-Forensics via Image Restoration

    Image forensics enjoys increasing popularity as a powerful image authentication tool, working in a blind, passive way that, unlike fragile image watermarking, requires no a priori embedded information. On the opposing side, image anti-forensics attacks forensic algorithms to drive the future development of more trustworthy forensics. When image coding or processing is involved, image anti-forensics to some extent shares a goal with image restoration: both aim to recover the information lost during image degradation, yet image anti-forensics has one additional, indispensable requirement of forensic undetectability. In this thesis, we form a new research line for image anti-forensics by leveraging advanced concepts and methods from image restoration while integrating anti-forensic strategies and terms. In this context, the thesis contributes to the following four aspects of JPEG compression and median filtering anti-forensics: (i) JPEG anti-forensics using Total Variation based deblocking; (ii) improved Total Variation based JPEG anti-forensics with assignment-problem-based perceptual DCT histogram smoothing; (iii) JPEG anti-forensics using JPEG image quality enhancement based on a sophisticated image prior model and non-parametric DCT histogram smoothing based on calibration; and (iv) median-filtered image quality enhancement and anti-forensics via variational deconvolution. Experimental results demonstrate the effectiveness of the proposed anti-forensic methods, with better forensic undetectability against existing forensic detectors as well as higher visual quality of the processed image, in comparison with state-of-the-art methods.