
    Image statistical frameworks for digital image forensics

    The advances of digital cameras, scanners, printers, image-editing tools, smartphones, tablet personal computers, and high-speed networks have made the digital image a conventional medium for visual information. Creation, duplication, distribution, or tampering of such a medium can be done easily, which calls for the ability to trace back the authenticity or history of the medium. Digital image forensics is an emerging research area that aims to resolve this problem and has grown in popularity over the past decade. Anti-forensics, on the other hand, has emerged over the past few years as a relatively new branch of research, aiming to reveal the weaknesses of forensic technology. These two sides of research push digital image forensic technologies to the next level. Three major contributions are presented in this dissertation. First, an effective multi-resolution image statistical framework for passive-blind digital image forensics is presented in the frequency domain. The framework is generated by applying the Markovian rake transform to the image luminance component. The Markovian rake transform is the application of a Markov process to difference arrays derived from quantized block discrete cosine transform 2-D arrays with multiple block sizes. The efficacy and universality of the framework are then evaluated in two major applications of digital image forensics: 1) digital image tampering detection; 2) classification of computer graphics and photographic images. Second, a simple yet effective anti-forensic scheme is proposed, capable of obfuscating double JPEG compression artifacts, which may carry vital information for image forensics, for instance, digital image tampering detection. The proposed scheme, the shrink-and-zoom (SAZ) attack, is based simply on image resizing and bilinear interpolation. The effectiveness of SAZ has been evaluated on two promising double JPEG compression detection schemes, and the outcome reveals that the proposed scheme is effective, especially in cases where the first quality factor is lower than the second quality factor. Third, an advanced textural image statistical framework in the spatial domain is proposed, utilizing local binary pattern (LBP) schemes to model local image statistics on various kinds of residual images, including higher-order ones. The proposed framework can be implemented in either a single- or multi-resolution setting, depending on the application of interest. The efficacy of the proposed framework is evaluated on two forensic applications: 1) steganalysis, with emphasis on HUGO (Highly Undetectable Steganography), an advanced steganographic scheme that embeds hidden data in a content-adaptive manner, locally, into image regions that are difficult to model statistically; 2) image recapture detection (IRD). The outcomes of the evaluations suggest that the proposed framework is effective not only for detecting local changes, which is in line with the nature of HUGO, but also for detecting global differences (the nature of IRD).
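    The LBP features used in the third contribution lend themselves to a compact illustration. The sketch below is not the dissertation's exact multi-resolution framework; the function names and the choice of a first-order horizontal residual are illustrative assumptions. It computes basic 8-neighbour LBP codes and the 256-bin histogram that would serve as a per-image feature vector:

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 local binary pattern: each pixel is compared with its
    8 neighbours; a neighbour >= centre contributes a bit to an 8-bit code."""
    # Neighbour offsets in clockwise order starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    centre = img[1:-1, 1:-1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= centre).astype(np.uint8) << bit
    return codes

def lbp_histogram(img):
    """Normalised 256-bin histogram of LBP codes -- the feature vector."""
    codes = lbp_codes(img.astype(np.int16))
    hist = np.bincount(codes.ravel(), minlength=256)
    return hist / hist.sum()

def residual(img):
    """First-order horizontal residual (difference) image, on which the
    LBP statistics would be computed instead of raw pixels."""
    img = img.astype(np.int16)
    return img[:, 1:] - img[:, :-1]
```

Modelling LBP statistics on residuals rather than raw pixels suppresses image content and emphasises the high-frequency traces that embedding or recapture leaves behind.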

    Review of steganalysis of digital images

    Steganography is the science and art of embedding hidden messages into cover multimedia such as text, image, audio, and video. Steganalysis is the counterpart of steganography, which aims to determine whether data is hidden inside a digital medium. In this study, specific steganographic schemes such as HUGO and LSB are studied, along with the steganalytic schemes developed to detect their hidden messages. Furthermore, some new approaches, such as deep learning and game theory, which have seldom been utilized in steganalysis before, are studied. In the rest of this thesis study, steganalytic schemes using textural features, including LDP and LTP, have been implemented.
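    A minimal sketch of LSB replacement, the simplest embedding scheme mentioned above, may help make the steganography/steganalysis pairing concrete. The function names and the flat pixel layout are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def lsb_embed(pixels, bits):
    """LSB replacement: overwrite the least significant bit of each of the
    first len(bits) pixels with one message bit. Returns a new array."""
    out = pixels.copy()
    out[:len(bits)] = (out[:len(bits)] & 0xFE) | np.asarray(bits, dtype=out.dtype)
    return out

def lsb_extract(pixels, n):
    """Recover the first n embedded bits from the stego image."""
    return (pixels[:n] & 1).tolist()
```

LSB replacement is easy to detect precisely because it only ever flips values within even/odd histogram pairs (2k becomes 2k+1 and vice versa, never 2k-1), a statistical asymmetry that classical chi-square steganalysis exploits.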

    Digital forensic techniques for the reverse engineering of image acquisition chains

    In recent years a number of new methods have been developed to detect image forgery. Most forensic techniques use footprints left on images to predict the history of the images. The images, however, may have gone through a series of processing and modification steps during their lifetime. It is therefore difficult to detect image tampering, as the footprints could be distorted or removed over a complex chain of operations. In this research we propose digital forensic techniques that allow us to reverse engineer and determine the history of images that have gone through chains of image acquisition and reproduction. This thesis presents two different approaches to the problem. In the first part we propose a novel theoretical framework for the reverse engineering of signal acquisition chains. Based on a simplified chain model, we describe how signals evolve through the chain at different stages using the theory of sampling signals with finite rate of innovation. Under particular conditions, our technique allows us to detect whether a given signal has been reacquired through the chain. It also makes it possible to predict important parameters of the chain from acquisition-reconstruction artefacts left on the signal. The second part of the thesis presents our new algorithm for image recapture detection based on edge blurriness. Two overcomplete dictionaries are trained using the K-SVD approach to learn distinctive blurring patterns from sets of single-captured and recaptured images. An SVM classifier is then built using the dictionary approximation errors and the mean edge spread width from the training images. The algorithm, which requires no user intervention, was tested on a database that included more than 2500 high-quality recaptured images. Our results show that our method achieves a performance rate that exceeds 99% for recaptured images and 94% for single-captured images.
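    The recapture-detection idea of comparing approximation errors under two trained dictionaries can be sketched as follows. This is a simplified stand-in: a plain least-squares fit replaces the sparse coding typically paired with K-SVD dictionaries, and the names and feature definition are illustrative, not the thesis's exact pipeline:

```python
import numpy as np

def approx_error(D, patch):
    """Reconstruction error of a flattened patch under dictionary D
    (columns = atoms), using a least-squares fit as a simple proxy
    for a sparse coder such as OMP."""
    coef, *_ = np.linalg.lstsq(D, patch, rcond=None)
    return float(np.linalg.norm(D @ coef - patch))

def recapture_feature(D_single, D_recap, patch):
    """Discriminative feature (used alongside edge-spread width in the
    thesis): error under the single-capture dictionary minus error under
    the recaptured-image dictionary. Positive values favour 'recaptured',
    since the recapture dictionary then explains the patch better."""
    return approx_error(D_single, patch) - approx_error(D_recap, patch)
```

In the full method these per-patch errors, aggregated over edge regions, feed an SVM rather than a hard sign test.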

    Image and Video Forensics

    Nowadays, images and videos have become the main modalities of information exchanged in everyday life, and their pervasiveness has led the image forensics community to question their reliability, integrity, confidentiality, and security. Multimedia contents are generated in many different ways through the use of consumer electronics and high-quality digital imaging devices, such as smartphones, digital cameras, tablets, and wearable and IoT devices. The ever-increasing convenience of image acquisition has facilitated instant distribution and sharing of digital images on digital social platforms, generating a great amount of exchanged data. Moreover, the pervasiveness of powerful image-editing tools has allowed the manipulation of digital images for malicious or criminal ends, up to the creation of synthesized images and videos with the use of deep learning techniques. In response to these threats, the multimedia forensics community has produced major research efforts regarding the identification of the source and the detection of manipulation. In all cases (e.g., forensic investigations, fake news debunking, information warfare, and cyberattacks) where images and videos serve as critical evidence, forensic technologies that help to determine the origin, authenticity, and integrity of multimedia content can become essential tools. This book aims to collect a diverse and complementary set of articles that demonstrate new developments and applications in image and video forensics to tackle new and serious challenges to ensuring media authenticity.

    Passive Techniques for Detecting and Locating Manipulations in Digital Images

    Unpublished thesis of the Universidad Complutense de Madrid, Facultad de Informática, defended on 19-11-2020. The number of digital cameras integrated into mobile devices, as well as their use in everyday life, is continuously growing. Every day a large number of digital images, whether generated by this type of device or not, circulate on the Internet or are used as evidence in legal proceedings. Consequently, the forensic analysis of digital images has become important in many real-life situations. Forensic analysis of digital images is divided into two main branches: verifying the authenticity of digital images and identifying the source of acquisition of an image. The first attempts to discern whether an image has undergone any processing subsequent to its creation, i.e. that it has not been manipulated. The second aims to identify the device that generated the digital image. Verification of the authenticity of digital images can be carried out using both active and passive forensic analysis techniques. The active techniques are based on the fact that digital images carry "marks" present since their creation, so that any type of alteration made after their generation will modify them and therefore allow detection of any post-processing or manipulation. Passive techniques, on the other hand, perform the analysis of authenticity by extracting characteristics from the image...

    Biometric Systems

    Biometric authentication has been widely used for access control and security systems over the past few years. The purpose of this book is to provide readers with the life cycle of different biometric authentication systems, from their design and development to qualification and final application. The major systems discussed in this book include fingerprint identification, face recognition, iris segmentation and classification, signature verification, and other miscellaneous systems covering management policies of biometrics, reliability measures, pressure-based typing and signature verification, bio-chemical systems, and behavioral characteristics. In summary, this book provides students and researchers with different approaches to developing biometric authentication systems and at the same time includes state-of-the-art approaches in their design and development. The approaches have been thoroughly tested on standard databases and in real-world applications.

    Copy-move forgery detection in digital images

    The ready availability of image-editing software makes it important to ensure the authenticity of images. This thesis concerns the detection and localization of cloning, or Copy-Move Forgery (CMF), the most common type of image tampering, in which part(s) of the image are copied and pasted back somewhere else in the same image. Post-processing can be used to produce more realistic doctored images and thus can increase the difficulty of detecting forgery. This thesis presents three novel methods for CMF detection, using feature extraction, surface fitting, and segmentation. The Dense Scale Invariant Feature Transform (DSIFT) has been improved by using a different method to estimate the canonical orientation of each circular block. The Fitting Function Rotation Invariant Descriptor (FFRID) has been developed by using the least squares method to fit the parameters of a quadratic function to the curvatures of each block. In the segmentation approach, three different methods were tested: SLIC superpixels, the Bag of Words Image, and the Rolling Guidance filter with the multi-thresholding method. We also developed the Segment Gradient Orientation Histogram (SGOH) to describe the gradient of irregularly shaped blocks (segments). The experimental results illustrate that our proposed algorithms can detect forgery in images containing copy-move objects with different types of transformation (translation, rotation, scaling, distortion, and combined transformation). Moreover, the proposed methods are robust to post-processing (i.e. blurring, brightness change, colour reduction, JPEG compression, variations in contrast, and added noise) and can detect multiple duplicated objects. In addition, we developed a new method to estimate the similarity threshold for each image by optimizing a cost function based on a probability distribution. This method can detect CMF better than using a fixed threshold for all the test images, because it reduces the false positives and the time required to estimate a threshold for different images in the dataset. Finally, we used hysteresis to decrease the number of false matches and produce the best possible result.
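    The core intuition of copy-move detection, finding regions of one image that match each other, can be shown with an exact block-matching baseline. This is far simpler than the DSIFT/FFRID/segmentation methods described above (it survives no geometric transformation or post-processing), and all names and parameters here are illustrative:

```python
import numpy as np
from collections import defaultdict

def find_duplicate_blocks(img, block=4, min_dist=4):
    """Baseline copy-move detector: slide a block x block window over a
    2-D image and report pairs of byte-identical blocks whose positions
    are far enough apart to rule out flat neighbouring regions."""
    seen = defaultdict(list)
    h, w = img.shape
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            # Raw bytes of the block serve as an exact-match hash key.
            key = img[y:y + block, x:x + block].tobytes()
            seen[key].append((y, x))
    pairs = []
    for positions in seen.values():
        for i in range(len(positions)):
            for j in range(i + 1, len(positions)):
                (y1, x1), (y2, x2) = positions[i], positions[j]
                if abs(y1 - y2) + abs(x1 - x2) >= min_dist:
                    pairs.append((positions[i], positions[j]))
    return pairs
```

Replacing the exact byte key with a robust per-block descriptor (DCT coefficients, or the thesis's DSIFT/FFRID features) is what makes such matching survive rotation, scaling, and post-processing.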

    Biometrics

    Biometrics uses methods for the unique recognition of humans based upon one or more intrinsic physical or behavioral traits. In computer science, in particular, biometrics is used as a form of identity access management and access control. It is also used to identify individuals in groups that are under surveillance. The book consists of 13 chapters, each focusing on a certain aspect of the problem. The chapters are divided into three sections: physical biometrics, behavioral biometrics, and medical biometrics. The key objective of the book is to provide a comprehensive reference and text on human authentication and people identity verification from physiological, behavioural, and other points of view. It aims to publish new insights into current innovations in computer systems and technology for biometrics development and its applications. The book was reviewed by the editor, Dr. Jucheng Yang, and many of the guest editors, including Dr. Girija Chetty, Dr. Norman Poh, Dr. Loris Nanni, Dr. Jianjiang Feng, Dr. Dongsun Park, and Dr. Sook Yoon, who also made significant contributions to the book.