63 research outputs found

    Hybrid LSTM and Encoder-Decoder Architecture for Detection of Image Forgeries

    With advanced image journaling tools, one can easily alter the semantic meaning of an image through manipulation techniques such as copy-clone, object splicing, and object removal, which mislead viewers. Identifying these manipulations, however, is very challenging because manipulated regions are not visually apparent. This paper proposes a high-confidence manipulation-localization architecture that combines resampling features, Long Short-Term Memory (LSTM) cells, and an encoder-decoder network to segment manipulated regions from non-manipulated ones. Resampling features capture artifacts such as JPEG quality loss, upsampling, downsampling, rotation, and shearing. The proposed network exploits larger receptive fields (spatial maps) and frequency-domain correlation to learn the discriminative characteristics of manipulated versus non-manipulated regions through the encoder and LSTM components. Finally, the decoder network learns the mapping from low-resolution feature maps to pixel-wise predictions for image tamper localization. Using the predicted mask from the final (softmax) layer, the network is trained end to end by back-propagation against ground-truth masks. Furthermore, a large image-splicing dataset is introduced to guide the training process. The proposed method localizes image manipulations at the pixel level with high precision, as demonstrated through rigorous experimentation on three diverse datasets.
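The resampling artifacts these features target can be made concrete with a toy experiment (an illustrative sketch, not the paper's actual feature extractor): linear upsampling makes every interpolated sample an exact average of its neighbors, so a linear-prediction residual vanishes with period 2, the kind of periodicity that resampling detectors pick up.

```python
import random

def upsample2(signal):
    """Linearly upsample by 2: insert the average of each neighboring pair."""
    out = []
    for a, b in zip(signal, signal[1:]):
        out.extend([a, (a + b) / 2.0])
    out.append(signal[-1])
    return out

def prediction_residuals(signal):
    """Absolute residual of predicting each sample from its two neighbors."""
    return [abs(signal[i] - (signal[i - 1] + signal[i + 1]) / 2.0)
            for i in range(1, len(signal) - 1)]

random.seed(0)
original = [random.random() for _ in range(32)]
resampled = upsample2(original)
res = prediction_residuals(resampled)

# Interpolated samples sit at odd indices of `resampled`; their residuals are
# exactly zero, giving a strong period-2 pattern absent from the original.
periodic = [res[i] for i in range(0, len(res), 2)]
```

A real detector works on 2-D images and noisy estimates, so it looks for peaks in the spectrum of the residual rather than exact zeros, but the underlying periodic linear dependence is the same.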

    Statistical Tools for Digital Image Forensics

    A digitally altered image, often leaving no visual clues of having been tampered with, can be indistinguishable from an authentic image. The tampering, however, may disturb some underlying statistical properties of the image. Under this assumption, we propose five techniques that quantify and detect the statistical perturbations found in different forms of tampered images: (1) re-sampled images (e.g., scaled or rotated); (2) manipulated color filter array interpolated images; (3) double JPEG compressed images; (4) images with duplicated regions; and (5) images with inconsistent noise patterns. These techniques work in the absence of any embedded watermarks or signatures. For each technique we develop the theoretical foundation, show its effectiveness on credible forgeries, and analyze its sensitivity and robustness to simple counter-attacks.
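Technique (4), detecting duplicated regions, can be illustrated with a deliberately naive sketch. Practical detectors match quantized, lexicographically sorted block features so that they tolerate post-processing; hashing exact pixel blocks, as below, only catches verbatim clones, but it shows the basic block-matching idea.

```python
def find_duplicate_blocks(image, block=4):
    """Return pairs of top-left corners of identical block x block regions."""
    h, w = len(image), len(image[0])
    seen = {}    # block contents -> first top-left corner where it appeared
    pairs = []
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            key = tuple(tuple(image[y + dy][x + dx] for dx in range(block))
                        for dy in range(block))
            if key in seen:
                pairs.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return pairs

# Toy 16x16 image with all-distinct pixel values, then a 4x4 patch copied
# from (0, 0) to (8, 8) to simulate a copy-move forgery.
img = [[16 * y + x for x in range(16)] for y in range(16)]
for dy in range(4):
    for dx in range(4):
        img[8 + dy][8 + dx] = img[dy][dx]

matches = find_duplicate_blocks(img)
```

On this toy input the detector reports exactly one duplicated block pair, the cloned patch.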

    Digital forensic techniques for the reverse engineering of image acquisition chains

    In recent years a number of new methods have been developed to detect image forgery. Most forensic techniques use footprints left on images to infer their history. Images, however, may have gone through a series of processing and modification steps during their lifetime, so tampering can be hard to detect once the footprints have been distorted or removed over a complex chain of operations. In this research we propose digital forensic techniques that reverse engineer and determine the history of images that have passed through chains of image acquisition and reproduction. This thesis presents two approaches to the problem. In the first part we propose a novel theoretical framework for the reverse engineering of signal acquisition chains. Based on a simplified chain model, we describe how signals evolve through the chain at different stages using the theory of sampling signals with finite rate of innovation. Under particular conditions, our technique allows us to detect whether a given signal has been reacquired through the chain, and to estimate important parameters of the chain from the acquisition-reconstruction artefacts left on the signal. The second part of the thesis presents a new algorithm for image recapture detection based on edge blurriness. Two overcomplete dictionaries are trained with the K-SVD approach to learn distinctive blurring patterns from sets of single-captured and recaptured images. An SVM classifier is then built from the dictionary approximation errors and the mean edge spread width of the training images. The algorithm, which requires no user intervention, was tested on a database that included more than 2,500 high-quality recaptured images. Our results show that the method achieves a detection rate exceeding 99% for recaptured images and 94% for single-captured images.
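The edge spread width feature can be sketched in one dimension (an illustrative toy; the 10%/90% rise thresholds are an assumed convention, not necessarily the thesis's exact definition). Recapturing an image through a display and a second camera softens edges, so the rise of an edge profile spans more pixels.

```python
def edge_spread_width(profile):
    """Pixels spanned by the 10%-90% rise of a monotone edge profile."""
    lo, hi = min(profile), max(profile)
    t_lo = lo + 0.1 * (hi - lo)
    t_hi = lo + 0.9 * (hi - lo)
    start = next(i for i, v in enumerate(profile) if v >= t_lo)
    end = next(i for i, v in enumerate(profile) if v >= t_hi)
    return end - start

sharp = [0] * 8 + [100] * 8                         # ideal step edge
blurred = [0, 0, 0, 10, 25, 50, 75, 90, 100, 100]   # smoothed edge
```

Averaging this width over many detected edges yields one scalar per image; the full method combines it with the two dictionary approximation errors as the SVM input.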

    Machine learning based digital image forensics and steganalysis

    The security and trustworthiness of digital images have become crucial issues due to the ease of malicious processing. Therefore, research on image steganalysis (determining if a given image has secret information hidden inside) and image forensics (determining the origin and authenticity of a given image and revealing the processing history the image has gone through) has become crucial to the digital society. In this dissertation, the steganalysis and forensics of digital images are treated as pattern classification problems so that advanced machine learning (ML) methods become applicable. Three topics are covered: (1) architectural design of convolutional neural networks (CNNs) for steganalysis, (2) statistical feature extraction for camera model classification, and (3) real-world tampering detection and localization. For covert communication, steganography embeds secret messages into images by altering pixel values slightly. Since advanced steganography alters pixel values in image regions that are hard to detect, traditional ML-based steganalytic methods, which rely heavily on sophisticated hand-crafted features, have been pushed to their limits. To overcome this difficulty, in-depth studies are conducted and reported in this dissertation to transfer the success of CNNs in computer vision to steganalysis. The outcomes reported here are: (1) a CNN architecture incorporating the domain knowledge of steganography and steganalysis, and (2) ensemble methods of CNNs for steganalysis. The proposed CNN is currently one of the best classifiers against steganography. Camera model classification aims at assigning a given image to its source camera model based on the statistics of image pixel values. For this, two types of statistical features are designed to capture the traces left by in-camera image processing algorithms. The first is Markov transition probabilities modeling block-DCT coefficients for JPEG images; the second is based on histograms of local binary patterns obtained in both the spatial and wavelet domains. The designed features serve as input to train support vector machines, which achieved the best classification performance at the time the features were proposed. The last part of this dissertation documents the solutions delivered by the author's team to The First Image Forensics Challenge organized by the Information Forensics and Security Technical Committee of the IEEE Signal Processing Society. In the competition, all the fake images were doctored with popular image-editing software to simulate the real-world scenario of tampering detection (determining whether a given image has been tampered with) and localization (determining which pixels have been altered). In Phase-1 of the Challenge, advanced steganalysis features were successfully migrated to tampering detection. In Phase-2, an efficient copy-move detector, equipped with PatchMatch as a fast approximate nearest-neighbor search method, was developed to identify duplicated regions within images. With these tools, the author's team won the runner-up prize in both phases of the Challenge.
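The Markov transition-probability features can be sketched on a one-dimensional run of coefficients (an illustrative simplification; the dissertation computes them over 2-D block-DCT arrays). Differences between neighboring coefficients are clipped to [-T, T], and the (2T+1) x (2T+1) matrix of transition probabilities between successive difference values becomes the feature vector.

```python
def markov_features(values, T=2):
    """Flattened transition-probability matrix of the clipped difference array."""
    diffs = [max(-T, min(T, b - a)) for a, b in zip(values, values[1:])]
    counts = [[0] * (2 * T + 1) for _ in range(2 * T + 1)]
    for a, b in zip(diffs, diffs[1:]):
        counts[a + T][b + T] += 1          # row = current state, col = next state
    feats = []
    for row in counts:
        total = sum(row)
        feats.extend([c / total if total else 0.0 for c in row])
    return feats

coeffs = [0, 1, 0, 2, 5, 2, 0, 1, 0, 3]    # toy stand-in for DCT coefficients
f = markov_features(coeffs)                 # 25-dimensional feature vector
```

Feature vectors like this, computed per image, are what the support vector machines are trained on; different in-camera pipelines leave different transition statistics.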

    Extraction and analysis of features for identification, clustering, and modification of the source of images generated by mobile devices

    Unpublished thesis, Universidad Complutense de Madrid, Facultad de Informática, Departamento de Ingeniería del Software e Inteligencia Artificial, defended on 02/10/2017. Nowadays, digital images play an important role in our society. The presence of mobile devices with integrated cameras is growing at an unrelenting pace, so the majority of digital images now come from this kind of device. Technological development not only facilitates the generation of these images, but also their malicious manipulation. It is therefore of interest to have tools that can identify the device that generated a given digital image. The source of a digital image can be identified through the traits that the generating device imprints on it during the creation process. In recent years most research on source identification techniques has focused solely on traditional cameras. Forensic analysis techniques for digital images generated by mobile devices are therefore of particular importance, since these devices have specific characteristics that allow for better results, and forensic techniques for images generated by other kinds of devices are often not valid for them. This thesis provides contributions in two of the main research lines of forensic analysis: source identification techniques and counter-forensics, that is, attacks on those techniques. In the field of image acquisition source identification, both closed and open scenarios are addressed. In a closed scenario, the images whose acquisition source is to be determined belong to a group of devices known a priori. An open scenario is one in which the images under analysis belong to a set of devices not known a priori by the forensic analyst; the objective is then not to identify the concrete acquisition source but to classify the images into groups such that all images in a group belong to the same mobile device. Image clustering techniques are of particular interest in real situations, since in many cases the forensic analyst does not know a priori which devices generated certain images. Firstly, techniques are proposed for identifying the device type (computer, scanner, or mobile-device digital camera) or class (make and model) of the image acquisition source, two relevant branches of forensic analysis of mobile-device images. An approach based on different types of image features and a Support Vector Machine classifier is presented. Secondly, a technique for identification in open scenarios is developed that groups digital images of mobile devices according to their acquisition source, that is, a class-grouping of all input images is performed. The proposal combines hierarchical clustering and flat clustering using the Sensor Pattern Noise. Lastly, in the area of attacks on forensic techniques, the robustness of image source identification techniques is addressed. Two new algorithms based on sensor noise and the wavelet transform are designed, one to destroy the identity of an image and another to forge it. Results obtained by the two algorithms were compared with other tools designed for the same purpose; the solution presented in this work requires less input data, and of lower complexity, than the tools it was compared against. Finally, these identification techniques have been included in Theia, a tool for the forensic analysis of digital images from mobile devices. Among the different branches of forensic analysis, Theia focuses mainly on the trustworthy identification of the make and model of the mobile camera that generated a given image.
All proposed algorithms have been implemented and integrated in Theia, thus strengthening its functionality.
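The Sensor Pattern Noise matching that underlies both the clustering and the identity-destruction attack can be illustrated with a one-dimensional toy (assumptions: a 3-tap moving average stands in for the real denoising filter, and synthetic Gaussian patterns stand in for real sensor noise). The noise residual of a photo correlates more strongly with the reference pattern of the camera that actually took it.

```python
import random

def residual(signal):
    """Noise residual: signal minus a 3-tap moving-average denoised version."""
    n = len(signal)
    den = [(signal[max(i - 1, 0)] + signal[i] + signal[min(i + 1, n - 1)]) / 3.0
           for i in range(n)]
    return [s - d for s, d in zip(signal, den)]

def correlation(a, b):
    """Normalized (Pearson) correlation between two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

random.seed(1)
n = 256
spn_a = [random.gauss(0, 1) for _ in range(n)]    # camera A's pattern noise
spn_b = [random.gauss(0, 1) for _ in range(n)]    # camera B's pattern noise
scene = [random.gauss(128, 2) for _ in range(n)]  # scene content
image = [s + p for s, p in zip(scene, spn_a)]     # photo taken with camera A

r = residual(image)
ref_a = residual([128 + p for p in spn_a])        # flat-field reference, camera A
ref_b = residual([128 + p for p in spn_b])        # flat-field reference, camera B
```

Thresholding (or clustering on) these pairwise correlations is what groups images by source device; an identity-destruction attack works by suppressing exactly this residual.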

    Multimedia Forensics

    This book is open access. Media forensics has never been more relevant to societal life. Not only does media content represent an ever-increasing share of the data traveling on the net and the preferred means of communication for most users, it has also become an integral part of the most innovative applications in the digital information ecosystem serving various sectors of society, from entertainment to journalism to politics. Undoubtedly, advances in deep learning and computational imaging have contributed significantly to this outcome. The underlying technologies that drive this trend, however, also pose a profound challenge to establishing trust in what we see, hear, and read, and make media content the preferred target of malicious attacks. In this new threat landscape, powered by innovative imaging technologies and sophisticated tools based on autoencoders and generative adversarial networks, this book fills an important gap. It presents a comprehensive review of state-of-the-art forensic capabilities for media attribution, integrity and authenticity verification, and counter-forensics. Its content gives practitioners, researchers, photo and video enthusiasts, and students a holistic view of the field.