
    Multimedia Forensics

    This book is open access. Media forensics has never been more relevant to societal life. Not only does media content represent an ever-increasing share of the data traveling on the net and the preferred means of communication for most users, it has also become an integral part of the most innovative applications in the digital information ecosystem serving various sectors of society, from entertainment to journalism to politics. Undoubtedly, advances in deep learning and computational imaging have contributed significantly to this outcome. The underlying technologies that drive this trend, however, also pose a profound challenge to establishing trust in what we see, hear, and read, and make media content the preferred target of malicious attacks. In this new threat landscape, powered by innovative imaging technologies and sophisticated tools based on autoencoders and generative adversarial networks, this book fills an important gap. It presents a comprehensive review of state-of-the-art forensic capabilities relating to media attribution, integrity and authenticity verification, and counter-forensics. Its content is developed to give practitioners, researchers, photo and video enthusiasts, and students a holistic view of the field.

    Fundamental Limits in Multimedia Forensics and Anti-forensics

    As the use of multimedia editing tools increases, people have begun to question the authenticity of multimedia content. This is an especially serious concern for authorities, such as law enforcement, news reporters and governments, who constantly use multimedia evidence to make critical decisions. To verify the authenticity of multimedia content, many forensic techniques have been proposed to identify the processing history of the multimedia content in question. However, as new technologies emerge and more complicated scenarios are considered, forensic researchers have gradually come to recognize the limitations of multimedia forensics. Exploring these fundamental limits is the inevitable next step for the field. In this dissertation, we propose several theoretical frameworks to study the fundamental limits in various forensic problems. Specifically, we begin by developing empirical forensic techniques to address the limitations of existing techniques in the face of an emerging technology, compressive sensing. Then, we go one step further and explore the fundamental limits of forensic performance. Two types of forensic problems are examined. In operation forensics, we propose an information-theoretic framework and define forensicability as the maximum information that features contain about hypotheses of processing histories. Based on this framework, we determine the maximum number of JPEG compressions one can detect. In order forensics, an information-theoretic criterion is proposed to determine when we can and cannot detect the order of manipulation operations applied to multimedia content. Additionally, we examine the fundamental tradeoffs in multimedia anti-forensics, where forgers develop attacking techniques to conceal manipulation fingerprints and confuse forensic investigations. In this field, we define concealability as the effectiveness of anti-forensics in concealing manipulation fingerprints. Then, a tradeoff between concealability, rate and distortion is proposed and characterized for compression anti-forensics, which provides valuable insight into how forgers may behave under their best strategy.
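The forensicability idea in the abstract above can be illustrated with a toy calculation: treating forensicability as the mutual information between an observed feature and the processing-history hypothesis. The distributions below are invented for illustration, not taken from the dissertation (a minimal sketch, assuming a single discretized feature and two hypotheses such as single vs. double JPEG compression):

```python
import numpy as np

# Hypothetical conditional distributions p(feature | hypothesis) for two
# processing histories. These numbers are illustrative, not measured data.
p_f_given_h = np.array([
    [0.70, 0.20, 0.10],   # h0: single compression
    [0.25, 0.35, 0.40],   # h1: double compression
])
prior = np.array([0.5, 0.5])  # equal priors over the hypotheses

def entropy(p):
    """Shannon entropy in bits, ignoring zero-probability bins."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Marginal feature distribution p(f) = sum_h p(h) p(f|h)
p_f = prior @ p_f_given_h

# Forensicability as mutual information I(F; H) = H(F) - H(F|H)
cond_entropy = np.sum(prior * np.array([entropy(row) for row in p_f_given_h]))
forensicability = entropy(p_f) - cond_entropy
print(round(forensicability, 3))
```

With two equiprobable hypotheses the result is bounded by 1 bit; a value near zero would mean the feature carries almost no evidence about the processing history.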

    Colour depth-from-defocus incorporating experimental point spread function measurements

    Depth-From-Defocus (DFD) is a monocular computer vision technique for creating depth maps from two images taken on the same optical axis with different intrinsic camera parameters. A pre-processing stage that optimally converts colour images to monochrome using a linear combination of the colour planes has been shown to improve the accuracy of the depth map. It was found that the first component formed using Principal Component Analysis (PCA), and a technique that maximises the signal-to-noise ratio (SNR), performed better than an equal weighting of the colour planes under an additive noise model. When the noise is non-isotropic, maximising the SNR improved the Mean Square Error (MSE) of the depth map by a factor of 7.8 compared to equal weighting and 1.9 compared to PCA. The fractal dimension (FD) of a monochrome image gives a measure of its roughness, and an algorithm was devised to maximise the FD through colour mixing. The formulation using a fractional Brownian motion (fBm) model reduced the SNR and thus produced depth maps that were less accurate than those from PCA or equal weighting. An active DFD algorithm to reduce the image-overlap problem, called Localisation through Colour Mixing (LCM), has been developed; it uses a projected colour pattern. Simulation results showed that LCM produces an MSE 9.4 times lower than equal weighting and 2.2 times lower than PCA. The Point Spread Function (PSF) of a camera system models how a point source of light is imaged. For depth maps to be created accurately using DFD, a high-precision PSF must be known. Improvements to a sub-sampled, knife-edge-based technique are presented that account for non-uniform illumination of the light box; these reduced the MSE by 25%. The Generalised Gaussian is presented as a model of the PSF and shown to be up to 16 times better than the conventional Gaussian and pillbox models.
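The colour-to-monochrome pre-processing step described above can be sketched in a few lines. This is an illustrative reconstruction, not the thesis code: it mixes the R, G and B planes using the weights given by the first principal component, with a random image standing in for real data:

```python
import numpy as np

# Minimal sketch: convert an RGB image to monochrome using the first
# principal component of the colour planes, one of the mixing strategies
# compared for depth-from-defocus pre-processing.
rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))           # stand-in for a captured RGB frame

pixels = img.reshape(-1, 3)
centered = pixels - pixels.mean(axis=0)

# First principal component = eigenvector of the 3x3 colour covariance
# matrix with the largest eigenvalue (eigh returns ascending eigenvalues).
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
w = eigvecs[:, -1]                      # unit-norm mixing weights for R, G, B

mono = (centered @ w).reshape(img.shape[:2])
print(mono.shape)
```

By construction the PC1 projection has at least as much variance as the equal-weighting projection, which is why it can preserve more of the defocus signal.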

    Non-destructive detection of counterfeit and substandard medicines using X-ray diffraction

    The prevalence of counterfeit and substandard medicines has grown rapidly over the past decade, and fast, non-destructive techniques for their detection are urgently needed to counter this trend. In this thesis, both energy-dispersive X-ray diffraction (EDXRD) and pixelated diffraction (“PixD”) combined with chemometric methods were assessed for their effectiveness in detecting poor-quality medicines within their packaging. Firstly, a series of caffeine, paracetamol and cellulose mixtures of known concentrations were pressed into tablets. EDXRD spectra of each tablet were collected both with and without packaging. Principal component analysis (PCA) and partial least-squares regression (PLSR) were used to study the data and construct calibration models for quantitative analysis. The concentration prediction errors for the packaged data were found to be very similar to those obtained in the unpackaged case, and were also on a par with values reported in the literature using higher-resolution angular-dispersive X-ray diffraction (ADXRD). Following this, soft independent modelling by class analogy (SIMCA) classification was used to compare EDXRD spectra from a test set of over-the-counter (OTC) medicines containing various combinations of active pharmaceutical ingredients (APIs) against PCA models constructed from spectra of paracetamol and ibuprofen samples. The test samples were selected to emulate different levels of difficulty in authenticating medicines correctly, ranging from completely different APIs (easy) to those with a small quantity of additional API (difficult). This classification study found that sensitivity and specificity were optimal at data acquisition times on the order of 75 to 150 s, regardless of whether layers of blister and card packaging surrounded the tablet in question. The experiment was repeated on a novel, compact system incorporating a pixelated detector, which was found to reduce the data acquisition times required for optimal classification by a factor of five.

    Extracción y análisis de características para identificación, agrupamiento y modificación de la fuente de imágenes generadas por dispositivos móviles

    Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Informática, Departamento de Ingeniería del Software e Inteligencia Artificial, defended 02/10/2017. Nowadays, digital images play an important role in our society. The presence of mobile devices with integrated cameras is growing at an unrelenting pace, resulting in the majority of digital images coming from this kind of device. Technological development not only facilitates the generation of these images, but also their malicious manipulation. It is therefore of interest to have tools that allow the device that generated a given digital image to be identified. The source of a digital image can be identified through the features that the generating device imprints on it during the creation process. In recent years, most research on source identification techniques has focused solely on traditional cameras. Forensic analysis techniques for digital images generated by mobile devices are therefore of particular importance, since such devices have specific characteristics that allow for better results, and forensic techniques for digital images generated by other kinds of devices are often not valid. This thesis provides various contributions in two of the main research lines of forensic analysis: identification techniques, and counter-forensics or attacks on these techniques. In the field of digital image source acquisition identification, both closed and open scenarios are addressed. In closed scenarios, the images whose acquisition source is to be determined belong to a group of devices known a priori. An open scenario, meanwhile, is one in which the images under analysis belong to a set of devices not known a priori by the forensic analyst. In this case, the objective is not the identification of a concrete image acquisition source, but the classification of the images into groups that all belong to the same mobile device. Image clustering techniques are of particular interest in real situations, since in many cases the forensic analyst does not know a priori which devices generated certain images. Firstly, techniques are proposed for identifying the device type (computer, scanner or digital camera of a mobile device) or class (make and model) of the image acquisition source for mobile devices, which are two relevant branches of the forensic analysis of mobile device images. An approach based on different types of image features and a Support Vector Machine classifier is presented. Secondly, a technique for identification in open scenarios is developed that groups digital images from mobile devices according to their acquisition source; that is, a class-grouping of all input images is performed. The proposal is based on the combination of hierarchical clustering and flat clustering using the Sensor Pattern Noise. Lastly, in the area of attacks on forensic techniques, topics related to the robustness of image source identification forensic techniques are addressed. For this, two new algorithms based on the sensor noise and the wavelet transform are designed: one for the destruction of the image's identity and another for its forgery. Results obtained by the two algorithms were compared with other tools designed for the same purpose. It is worth mentioning that the solution presented in this work requires a smaller amount and lower complexity of input data than the tools to which it was compared. Finally, these identification techniques have been included in Theia, a tool for the forensic analysis of digital images from mobile devices. Among the different branches of forensic analysis, Theia focuses mainly on the trustworthy identification of the make and model of the mobile camera that generated a given image.
    All proposed algorithms have been implemented and integrated in Theia, thus strengthening its functionality.
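The closed-scenario identification pipeline (per-image features fed to an SVM classifier) can be sketched as follows. The feature vectors here are synthetic Gaussian clusters standing in for the thesis's real image features, and the three "camera models" are hypothetical:

```python
import numpy as np
from sklearn.svm import SVC

# Hedged sketch of closed-scenario source identification: classify the
# camera model of an image from its feature vector with a support vector
# machine. Synthetic features, not the thesis's wavelet/noise features.
rng = np.random.default_rng(2)
n_per_model, n_feat = 50, 12
centers = rng.normal(0, 3, (3, n_feat))    # three hypothetical camera models

X = np.vstack([c + rng.normal(0, 1, (n_per_model, n_feat)) for c in centers])
y = np.repeat([0, 1, 2], n_per_model)      # ground-truth model labels

clf = SVC(kernel="rbf", gamma="scale").fit(X[::2], y[::2])   # train on half
acc = clf.score(X[1::2], y[1::2])                            # test on the rest
print(round(acc, 2))
```

In the open scenario the labels are unknown, so the same kind of features would instead be fed to a clustering stage (hierarchical plus flat, using the Sensor Pattern Noise in the thesis).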