19 research outputs found

    Passive Techniques for Detecting and Locating Manipulations in Digital Images

    Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Informática, defended on 19-11-2020. The number of digital cameras integrated into mobile devices, as well as their use in everyday life, is continuously growing. Every day a large number of digital images, whether generated by this type of device or not, circulate on the Internet or are used as evidence in legal proceedings. Consequently, the forensic analysis of digital images becomes important in many real-life situations. Forensic analysis of digital images is divided into two main branches: authenticity of digital images and identification of the source of acquisition of an image. The first attempts to discern whether an image has undergone any processing after its creation, i.e. to verify that it has not been manipulated. The second aims to identify the device that generated the digital image. Verification of the authenticity of digital images can be carried out using both active and passive forensic analysis techniques. Active techniques rely on "marks" embedded in digital images at creation time, so that any alteration made after their generation will modify those marks and therefore allow detection of possible post-processing or manipulation. Passive techniques, on the other hand, perform the authenticity analysis by extracting characteristics from the image itself...

    Digital image forensics via meta-learning and few-shot learning

    Digital images are a substantial portion of the information conveyed by social media, the Internet, and television in our daily life. In recent years, digital images have become not only one of the public information carriers, but also a crucial piece of evidence. The widespread availability of low-cost, user-friendly, and potent image editing software and mobile phone applications facilitates altering images without professional expertise. Consequently, safeguarding the originality and integrity of digital images has become difficult. Forgers commonly use digital image manipulation to transmit misleading information. Digital image forensics investigates the irregular patterns that might result from image alteration, and it is crucial to information security. Over the past several years, machine learning techniques have been effectively used to identify image forgeries. Convolutional Neural Networks (CNNs) are a frequent machine learning approach, and a standard CNN model can distinguish between original and manipulated images. In this dissertation, two CNN models are introduced to recognize seam carving and Gaussian filtering. To train a conventional CNN model for a new but similar image forgery detection task, one must start from scratch. Additionally, many types of tampered image data are challenging to acquire or simulate. Meta-learning is an alternative learning paradigm in which a machine learning model gains experience across numerous related tasks and uses this expertise to improve its future learning performance. Few-shot learning is a method for acquiring knowledge from limited data; it can classify images with as few as one or two examples per class. Inspired by meta-learning and few-shot learning, this dissertation proposes a prototypical networks model capable of solving a collection of related image forgery detection problems. Unlike traditional CNN models, the proposed prototypical networks model does not need to be trained from scratch for a new task. Additionally, it drastically decreases the quantity of training images required.
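
    The prototypical-network idea described above can be sketched compactly: a shared embedder maps image patches to feature vectors, each forgery class is represented by the mean ("prototype") of its few support embeddings, and a query image is assigned to the nearest prototype. The Python/PyTorch sketch below is illustrative only; the embedder architecture, tensor shapes and the two-class episode (original vs. seam-carved) are assumptions, not the dissertation's actual model.

        # Minimal prototypical-network sketch for few-shot forgery detection.
        # Assumptions: PyTorch, 1x64x64 grayscale patches, random placeholder data.
        import torch
        import torch.nn as nn

        class Embedder(nn.Module):
            """Toy CNN mapping a 1x64x64 patch to a feature vector."""
            def __init__(self, dim=64):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Flatten(), nn.Linear(64 * 16 * 16, dim),
                )
            def forward(self, x):
                return self.net(x)

        def prototypes(embedder, support, labels, n_classes):
            """Class prototype = mean embedding of that class's support images."""
            emb = embedder(support)                            # (n_support, dim)
            return torch.stack([emb[labels == c].mean(0) for c in range(n_classes)])

        def classify(embedder, queries, protos):
            """Assign each query to the nearest prototype (Euclidean distance)."""
            dists = torch.cdist(embedder(queries), protos)     # (n_query, n_classes)
            return dists.argmin(dim=1)

        # Example episode: 2 classes (original vs. seam-carved), 5 shots each.
        embedder = Embedder()
        support = torch.randn(10, 1, 64, 64)                   # placeholder support set
        labels = torch.tensor([0] * 5 + [1] * 5)
        queries = torch.randn(4, 1, 64, 64)                    # placeholder query set
        protos = prototypes(embedder, support, labels, n_classes=2)
        print(classify(embedder, queries, protos))

    In a real episode the support and query tensors would hold labeled examples of the new forgery type, so adapting to it requires only a handful of examples rather than retraining from scratch.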

    Advancements in multi-view processing for reconstruction, registration and visualization.

    The ever-increasing diffusion of digital cameras and the advancements in computer vision, image processing, and storage capabilities have led, in recent years, to the wide diffusion of digital image collections. A set of digital images is usually referred to as a multi-view image set when the pictures cover different views of the same physical object or location. In multi-view datasets, correlations between images are exploited in many different ways to increase our capability to gather enhanced understanding of and information on a scene. For example, a collection can be enhanced by leveraging the camera position and orientation, or with information about the 3D structure of the scene. The range of applications of multi-view data is very wide, encompassing diverse fields such as image-based reconstruction, image-based localization, navigation of virtual environments, collective photographic retouching, computational photography, object recognition, etc. For all these reasons, the development of new algorithms to effectively create, process, and visualize this type of data is an active research trend. The thesis presents four advancements related to different aspects of multi-view data processing:
    - Image-based 3D reconstruction: a pre-processing algorithm, a special color-to-gray conversion, developed to improve the accuracy of image-based reconstruction algorithms (illustrated in the sketch below). In particular, we show how different dense stereo matching results can be enhanced by applying a domain separation approach that pre-computes a single optimized numerical value for each image location.
    - Image-based appearance reconstruction: a multi-view processing algorithm that can enhance the quality of the color transfer from multi-view images to a geo-referenced 3D model of a location of interest. The proposed approach computes virtual shadows and automatically segments shadowed regions in the input images, preventing those pixels from being used in subsequent texture synthesis.
    - 2D-to-3D registration: an unsupervised localization and registration system that can recognize a site framed in multi-view data and register the images to a pre-existing 3D representation. The system is highly accurate, validates its results in a completely unsupervised manner, and is accurate enough for the input images to be viewed seamlessly, correctly superimposed on the 3D location of interest.
    - Visualization: PhotoCloud, a real-time client-server system for interactive exploration of high-resolution 3D models and up to several thousand photographs aligned over this 3D data. PhotoCloud supports any 3D model that can be rendered in a depth-coherent way and arbitrary multi-view image collections. Moreover, it tolerates 2D-to-2D and 2D-to-3D misalignments, and it provides scalable visualization of generic integrated 2D and 3D datasets by exploiting data duality. A set of effective 3D navigation controls, tightly integrated with innovative thumbnail bars, enhances user navigation.
    These advancements have been developed in tourism and cultural heritage application contexts, but they are not limited to these contexts.
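
    The color-to-gray conversion in the first advancement can be illustrated with a generic data-driven decolorization: instead of fixed luminance weights, a per-image weighting is derived from the pixel statistics, here via PCA on the RGB values. The Python/NumPy sketch below only illustrates that general idea and is not the thesis's domain separation algorithm; the input image is a random placeholder.

        # Generic data-driven color-to-gray conversion (PCA decolorization).
        # Not the thesis's method; shown only to contrast with fixed luminance weights.
        import numpy as np

        def pca_to_gray(rgb):
            """Project RGB pixels onto their first principal component.

            rgb: float array of shape (H, W, 3) with values in [0, 1].
            Returns one optimized scalar per pixel, rescaled to [0, 1].
            """
            pixels = rgb.reshape(-1, 3)
            centered = pixels - pixels.mean(axis=0)
            cov = np.cov(centered, rowvar=False)          # 3x3 channel covariance
            eigvals, eigvecs = np.linalg.eigh(cov)
            weights = eigvecs[:, np.argmax(eigvals)]      # direction of max variance
            gray = centered @ weights
            gray = (gray - gray.min()) / (gray.max() - gray.min() + 1e-12)
            return gray.reshape(rgb.shape[:2])

        img = np.random.rand(480, 640, 3)                  # placeholder image
        luma = img @ np.array([0.299, 0.587, 0.114])       # fixed-weight luminance
        gray = pca_to_gray(img)                            # data-driven alternative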

    Optical Cryoimaging of Tissue Metabolism in Renal Injuries: Rodent Model

    Injured tissues are often accompanied by morphological or biochemical changes that can be detected optically. Therefore, it would be valuable to visualize changes in both the structure and the biochemical responses of organs for early detection of disease and monitoring of its progression. Oxidative stress is a biochemical byproduct of these diseases; thus, obtaining sensitive and specific measurements of oxidative stress at the cellular level would provide vital information for understanding the pathogenesis of a disease. The objective of this research was to use a fluorescence optical imaging technique to evaluate the cellular redox state in kidney tissues, and to develop an instrument to acquire high-resolution 3D images of tissue. I improved upon a custom-designed device called a cryoimager to acquire autofluorescent mitochondrial metabolic coenzyme (NADH, FAD) signals. The ratio of these fluorophores, referred to as the mitochondrial redox ratio (RR = NADH/FAD), can be used as a quantitative metabolic marker of tissue. The improvement to the instrument includes the addition of higher-resolution imaging capabilities, which enables microscopy imaging at cryogenic temperatures to obtain high-resolution 3D images. The imaging is performed at cryogenic temperatures to increase the quantum yield of the fluorophores for a higher signal-to-noise ratio. I also implemented an automated tissue boundary detection algorithm, which provides more accurate results by removing the background of low-contrast images. I examined the redox states of kidneys from genetically modified salt-sensitive rats (SSBN13, SSp67phx -/-, and SSNox4-/-) in order to study the contribution of chromosome 13, the p67phox gene, and the Nox4 gene to the development of salt-sensitive hypertension. The results showed that the genetically manipulated rats are more resistant to hypertension caused by excess dietary salt than salt-sensitive (SS) rats. I also studied how endoglin genes affect the redox state and vascular networks of mouse kidneys using high-resolution images. The results showed that the next generation of the cryoimager can simultaneously monitor the structural changes and physiological state of tissue to quantify the effect of injuries. In conclusion, the combination of high-resolution optical instrumentation and image processing tools provides quantitative physiological and structural information of diseased tissue affected by oxidative stress.
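
    The central quantity above, the mitochondrial redox ratio RR = NADH/FAD, is a per-voxel ratio of the two fluorescence channels computed over tissue only, with background excluded much as the boundary detection step does. The Python/NumPy sketch below is a minimal version under assumed inputs: the channel stacks are random placeholders and a simple intensity threshold stands in for the actual tissue boundary detection algorithm.

        # Minimal redox-ratio computation (RR = NADH / FAD) with a threshold mask.
        # Placeholder data; not the cryoimager's actual processing pipeline.
        import numpy as np

        def redox_ratio(nadh, fad, background_thresh=0.05):
            """Return RR = NADH / FAD over tissue voxels; background becomes NaN."""
            nadh = nadh.astype(float)
            fad = fad.astype(float)
            tissue = (nadh > background_thresh) & (fad > background_thresh)
            rr = np.full(nadh.shape, np.nan)
            rr[tissue] = nadh[tissue] / fad[tissue]
            return rr

        # Synthetic 3D stacks (z, y, x); real data would be the NADH and FAD channels.
        nadh_stack = np.random.rand(50, 256, 256)
        fad_stack = np.random.rand(50, 256, 256)
        rr_stack = redox_ratio(nadh_stack, fad_stack)
        print(np.nanmean(rr_stack))                        # mean RR over tissue voxels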

    Remote Sensing and Geosciences for Archaeology

    This book collects more than 20 papers, written by renowned experts and scientists from across the globe, that showcase the state-of-the-art and forefront research in archaeological remote sensing and the use of geoscientific techniques to investigate archaeological records and cultural heritage. Very high resolution satellite images from optical and radar space-borne sensors, airborne multi-spectral images, ground penetrating radar, terrestrial laser scanning, 3D modelling, and Geographic Information Systems (GIS) are among the techniques used in the archaeological studies published in this book. The reader can learn how to use these instruments and sensors, also in combination, to investigate cultural landscapes, discover new sites, reconstruct paleo-landscapes, augment the knowledge of monuments, and assess the condition of heritage at risk. Case studies scattered across Europe, Asia and America are presented: from the UNESCO World Heritage Site of the Lines and Geoglyphs of Nasca and Palpa to heritage under threat in the Middle East and North Africa, from coastal heritage in the intertidal flats of the German North Sea to Early Neolithic settlements in Thessaly. Beginners will learn robust research methodologies and take inspiration; mature scholars will certainly derive input for new research and applications.

    Pertanika Journal of Science & Technology


    Photo response non-uniformity based image forensics in the presence of challenging factors

    With the ever-increasing prevalence of digital imaging devices and the rapid development of networks, the sharing of digital images has become ubiquitous in our daily life. However, the pervasiveness of powerful image-editing tools also makes digital images an easy target for malicious manipulation. Thus, to prevent people from falling victim to fake information and to trace criminal activities, digital image forensics methods such as source camera identification, source-oriented image clustering and image forgery detection have been developed. Photo response non-uniformity (PRNU), an intrinsic sensor noise arising from the pixels' non-uniform response to incident light, has been used as a powerful tool for imaging device fingerprinting. The forensic community has developed a vast number of PRNU-based methods in different fields of digital image forensics. However, technological advances in digital photography, the emergence of photo-sharing social networking sites, and anti-forensics attacks targeting the PRNU bring new challenges to PRNU-based image forensics. For example, the performance of existing forensic methods may deteriorate under different camera exposure parameter settings, and the efficacy of PRNU-based methods can be directly challenged by image editing tools from social network sites or by anti-forensics attacks. The objective of this thesis is to investigate and design effective methods to mitigate some of these challenges for PRNU-based image forensics. We found that camera exposure parameter settings, especially the camera sensitivity, commonly known as the ISO speed, can influence PRNU-based image forgery detection. Hence, we first construct the Warwick Image Forensics Dataset, which contains images taken with diverse exposure parameter settings, to facilitate further studies. To address the impact of ISO speed on PRNU-based image forgery detection, an ISO speed-specific correlation prediction process is proposed, with a content-based ISO speed inference method to facilitate the process even if the ISO speed information is not available. We also propose a three-step framework that allows PRNU-based source-oriented clustering methods to perform successfully on Instagram images, even though some of Instagram's built-in image filters may significantly distort the PRNU. Additionally, for the binary classification of detecting whether an image's PRNU has been attacked or not, we propose a generative adversarial network-based training strategy for a neural network-based classifier, which makes the classifier generalize better to images subject to unprecedented attacks. The proposed methods are evaluated on public benchmarking datasets and on our Warwick Image Forensics Dataset, which is released to the public as well. The experimental results validate the effectiveness of the methods proposed in this thesis.
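
    As background to the PRNU-based methods above, the standard fingerprinting pipeline extracts a noise residual from each image (the image minus a denoised version of itself), aggregates residuals from many images of one camera into a fingerprint estimate, and matches a test image against that fingerprint with a normalized correlation. The Python sketch below follows this textbook formulation only: a Gaussian filter stands in for the wavelet denoiser usually used, the images are random placeholders rather than the Warwick Image Forensics Dataset, and nothing here reproduces the thesis's ISO-aware or Instagram-robust methods.

        # Textbook PRNU fingerprinting sketch: residual extraction, fingerprint
        # estimation (K = sum(W_i * I_i) / sum(I_i^2)), normalized correlation test.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def noise_residual(img, sigma=2.0):
            """W = I - denoise(I): the component that carries the PRNU."""
            img = img.astype(float)
            return img - gaussian_filter(img, sigma)

        def estimate_fingerprint(images):
            """Maximum-likelihood-style PRNU estimate from a set of images."""
            num = np.zeros_like(images[0], dtype=float)
            den = np.zeros_like(images[0], dtype=float)
            for img in images:
                img = img.astype(float)
                num += noise_residual(img) * img
                den += img ** 2
            return num / (den + 1e-12)

        def correlation(fingerprint, img):
            """Normalized correlation between the test residual and K * I."""
            w = noise_residual(img)
            expected = fingerprint * img.astype(float)
            a = w - w.mean()
            b = expected - expected.mean()
            return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        # Estimate a camera fingerprint from flat-field shots, then test an image.
        flat_fields = [np.random.rand(512, 512) for _ in range(20)]   # placeholders
        fingerprint = estimate_fingerprint(flat_fields)
        print(correlation(fingerprint, np.random.rand(512, 512)))    # ~0: no match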

    3D high resolution techniques applied on small and medium size objects: from the analysis of the process towards quality assessment

    The need for metric data acquisition is strictly related to the human capability of describing the world with rigorous and repeatable methods. From the invention of photography to the development of advanced computers, metric data acquisition has undergone rapid change, and nowadays there is a close connection between metric data acquisition and image processing, Computer Vision and Artificial Intelligence. The sensor devices used for 3D model generation are varied and characterized by different operating principles. In this work, passive and active optical sensors are treated, focusing specifically on close-range photogrammetry, Time of Flight (ToF) sensors and Structured-light scanners (SLS). Starting from the operating principles of these techniques and discussing some issues related to them, the work highlights their potential, analyzing the fundamental and most critical steps of the process leading to the quality assessment of the data. Central themes are instrument calibration, acquisition planning and the interpretation of the final results. The capability of the acquisition techniques to satisfy unconventional requirements in the field of Cultural Heritage is also shown. The thesis starts with an overview of the history and development of 3D metric data acquisition. Chapter 1 treats the Human Vision System and presents a complete overview of 3D sensing devices. Chapter 2 starts from the basic principles of close-range photogrammetry, considering the functioning of digital cameras, calibration issues, and the process leading to 3D mesh reconstruction; the case of multi-image acquisition is analyzed, and the quality assessment of the photogrammetric process is examined in depth through a case study. Chapter 3 is devoted to range-based acquisition techniques, namely ToF laser scanners and SLSs. Lastly, Chapter 4 focuses on unconventional applications of the mentioned high-resolution acquisition techniques, showing some case studies in the field of Cultural Heritage.
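
    Instrument calibration, named above as a central theme, can be illustrated for the photogrammetric case by a standard checkerboard-based intrinsic calibration. The Python/OpenCV sketch below only indicates the kind of step involved and is not the procedure used in the thesis; the image folder, board geometry and square size are assumed placeholders, and the RMS reprojection error returned by the solver serves as a first quality indicator.

        # Checkerboard-based intrinsic camera calibration with OpenCV (illustrative).
        # Paths, board size and square size are placeholders, not the thesis's setup.
        import glob
        import cv2
        import numpy as np

        BOARD = (9, 6)        # inner corners per checkerboard row/column (assumed)
        SQUARE_MM = 25.0      # checkerboard square size in millimetres (assumed)

        # 3D corner coordinates in the board's own reference frame (Z = 0 plane).
        obj_template = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
        obj_template[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE_MM

        obj_points, img_points, image_size = [], [], None
        for path in glob.glob("calibration/*.jpg"):        # hypothetical image folder
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, BOARD)
            if found:
                obj_points.append(obj_template)
                img_points.append(corners)
                image_size = gray.shape[::-1]              # (width, height)

        # Intrinsic matrix, distortion coefficients and per-image poses; the RMS
        # reprojection error is a first indicator of calibration quality.
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_points, img_points, image_size, None, None)
        print("RMS reprojection error (pixels):", rms)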

    Technology 2003: The Fourth National Technology Transfer Conference and Exposition, volume 2

    Proceedings from symposia of the Technology 2003 Conference and Exposition, Dec. 7-9, 1993, Anaheim, CA, are presented. Volume 2 features papers on artificial intelligence, CAD&E, computer hardware, computer software, information management, photonics, robotics, test and measurement, video and imaging, and virtual reality/simulation