
    An Overview on the Generation and Detection of Synthetic and Manipulated Satellite Images

    Due to the reduction of technological costs and the increase in satellite launches, satellite images are becoming more popular and easier to obtain. Besides serving benevolent purposes, satellite data can also be used for malicious reasons such as misinformation. As a matter of fact, satellite images can be easily manipulated using general image editing tools. Moreover, with the surge of Deep Neural Networks (DNNs) that can generate realistic synthetic imagery belonging to various domains, additional threats related to the diffusion of synthetically generated satellite images are emerging. In this paper, we review the State of the Art (SOTA) on the generation and manipulation of satellite images. In particular, we focus on both the generation of synthetic satellite imagery from scratch and the semantic manipulation of satellite images by means of image-transfer technologies, including the transformation of images obtained from one type of sensor to another. We also describe forensic detection techniques that have been researched so far to classify and detect synthetic image forgeries. While we focus mostly on forensic techniques explicitly tailored to the detection of AI-generated synthetic contents, we also review some methods designed for general splicing detection, which can in principle also be used to spot AI-manipulated images. Comment: 25 pages, 17 figures, 5 tables, APSIPA 202
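    As a rough illustration of the kind of forensic detector such surveys cover (and not an implementation from this paper), the Python sketch below trains a simple classifier on high-pass noise-residual statistics to separate real from synthetic patches; the patch arrays and 0/1 labels are assumed to be supplied by the caller, and the feature set is deliberately minimal.

# Hedged sketch of a residual-based detector for synthetic image patches.
# Not from the paper; patches/labels are assumed inputs (0 = real, 1 = synthetic).
import numpy as np
from scipy.ndimage import convolve
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def residual_features(patch):
    """High-pass (Laplacian) residual statistics, a common forensic cue."""
    kernel = np.array([[0, -1, 0],
                       [-1, 4, -1],
                       [0, -1, 0]], dtype=np.float32)
    res = convolve(patch.astype(np.float32), kernel, mode="reflect")
    return np.array([res.mean(), res.std(), np.abs(res).mean(), (res ** 2).mean()])

def train_detector(patches, labels):
    """patches: list of 2D grayscale arrays; labels: array of 0/1 values."""
    X = np.stack([residual_features(p) for p in patches])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf, clf.score(X_te, y_te)  # classifier and held-out accuracy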

    An Evaluation of Popular Copy-Move Forgery Detection Approaches

    A copy-move forgery is created by copying and pasting content within the same image, and potentially post-processing it. In recent years, the detection of copy-move forgeries has become one of the most actively researched topics in blind image forensics. A considerable number of different algorithms have been proposed, focusing on different types of post-processed copies. In this paper, we aim to answer which copy-move forgery detection algorithms and processing steps (e.g., matching, filtering, outlier detection, affine transformation estimation) perform best in various post-processing scenarios. The focus of our analysis is to evaluate the performance of previously proposed feature sets. We achieve this by casting existing algorithms in a common pipeline. In this paper, we examine the 15 most prominent feature sets. We analyze the detection performance on a per-image basis and on a per-pixel basis. We created a challenging real-world copy-move dataset and a software framework for systematic image manipulation. Experiments show that the keypoint-based features SIFT and SURF, as well as the block-based DCT, DWT, KPCA, PCA and Zernike features, perform very well. These feature sets exhibit the best robustness against various noise sources and downsampling, while reliably identifying the copied regions. Comment: Main paper: 14 pages, supplemental material: 12 pages, main paper appeared in IEEE Transactions on Information Forensics and Security
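    A minimal sketch (not the paper's implementation) of the keypoint-based branch of such a pipeline is shown below: SIFT descriptors are matched against each other within the same image, trivial self-matches are discarded, and geometrically distant matched pairs are returned as copy-move candidates. The ratio and distance thresholds are illustrative, not the paper's settings.

# Sketch of a keypoint-based copy-move check (SIFT + intra-image matching).
import cv2
import numpy as np

def copy_move_matches(image_path, ratio=0.6, min_dist=10):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kps, desc = sift.detectAndCompute(img, None)
    if desc is None or len(kps) < 3:
        return []
    # Match each descriptor against all others in the same image;
    # k=3 so the best match (the keypoint itself) can be skipped.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(desc, desc, k=3)
    pairs = []
    for m in matches:
        if len(m) < 3:
            continue
        _, second, third = m              # m[0] is the keypoint matching itself
        if second.distance < ratio * third.distance:   # 2NN ratio test
            p1 = np.array(kps[second.queryIdx].pt)
            p2 = np.array(kps[second.trainIdx].pt)
            if np.linalg.norm(p1 - p2) > min_dist:      # skip near-identical locations
                pairs.append((tuple(p1), tuple(p2)))
    return pairs  # suspicious source/target point pairs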

    A novel semi-fragile forensic watermarking scheme for remote sensing images

    A semi-fragile watermarking scheme for multiple-band images is presented. We propose to embed a mark into remote sensing images by applying a tree-structured vector quantization approach to the pixel signatures, instead of processing each band separately. The signature of the multispectral or hyperspectral image is used to embed the mark in it, in order to detect any significant modification of the original image. The image is segmented into three-dimensional blocks, and a tree-structured vector quantizer is built for each block. These trees are manipulated using an iterative algorithm until the resulting block satisfies a required criterion, which establishes the embedded mark. The method is shown to preserve the mark under lossy compression (above a given threshold) while, at the same time, detecting possibly forged blocks and their position in the whole image.
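    The sketch below only illustrates the block-wise embed/verify structure described in the abstract; the paper's tree-structured vector quantization criterion is replaced here by a much simpler parity-of-quantized-mean rule, so this is an assumption-laden stand-in rather than the proposed scheme.

# Greatly simplified illustration of block-wise semi-fragile embedding/verification
# on a multiband cube (rows x cols x bands). NOT the paper's TSVQ algorithm.
import numpy as np

def iter_blocks(cube, bx=8, by=8, bz=4):
    """Yield the 3D blocks of a multiband image cube as (indices, view) pairs."""
    H, W, B = cube.shape
    for z in range(0, B - bz + 1, bz):
        for y in range(0, H - by + 1, by):
            for x in range(0, W - bx + 1, bx):
                yield (y, x, z), cube[y:y+by, x:x+bx, z:z+bz]

def embed(cube, mark_bits, step=4.0):
    """Force each block's quantized mean parity to encode one mark bit."""
    marked = cube.astype(np.float64).copy()
    for bit, (_, blk) in zip(mark_bits, iter_blocks(marked)):
        if int(round(blk.mean() / step)) % 2 != bit:
            blk += step                    # in-place view update flips the parity
    return marked

def verify(cube, mark_bits, step=4.0):
    """Return indices of blocks whose parity no longer matches the mark."""
    tampered = []
    for i, (bit, (_, blk)) in enumerate(zip(mark_bits, iter_blocks(cube))):
        if int(round(blk.mean() / step)) % 2 != bit:
            tampered.append(i)
    return tampered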

    Image Evolution Analysis Through Forensic Techniques


    Post-processing of VIS, NIR, and SWIR multispectral images of paintings. New discovery on The Drunkenness of Noah, painted by Andrea Sacchi, stored at Palazzo Chigi (Ariccia, Rome)

    IR reflectography applied to the identification of hidden details of paintings is extremely useful for authentication purposes and for revealing hidden technical features. Recently, multispectral imaging has replaced traditional imaging techniques thanks to the possibility of selecting specific spectral ranges that bring out interesting details of the paintings. VIS–NIR–SWIR images of one of the versions of The Drunkenness of Noah painted by Andrea Sacchi, acquired with a modified reflex camera and InGaAs cameras, are presented in this research. Starting from the multispectral images, we performed post-processing analysis using visible and infrared false-color images and principal component analysis (PCA) in order to highlight pentimenti and underdrawings. Radiography was performed in some areas to better investigate the inner pictorial layers. This study represents the first published scientific investigation of The Drunkenness of Noah painted by Andrea Sacchi.
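    A minimal sketch of the two post-processing steps mentioned above, under stated assumptions: the band images are co-registered 2D arrays of equal size, and at least three bands are available for the PCA. It is an illustration, not the study's processing chain.

# Infrared false-color composite and PCA across registered bands.
import numpy as np
from sklearn.decomposition import PCA

def false_color_ir(nir, red, green):
    """Classic IRFC mapping: NIR -> R, red -> G, green -> B, scaled to [0, 1]."""
    rgb = np.dstack([nir, red, green]).astype(np.float32)
    rgb -= rgb.min()
    rgb /= max(float(rgb.max()), 1e-9)
    return rgb

def pca_bands(bands, n_components=3):
    """bands: list of co-registered 2D arrays. Returns (H, W, n_components) images."""
    H, W = bands[0].shape
    X = np.stack([b.ravel() for b in bands], axis=1).astype(np.float32)
    comps = PCA(n_components=n_components).fit_transform(X)
    return comps.reshape(H, W, n_components)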

    Multispectral imaging and analysis of the Archimedes Palimpsest

    The Archimedes Palimpsest is a manuscript that has been preserved for approximately 1,000 years. Among its pages are some of the few known sources of treatises by the Greek mathematician Archimedes. The writing has been overwritten with prayer text, called the Euchologion, and portions of the faded Archimedes text are difficult to read. This research investigates methods to detect the presence of ink in the Archimedes Palimpsest using state-of-the-art image processing techniques applied to data from X-ray fluorescence (XRF) scans. In an effort to extract more legible text, various methods of imaging have been applied to the Archimedes manuscript. Recent X-ray fluorescence images of the palimpsest suggest the possibility of detecting individual text layers and isolating them from each other. This is encouraging, since many of the pages have also been partially masked by gold-leafed, Byzantine-style artwork, making the Archimedes writing difficult to see with the human eye. The scans measure the X radiation emitted by atoms on the pages that have been excited by other, higher-energy X-rays incident to the parchment. This causes certain elements within the manuscript, such as the iron in the ink, to fluoresce at energies that are specific to the particular material. A total of 2,000 different energy levels, or bands, were recorded. To evaluate the data contained in this large number of bands, a single data set was created that included all bands, referred to as a datacube, which shows the transition of each pixel through the spectrum. Special image processing tools, developed for use in the field of remote sensing to process aerial and satellite data, can be used to detect certain patterns within the datacube. Each tool is then used to segregate the noise from the relevant data in the datacube. The datacube for this thesis research was created from a small portion of one page of the Archimedes Palimpsest, and may inherently be subject to certain noise limitations. This study focuses on two main objectives: (1) evaluation of the X-ray fluorescence data to determine which energy levels contain useful information about the layers of text, and (2) creation of a pseudocolored composite RGB image of a portion of enhanced Archimedes text, similar to previous pseudocolored MSI images. Results from this study show that only a few regions within the datacube contain information relevant to the layers of text. Certain algorithms, such as principal component analysis and minimum noise fraction, revealed distinct information about trace elements fluorescing in the ink and parchment. Meaningful data near the spectral line of each trace element were detected after dividing the datacube into smaller band regions. Enough information was obtained to create colorized RGB composite images that enhance the contrast of the Archimedes writing relative to the overwritten text. It is hoped that this research can improve the method for identifying useful bands of information within datacubes. The research may also have produced a repeatable method for detecting useful bands of information in similar datacubes. State-of-the-art multispectral imaging applications were specifically applied to detect, extract, and enhance previously illegible writings that are of interest to scholars and museums in particular.
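    As a hedged sketch of the band-selection and pseudocolor step described above: bands of the XRF datacube near an element's emission energy are averaged and contrast-stretched into an RGB composite. The emission energies and channel assignments below are illustrative (iron and calcium K-alpha lines as examples), not the values used in the study.

# Band selection near element emission lines and a pseudocolor composite.
import numpy as np

def band_slice(cube, energies_kev, center_kev, width_kev=0.1):
    """Average the datacube bands whose energy lies near center_kev."""
    sel = np.abs(np.asarray(energies_kev) - center_kev) <= width_kev
    return cube[:, :, sel].mean(axis=2)          # assumes at least one band is selected

def stretch(img, lo_pct=2, hi_pct=98):
    """Percentile contrast stretch to [0, 1]."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    return np.clip((img - lo) / max(hi - lo, 1e-9), 0, 1)

def pseudocolor(cube, energies_kev):
    """Map iron (ink), calcium (parchment) and a background band to RGB."""
    r = stretch(band_slice(cube, energies_kev, 6.40))   # Fe K-alpha ~6.40 keV
    g = stretch(band_slice(cube, energies_kev, 3.69))   # Ca K-alpha ~3.69 keV
    b = stretch(band_slice(cube, energies_kev, 8.00))   # illustrative background band
    return np.dstack([r, g, b])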

    Biometric Spoofing: A JRC Case Study in 3D Face Recognition

    Based on newly available and affordable off-the-shelf 3D sensing, processing and printing technologies, the JRC has conducted a comprehensive study on the feasibility of spoofing 3D and 2.5D face recognition systems with low-cost, self-manufactured models, and presents in this report a systematic and rigorous evaluation of the real risk posed by such attacks, complemented by a test campaign. The work accomplished and presented in this report covers theories, methodologies, state-of-the-art techniques and evaluation databases, and also aims at providing an outlook into the future of this extremely active field of research. JRC.G.6 - Digital Citizen Security

    Vulnerability assessment in the use of biometrics in unsupervised environments

    International Mention in the doctoral degree. In the last few decades, we have witnessed a large-scale deployment of biometric systems in different life applications, replacing traditional recognition methods such as passwords and tokens. We have reached a time where we use biometric systems in our daily life. On a personal scale, authentication to our electronic devices (smartphones, tablets, laptops, etc.) uses biometric characteristics to grant access. Moreover, we access our bank accounts and perform various types of payments and transactions using the biometric sensors integrated into our devices. On the other hand, different organizations, companies, and institutions use biometric-based solutions for access control. On the national scale, police authorities and border control measures use biometric recognition devices for individual identification and verification purposes. Therefore, biometric systems are relied upon to provide secure recognition in which only the genuine user can be recognized as themselves. Moreover, the biometric system should ensure that an individual cannot be identified as someone else. In the literature, there is a surprising number of experiments showing the possibility of stealing someone's biometric characteristics and using them to create an artificial biometric trait that an attacker can use to claim the identity of the genuine user. There have also been real cases of people who successfully fooled biometric recognition systems in airports and on smartphones [1]–[3]. This urges the need to investigate the potential threats and propose countermeasures that ensure high levels of security and user convenience. Consequently, performing security evaluations is vital to identify: (1) the security flaws in biometric systems, (2) the possible threats that may target the defined flaws, and (3) measurements that describe the technical competence of the biometric system security. Identifying the system vulnerabilities leads to proposing adequate security solutions that help achieve higher integrity. This thesis aims to investigate the vulnerability of the fingerprint modality to presentation attacks in unsupervised environments, and then to implement mechanisms to detect those attacks and avoid misuse of the system. To achieve these objectives, the thesis is carried out in the following three phases. In the first phase, the generic biometric system scheme is studied by analyzing its vulnerable points, with special attention to the vulnerability to presentation attacks. The study reviews the literature on presentation attacks and the corresponding solutions, i.e. presentation attack detection mechanisms, for six biometric modalities: fingerprint, face, iris, vascular, handwritten signature, and voice. Moreover, it provides a new taxonomy for presentation attack detection mechanisms. The proposed taxonomy helps to comprehend the issue of presentation attacks and how the literature has tried to address it. The taxonomy represents a starting point for new investigations that propose novel presentation attack detection mechanisms. In the second phase, an evaluation methodology is developed from two sources: (1) the ISO/IEC 30107 standard, and (2) the Common Evaluation Methodology of the Common Criteria. The developed methodology characterizes two main aspects of a presentation attack detection mechanism: (1) the resistance of the mechanism to presentation attacks, and (2) the corresponding threat of the studied attack.
The first part is conducted by showing the mechanism's technical capabilities and how they influence the security and ease of use of the biometric system. The second part is done by performing a vulnerability assessment considering all the factors that affect the attack potential. Finally, a data collection is carried out, including 7128 fingerprint videos of bona fide and attack presentations. The data are collected using two sensing technologies, two presentation scenarios, and seven attack species. The database is used to develop dynamic presentation attack detection mechanisms that exploit spatio-temporal fingerprint features. In the final phase, a set of novel presentation attack detection mechanisms is developed, exploiting the dynamic features caused by natural fingerprint phenomena such as perspiration and elasticity. The evaluation results show an efficient capability to detect attacks where, in some configurations, the mechanisms are capable of eliminating some attack species and mitigating the rest while keeping user convenience at a high level. Doctoral Program in Electrical, Electronic and Automatic Engineering, Universidad Carlos III de Madrid. President: Cristina Conde Vila. Secretary: Mariano López García. Committee member: Farzin Derav
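    As a hedged illustration only, the sketch below computes one possible spatio-temporal feature of the kind the abstract describes (frame-to-frame intensity change in a fingerprint video as a crude proxy for perspiration and elasticity dynamics), followed by a toy decision rule; it is not the thesis's actual detection mechanism, and the frames are assumed to be aligned grayscale arrays.

# Toy dynamic presentation attack detection feature from a fingerprint video.
import numpy as np

def temporal_change_features(frames):
    """frames: sequence of 2D grayscale frames from one presentation."""
    stack = np.asarray(frames, dtype=np.float32)        # shape (T, H, W)
    diffs = np.abs(np.diff(stack, axis=0))               # per-pixel change per step
    return np.array([
        diffs.mean(),                                     # overall amount of dynamics
        diffs.std(),
        stack.std(axis=0).mean(),                         # mean temporal variation per pixel
        (stack[-1] - stack[0]).mean(),                    # drift over the whole video
    ])

def is_bona_fide(features, threshold=0.5, weights=None):
    """Toy linear score: more dynamics -> more likely a live finger."""
    w = np.ones_like(features) if weights is None else weights
    return float(features @ w) > threshold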