Resampling Forgery Detection Using Deep Learning and A-Contrario Analysis
The amount of digital imagery recorded has recently grown exponentially, and
with the advancement of software, such as Photoshop or Gimp, it has become
easier to manipulate images. However, most images on the internet have not been
manipulated and any automated manipulation detection algorithm must carefully
control the false alarm rate. In this paper we discuss a method to
automatically detect local resampling using deep learning while controlling the
false alarm rate using a-contrario analysis. The automated procedure consists
of three primary steps. First, resampling features are calculated for image
blocks. A deep learning classifier is then used to generate a heatmap that
indicates if the image block has been resampled. We expect some of these blocks
to be falsely identified as resampled. We use a-contrario hypothesis testing both to decide, from the pattern of flagged blocks, whether the image has been tampered with, and to localize the manipulation. We demonstrate that
this strategy is effective in indicating if an image has been manipulated and
localizing the manipulations.
Comment: arXiv admin note: text overlap with arXiv:1802.0315
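The a-contrario step described above can be sketched as a binomial tail bound on the number of flagged blocks: under the null hypothesis that the image is untampered, blocks are flagged independently with the classifier's false alarm rate, and a region is declared meaningful only when its expected number of false alarms (NFA) across all tests falls below a threshold. A minimal sketch, with parameter values that are illustrative and not from the paper:

```python
from math import comb

def binomial_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def nfa(n_tests, n_blocks, n_detected, p_false_alarm):
    """Number of False Alarms: expected count of regions at least this
    'detected' under the null (unmanipulated) model."""
    return n_tests * binomial_tail(n_blocks, n_detected, p_false_alarm)

# A region of 64 blocks with 20 flagged, per-block false alarm rate 0.05,
# tested over 1000 candidate regions (all numbers hypothetical):
score = nfa(1000, 64, 20, 0.05)
# The region is declared meaningful (manipulated) if NFA < 1, i.e. fewer
# than one false alarm is expected by chance over all tests.
print(score < 1)
```

With this decision rule, the threshold on NFA directly bounds the expected number of false detections, which is how the false alarm rate is controlled.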
Evaluation and comparison of a group of techniques for identifying digital images altered by resampling
Nowadays, forging digital images is an easy task owing to advances in technology and the emergence of software such as Photoshop and Corel Draw. Altered images are common in the worlds of fashion and celebrity, but in legal proceedings or news they can bring negative consequences for those involved. There are different image forgery methods; one of them is resampling, which consists of copying a portion of the image, pasting it into the image itself, and applying a geometric transformation such as rotation and/or scaling. Resampling introduces new pixels from the existing ones through interpolation; this process creates a correlation with a specific form between pixels and alters the statistics of the image, which does not occur naturally [1]. An image may contain resampling without necessarily containing a malicious alteration, since simply enlarging or resizing the image introduces correlations. Resampling detection techniques look for the correlation between pixels and/or alterations in the image statistics. The methods consist of preprocessing, feature extraction, and classification. Preprocessing includes scale changes to reduce the dimensionality of the image and the computation time.
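As the abstract notes, interpolation ties each new pixel linearly to its neighbors, and detectors look for exactly that correlation. A toy 1-D sketch of the tell-tale signature (all helper names hypothetical; real detectors work on 2-D blocks with estimated predictor weights):

```python
import random

def zero_residual_fraction(signal, tol=1e-9):
    """Fraction of interior samples exactly predicted by the average of
    their two neighbors. Linear 2x upsampling makes every interpolated
    sample a perfect neighbor average, so this fraction jumps to ~0.5,
    while natural signals almost never satisfy it."""
    hits = 0
    for i in range(1, len(signal) - 1):
        pred = 0.5 * (signal[i - 1] + signal[i + 1])
        if abs(signal[i] - pred) < tol:
            hits += 1
    return hits / (len(signal) - 2)

def upsample2_linear(signal):
    """2x upsampling by linear interpolation (toy resampler)."""
    out = []
    for a, b in zip(signal, signal[1:]):
        out.extend([a, 0.5 * (a + b)])
    out.append(signal[-1])
    return out

random.seed(0)
original = [random.gauss(0, 1) for _ in range(256)]
resampled = upsample2_linear(original)
# The resampled signal shows the periodic correlation, the original does not:
print(zero_residual_fraction(original), zero_residual_fraction(resampled))
```

Real resampling factors and interpolation kernels spread the correlation over several neighbors rather than making residuals exactly zero, which is why practical methods estimate the predictor weights instead of fixing them.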
Passive Techniques for Detecting and Locating Manipulations in Digital Images
Unpublished thesis, Universidad Complutense de Madrid, Facultad de Informática, defended 19-11-2020.
The number of digital cameras integrated into mobile devices, as well as their use in everyday life, is continuously growing. Every day a large number of digital images, whether generated by this type of device or not, circulate on the Internet or are used as evidence in legal proceedings. Consequently, the forensic analysis of digital images becomes important in many real-life situations. Forensic analysis of digital images is divided into two main branches: authenticity of digital images and identification of the source of acquisition of an image.
The first attempts to discern whether an image has undergone any processing subsequent to its creation, i.e. that it has not been manipulated. The second aims to identify the device that generated the digital image. Verification of the authenticity of digital images can be carried out using both active and passive forensic analysis techniques. The active techniques are based on the fact that digital images carry "marks" present since their creation, so that any type of alteration made after their generation will modify them, and therefore will allow detection if there has been any possible post-processing or manipulation. On the other hand, passive techniques perform the analysis of authenticity by extracting characteristics from the image...
Signature and handwritten graphics verification: Discriminative features and new biometric application scenarios
Unpublished doctoral thesis, defended at the Escuela Politécnica Superior, Departamento de Tecnología Electrónica y de las Comunicaciones. Defense date: February 2015.
The proliferation of handheld devices such as smartphones and tablets brings a new
scenario for biometric authentication, and in particular to automatic signature verification.
Research on signature verification has been traditionally carried out using signatures acquired
on digitizing tablets or Tablet-PCs.
This PhD Thesis addresses the problem of user authentication on handheld devices using
handwritten signatures and graphical passwords based on free-form doodles, as well as the effects
of biometric aging on signatures. The Thesis aims to analyze: (i) which are the effects
of mobile conditions on signature and doodle verification, (ii) which are the most distinctive
features in mobile conditions, extracted from the pen or fingertip trajectory, (iii) how do different
similarity computation (i.e. matching) algorithms behave with signatures and graphical
passwords captured on mobile conditions, and (iv) what is the impact of aging on signature
features and verification performance.
Two novel datasets have been presented in this Thesis. A database containing free-form
graphical passwords drawn with the fingertip on a smartphone is described. It is the first publicly
available graphical password database, to the best of our knowledge. A dataset containing
signatures from users captured over a period of 15 months is also presented, aimed towards the
study of biometric aging.
State-of-the-art local and global matching algorithms are used, namely Hidden Markov Models,
Gaussian Mixture Models, Dynamic Time Warping and distance-based classifiers. A large
proportion of features presented in the research literature is considered in this Thesis.
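Of the matchers listed, Dynamic Time Warping is the most compact to illustrate: it aligns two trajectories of different lengths by minimizing cumulative point-wise cost. A minimal sketch on 1-D sequences (toy data, not from the Thesis):

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences, the
    elastic matcher family used for signature trajectories."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping steps
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# Two renditions of the same "stroke", one slightly time-warped,
# and a dissimilar one standing in for a forgery:
genuine = [0, 1, 2, 3, 2, 1, 0]
warped  = [0, 1, 1, 2, 3, 2, 1, 0]
forgery = [0, 3, 0, 3, 0, 3, 0]
print(dtw_distance(genuine, warped) < dtw_distance(genuine, forgery))
```

Real signature matchers apply the same recursion to multi-dimensional feature vectors (coordinates, velocity, pressure) rather than scalars.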
The experimental contribution of this Thesis is divided into three main topics: signature verification
on handheld devices, the effects of aging on signature verification, and free-form graphical
password-based authentication. First, regarding signature verification in mobile conditions, we
use a database captured both on a handheld device and digitizing tablet in an office-like scenario.
We analyze the discriminative power of both global and local features using discriminant analysis
and feature selection techniques. The effects of the lack of pen-up trajectories on handheld
devices (when the stylus tip is not in contact with the screen) are also studied.
We then analyze the effects of biometric aging on the signature trait. Using three different
matching algorithms, Hidden Markov Models (HMM), Dynamic Time Warping (DTW), and
distance-based classifiers, the impact on verification performance is studied. We also study
the effects of aging on individual users and individual signature features. Template update
techniques are analyzed as a way of mitigating the negative impact of aging.
Regarding graphical passwords, the DooDB graphical password database is first presented.
A statistical analysis is performed comparing the database samples (free-form doodles and simplified
signatures) with handwritten signatures. The sample variability (inter-user, intra-user
and inter-session) is also analyzed, as well as the learning curve for each kind of trait. Benchmark
results are also reported using state-of-the-art classifiers.
Graphical password verification is afterwards studied using features and matching algorithms
from the signature verification state of the art. Feature selection is also performed and the
resulting feature sets are analyzed.
The main contributions of this work can be summarized as follows. A thorough analysis of
individual feature performance has been carried out, both for global and local features and on
signatures acquired using pen tablets and handheld devices. We have found which individual
features are the most robust and which have very low discriminative potential (pen inclination
and pressure among others). It has been found that feature selection increases verification
performance dramatically, for example from EERs (Equal Error Rates) over 30% using all
available local features, in the case of handheld devices and skilled forgeries, to rates below 20%
after feature selection. We study the impact of the lack of trajectory information when the pen
tip is not in contact with the acquisition device surface (which happens when touchscreens are
used for signature acquisitions), and we have found that the lack of pen-up trajectories negatively
affects verification performance. As an example, the EER for the local system increases from
9.3% to 12.1% against skilled forgeries when pen-up trajectories are not available.
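The EER figures quoted throughout refer to the operating threshold at which the false rejection rate (genuine attempts rejected) equals the false acceptance rate (forgeries accepted). A minimal sketch of locating it from raw match scores (toy scores, higher = more similar; not data from the Thesis):

```python
def equal_error_rate(genuine_scores, impostor_scores):
    """Scan candidate thresholds and return the error rate at the point
    where FRR and FAR are closest (averaging the two there)."""
    thresholds = sorted(set(genuine_scores) | set(impostor_scores))
    best_diff, best_eer = 2.0, 1.0
    for t in thresholds:
        frr = sum(g < t for g in genuine_scores) / len(genuine_scores)
        far = sum(s >= t for s in impostor_scores) / len(impostor_scores)
        if abs(frr - far) < best_diff:
            best_diff = abs(frr - far)
            best_eer = (frr + far) / 2
    return best_eer

genuine = [0.9, 0.8, 0.75, 0.7, 0.4]
impostor = [0.6, 0.5, 0.45, 0.3, 0.2]
print(equal_error_rate(genuine, impostor))
```

In practice the EER is read off a DET or ROC curve built from many such scores; the scan above is the discrete equivalent.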
We study the effects of biometric aging on signature verification and study a number of ways
to compensate for the observed performance degradation. It is found that aging does not affect
equally all the users in the database and that features related to signature dynamics are more
degraded than static features. Comparing the performance using test signatures from the first
months with the last months, a variable effect of aging on the EER against random forgeries is
observed in the three systems that are evaluated, from 0.0% to 0.5% in the DTW system, from
1.0% to 5.0% in the distance-based system using global features, and from 3.2% to 27.8% in the
HMM system.
A new graphical password database has been acquired and made publicly available. Verification
algorithms for finger-drawn graphical passwords and simplified signatures are compared
and feature analysis is performed. We have found that inter-session variability has a highly
negative impact on verification performance, but this can be mitigated by performing feature selection
and applying fusion of different matchers. It has also been found that some feature types
are prevalent in the optimal feature vectors and that classifiers have a very different behavior
against skilled and random forgeries. EERs of 3.4% and 22.1% are obtained for free-form doodles against random and skilled forgeries, respectively, which is a promising performance.
A Statistical Approach to the Alignment of fMRI Data
Multi-subject functional Magnetic Resonance Imaging (fMRI) studies are critical. The anatomical and functional structure varies across subjects, so image alignment is necessary. We define a probabilistic model to describe functional alignment. By imposing a prior distribution, the matrix von Mises-Fisher distribution, on the orthogonal transformation parameter, anatomical information is embedded in the estimation of the parameters, i.e., penalizing combinations of spatially distant voxels. Real applications show an improvement in the classification and interpretability of the results compared to various functional alignment methods.
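The orthogonal transformations estimated in such functional alignment models reduce, in their simplest form, to a Procrustes problem. A toy 2-D rotation-only sketch with a closed-form solution (the paper's model works in high-dimensional voxel space with a matrix von Mises-Fisher prior, which this deliberately omits):

```python
from math import atan2, cos, sin

def best_rotation_2d(src, dst):
    """Closed-form rotation-only Procrustes in 2D: the angle minimizing
    sum ||R p - q||^2 over point pairs (p, q). A toy stand-in for the
    orthogonal transformations estimated in voxel space."""
    a = sum(p[0] * q[0] + p[1] * q[1] for p, q in zip(src, dst))
    b = sum(p[0] * q[1] - p[1] * q[0] for p, q in zip(src, dst))
    return atan2(b, a)

def rotate(points, theta):
    c, s = cos(theta), sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

src = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.5)]
dst = rotate(src, 0.7)          # target is the source rotated by 0.7 rad
theta = best_rotation_2d(src, dst)
print(round(theta, 3))          # recovers the 0.7 rad rotation
```

In higher dimensions the same objective is solved via an SVD of the cross-covariance matrix; the prior then biases that estimate toward anatomically plausible transformations.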
A comparison of the CAR and DAGAR spatial random effects models with an application to diabetics rate estimation in Belgium
When hierarchically modelling an epidemiological phenomenon on a finite collection of sites in space, one must always take a latent spatial effect into account in order to capture the correlation structure that links the phenomenon to the territory. In this work, we compare two autoregressive spatial models that can be used for this purpose: the classical CAR model and the more recent DAGAR model. Unlike the former, the latter has a desirable property: its ρ parameter can be naturally interpreted as the average neighbor pair correlation and, in addition, this parameter can be directly estimated when the effect is modelled using a DAGAR rather than a CAR structure. As an application, we model the diabetics rate in Belgium in 2014 and show the adequacy of these models in predicting the response variable when no covariates are available.
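A proper CAR spatial effect is specified through its precision matrix Q = τ(D − ρW), where W is the 0/1 site adjacency matrix and D its row-sum diagonal. A minimal sketch of building Q (illustrative only; in a real fit ρ and τ are estimated from the data):

```python
def car_precision(adjacency, rho, tau=1.0):
    """Precision matrix of a proper CAR model, Q = tau * (D - rho * W),
    given a symmetric 0/1 neighborhood matrix W."""
    n = len(adjacency)
    q = [[0.0] * n for _ in range(n)]
    for i in range(n):
        q[i][i] = tau * sum(adjacency[i])      # D: number of neighbors
        for j in range(n):
            if i != j:
                q[i][j] = -tau * rho * adjacency[i][j]
    return q

# Three sites in a line, 0 - 1 - 2:
W = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
Q = car_precision(W, rho=0.5)
```

The DAGAR model instead builds its precision from a directed acyclic ordering of the sites, which is what gives its ρ the direct average-neighbor-correlation interpretation noted in the abstract.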