General GAN-generated image detection by data augmentation in fingerprint domain
In this work, we investigate improving the generalizability of GAN-generated image detectors by performing data augmentation in the fingerprint domain. Specifically, we first separate the fingerprints and contents of GAN-generated images using an autoencoder-based GAN fingerprint extractor, and then apply random perturbations to the fingerprints. The original fingerprints are substituted with the perturbed ones and added back to the original contents, producing images that are visually unchanged but carry distinct fingerprints. The perturbed images successfully imitate images generated by different GANs and thereby improve the generalization of the detectors, as demonstrated by spectrum visualizations. To our knowledge, we are the first to conduct data augmentation in the fingerprint domain. Our work explores a direction distinct from previous work on spatial- and frequency-domain augmentation. Extensive cross-GAN experiments demonstrate the effectiveness of our method compared to state-of-the-art methods in detecting fake images generated by unknown GANs.
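For illustration, the following is a minimal sketch of the described fingerprint-domain augmentation, assuming a pretrained autoencoder whose output approximates the image content and whose residual approximates the GAN fingerprint; the function name, the additive-noise perturbation, and the value ranges are placeholders rather than the paper's actual implementation.

    import torch

    def augment_in_fingerprint_domain(images: torch.Tensor,
                                      ae: torch.nn.Module,
                                      noise_std: float = 0.05) -> torch.Tensor:
        """Replace GAN fingerprints with randomly perturbed ones.

        images: batch of GAN-generated images in [0, 1], shape (N, C, H, W).
        ae: pretrained autoencoder acting as the fingerprint extractor.
        """
        with torch.no_grad():
            content = ae(images)                 # reconstruction ~ image content
        fingerprint = images - content           # residual ~ GAN fingerprint
        # Randomly perturb the fingerprint (here: additive Gaussian noise).
        perturbed = fingerprint + noise_std * torch.randn_like(fingerprint)
        # Recombine: visually near-identical images with distinct fingerprints.
        return (content + perturbed).clamp(0.0, 1.0)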
An Analysis on Adversarial Machine Learning: Methods and Applications
Deep learning has witnessed astonishing advancement in the last decade and revolutionized many fields ranging from computer vision to natural language processing. A prominent field of research that enabled such achievements is adversarial learning, which investigates the behavior and functionality of a learning model in the presence of an adversary. Adversarial learning consists of two major trends. The first trend analyzes the susceptibility of machine learning models to manipulation in the decision-making process and aims to improve robustness to such manipulations. The second trend exploits adversarial games between components of the model to enhance the learning process. This dissertation aims to provide an analysis of these two sides of adversarial learning and harness their potential for improving the robustness and generalization of deep models.
In the first part of the dissertation, we study the adversarial susceptibility of deep learning models. We provide an empirical analysis of the extent of this vulnerability by proposing two adversarial attacks that exploit the geometric and frequency-domain characteristics of inputs to manipulate deep decisions. Afterward, we formalize the susceptibility of deep networks using the first-order approximation of the predictions and extend the theory to the ensemble classification scheme. Inspired by these theoretical findings, we formalize a reliable and practical defense against adversarial examples to robustify ensembles. We extend this part by investigating the shortcomings of adversarial training and highlight that the popular momentum stochastic gradient descent, developed essentially for natural training, is not well suited for optimization in adversarial training since it is not designed to be robust against the chaotic behavior of gradients in this setting. Motivated by these observations, we develop an optimization method that is more suitable for adversarial training. In the second part of the dissertation, we harness adversarial learning to enhance the generalization and performance of deep networks in discriminative and generative tasks. We develop several models for biometric identification, including fingerprint distortion rectification and latent fingerprint reconstruction. In particular, we develop a ridge reconstruction model based on generative adversarial networks that estimates the missing ridge information in latent fingerprints. We introduce a novel modification that enables the generator network to preserve the ID information during the reconstruction process. To address the scarcity of data, e.g., in latent fingerprint analysis, we develop a supervised augmentation technique that combines input examples based on their salient regions. Our findings advocate that adversarial learning improves the performance and reliability of deep networks in a wide range of applications.
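As a concrete illustration of adversarial susceptibility, the sketch below generates adversarial examples with the standard fast gradient sign method (FGSM); the dissertation's own attacks rely on geometric and frequency-domain characteristics, so this is a generic stand-in rather than its actual method.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor,
                    eps: float = 8 / 255) -> torch.Tensor:
        """Perturb x within an L-infinity ball of radius eps to increase the loss."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step in the direction of the sign of the loss gradient.
        return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()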
A siamese-based verification system for open-set architecture attribution of synthetic images
Despite the wide variety of methods developed for synthetic image attribution, most of them can only attribute images generated by models or architectures included in the training set and do not work with unknown architectures, hindering their applicability in real-world scenarios. In this paper, we propose a verification framework that relies on a Siamese Network to address the problem of open-set attribution of synthetic images to the architecture that generated them. We consider two different settings. In the first setting, the system determines whether two images have been produced by the same generative architecture or not. In the second setting, the system verifies a claim about the architecture used to generate a synthetic image, utilizing one or multiple reference images generated by the claimed architecture. The main strength of the proposed system is its ability to operate in both closed-set and open-set scenarios, so that the input images, whether query or reference images, may or may not belong to the architectures considered during training. Experimental evaluations encompassing various generative architectures such as GANs, diffusion models, and transformers, focusing on synthetic face image generation, confirm the excellent performance of our method in both closed-set and open-set settings, as well as its strong generalization capabilities.
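A minimal sketch of the verification idea follows, assuming a trained Siamese backbone that maps images to embeddings; the cosine-similarity score, the averaging over references, and the threshold are illustrative placeholders, not the paper's trained components or decision rule.

    import torch
    import torch.nn.functional as F

    def same_architecture(backbone: torch.nn.Module,
                          query: torch.Tensor,
                          references: torch.Tensor,
                          threshold: float = 0.5) -> bool:
        """Verify whether a query image (C, H, W) matches the architecture
        represented by one or more reference images (N, C, H, W)."""
        with torch.no_grad():
            q = F.normalize(backbone(query.unsqueeze(0)), dim=-1)  # (1, D)
            r = F.normalize(backbone(references), dim=-1)          # (N, D)
        # Average cosine similarity to the reference set; accept if high enough.
        return (q @ r.t()).mean().item() >= threshold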
BOSC: A Backdoor-based Framework for Open Set Synthetic Image Attribution
Synthetic image attribution addresses the problem of tracing back the origin
of images produced by generative models. Extensive efforts have been made to
explore unique representations of generative models and use them to attribute a
synthetic image to the model that produced it. Most of the methods classify the
models or the architectures among those in a closed set without considering the
possibility that the system is fed with samples produced by unknown
architectures. With the continuous progress of AI technology, new generative
architectures continuously appear, thus driving the attention of researchers
towards the development of tools capable of working in open-set scenarios. In
this paper, we propose a framework for open set attribution of synthetic
images, named BOSC (Backdoor-based Open Set Classification), that relies on the
concept of backdoor attacks to design a classifier with a rejection option. BOSC
works by purposely injecting class-specific triggers inside a portion of the
images in the training set to induce the network to establish a matching
between class features and trigger features. The behavior of the trained model
with respect to triggered samples is then exploited at test time to perform
sample rejection using an ad-hoc score. Experiments show that the proposed method performs well, consistently surpassing the state of the art, and remains robust against common image processing operations. Although we designed our method for the task of synthetic image attribution, the proposed framework is general and can be used for other image forensic applications.
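To make the rejection mechanism concrete, here is a minimal sketch in the spirit of the described framework: a classifier trained with class-specific triggers should respond strongly when a test image is stamped with the trigger of its predicted class, and a weak response suggests an unknown architecture. The trigger addition, the confidence-based score, and the threshold are illustrative assumptions, not the paper's ad-hoc score.

    import torch
    import torch.nn.functional as F

    def classify_with_rejection(model: torch.nn.Module,
                                image: torch.Tensor,
                                triggers: torch.Tensor,
                                tau: float = 0.9):
        """image: (C, H, W) in [0, 1]; triggers[c]: trigger pattern for class c.
        Returns the predicted class index, or None to reject as unknown."""
        with torch.no_grad():
            p_clean = F.softmax(model(image.unsqueeze(0)), dim=-1)[0]
            c = int(p_clean.argmax())
            # Stamp the image with the predicted class's trigger.
            triggered = (image + triggers[c]).clamp(0.0, 1.0)
            p_trig = F.softmax(model(triggered.unsqueeze(0)), dim=-1)[0]
        # Known classes should respond with high confidence to their own trigger.
        return c if p_trig[c].item() >= tau else None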
Texture and artifact decomposition for improving generalization in deep-learning-based deepfake detection
The harmful utilization of DeepFake technology poses a significant threat to public welfare, precipitating a crisis in public opinion. Existing detection methodologies, predominantly relying on convolutional neural networks and deep learning paradigms, focus on achieving high in-domain recognition accuracy amidst many forgery techniques. However, overlooking the intricate interplay between textures and artifacts results in compromised performance across diverse forgery scenarios. This paper introduces a groundbreaking framework, denoted as Texture and Artifact Detector (TAD), to mitigate the challenge posed by the limited generalization ability stemming from the mutual neglect of textures and artifacts. Specifically, our approach delves into the similarities among disparate forged datasets, discerning synthetic content based on the consistency of textures and the presence of artifacts. Furthermore, we use a model ensemble learning strategy to judiciously aggregate texture disparities and artifact patterns inherent in various forgery types, thereby enhancing the model's generalization ability. Our comprehensive experimental analysis, encompassing extensive intra-dataset and cross-dataset validations along with evaluations on both video sequences and individual frames, confirms the effectiveness of TAD. The results on four benchmark datasets highlight the significant impact of the synergistic consideration of texture and artifact information, leading to a marked improvement in detection capabilities.
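As a simplified illustration of combining the two cues, the sketch below fuses the outputs of a texture branch and an artifact branch with a fixed weighted average; the branch networks and the fusion rule are placeholders, since TAD's actual ensemble learning strategy is more elaborate.

    import torch

    def deepfake_score(texture_net: torch.nn.Module,
                       artifact_net: torch.nn.Module,
                       image: torch.Tensor,
                       w_texture: float = 0.5) -> float:
        """Return a fused probability that a frame (C, H, W) is forged."""
        with torch.no_grad():
            x = image.unsqueeze(0)
            p_texture = torch.sigmoid(texture_net(x)).squeeze()    # texture cue
            p_artifact = torch.sigmoid(artifact_net(x)).squeeze()  # artifact cue
        # Simple weighted average of the two branch probabilities.
        return (w_texture * p_texture + (1.0 - w_texture) * p_artifact).item()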
Deepfakes: current and future trends
Advances in Deep Learning (DL), Big Data and image processing have facilitated online disinformation spreading through Deepfakes. This entails severe threats including public opinion manipulation, geopolitical tensions, chaos in financial markets, scams, defamation and identity theft, among others. Therefore, it is imperative to develop techniques to prevent, detect, and stop the spreading of deepfake content. Along these lines, the goal of this paper is to present a big picture perspective of the deepfake paradigm, by reviewing current and future trends. First, a compact summary of DL techniques used for deepfakes is presented. Then, a review of the fight between generation and detection techniques is elaborated. Moreover, we delve into the potential that new technologies, such as distributed ledgers and blockchain, can offer with regard to cybersecurity and the fight against digital deception. Two application scenarios, online social network engineering attacks and the Internet of Things, are reviewed, and the main insights and open challenges are discussed. Finally, future trends and research lines are discussed, pointing out potential key agents and technologies.
Deep Learning in Diverse Intelligent Sensor Based Systems
Deep learning has become a predominant method for solving data analysis problems in virtually all fields of science and engineering. The increasing complexity and the large volume of data collected by diverse sensor systems have spurred the development of deep learning methods and have fundamentally transformed the way the data are acquired, processed, analyzed, and interpreted. With the rapid development of deep learning technology and its ever-increasing range of successful applications across diverse sensor systems, there is an urgent need to provide a comprehensive investigation of deep learning in this domain from a holistic view. This survey paper aims to contribute to this by systematically investigating deep learning models/methods and their applications across diverse sensor systems. It also provides a comprehensive summary of deep learning implementation tips and links to tutorials, open-source codes, and pretrained models, which can serve as an excellent self-contained reference for deep learning practitioners and those seeking to innovate deep learning in this space. In addition, this paper provides insights into research topics in diverse sensor systems where deep learning has not yet been well developed, and highlights challenges and future opportunities. This survey serves as a catalyst to accelerate the application and transformation of deep learning in diverse sensor systems.