Effectiveness of random deep feature selection for securing image manipulation detectors against adversarial examples
We investigate whether the random feature selection approach proposed in [1] to
improve the robustness of forensic detectors to targeted attacks can be
extended to detectors based on deep learning features. In particular, we study
the transferability of adversarial examples targeting an original CNN image
manipulation detector to other detectors (a fully connected neural network and
a linear SVM) that rely on a random subset of the features extracted from the
flatten layer of the original network. The results obtained on three image
manipulation detection tasks (resizing, median filtering, and adaptive
histogram equalization), two original network architectures, and three classes
of attacks show that feature randomization helps hinder attack
transferability, even if, in some cases, simply changing the architecture of
the detector, or retraining it, is enough to prevent the transferability of
the attacks.
Comment: Submitted to the ICASSP conference to be held in 2020, Barcelona, Spain
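To make the feature-randomization idea concrete, here is a minimal sketch of how a secondary detector can be trained on a random subset of flatten-layer features. It is not the authors' pipeline: `TinyDetector`, the input size, the placeholder data, and the 50% subset ratio are all illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import LinearSVC

class TinyDetector(nn.Module):
    """Stand-in CNN manipulation detector (not the paper's architecture)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),                      # the "flatten layer" features
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)

    def forward(self, x):
        return self.classifier(self.features(x))

def flatten_features(model, x):
    """Extract flatten-layer activations for a batch of 64x64 images."""
    with torch.no_grad():
        return model.features(x).cpu().numpy()

rng = np.random.default_rng(0)
model = TinyDetector().eval()
x_train = torch.rand(128, 1, 64, 64)           # placeholder images
y_train = rng.integers(0, 2, size=128)         # placeholder labels

# Train a linear SVM on a random half of the deep features; an attacker
# targeting the full CNN does not know which subset the SVM uses.
feats = flatten_features(model, x_train)
subset = rng.choice(feats.shape[1], size=feats.shape[1] // 2, replace=False)
svm = LinearSVC().fit(feats[:, subset], y_train)
```

Because the random subset is kept secret, adversarial perturbations optimized against the original end-to-end CNN need not align with the feature dimensions the randomized detector actually sees, which is the mechanism the abstract credits for hindering transferability.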
On the Transferability of Adversarial Examples between Encrypted Models
Deep neural networks (DNNs) are well known to be vulnerable to adversarial
examples (AEs). In addition, AEs have adversarial transferability: AEs
generated for a source model can fool other (target) models. In this paper, we
investigate for the first time the transferability of AEs between models
encrypted for adversarially robust defense. To objectively verify the property
of transferability, the robustness of the models is evaluated using a
benchmark attack method called AutoAttack. In an image-classification
experiment, encrypted models are confirmed not only to be robust against AEs
but also to reduce the influence of AEs transferred from other models.
Comment: to appear in ISPACS 202
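A transferability check of this kind can be sketched with the official `autoattack` package (the reference implementation of the benchmark named in the abstract). The sketch below assumes `source_model` and `target_model` are arbitrary placeholder classifiers; the batch size and epsilon are illustrative, not the paper's settings.

```python
import torch
from autoattack import AutoAttack  # pip install autoattack

def transfer_success_rate(source_model, target_model, x, y, eps=8 / 255):
    """Craft AEs on the source model, then measure how often they
    also fool the target model (higher = more transferable)."""
    adversary = AutoAttack(source_model, norm="Linf", eps=eps,
                           version="standard")
    x_adv = adversary.run_standard_evaluation(x, y, bs=64)
    with torch.no_grad():
        preds = target_model(x_adv).argmax(dim=1)
    # Fraction of adversarial examples misclassified by the target.
    return (preds != y).float().mean().item()
```

Running this for both directions (plain to encrypted and encrypted to plain) would give the kind of cross-model evidence the abstract reports.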
On the Adversarial Transferability of ConvMixer Models
Deep neural networks (DNNs) are well known to be vulnerable to adversarial
examples (AEs). In addition, AEs have adversarial transferability, which means
that AEs generated for a source model can fool another black-box (target)
model with non-trivial probability. In this paper, we investigate for the
first time the property of adversarial transferability between models that
include ConvMixer, an isotropic network. To objectively verify the property of
transferability, the robustness of the models is evaluated using a benchmark
attack method called AutoAttack. In an image classification experiment,
ConvMixer is confirmed to be vulnerable to transferred AEs.
Comment: 5 pages, 5 figures, 5 tables. arXiv admin note: substantial text
overlap with arXiv:2209.0299
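For readers unfamiliar with the architecture, here is a minimal PyTorch rendering of ConvMixer following the formulation in "Patches Are All You Need?" (Trockman and Kolter); the hyperparameters are illustrative defaults, not the configuration evaluated in the paper above.

```python
import torch.nn as nn

class Residual(nn.Module):
    def __init__(self, fn):
        super().__init__()
        self.fn = fn

    def forward(self, x):
        return self.fn(x) + x

def ConvMixer(dim=256, depth=8, kernel_size=9, patch_size=7, n_classes=10):
    return nn.Sequential(
        # Patch embedding; every subsequent block keeps this resolution
        # and width, which is what makes the network "isotropic".
        nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size),
        nn.GELU(), nn.BatchNorm2d(dim),
        *[nn.Sequential(
            Residual(nn.Sequential(               # depthwise (spatial) mixing
                nn.Conv2d(dim, dim, kernel_size, groups=dim, padding="same"),
                nn.GELU(), nn.BatchNorm2d(dim))),
            nn.Conv2d(dim, dim, kernel_size=1),   # pointwise (channel) mixing
            nn.GELU(), nn.BatchNorm2d(dim))
          for _ in range(depth)],
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(dim, n_classes))
```

Unlike a CNN with a pyramidal feature hierarchy, every mixing block here operates at the same spatial resolution, which is the structural property the abstract refers to when calling ConvMixer isotropic.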
Multimedia Forensics
This book is open access. Media forensics has never been more relevant to societal life. Not only does media content represent an ever-increasing share of the data traveling on the net and the preferred means of communication for most users, it has also become an integral part of the most innovative applications in the digital information ecosystem that serves various sectors of society, from entertainment to journalism to politics. Undoubtedly, the advances in deep learning and computational imaging contributed significantly to this outcome. The underlying technologies that drive this trend, however, also pose a profound challenge to establishing trust in what we see, hear, and read, and make media content the preferred target of malicious attacks. In this new threat landscape, powered by innovative imaging technologies and sophisticated tools based on autoencoders and generative adversarial networks, this book fills an important gap. It presents a comprehensive review of state-of-the-art forensic capabilities related to media attribution, integrity and authenticity verification, and counter-forensics. Its content is developed to provide practitioners, researchers, photo and video enthusiasts, and students with a holistic view of the field.