Locate and Verify: A Two-Stream Network for Improved Deepfake Detection
Deepfake has taken the world by storm, triggering a trust crisis. Current
deepfake detection methods are typically inadequate in generalizability, with a
tendency to overfit to image content such as the background, which occurs
frequently in the training dataset but is largely irrelevant to the forgery.
Furthermore, current methods heavily rely on a few dominant forgery regions and
may ignore other equally important regions, leading to inadequate uncovering of
forgery cues. In this paper, we strive to address these shortcomings from three
aspects: (1) We propose an innovative two-stream network that effectively
enlarges the potential regions from which the model extracts forgery evidence.
(2) We devise three functional modules to handle the multi-stream and
multi-scale features in a collaborative learning scheme. (3) Confronted with
the challenge of obtaining forgery annotations, we propose a Semi-supervised
Patch Similarity Learning strategy to estimate patch-level forged location
annotations. Empirically, our method demonstrates significantly improved
robustness and generalizability, outperforming previous methods on six
benchmarks, and improving the frame-level AUC on the Deepfake Detection
Challenge preview dataset from 0.797 to 0.835 and the video-level AUC on the
CelebDFv1 dataset from 0.811 to 0.847. Our implementation is available at
https://github.com/sccsok/Locate-and-Verify
Comment: 10 pages, 8 figures, 60 references. This paper has been accepted for ACM MM 202
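The Semi-supervised Patch Similarity Learning strategy above is described only at a high level. As a rough illustration of the general idea — not the paper's actual algorithm — patch-level forgery pseudo-labels can be derived by flagging patches that are dissimilar to the bulk of the image's patches. The function name and the thresholding rule here are hypothetical:

```python
import numpy as np

def pseudo_patch_labels(patch_feats, thresh=0.5):
    """Hypothetical sketch: flag patches whose mean cosine similarity
    to all other patches falls below `thresh` as likely forged.

    patch_feats: (N, D) array of patch embeddings.
    Returns a boolean array of length N (True = pseudo-labeled forged).
    """
    # L2-normalize so dot products are cosine similarities.
    f = patch_feats / (np.linalg.norm(patch_feats, axis=1, keepdims=True) + 1e-8)
    sim = f @ f.T                                   # (N, N) similarity matrix
    mean_sim = (sim.sum(axis=1) - 1.0) / (len(f) - 1)  # exclude self-similarity
    return mean_sim < thresh
```

In practice the paper learns such similarities jointly with the detector; this standalone version only conveys the intuition behind turning image-level labels into patch-level ones.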
Identification of British one pound counterfeit coins using laser-induced breakdown spectroscopy
Acknowledgments: The authors are grateful to Robert Matthews, C.Chem., MRSC, for his generous loan of seven of the counterfeit coins.
Learning Second Order Local Anomaly for General Face Forgery Detection
In this work, we propose a novel method to improve the generalization ability
of CNN-based face forgery detectors. Our method considers the feature anomalies
of forged faces caused by the prevalent blending operations in face forgery
algorithms. Specifically, we propose a weakly supervised Second Order Local
Anomaly (SOLA) learning module to mine anomalies in local regions using deep
feature maps. SOLA first decomposes the neighborhood of local features by
different directions and distances and then calculates the first and second
order local anomaly maps which provide more general forgery traces for the
classifier. We also propose a Local Enhancement Module (LEM) to improve the
discrimination between local features of real and forged regions, so as to
ensure accuracy in calculating anomalies. Besides, an improved Adaptive Spatial
Rich Model (ASRM) is introduced to help mine subtle noise features via
learnable high-pass filters. With neither pixel-level annotations nor external
synthetic data, our method with a simple ResNet18 backbone achieves
performance competitive with state-of-the-art works when evaluated on
unseen forgeries.
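The first- and second-order local anomaly maps are only sketched in the abstract. A minimal numpy interpretation, assuming that "decomposing the neighborhood by directions and distances" means comparing each location with shifted copies of the feature map — the function and its exact definitions are an illustrative guess, not SOLA's actual formulation:

```python
import numpy as np

def local_anomaly_maps(feat, distances=(1, 2)):
    """Hypothetical sketch of first/second-order local anomaly maps.

    feat: (H, W, C) deep feature map. Each location is compared with its
    neighbor at several directions and distances; the second-order map
    measures how much those first-order differences vary among themselves.
    """
    shifts = [(d * dy, d * dx)
              for d in distances
              for dy, dx in ((0, 1), (1, 0), (1, 1), (1, -1))]
    first = []
    for dy, dx in shifts:
        nb = np.roll(feat, shift=(dy, dx), axis=(0, 1))   # shifted neighbor
        first.append(np.abs(feat - nb).mean(axis=-1))      # (H, W) per shift
    first = np.stack(first)                                # (S, H, W)
    # Second order: deviation of each directional map from their mean.
    second = np.abs(first - first.mean(axis=0)).mean(axis=0)
    return first.mean(axis=0), second
```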
UCF: Uncovering Common Features for Generalizable Deepfake Detection
Deepfake detection remains a challenging task due to the difficulty of
generalizing to new types of forgeries. This problem primarily stems from the
overfitting of existing detection methods to forgery-irrelevant features and
method-specific patterns. The latter has been rarely studied and not well
addressed by previous works. This paper presents a novel approach to address
the two types of overfitting issues by uncovering common forgery features.
Specifically, we first propose a disentanglement framework that decomposes
image information into three distinct components: forgery-irrelevant,
method-specific forgery, and common forgery features. To ensure the decoupling
of method-specific and common forgery features, a multi-task learning strategy
is employed, including a multi-class classification that predicts the category
of the forgery method and a binary classification that distinguishes the real
from the fake. Additionally, a conditional decoder is designed to utilize
forgery features as a condition along with forgery-irrelevant features to
generate reconstructed images. Furthermore, a contrastive regularization
technique is proposed to encourage the disentanglement of the common and
specific forgery features. Ultimately, we utilize only the common forgery
features for generalizable deepfake detection. Extensive evaluations
demonstrate that our framework generalizes better than current
state-of-the-art methods.
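The multi-task disentanglement objective can be illustrated with a toy sketch: a binary real/fake head on the common features, a multi-class head over forgery methods on the specific features, and a contrastive term discouraging the two feature sets from aligning. All names and the exact form of the contrastive term are assumptions, not UCF's implementation:

```python
import numpy as np

def softmax_xent(logits, label):
    """Numerically stable softmax cross-entropy for a single example."""
    z = logits - logits.max()
    logp = z - np.log(np.exp(z).sum())
    return -logp[label]

def ucf_style_loss(common_feat, specific_feat, w_bin, w_multi,
                   is_fake, method_id):
    """Hypothetical multi-task objective in the spirit of UCF.

    common_feat/specific_feat: (D,) feature vectors from the two branches.
    w_bin: (2, D) binary head; w_multi: (K, D) forgery-method head.
    """
    l_bin = softmax_xent(w_bin @ common_feat, int(is_fake))    # real vs fake
    l_multi = softmax_xent(w_multi @ specific_feat, method_id) # which method
    # Assumed contrastive regularizer: penalize positive cosine similarity
    # between the common and specific features to keep them disentangled.
    sim = common_feat @ specific_feat / (
        np.linalg.norm(common_feat) * np.linalg.norm(specific_feat) + 1e-8)
    l_contrast = max(0.0, sim)
    return l_bin + l_multi + l_contrast
```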
Enhancing General Face Forgery Detection via Vision Transformer with Low-Rank Adaptation
Nowadays, forged faces pose pressing security concerns involving fake news,
fraud, impersonation, etc. Despite demonstrated success in intra-domain
face forgery detection, existing detection methods lack generalization
capability and tend to suffer dramatic performance drops when deployed to
unforeseen domains. To mitigate this issue, this paper designs a more general
fake face detection model based on the vision transformer (ViT) architecture.
In the training phase, the pretrained ViT weights are frozen, and only the
Low-Rank Adaptation (LoRA) modules are updated. Additionally, the Single
Center Loss (SCL) is applied to supervise the training process, further
improving the generalization capability of the model. The proposed method
achieves state-of-the-art detection performance in both cross-manipulation
and cross-dataset evaluations.
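The frozen-backbone-plus-LoRA recipe can be sketched with a minimal linear layer: the pretrained weight W stays fixed and only the low-rank factors A and B are trained, with B zero-initialized so training starts exactly from the pretrained behavior. This is a generic LoRA sketch, not the paper's code:

```python
import numpy as np

class LoRALinear:
    """Minimal LoRA sketch: effective weight is W + (alpha/r) * B @ A,
    where W is the frozen pretrained weight and only A, B are trained."""

    def __init__(self, W, r=4, alpha=8, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W = W                                      # frozen, (out, in)
        self.A = rng.normal(0.0, 0.02, (r, W.shape[1])) # trainable down-proj
        self.B = np.zeros((W.shape[0], r))              # trainable, zero init
        self.scale = alpha / r

    def __call__(self, x):
        # Low-rank update is applied as two small matmuls, never forming
        # the full (out, in) delta explicitly.
        return self.W @ x + self.scale * (self.B @ (self.A @ x))
```

Because B starts at zero, the layer initially reproduces the pretrained output, and gradient updates to A and B gradually introduce the adaptation.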
DeepfakeBench: A Comprehensive Benchmark of Deepfake Detection
A critical yet frequently overlooked challenge in the field of deepfake
detection is the lack of a standardized, unified, comprehensive benchmark. This
issue leads to unfair performance comparisons and potentially misleading
results. Specifically, there is a lack of uniformity in data processing
pipelines, resulting in inconsistent data inputs for detection models.
Additionally, there are noticeable differences in experimental settings, and
evaluation strategies and metrics lack standardization. To fill this gap, we
present the first comprehensive benchmark for deepfake detection, called
DeepfakeBench, which offers three key contributions: 1) a unified data
management system to ensure consistent input across all detectors, 2) an
integrated framework for implementing state-of-the-art methods, and 3)
standardized evaluation metrics and protocols to promote transparency and
reproducibility. Featuring an extensible, modular codebase, DeepfakeBench
contains 15 state-of-the-art detection methods, 9 deepfake datasets, a series
of deepfake detection evaluation protocols and analysis tools, as well as
comprehensive evaluations. Moreover, we provide new insights based on extensive
analysis of these evaluations from various perspectives (e.g., data
augmentations, backbones). We hope that our efforts could facilitate future
research and foster innovation in this increasingly critical domain. All codes,
evaluations, and analyses of our benchmark are publicly available at
https://github.com/SCLBD/DeepfakeBench
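As an aside on the metrics such a benchmark must pin down, frame-level versus video-level AUC is a common source of inconsistency across papers. A dependency-free sketch of one plausible protocol — rank-based AUC, with a video's score taken as the mean of its frame scores; the aggregation rule is an assumption, not necessarily DeepfakeBench's:

```python
def auc(scores, labels):
    """Rank-based (Mann-Whitney) AUC for binary labels, ties count half."""
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def video_level_scores(frame_scores):
    """Assumed aggregation: average the per-frame scores within each video.

    frame_scores: dict mapping video id -> list of frame-level scores.
    """
    return {v: sum(fs) / len(fs) for v, fs in frame_scores.items()}
```

Pinning both the AUC estimator and the frame-to-video aggregation in one shared implementation is what makes cross-detector numbers comparable.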