Advanced image tampering techniques are increasingly challenging the
trustworthiness of multimedia, leading to the development of Image Manipulation
Localization (IML). But what makes a good IML model? The answer lies in how
artifacts are captured. Exploiting artifacts requires the model to extract
non-semantic discrepancies between manipulated and authentic regions,
necessitating explicit comparisons between the two areas. With its
self-attention mechanism, the Transformer is naturally a better candidate for
capturing artifacts. However, due to limited datasets, there is
currently no pure ViT-based approach for IML to serve as a benchmark, and CNNs
dominate the entire task. Nevertheless, CNNs are weak at modeling long-range
dependencies and non-semantic features. To bridge this gap, we draw on three
observations: artifacts are sensitive to image resolution, are amplified under
multi-scale features, and are most abundant at manipulation borders. We
therefore answer the opening question by building a ViT with high-resolution
capacity, multi-scale feature extraction, and manipulation edge supervision,
designed to converge on a small amount of data. We term this simple but
effective ViT paradigm IML-ViT, which has significant potential to become a new
benchmark for IML.
Extensive experiments on five benchmark datasets verify that our model
outperforms state-of-the-art manipulation localization methods. Code and models
are available at \url{https://github.com/SunnyHaze/IML-ViT}.
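
To make the three design principles concrete, the following is a minimal
PyTorch sketch of an IML-ViT-style forward pass and loss. It is an
illustrative assumption rather than the released implementation: the names
(\texttt{IMLViTSketch}, \texttt{SimpleFeaturePyramid},
\texttt{edge\_supervised\_loss}) are hypothetical, the plain
\texttt{nn.TransformerEncoder} stands in for the actual high-resolution ViT
backbone, and the dilation-minus-erosion border map is one simple way to
realize edge supervision; see the repository above for the real code.
\begin{verbatim}
# Minimal, illustrative IML-ViT-style sketch (an assumption, not the released code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEmbed(nn.Module):
    """16x16 patch embedding so high-resolution inputs keep subtle artifacts."""
    def __init__(self, in_ch=3, dim=768, patch=16):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
    def forward(self, x):
        return self.proj(x)  # (B, dim, H/16, W/16)

class SimpleFeaturePyramid(nn.Module):
    """Multi-scale features from the single-scale ViT output
    (in the spirit of ViTDet's simple feature pyramid)."""
    def __init__(self, dim=768, out_ch=256):
        super().__init__()
        self.up = nn.ConvTranspose2d(dim, out_ch, 2, stride=2)      # 1/8 scale
        self.same = nn.Conv2d(dim, out_ch, 1)                       # 1/16 scale
        self.down = nn.Conv2d(dim, out_ch, 3, stride=2, padding=1)  # 1/32 scale
    def forward(self, f):
        return [self.up(f), self.same(f), self.down(f)]

class IMLViTSketch(nn.Module):
    # NOTE: positional embeddings and windowed attention are omitted for brevity.
    def __init__(self, dim=768, depth=4, heads=12):
        super().__init__()
        self.embed = PatchEmbed(dim=dim)
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.pyramid = SimpleFeaturePyramid(dim)
        self.head = nn.Conv2d(256 * 3, 1, 1)  # fuse scales -> mask logits
    def forward(self, x):
        f = self.embed(x)                      # (B, C, h, w)
        B, C, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)  # (B, h*w, C)
        tokens = self.encoder(tokens)          # global self-attention over patches
        f = tokens.transpose(1, 2).reshape(B, C, h, w)
        feats = [F.interpolate(p, size=(h, w), mode="bilinear",
                               align_corners=False) for p in self.pyramid(f)]
        logits = self.head(torch.cat(feats, dim=1))
        return F.interpolate(logits, size=x.shape[-2:], mode="bilinear",
                             align_corners=False)

def edge_supervised_loss(logits, mask, edge_weight=2.0):
    """Pixel-wise BCE plus an extra-weighted term on manipulation borders.
    The border map is the ground-truth mask's dilation minus its erosion."""
    bce = F.binary_cross_entropy_with_logits(logits, mask, reduction="none")
    dil = F.max_pool2d(mask, 3, stride=1, padding=1)
    ero = -F.max_pool2d(-mask, 3, stride=1, padding=1)
    edge = (dil - ero).clamp(0, 1)  # ~1 near region borders, 0 elsewhere
    return (bce * (1 + edge_weight * edge)).mean()

# Usage: a high-resolution input and a binary ground-truth mask.
model = IMLViTSketch()
img = torch.rand(1, 3, 512, 512)
gt = (torch.rand(1, 1, 512, 512) > 0.5).float()
loss = edge_supervised_loss(model(img), gt)
loss.backward()
\end{verbatim}
The sketch ties each abstract claim to a component: the patch embedding and
full-resolution upsampling preserve resolution-sensitive artifacts, the
pyramid amplifies them across scales, and the weighted border term supplies
the edge supervision.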