Exploring Fairness in Pre-trained Visual Transformer-Based Natural and
GAN-Generated Image Detection Systems and Understanding the Impact of Image
Compression on Fairness
It is not sufficient merely to construct computational models that can
accurately distinguish fake images from real images captured by a camera;
it is equally important to ensure that these models are fair and do not
produce biased outcomes that can harm certain social groups or pose serious
security threats. Exploring fairness in forensic algorithms is an initial
step towards correcting such biases. Since visual transformers have recently
been widely adopted for image classification tasks owing to their ability to
achieve high accuracy, this study explores bias in transformer-based image
forensic algorithms that classify natural and GAN-generated images. Using a
procured bias evaluation corpus, the study analyzes bias across gender,
racial, affective, and intersectional domains with a wide set of individual
and pairwise bias evaluation measures. As robustness to image compression is
an important consideration in forensic tasks, the study also examines the
effect of image compression on model bias. To this end, a two-phase
evaluation setting is followed: one set of experiments is carried out in an
uncompressed evaluation setting and the other in a compressed evaluation
setting.
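
As a rough illustration of the two-phase protocol described above, the sketch
below pairs a JPEG re-encoding helper (for the compressed setting) with a
simple pairwise bias measure, the absolute accuracy gap between two
demographic subgroups. The choice of metric, the quality factor, and all
function names are illustrative assumptions, not the study's actual
evaluation code.

```python
import io

import numpy as np
from PIL import Image


def jpeg_compress(image: Image.Image, quality: int = 75) -> Image.Image:
    """Re-encode an image as JPEG at a given quality, simulating the
    compressed evaluation setting (quality factor is an assumption)."""
    buf = io.BytesIO()
    image.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf)


def group_accuracy(y_true, y_pred) -> float:
    """Classification accuracy restricted to one demographic subgroup."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float((y_true == y_pred).mean())


def pairwise_accuracy_gap(y_true_a, y_pred_a, y_true_b, y_pred_b) -> float:
    """One possible pairwise bias measure: the absolute accuracy
    disparity between two subgroups (e.g., male vs. female faces)."""
    return abs(group_accuracy(y_true_a, y_pred_a)
               - group_accuracy(y_true_b, y_pred_b))


# Example with dummy labels/predictions for two hypothetical subgroups;
# in the two-phase setting this gap would be computed once on the
# original images and once on their jpeg_compress()-ed versions.
labels_a, preds_a = [1, 0, 1, 1], [1, 0, 0, 1]  # subgroup A
labels_b, preds_b = [1, 1, 0, 0], [0, 0, 0, 0]  # subgroup B
print(pairwise_accuracy_gap(labels_a, preds_a, labels_b, preds_b))  # 0.25
```

Comparing the gap obtained in the uncompressed phase with the gap obtained
after re-encoding the same evaluation corpus would indicate whether
compression amplifies or attenuates the model's subgroup disparity.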