Since its appearance, Visual Question Answering (VQA, i.e. answering a
question posed over an image) has always been treated as a classification
problem over a set of predefined answers. Despite its convenience, this
classification approach poorly reflects the semantics of the problem: it
limits answering to a choice among independent proposals, without taking into
account the similarity between them (e.g. answering cat or German shepherd
instead of dog is penalized as heavily as any other error). We address this
issue by proposing (1) two
measures of proximity between VQA classes, and (2) a corresponding loss which
takes into account the estimated proximity. This significantly improves the
generalization of VQA models by reducing their language bias. In particular, we
show that our approach is model-agnostic, obtaining consistent
improvements with three different VQA models. Finally, by combining our method
with a language bias reduction approach, we report SOTA-level performance on
the challenging VQAv2-CP dataset.
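
To make the idea concrete, below is a minimal sketch of one way a proximity-aware classification loss could be implemented in PyTorch. It is an illustration under stated assumptions, not the paper's actual formulation: the proximity matrix (e.g. derived from answer word embeddings), the softmax-normalized soft target, and the blending weight `alpha` are all hypothetical choices introduced here for clarity.

```python
# Hypothetical sketch of a proximity-aware loss (assumption, not the paper's exact loss).
import torch
import torch.nn.functional as F

def proximity_aware_loss(logits, targets, proximity, alpha=0.5):
    """
    logits:    (batch, num_classes) raw answer scores from a VQA model
    targets:   (batch,) ground-truth answer indices
    proximity: (num_classes, num_classes) similarity between answer classes,
               e.g. computed from word embeddings of the answers (assumption)
    alpha:     weight of the hard one-hot target vs. the soft proximity target
    """
    num_classes = logits.size(1)
    one_hot = F.one_hot(targets, num_classes).float()
    # Turn each target's proximity row into a distribution so that
    # semantically close answers (cat, German shepherd vs. dog) keep some mass.
    soft = F.softmax(proximity[targets], dim=1)
    mixed = alpha * one_hot + (1.0 - alpha) * soft
    # Cross-entropy against the mixed (soft) target distribution.
    log_probs = F.log_softmax(logits, dim=1)
    return -(mixed * log_probs).sum(dim=1).mean()
```

In this sketch, the standard one-hot cross-entropy is recovered with `alpha=1`, while smaller values distribute part of the target mass over semantically close answer classes, which is the intuition behind penalizing cat less than an unrelated answer when the ground truth is dog.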