Unsupervised learning allows us to leverage unlabelled data, which has become
abundant, and to create embeddings usable across a variety of
downstream tasks. However, the lack of interpretability typical of unsupervised
representation learning has become a limiting factor in light of recent
transparent-AI regulations. In this paper, we study graph representation
learning and show that semantics-preserving data augmentation can be learned
and used to produce interpretations. Our framework, which we name
INGENIOUS, creates inherently interpretable embeddings and eliminates the need
for costly post-hoc analysis. We also address the lack of formalism and
metrics in the understudied area of interpretability for unsupervised
representation learning by introducing new metrics. An experimental study on
both graph-level and node-level tasks supports our claims and shows that
interpretable embeddings achieve state-of-the-art performance on downstream
tasks.