Although the latent spaces learned by distinct neural networks are not
generally directly comparable, recent work in machine learning has shown that
it is possible to use the similarities and differences among latent space
vectors to derive "relative representations" with comparable representational
power to their "absolute" counterparts, and which are nearly identical across
models trained on similar data distributions. Apart from their intrinsic
interest in revealing the underlying structure of learned latent spaces,
relative representations are useful for comparing representations across
networks, as a generic proxy for convergence, and for zero-shot model stitching.
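As background, the standard construction from the relative-representations literature encodes each sample by its cosine similarities to a fixed set of anchor samples, which makes the encoding invariant to rotations of the latent space. The sketch below is illustrative, not the paper's implementation: the anchor choice and the orthogonal-rotation stand-in for "two models trained on similar data" are assumptions for the demo.

```python
import numpy as np

def relative_representation(z, anchors):
    """Map an absolute latent vector z to its relative representation:
    the vector of cosine similarities between z and each anchor latent.
    (Anchor set and similarity function here are illustrative choices.)"""
    z = z / np.linalg.norm(z)
    A = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    return A @ z

# Toy stand-in for two models: the same latent space up to an
# orthogonal rotation (a stronger assumption than the near-identity
# across independently trained models described in the abstract).
rng = np.random.default_rng(0)
Z = rng.normal(size=(10, 4))                 # latents from "model A"
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))  # random orthogonal matrix
Z_rot = Z @ Q                                # latents from "model B"

anchors_a = Z[:3]        # same anchor samples, each model's own latents
anchors_b = Z_rot[:3]
r_a = relative_representation(Z[5], anchors_a)
r_b = relative_representation(Z_rot[5], anchors_b)
print(np.allclose(r_a, r_b))  # cosine similarities survive the rotation
```

Because cosine similarity depends only on inner products and norms, which orthogonal maps preserve, the two relative representations coincide even though the absolute latents differ.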
In this work we examine an extension of relative representations to discrete
state-space models, using Clone-Structured Cognitive Graphs (CSCGs) for 2D
spatial localization and navigation as a test case. Our work shows that the
probability vectors computed during message passing can be used to define
relative representations on CSCGs, enabling effective communication across
agents trained using different random initializations and training sequences,
and on only partially similar spaces. We introduce a technique for zero-shot
model stitching that can be applied post hoc, without requiring relative
representations during training. This exploratory work is intended as
a proof-of-concept for the application of relative representations to the study
of cognitive maps in neuroscience and AI.

Comment: 19 pages, 1 table, 6 figures. Accepted paper at the 4th International
Workshop on Active Inference (Ghent, Belgium, 2023).