A common assumption in generative models is that the generator immerses the
latent space into a Euclidean ambient space. Instead, we consider the ambient
space to be a Riemannian manifold, which allows for encoding domain knowledge
through the associated Riemannian metric. Shortest paths can then be defined
accordingly in the latent space to both follow the learned manifold and respect
the ambient geometry. Through careful design of the ambient metric, we can
ensure that shortest paths are well-behaved even for deterministic generators
that would otherwise exhibit a misleading bias. Experimentally, we show that our
approach improves the interpretability of learned representations using both
stochastic and deterministic generators.
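As a minimal sketch of the underlying construction (the notation $f$, $M_{\mathcal{X}}$, and $J_f$ is ours for illustration, not fixed by the abstract): given a smooth generator $f\colon \mathcal{Z} \to \mathcal{X}$ and an ambient Riemannian metric $M_{\mathcal{X}}$, the latent space inherits the pullback metric
\[
M_{\mathcal{Z}}(z) = J_f(z)^{\top}\, M_{\mathcal{X}}\big(f(z)\big)\, J_f(z),
\]
where $J_f(z)$ is the Jacobian of $f$ at $z$. Shortest paths between latent points are then curves $\gamma$ minimizing the energy $\int_0^1 \dot{\gamma}(t)^{\top} M_{\mathcal{Z}}(\gamma(t))\, \dot{\gamma}(t)\, \mathrm{d}t$; the special case $M_{\mathcal{X}} = \mathbb{I}$ recovers the usual Euclidean pullback assumed by standard generative models.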