Multi-domain image-to-image (I2I) translations can transform a source image
according to the style of a target domain. One important, desired
characteristic of these transformations is their graduality, which corresponds
to a smooth change between the source and the target image when their
respective latent-space representations are linearly interpolated. However,
state-of-the-art methods usually perform poorly when evaluated using
inter-domain interpolations, often producing abrupt appearance changes or
unrealistic intermediate images. In this paper, we argue that one of the
main reasons behind this problem is the lack of sufficient inter-domain
training data and we propose two different regularization methods to alleviate
this issue: a new shrinkage loss, which compacts the latent space, and a Mixup
data-augmentation strategy, which flattens the style representations between
domains. We also propose a new metric to quantitatively evaluate
interpolation smoothness, an aspect not sufficiently covered by
the existing I2I translation metrics. Using both our proposed metric and
standard evaluation protocols, we show that our regularization techniques can
improve the state-of-the-art multi-domain I2I translations by a large margin.
Our code will be made publicly available upon the acceptance of this article.
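The abstract does not give implementation details, but the core of a Mixup-style data augmentation on latent style codes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, vector shapes, and the Beta-distribution parameter are all illustrative assumptions.

```python
import numpy as np

def mixup_styles(s_a, s_b, alpha=1.0, rng=None):
    """Convexly combine two style codes with a Mixup coefficient.

    s_a, s_b : style vectors from two different domains (illustrative shapes).
    alpha    : Beta-distribution concentration; alpha=1.0 draws the mixing
               coefficient uniformly in [0, 1].
    Returns the mixed style code and the sampled coefficient.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)  # mixing coefficient in [0, 1]
    return lam * s_a + (1.0 - lam) * s_b, lam

# Example: mix two 64-dimensional style codes.
rng = np.random.default_rng(0)
s_a = rng.normal(size=64)
s_b = rng.normal(size=64)
s_mix, lam = mixup_styles(s_a, s_b, alpha=1.0, rng=rng)
```

Training the generator on such convex combinations of inter-domain style codes is one way to encourage the smooth, gradual transitions between domains that the paper targets.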