Penalizing complexity (PC) priors form a principled framework for designing
priors that penalize deviations from a simpler base model. PC priors penalize
the Kullback-Leibler divergence (KLD) between the distribution induced by a
``simple'' base model and that induced by a more complex model. However, in
many common cases, it is impossible to construct a prior in this way because
the KLD is infinite. Various approximations are used to mitigate this problem,
but the resulting priors then fail to follow the design principles.
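For context, the standard KLD-based construction can be sketched as follows;
the notation below (a flexibility parameter $\xi$, with $\xi = 0$ recovering
the base model $f(\cdot \mid 0)$) is illustrative rather than taken from this
work:
\[
  d(\xi) \;=\; \sqrt{2\,\operatorname{KLD}\!\bigl(f(\cdot \mid \xi)\,\big\|\, f(\cdot \mid 0)\bigr)},
  \qquad
  \pi(\xi) \;=\; \lambda\, e^{-\lambda\, d(\xi)}\,
  \left|\frac{\partial d(\xi)}{\partial \xi}\right|,
\]
that is, an exponential prior with rate $\lambda > 0$ is placed on the
distance $d$ to the base model and transformed back to the parameter scale.
This construction breaks down whenever $d(\xi)$ is infinite.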
We propose a new class of priors, the Wasserstein complexity penalization
(WCP) priors, obtained by replacing the KLD with the Wasserstein distance in
the PC prior framework. These priors avoid the infinite model distance issue
and can be derived by following the design principles exactly, making them
more interpretable.
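As a minimal sketch, under the assumption that the construction mirrors the
one above with the KLD-based distance replaced by a Wasserstein distance
$W_p$,
\[
  \pi(\xi) \;=\; \lambda\, e^{-\lambda\, d_W(\xi)}\,
  \left|\frac{\partial d_W(\xi)}{\partial \xi}\right|,
  \qquad
  d_W(\xi) \;=\; W_p\bigl(f(\cdot \mid \xi),\, f(\cdot \mid 0)\bigr).
\]
For instance, for $x \sim \mathrm{N}(0, \sigma^2)$ with base model
$\sigma = 0$, the KLD to the point mass at zero is infinite, whereas
$W_2\bigl(\mathrm{N}(0,\sigma^2), \delta_0\bigr) = \sigma$, so the recipe
yields the exponential prior $\pi(\sigma) = \lambda e^{-\lambda \sigma}$.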
Furthermore, we propose principles and recipes for constructing joint WCP
priors for multiple parameters, and we show that such priors can be obtained
easily, either analytically or numerically, for a general class of models.
The methods are illustrated through several examples for which PC priors have
previously been applied.