Conditional generative models have become a powerful tool for sampling from
posteriors of Bayesian inverse problems. It is well known in the classical Bayesian
literature that posterior measures are quite robust with respect to
perturbations of both the prior measure and the negative log-likelihood, which
includes perturbations of the observations. However, to the best of our
knowledge, the robustness of conditional generative models with respect to
perturbations of the observations has not yet been investigated. In this paper,
we prove for the first time that appropriately learned conditional generative
models provide robust results for single observations