In this paper, we present a novel methodology for automatic adaptive
weighting of Bayesian Physics-Informed Neural Networks (BPINNs), and we
demonstrate that this makes it possible to robustly address multi-objective and
multi-scale problems. BPINNs are a popular framework for data assimilation,
combining Uncertainty Quantification (UQ) with Partial Differential Equation
(PDE) constraints. The relative weights of the BPINN target
distribution terms are directly related to the inherent uncertainty in the
respective learning tasks. Yet, they are usually set manually a priori, which
can lead to pathological behavior, stability issues, and conflicts between
tasks; these obstacles have deterred the use of BPINNs for inverse
problems with multi-scale dynamics. The present weighting strategy
automatically tunes the weights by considering the multi-task nature of the
target posterior distribution. We show that this remedies the failure modes of BPINNs
and provides efficient exploration of the optimal Pareto front. This leads to
better convergence and stability of BPINN training while reducing sampling
bias. The determined weights moreover carry information about task
uncertainties, reflecting noise levels in the data and adequacy of the PDE
model. We demonstrate this in numerical experiments on Sobolev training,
comparing against an analytically ϵ-optimal baseline, and on a multi-scale
Lotka-Volterra inverse problem. Finally, we apply this framework to an
inpainting task and an inverse problem involving latent field recovery for
incompressible flow in complex geometries.