Adaptive operator learning for infinite-dimensional Bayesian inverse problems
The fundamental computational issues in Bayesian inverse problems (BIPs)
governed by partial differential equations (PDEs) stem from the requirement of
repeated forward model evaluations. A popular strategy to reduce such cost is
to replace expensive model simulations by computationally efficient
approximations obtained via operator learning, motivated by recent progress in
deep learning. However, using the approximate model directly may introduce a
modeling error that compounds the ill-posedness of inverse problems.
Thus, balancing accuracy and efficiency is essential for the effective
implementation of such approaches. To this end, we develop an adaptive operator
learning framework that can reduce modeling error gradually by forcing the
surrogate to be accurate in local areas. This is accomplished by fine-tuning
the pre-trained approximate model during the inversion process with adaptive
points selected by a greedy algorithm, which requires only a few forward model
evaluations. To validate our approach, we adopt DeepONet to construct the
surrogate and unscented Kalman inversion (UKI) to approximate the solution
of BIPs. Furthermore, we present a rigorous convergence guarantee
in the linear case using the framework of UKI. We test the approach on several
benchmarks, including Darcy flow, a heat-source inversion problem, and
reaction-diffusion problems. Numerical results demonstrate that our method
can significantly reduce computational costs while maintaining inversion
accuracy.
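The adaptive loop described above, fine-tuning a pre-trained surrogate at a few greedily selected points during the inversion, can be sketched in a minimal form. Everything here is a hypothetical stand-in: a toy forward model in place of a PDE solve, a nearest-neighbour lookup in place of DeepONet, and a distance-to-training-data acquisition rule as the greedy criterion. It illustrates the structure of the approach, not the paper's implementation:

```python
import numpy as np

def forward_model(theta):
    # Hypothetical "expensive" forward model, standing in for a PDE solve.
    return np.array([np.sin(theta[0]), theta[0] * theta[1], np.cos(theta[1])])

class Surrogate:
    """Toy surrogate: nearest-neighbour lookup over labelled points,
    standing in for a pre-trained DeepONet that gets fine-tuned."""
    def __init__(self):
        self.X, self.Y = [], []

    def fit(self, X, Y):
        # Fine-tuning is mimicked by simply adding labelled points.
        self.X += list(X)
        self.Y += list(Y)

    def __call__(self, theta):
        d = [np.linalg.norm(theta - x) for x in self.X]
        return self.Y[int(np.argmin(d))]

def greedy_refine(surrogate, candidates, n_new):
    """Greedy selection: label the candidates where the surrogate is least
    trusted (here, farthest from its training data), paying only a few
    true forward-model evaluations."""
    picked = []
    for _ in range(n_new):
        dists = [min(np.linalg.norm(c - x) for x in surrogate.X)
                 for c in candidates]
        j = int(np.argmax(dists))          # most poorly covered candidate
        picked.append(candidates[j])
        surrogate.fit([candidates[j]], [forward_model(candidates[j])])
        candidates = np.delete(candidates, j, axis=0)
    return picked
```

In the full method, the candidate set would come from the current UKI ensemble at each iteration, and `surrogate.fit` would perform a few gradient steps of network fine-tuning rather than a table update.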
Adaptive weighting of Bayesian physics-informed neural networks for multitask and multiscale forward and inverse problems
In this paper, we present a novel methodology for automatic adaptive
weighting of Bayesian Physics-Informed Neural Networks (BPINNs), and we
demonstrate that this makes it possible to robustly address multi-objective and
multi-scale problems. BPINNs are a popular framework for data assimilation,
combining the constraints of Uncertainty Quantification (UQ) and Partial
Differential Equation (PDE). The relative weights of the BPINN target
distribution terms are directly related to the inherent uncertainty in the
respective learning tasks. Yet they are usually set manually a priori, which
can lead to pathological behavior, stability concerns, and conflicts between
tasks; these obstacles have deterred the use of BPINNs for inverse
problems with multi-scale dynamics. The present weighting strategy
automatically tunes the weights by considering the multi-task nature of the
target posterior distribution. We show that this remedies the failure modes of BPINNs
and provides efficient exploration of the optimal Pareto front. This leads to
better convergence and stability of BPINN training while reducing sampling
bias. The determined weights moreover carry information about task
uncertainties, reflecting noise levels in the data and adequacy of the PDE
model. We demonstrate this in numerical experiments on Sobolev training,
where we compare against an analytically optimal baseline, and on a
multi-scale Lotka-Volterra inverse problem. We finally apply this framework
to an inpainting task and to an inverse problem involving latent-field
recovery for incompressible flow in complex geometries.
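A generic instance of weights that "carry information about task uncertainties" is inverse-variance weighting, where each target term is scaled by the reciprocal of its empirical residual variance, so noisier tasks are automatically down-weighted. This is a minimal illustrative sketch of that idea, not the paper's exact weighting scheme for the BPINN posterior:

```python
import numpy as np

def adaptive_weights(task_residuals):
    """Assign each task a weight equal to the inverse of its empirical
    residual variance: tasks with large residuals (high noise level or a
    poorly fitting model) contribute less to the combined objective."""
    weights = {}
    for name, r in task_residuals.items():
        var = float(np.mean(np.square(r)))   # empirical noise-level estimate
        weights[name] = 1.0 / max(var, 1e-12)  # guard against division by zero
    return weights

def weighted_loss(task_residuals, weights):
    """Combined multi-task objective: weighted sum of mean-squared residuals."""
    return sum(weights[k] * float(np.mean(np.square(r)))
               for k, r in task_residuals.items())
```

With inverse-variance weights every task contributes equally to the combined loss at the current noise estimates, which is one simple way to keep a high-noise data term from dominating the PDE term (or vice versa) during training.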