
    Variational Methods for Evolution (hybrid meeting)

    Variational principles for evolutionary systems take advantage of the rich toolbox provided by the theory of the calculus of variations. Such principles are available for Hamiltonian systems in classical mechanics and for gradient flows of dissipative systems, as well as through time-incremental minimization techniques for more general evolutionary problems. New challenges arise from the interplay of two or more functionals (e.g. a free energy and a dissipation potential) and from new structures (systems with nonlocal transport, gradient flows on graphs, kinetic equations, systems of equations), encompassing a large variety of applications in the modeling of materials and fluids, in biology, in multi-agent systems, and in data science. This workshop brought together a broad spectrum of researchers from the calculus of variations, partial differential equations, metric geometry, and stochastics, as well as applied and computational scientists, to discuss and exchange ideas. It focused on variational tools such as minimizing movement schemes, optimal transport, gradient flows, large-deviation principles for time-continuous Markov processes, Γ-convergence, and homogenization.
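    For orientation, the time-incremental minimization (minimizing movement) scheme mentioned above discretizes a gradient flow of an energy by a sequence of penalized minimization problems. The display below is a standard textbook formulation under generic notation, not taken from the workshop report: E denotes an energy, d a metric, and τ > 0 a time step.

        x_{k+1} \in \operatorname*{arg\,min}_{x} \Big\{ \mathcal{E}(x) + \tfrac{1}{2\tau}\, d(x, x_k)^2 \Big\}, \qquad k = 0, 1, 2, \dots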

    Residual-based error correction for neural operator accelerated infinite-dimensional Bayesian inverse problems

    We explore using neural operators, or neural network representations of nonlinear maps between function spaces, to accelerate infinite-dimensional Bayesian inverse problems (BIPs) with models governed by nonlinear parametric partial differential equations (PDEs). Neural operators have gained significant attention in recent years for their ability to approximate the parameter-to-solution maps defined by PDEs using, as training data, solutions of the PDEs at a limited number of parameter samples. The computational cost of BIPs can be drastically reduced if the large number of PDE solves required for posterior characterization is replaced with evaluations of trained neural operators. However, reducing the error in the resulting BIP solutions by reducing the approximation error of the neural operators during training can be challenging and unreliable. We provide an a priori error bound which implies that certain BIPs can be ill-conditioned with respect to the approximation error of neural operators, leading to inaccessible accuracy requirements in training. To reliably deploy neural operators in BIPs, we consider a strategy for enhancing their performance: correcting the prediction of a trained neural operator by solving a linear variational problem based on the PDE residual. We show that a trained neural operator with error correction can achieve a quadratic reduction of its approximation error, while retaining substantial computational speedups of posterior sampling when models are governed by highly nonlinear PDEs. The strategy is applied to two numerical examples of BIPs, based on a nonlinear reaction-diffusion problem and on the deformation of hyperelastic materials. We demonstrate that the posterior representations of the two BIPs produced using trained neural operators are greatly and consistently enhanced by error correction.
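    The correction described above amounts to a single linearized (Newton-type) solve around the surrogate prediction, driven by the PDE residual. The minimal sketch below illustrates the idea on a one-dimensional nonlinear reaction-diffusion model problem; it is not the paper's implementation, and the perturbed exact solution merely stands in for a trained neural operator's output. The grid, the manufactured forcing, and all variable names are illustrative assumptions.

        # Minimal sketch (not the paper's implementation): one residual-based
        # correction step for a surrogate solution of the nonlinear
        # reaction-diffusion problem  -u'' + u^3 = f  on (0, 1), u(0) = u(1) = 0,
        # discretized by finite differences. `u_surrogate` stands in for a
        # trained neural operator's prediction.
        import numpy as np

        n = 199                      # number of interior grid points (assumed)
        h = 1.0 / (n + 1)
        x = np.linspace(h, 1.0 - h, n)

        # Discrete Laplacian with homogeneous Dirichlet boundary conditions
        A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
             - np.diag(np.ones(n - 1), -1)) / h**2

        u_exact = np.sin(np.pi * x)            # manufactured discrete solution
        f = A @ u_exact + u_exact**3           # corresponding right-hand side

        def residual(u):
            # Discrete PDE residual R(u) = A u + u^3 - f
            return A @ u + u**3 - f

        # Stand-in for the neural operator output: exact solution plus O(1e-2) error
        u_surrogate = u_exact + 1e-2 * np.sin(3 * np.pi * x)

        # Residual-based correction: solve the linearized problem
        # R'(u_surrogate) du = -R(u_surrogate) and update the prediction.
        J = A + np.diag(3.0 * u_surrogate**2)  # Jacobian of the residual
        du = np.linalg.solve(J, -residual(u_surrogate))
        u_corrected = u_surrogate + du

        print("error before correction:", np.linalg.norm(u_surrogate - u_exact, np.inf))
        print("error after correction: ", np.linalg.norm(u_corrected - u_exact, np.inf))
        # The corrected error is roughly the square of the surrogate error,
        # illustrating the quadratic error reduction described in the abstract.

    In this toy setting the single linear solve reduces the surrogate error from roughly 1e-2 to roughly its square, which is the quadratic reduction the abstract refers to; in a BIP, such a corrected prediction would replace the surrogate output inside the posterior sampling loop.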