Optimization Methods for Inverse Problems
Optimization plays an important role in solving many inverse problems.
Indeed, the task of inversion often either involves or is fully cast as a
solution of an optimization problem. In this light, the mere non-linear,
non-convex, and large-scale nature of many of these inversions gives rise to
some very challenging optimization problems. The inverse problem community has
long been developing various techniques for solving such optimization tasks.
However, other, seemingly disjoint communities, such as that of machine
learning, have developed, almost in parallel, interesting alternative methods
which might have stayed under the radar of the inverse problem community. In
this survey, we aim to change that. In doing so, we first discuss current
state-of-the-art optimization methods widely used in inverse problems. We then
survey recent related advances in addressing similar challenges in problems
faced by the machine learning community, and discuss their potential advantages
for solving inverse problems. By highlighting the similarities among the
optimization challenges faced by the inverse problem and the machine learning
communities, we hope that this survey can serve as a bridge in bringing
together these two communities and encourage cross-fertilization of ideas.
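To make the abstract's framing concrete, a minimal sketch of casting inversion as an optimization problem: a linear forward map d = A m + noise, inverted by minimizing the Tikhonov-regularized least-squares objective (1/2)||A m − d||² + (α/2)||m||² with plain gradient descent. The operator, dimensions, regularization weight, and step size below are all illustrative choices, not taken from the survey.

```python
# Hedged sketch: a linear inverse problem solved as a regularized
# least-squares optimization. All problem data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 20
A = rng.standard_normal((n, p))                  # forward operator (assumed)
m_true = rng.standard_normal(p)                  # "true" model parameters
d = A @ m_true + 0.01 * rng.standard_normal(n)   # noisy observations

alpha = 1e-2                                     # Tikhonov weight (illustrative)
step = 1.0 / np.linalg.norm(A, 2) ** 2           # step from the spectral norm of A

m = np.zeros(p)
for _ in range(2000):
    # gradient of (1/2)||A m - d||^2 + (alpha/2)||m||^2
    grad = A.T @ (A @ m - d) + alpha * m
    m = m - step * grad

rel_err = np.linalg.norm(m - m_true) / np.linalg.norm(m_true)
print(rel_err)
```

Even this toy case shows why the survey's concerns matter: for non-linear forward maps the objective becomes non-convex, and at scale each gradient evaluation can require an expensive simulation.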
Data-driven reduction strategies for Bayesian inverse problems
A persistent central challenge in computational science and engineering (CSE), with both national and global security implications, is the efficient solution of large-scale Bayesian inverse problems. These problems range from estimating material parameters in subsurface simulations to estimating phenomenological parameters in climate models. Despite recent progress, our ability to quantify uncertainties and solve large-scale inverse problems lags well behind our ability to develop the governing forward simulations.
Inverse problems present unique computational challenges that are only magnified as we include larger observational data sets and demand higher-resolution parameter estimates. Even with the current state-of-the-art, solving deterministic large-scale inverse problems is prohibitively expensive. Large-scale uncertainty quantification (UQ), cast in the Bayesian inversion framework, is thus rendered intractable. To conquer these challenges, new methods that target the root causes of computational complexity are needed.
In this dissertation, we propose data-driven strategies for overcoming this “curse of dimensionality.” First, we address the computational complexity induced in large-scale inverse problems by high-dimensional observational data. We propose a randomized misfit approach
(RMA), which uses random projections—quasi-orthogonal, information-preserving transformations—to map the high-dimensional data-misfit vector to a low-dimensional space. We provide the first theoretical explanation for why randomized misfit methods are successful in practice with a small reduced data-misfit dimension (n = O(1)).
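The core idea behind a randomized misfit reduction can be sketched with a Johnson–Lindenstrauss-type projection: a scaled random matrix S maps a high-dimensional misfit vector r to a much lower dimension while approximately preserving its squared norm. The Gaussian sketch and dimensions below are illustrative, not the specific RMA construction from the dissertation.

```python
# Hedged sketch of a random-projection misfit reduction: the sketch S
# satisfies E[||S r||^2] = ||r||^2, so the reduced misfit is a faithful
# surrogate for the full one. Dimensions are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N = 20_000   # high-dimensional data-misfit size (assumed)
k = 400      # reduced dimension, k << N

r = rng.standard_normal(N)                      # data-misfit vector
S = rng.standard_normal((k, N)) / np.sqrt(k)    # scaled Gaussian sketch

ratio = np.linalg.norm(S @ r) ** 2 / np.linalg.norm(r) ** 2
print(ratio)  # concentrates near 1 as k grows
```

The payoff in an inverse-problem setting is that the misfit term of the objective can be evaluated in the k-dimensional sketched space rather than against the full observational data.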
Next, we develop the randomized geostatistical approach (RGA) for Bayesian subsurface inverse problems with high-dimensional data. We show that the RGA is able to resolve transient groundwater inverse problems with noisy observed data dimensions up to 10^7, whereas a comparison method fails due to out-of-memory errors.
Finally, we address the solution of Bayesian inverse problems with spatially localized data. The motivation is CSE applications that would gain from high-fidelity estimation over a smaller data-local domain, versus expensive and uncertain estimation over the full simulation domain. We propose several truncated domain inversion methods using domain decomposition theory to build model-informed artificial boundary conditions. Numerical investigations of MAP estimation and sampling demonstrate improved fidelity and fewer partial differential equation (PDE) solves with our truncated methods.
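As background for the MAP estimation the abstract refers to: for a linear forward map with Gaussian noise and a Gaussian prior, the MAP estimate minimizes the negative log-posterior (1/(2σ²))||A m − d||² + (1/2) mᵀ C_prior⁻¹ m and is available in closed form. The sketch below uses that textbook Gaussian-linear case with synthetic data; it is not the dissertation's truncated-domain method.

```python
# Hedged sketch: MAP estimation for a Gaussian-linear Bayesian inverse
# problem. The posterior precision H is the Hessian of the negative
# log-posterior; solving H m = A^T d / sigma^2 gives the MAP point.
import numpy as np

rng = np.random.default_rng(2)
n, p = 40, 10
A = rng.standard_normal((n, p))                 # forward operator (assumed)
m_true = rng.standard_normal(p)
sigma = 0.05                                    # noise level (illustrative)
d = A @ m_true + sigma * rng.standard_normal(n)

C_prior = np.eye(p)                             # unit Gaussian prior (assumed)
H = A.T @ A / sigma**2 + np.linalg.inv(C_prior) # posterior precision
m_map = np.linalg.solve(H, A.T @ d / sigma**2)  # MAP estimate

rel_err = np.linalg.norm(m_map - m_true) / np.linalg.norm(m_true)
print(rel_err)
```

For nonlinear PDE-constrained problems no such closed form exists; each Hessian application involves PDE solves, which is exactly the cost the truncated-domain methods aim to reduce.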