Space-time adaptive solution of inverse problems with the discrete adjoint method
Adaptivity in both space and time has become the norm for solving problems modeled by partial differential equations: the size of the discretized problem makes uniformly refined grids computationally prohibitive, while adaptive refinement of meshes and time steps makes it possible to capture the phenomena of interest while keeping the cost of a simulation tractable on current hardware. Many fields in science and engineering require the solution of inverse problems, where parameters of a given model are estimated from available measurements. In contrast to forward (regular) simulations, inverse problems have not benefited extensively from adaptive solver technology. Previous research on inverse problems has focused mainly on the continuous approach to calculating sensitivities and has typically employed fixed time and space meshes in the solution process. Inverse problem solvers that make exclusive use of uniform or static meshes avoid complications such as the differentiation of mesh motion equations or inconsistencies in the sensitivity equations between subdomains with different refinement levels; however, this comes at the cost of low computational efficiency. More efficient computations are possible through judicious use of adaptive mesh refinement, adaptive time steps, and the discrete adjoint method.
This paper develops a framework for the construction and analysis of discrete adjoint sensitivities in the context of time-dependent, adaptive-grid, adaptive-step models. Discrete adjoints are attractive in practice since they can be generated with low effort using automatic differentiation. However, this approach brings several important challenges. The adjoint of the forward numerical scheme may be inconsistent with the continuous adjoint equations. A reduction in accuracy of the discrete adjoint sensitivities may appear due to the intergrid transfer operators. Moreover, the optimization algorithm may need to accommodate state and gradient vectors whose dimensions change between iterations. This work shows that several of these potential issues can be avoided for the discontinuous Galerkin (DG) method. The adjoint model development is considerably simplified by decoupling the adaptive mesh refinement mechanism from the forward model solver, and by selectively applying automatic differentiation to individual algorithms.
In forward models, discontinuous Galerkin discretizations can efficiently handle high orders of accuracy, hp-refinement, and parallel computation. The analysis reveals that this approach, paired with Runge–Kutta time stepping, is well suited for the adaptive solution of inverse problems. The usefulness of discrete discontinuous Galerkin adjoints is illustrated on a two-dimensional adaptive data assimilation problem.
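The core property of the discrete adjoint approach can be shown on a much smaller example than the paper's DG/AD framework. The sketch below (all details invented for illustration: a scalar ODE, forward Euler, one terminal observation) differentiates the time-stepping scheme itself, so the resulting gradient matches a finite-difference gradient of the *discrete* cost to near machine precision:

```python
def forward(theta, x0=1.0, h=0.01, n=200):
    """Forward Euler trajectory for the scalar ODE x' = -theta * x**2."""
    xs = [x0]
    for _ in range(n):
        x = xs[-1]
        xs.append(x + h * (-theta * x * x))
    return xs

def cost(theta, x_obs=0.4):
    """Discrete cost: misfit of the final state against one observation."""
    return 0.5 * (forward(theta)[-1] - x_obs) ** 2

def adjoint_gradient(theta, x_obs=0.4, h=0.01, n=200):
    """Discrete adjoint of the forward Euler scheme: exact d(cost)/d(theta)."""
    xs = forward(theta, x0=1.0, h=h, n=n)
    lam = xs[-1] - x_obs                   # dJ/dx_N seeds the adjoint
    grad = 0.0
    for k in range(n - 1, -1, -1):         # sweep backwards in time
        x = xs[k]
        grad += lam * h * (-x * x)         # via d x_{k+1} / d theta
        lam *= 1.0 - 2.0 * h * theta * x   # via d x_{k+1} / d x_k
    return grad

theta, eps = 2.0, 1e-6
g_adj = adjoint_gradient(theta)
g_fd = (cost(theta + eps) - cost(theta - eps)) / (2 * eps)
```

The backward sweep costs about one extra forward solve, independent of the number of parameters, which is what makes the approach attractive for large inverse problems.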
Dissipative numerical schemes on Riemannian manifolds with applications to gradient flows
This paper concerns an extension of discrete gradient methods to
finite-dimensional Riemannian manifolds termed discrete Riemannian gradients,
and their application to dissipative ordinary differential equations. This
includes Riemannian gradient flow systems which occur naturally in optimization
problems. The Itoh–Abe discrete gradient is formulated and applied to gradient
systems, yielding a derivative-free optimization algorithm. The algorithm is
tested on two eigenvalue problems and two problems from manifold-valued
imaging: InSAR denoising and DTI denoising. Comment: Post-revision version. To appear in SIAM Journal on Scientific
Computing.
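The derivative-free character of the Itoh–Abe discrete gradient can be seen in a minimal Euclidean sketch (the paper's Riemannian version adds retractions and manifold structure, which are omitted here; the quadratic test function, step size, and inner fixed-point solve are all invented for illustration):

```python
def itoh_abe_step(V, x, tau, inner_iters=50, tol=1e-12):
    """One Itoh-Abe discrete-gradient step for the gradient flow x' = -grad V(x).

    Coordinates are updated one at a time; updating coordinate i solves
        (y - x_i) / tau = -(V(..., y, ...) - V(..., x_i, ...)) / (y - x_i)
    using only evaluations of V, i.e. the method is derivative-free. For the
    smooth test problem below a plain fixed-point iteration suffices; a robust
    scalar root solver would be used in general.
    """
    x = list(x)
    for i in range(len(x)):
        xi = x[i]
        Vx = V(x)
        y = xi - tau                  # any starting guess distinct from xi
        for _ in range(inner_iters):
            if abs(y - xi) < tol:     # coordinate is numerically stationary
                break
            x[i] = y
            dg = (V(x) - Vx) / (y - xi)   # discrete gradient in coordinate i
            y_new = xi - tau * dg
            if abs(y_new - y) < tol:
                y = y_new
                break
            y = y_new
        x[i] = y if abs(y - xi) >= tol else xi
    return x

def V(x):
    """Toy objective: an anisotropic quadratic with minimum at the origin."""
    return 0.5 * (x[0] ** 2 + 3.0 * x[1] ** 2)

x = [1.0, -1.0]
energies = [V(x)]
for _ in range(40):
    x = itoh_abe_step(V, x, tau=0.2)
    energies.append(V(x))
```

By construction the scheme is dissipative: the recorded energies decrease monotonically regardless of the step size, which is the property the paper exploits for optimization.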
Multi-Dimensional High Order Essentially Non-Oscillatory Finite Difference Methods in Generalized Coordinates
This project concerns the development of high-order, non-oscillatory schemes for computational fluid dynamics. Algorithm analysis, implementation, and applications are performed, and collaborations with NASA scientists have been carried out to ensure that the research is relevant to NASA objectives. The combination of the ENO finite difference method with a spectral method in two space dimensions is considered jointly with Cai [3]; the resulting scheme behaves nicely for two-dimensional test problems with or without shocks. Jointly with Cai and Gottlieb, we have also considered one-sided filters for spectral approximations to discontinuous functions [2]: we proved theoretically the existence of filters that recover spectral accuracy up to the discontinuity, and we constructed such filters for practical calculations.
Estimating numerical integration errors
Algorithm for use in estimating accumulated numerical integration error
Differential-Algebraic Equations and Beyond: From Smooth to Nonsmooth Constrained Dynamical Systems
This article presents a summarizing view of differential-algebraic
equations (DAEs) and analyzes how new application fields and the corresponding
mathematical models lead to innovations both in theory and in numerical
analysis for this problem class. Recent numerical methods for nonsmooth
dynamical systems subject to unilateral contact and friction illustrate the
topicality of this development. Comment: Preprint of Book Chapter.
Bayesian Analysis of ODEs: solver optimal accuracy and Bayes factors
In most relevant cases in the Bayesian analysis of ODE inverse problems, a
numerical solver needs to be used. Therefore, we cannot work with the exact
theoretical posterior distribution, but only with an approximate posterior
arising from the error in the numerical solver. To compare the numerical and
theoretical posterior distributions, we propose to use Bayes factors (BF),
considering both of them as models for the data at hand. We prove that the
BF of the theoretical versus the numerical posterior tends to 1 at the same
order (in the step size) as the numerical forward-map solver converges. For
higher-order solvers (e.g., Runge–Kutta), the Bayes factor is already nearly 1
for step sizes that require far less computational effort. Considerable CPU
time may therefore be saved by using coarser solvers that nevertheless produce
practically error-free posteriors. Two examples are presented in which nearly
90% of the CPU time is saved while all inference results are identical to those
obtained with a much finer time step. Comment: 28 pages, 6 figures.
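The qualitative effect can be reproduced on a toy problem. The sketch below is an invented illustration, not one of the paper's examples: a linear decay ODE, four synthetic observations with fixed "noise" offsets, and a uniform prior evaluated on a grid. With the same coarse step, the RK4-based evidence is already nearly identical to the exact-solver evidence, while Euler's is visibly off:

```python
import math

def step_euler(x, h, theta):
    return x + h * (-theta * x)

def step_rk4(x, h, theta):
    """Classical fourth-order Runge-Kutta step for x' = -theta * x."""
    f = lambda v: -theta * v
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def solve(theta, t, h, step):
    x = 1.0
    for _ in range(round(t / h)):
        x = step(x, h, theta)
    return x

t_obs = [0.5, 1.0, 1.5, 2.0]
theta_true, sigma = 1.0, 0.05
offsets = [0.03, -0.02, 0.01, -0.04]   # stand-in for one noise realization
data = [math.exp(-theta_true * t) + e for t, e in zip(t_obs, offsets)]

def log_lik(preds):
    return -0.5 * sum(((y - p) / sigma) ** 2 for y, p in zip(data, preds))

def evidence(forward_map):
    """Marginal likelihood under a uniform prior on theta in [0.2, 2.0]."""
    thetas = [0.2 + 0.01 * i for i in range(181)]
    return sum(math.exp(log_lik(forward_map(th))) for th in thetas) * 0.01

h = 0.25                               # deliberately coarse step size
Z_exact = evidence(lambda th: [math.exp(-th * t) for t in t_obs])
Z_euler = evidence(lambda th: [solve(th, t, h, step_euler) for t in t_obs])
Z_rk4   = evidence(lambda th: [solve(th, t, h, step_rk4) for t in t_obs])

bf_euler = Z_euler / Z_exact           # Bayes factor: Euler vs exact posterior
bf_rk4 = Z_rk4 / Z_exact               # Bayes factor: RK4 vs exact posterior
```

The higher-order solver delivers a Bayes factor much closer to 1 at the same (cheap) step size, which is the mechanism behind the CPU-time savings the abstract reports.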
A fully discrete framework for the adaptive solution of inverse problems
We investigate and contrast the differences between the discretize-then-differentiate and differentiate-then-discretize approaches to the numerical solution of parameter estimation problems. The former approach is attractive in practice because automatic differentiation can generate the dual and optimality equations in the first-order KKT system. The latter strategy is more versatile, in that it allows one to formulate efficient mesh-independent algorithms over suitably chosen function spaces; however, it is significantly more difficult to implement, since automatic code generation is no longer an option. The starting point is a classical elliptic inverse problem. An a priori error analysis for the discrete optimality equation shows that consistency and stability are not inherited automatically from the primal discretization. Similar to the concept of dual consistency, we introduce the concept of optimality consistency. The convergence properties can, however, be restored through suitable consistent modifications of the target functional. Numerical tests confirm the theoretical convergence order for the optimal solution. We then derive a posteriori error estimates for the error in the infinite-dimensional optimal solution through a suitably chosen error functional. These estimates are constructed using second-order derivative information for the target functional; for computational efficiency, the Hessian is replaced by a low-order BFGS approximation. The efficiency of the error estimator is confirmed by a numerical experiment with multigrid optimization.
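In the discretize-then-differentiate route on an elliptic model problem, the gradient costs one extra (transposed) linear solve. The self-contained sketch below is far simpler than the paper's setting, and every detail is invented for illustration: a 1D Poisson problem with a single scalar parameter, second-order finite differences, and synthetic observations:

```python
import math

def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system (Thomas algorithm); sub[0], sup[-1] unused."""
    n = len(diag)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = sup[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):
        denom = diag[i] - sub[i] * cp[i - 1]
        cp[i] = (sup[i] / denom) if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Model problem: -u'' = theta * g on (0,1), u(0) = u(1) = 0,
# discretized with second-order finite differences (A u = theta * g).
m = 49
h = 1.0 / (m + 1)
diag = [2.0 / h**2] * m
off = [-1.0 / h**2] * m
g = [math.sin(math.pi * (i + 1) * h) for i in range(m)]

w = thomas(off, diag, off, g)          # w = A^{-1} g, so u(theta) = theta * w
obs = [2.0 * wi for wi in w]           # synthetic data from theta_true = 2

def cost(theta):
    return 0.5 * sum((theta * wi - oi) ** 2 for wi, oi in zip(w, obs))

def adjoint_gradient(theta):
    """Discrete adjoint: one extra solve with A^T (= A, symmetric here)."""
    res = [theta * wi - oi for wi, oi in zip(w, obs)]
    lam = thomas(off, diag, off, res)
    return sum(li * gi for li, gi in zip(lam, g))

theta, eps = 1.5, 1e-6
g_adj = adjoint_gradient(theta)
g_fd = (cost(theta + eps) - cost(theta - eps)) / (2 * eps)
```

Because the adjoint is taken of the discrete system itself, the gradient agrees with a finite-difference gradient of the discrete cost; the consistency questions the abstract raises only arise when this discrete gradient is compared against the continuous optimality system.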
Variational image regularization with Euler's elastica using a discrete gradient scheme
This paper concerns an optimization algorithm for unconstrained non-convex
problems where the objective function has sparse connections between the
unknowns. The algorithm is based on applying a dissipation preserving numerical
integrator, the Itoh–Abe discrete gradient scheme, to the gradient flow of an
objective function, guaranteeing energy decrease regardless of step size. We
introduce the algorithm, prove a convergence rate estimate for non-convex
problems with Lipschitz continuous gradients, and show an improved convergence
rate if the objective function has sparse connections between unknowns. The
algorithm is presented in serial and parallel versions. Numerical tests
illustrate its use on Euler's elastica regularized imaging problems, verify the
convergence rate, and compare the method's execution time with that of the
iPiano algorithm, gradient descent, and the Heavy-ball algorithm.