Second order adjoints for solving PDE-constrained optimization problems
Inverse problems are of utmost importance in many fields of science and engineering. In the
variational approach inverse problems are formulated as PDE-constrained optimization problems,
where the optimal estimate of the uncertain parameters is the minimizer of a certain cost
functional subject to the constraints posed by the model equations. The numerical solution
of such optimization problems requires the computation of derivatives of the model output
with respect to model parameters. The first order derivatives of a cost functional (defined
on the model output) with respect to a large number of model parameters can be calculated
efficiently through first order adjoint sensitivity analysis. Second order adjoint models
give second derivative information in the form of matrix-vector products between the Hessian
of the cost functional and user defined vectors. Traditionally, the construction of second
order derivatives for large scale models has been considered too costly. Consequently, data
assimilation applications employ optimization algorithms that use only first order derivative
information, like nonlinear conjugate gradients and quasi-Newton methods.
In this paper we discuss the mathematical foundations of second order adjoint sensitivity
analysis and show that it provides an efficient approach to obtain Hessian-vector products. We
study the benefits of using second order information in the numerical optimization process
for data assimilation applications. The numerical studies are performed in a twin experiment
setting with a two-dimensional shallow water model. Different scenarios are considered with
different discretization approaches, observation sets, and noise levels. Optimization algorithms
that employ second order derivatives are tested against widely used methods that require
only first order derivatives. Conclusions are drawn regarding the potential benefits and the
limitations of using high-order information in large scale data assimilation problems.
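The Hessian-vector products that a second order adjoint model supplies can be mimicked, at lower accuracy, by differencing first order gradients. A minimal sketch with a hypothetical quadratic cost functional (the matrix A, vector b, and evaluation points are illustrative stand-ins, not from the paper):

```python
import numpy as np

# Hypothetical quadratic cost J(x) = 0.5 x^T A x - b^T x, so grad J = A x - b
# and the exact Hessian is A (illustrative stand-ins, not from the paper).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

def grad_J(x):
    return A @ x - b

def hessian_vector_product(grad, x, v, eps=1e-6):
    # Central difference of the gradient along v approximates H(x) @ v,
    # the quantity a second order adjoint model delivers exactly at
    # roughly the cost of one additional adjoint solve.
    return (grad(x + eps * v) - grad(x - eps * v)) / (2.0 * eps)

x = np.array([0.5, -1.0])
v = np.array([1.0, 1.0])
Hv = hessian_vector_product(grad_J, x, v)
```

Products of this form are exactly what truncated-Newton optimizers consume; the Hessian itself is never formed.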
Comparison of POD reduced order strategies for the nonlinear 2D Shallow Water Equations
This paper introduces tensorial calculus techniques in the framework of
Proper Orthogonal Decomposition (POD) to reduce the computational complexity of
the reduced nonlinear terms. The resulting method, named tensorial POD, can be
applied to polynomial nonlinearities of any degree p. Such nonlinear terms
have an on-line complexity of O(k^{p+1}), where k is the
dimension of the POD basis, and is therefore independent of the full space dimension.
However, it is efficient only for quadratic nonlinear terms, since for higher-degree
nonlinearities standard POD proves to be less time consuming once the POD basis
dimension is increased. Numerical experiments are carried out with a two
dimensional shallow water equation (SWE) test problem to compare the
performance of tensorial POD, standard POD, and POD/Discrete Empirical
Interpolation Method (DEIM). Numerical results show that tensorial POD
substantially decreases the computational cost of the on-line stage of
standard POD for configurations with a large number of model variables. The
tensorial POD SWE model was only slightly slower than the POD/DEIM SWE model,
but the implementation effort is considerably increased. Tensorial calculus was
again employed to construct a new algorithm allowing the POD/DEIM shallow water
equation model to compute its off-line stage faster than the standard and
tensorial POD approaches.
Comment: 23 pages, 8 figures, 5 tables
Multipoint secant and interpolation methods with nonmonotone line search for solving systems of nonlinear equations
Multipoint secant and interpolation methods are effective tools for solving
systems of nonlinear equations. They use quasi-Newton updates for approximating
the Jacobian matrix. Owing to their ability to more completely utilize the
information about the Jacobian matrix gathered at the previous iterations,
these methods are especially efficient in the case of expensive functions. They
are known to be local and superlinearly convergent. We combine these methods
with the nonmonotone line search proposed by Li and Fukushima (2000), and study
global and superlinear convergence of this combination. Results of numerical
experiments are presented. They indicate that the multipoint secant and
interpolation methods tend to be more robust and efficient than Broyden's
method globalized in the same way.
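The quasi-Newton core these methods share can be illustrated with Broyden's rank-one secant update; the globalizing nonmonotone line search of Li and Fukushima is omitted here, and the two-equation test system is purely illustrative:

```python
import numpy as np

def F(x):
    # Illustrative contractive system with its root at the origin.
    return np.array([x[0] + 0.5 * np.sin(x[1]),
                     x[1] + 0.5 * np.sin(x[0])])

def broyden(F, x, max_iter=50, tol=1e-12):
    B = np.eye(len(x))                       # initial Jacobian approximation
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)          # quasi-Newton step
        x_new = x + s
        F_new = F(x_new)
        y = F_new - Fx
        # Broyden's "good" update enforces the secant condition B_new @ s = y.
        B += np.outer(y - B @ s, s) / (s @ s)
        x, Fx = x_new, F_new
    return x

root = broyden(F, np.array([1.0, 1.0]))
```

Multipoint secant and interpolation updates replace this single-pair secant condition with conditions built from several previous iterates, which is what lets them reuse more Jacobian information per function evaluation.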
Composing Scalable Nonlinear Algebraic Solvers
Most efficient linear solvers use composable algorithmic components, with the
most common model being the combination of a Krylov accelerator and one or more
preconditioners. A similar set of concepts may be used for nonlinear algebraic
systems, where nonlinear composition of different nonlinear solvers may
significantly improve the time to solution. We describe the basic concepts of
nonlinear composition and preconditioning and present a number of solvers
applicable to nonlinear partial differential equations. We have developed a
software framework in order to easily explore the possible combinations of
solvers. We show that the performance gains from using composed solvers can be
substantial compared with gains from standard Newton-Krylov methods.
Comment: 29 pages, 14 figures, 13 tables
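One flavor of nonlinear composition is easy to demonstrate in one dimension: a cheap damped fixed-point smoother composed with Newton, where the smoother pulls the iterate into Newton's basin of attraction. The scalar f(x) = arctan(x) below is a classroom example, not a solver from the paper:

```python
import numpy as np

def f(x):
    return np.arctan(x)          # Newton alone diverges for |x0| > ~1.39

def fp(x):
    return 1.0 / (1.0 + x * x)

def newton(x, iters):
    for _ in range(iters):
        x = x - f(x) / fp(x)
    return x

def damped_fixed_point(x, iters, alpha=0.9):
    # Slow but robust inner solver, used here as a nonlinear "preconditioner".
    for _ in range(iters):
        x = x - alpha * f(x)
    return x

def composed(x0):
    # Composition: smoother sweeps first, then Newton finishes fast.
    return newton(damped_fixed_point(x0, iters=10), iters=5)

root = composed(10.0)
```

Neither component works well alone from x0 = 10 (Newton diverges, the smoother is slow near the root); the composition converges quickly, which is the basic payoff the abstract describes.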