Residual Minimizing Model Interpolation for Parameterized Nonlinear Dynamical Systems
We present a method for approximating the solution of a parameterized,
nonlinear dynamical system using an affine combination of solutions computed at
other points in the input parameter space. The coefficients of the affine
combination are computed with a nonlinear least squares procedure that
minimizes the residual of the governing equations. The approximation properties
of this residual minimizing scheme are comparable to existing reduced basis and
POD-Galerkin model reduction methods, but its implementation requires only
independent evaluations of the nonlinear forcing function. It is particularly
appropriate when one wishes to approximate the states at a few points in time
without time marching from the initial conditions. We prove some interesting
characteristics of the scheme including an interpolatory property, and we
present heuristics for mitigating the effects of the ill-conditioning and
reducing the overall cost of the method. We apply the method to representative
numerical examples from kinetics (a three-state system with one parameter
controlling the stiffness) and conductive heat transfer (a nonlinear
parabolic PDE with a random field model for the thermal conductivity).
Comment: 28 pages, 8 figures, 2 tables
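The core idea of the abstract, writing the approximate state as an affine combination of precomputed solutions and choosing the coefficients by minimizing the residual of the governing equations, can be illustrated on a toy problem. This is a minimal sketch, not the authors' implementation: the cubic system, the parameter samples, and the two-snapshot basis are all invented for illustration, and the affine constraint is eliminated by parameterizing the two coefficients as (a, 1 - a).

```python
import numpy as np
from scipy.optimize import least_squares, brentq

# Toy parameterized nonlinear system (hypothetical, for illustration only):
# R(x; p) = x^3 + p*x - b = 0, componentwise in x.
def residual(x, p, b):
    return x**3 + p * x - b

def solve_full(p, b):
    # "Full" solve: each component is a monotone scalar cubic for p > 0.
    return np.array([brentq(lambda xi: xi**3 + p * xi - bi, -10, 10)
                     for bi in b])

b = np.array([1.0, 2.0, 3.0])
# Offline: states computed at other points in the parameter space.
snapshots = [solve_full(pj, b) for pj in (0.5, 2.0)]

def approx(p_new):
    # Affine combination x(a) = a*x_1 + (1-a)*x_2 (coefficients sum to one);
    # the coefficient is found by nonlinear least squares on the residual.
    def res(a):
        x = a[0] * snapshots[0] + (1 - a[0]) * snapshots[1]
        return residual(x, p_new, b)
    a = least_squares(res, x0=[0.5]).x[0]
    return a * snapshots[0] + (1 - a) * snapshots[1]

x_approx = approx(1.0)
x_true = solve_full(1.0, b)
print(np.linalg.norm(x_approx - x_true))
```

Note that, as the abstract emphasizes, the online step only evaluates the nonlinear residual at candidate states; no time marching or new full solve is needed at the new parameter value.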
Second order adjoints for solving PDE-constrained optimization problems
Inverse problems are of utmost importance in many fields of science and engineering. In the
variational approach inverse problems are formulated as PDE-constrained optimization problems,
where the optimal estimate of the uncertain parameters is the minimizer of a certain cost
functional subject to the constraints posed by the model equations. The numerical solution
of such optimization problems requires the computation of derivatives of the model output
with respect to model parameters. The first order derivatives of a cost functional (defined
on the model output) with respect to a large number of model parameters can be calculated
efficiently through first order adjoint sensitivity analysis. Second order adjoint models
give second derivative information in the form of matrix-vector products between the Hessian
of the cost functional and user defined vectors. Traditionally, the construction of second
order derivatives for large scale models has been considered too costly. Consequently, data
assimilation applications employ optimization algorithms that use only first order derivative
information, like nonlinear conjugate gradients and quasi-Newton methods.
In this paper we discuss the mathematical foundations of second order adjoint sensitivity
analysis and show that it provides an efficient approach to obtain Hessian-vector products. We
study the benefits of using second order information in the numerical optimization process
for data assimilation applications. The numerical studies are performed in a twin experiment
setting with a two-dimensional shallow water model. Different scenarios are considered with
different discretization approaches, observation sets, and noise levels. Optimization algorithms
that employ second order derivatives are tested against widely used methods that require
only first order derivatives. Conclusions are drawn regarding the potential benefits and the
limitations of using high-order information in large scale data assimilation problems.
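The payoff described in the abstract is that a second order adjoint delivers Hessian-vector products, which is exactly the quantity Newton-type optimizers consume. The sketch below sets up a tiny twin-experiment-style least-squares cost (an invented nonlinear model, not the shallow water system of the paper) and compares Newton-CG, fed Hessian-vector products, against L-BFGS, which uses only first order derivatives. For brevity the Hessian-vector product is approximated by differencing the first order gradient; in the paper this product is computed exactly by the second order adjoint model.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical twin experiment: observations generated from a known "true"
# parameter vector, model output M(u) = sin(A u).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
u_true = rng.standard_normal(5)
y = np.sin(A @ u_true)

def cost(u):
    r = np.sin(A @ u) - y
    return 0.5 * r @ r

def grad(u):
    # First order adjoint analogue: J'(u) = A^T diag(cos(Au)) r.
    r = np.sin(A @ u) - y
    return A.T @ (np.cos(A @ u) * r)

def hessp(u, v):
    # Stand-in for the second order adjoint: Hessian-vector product
    # obtained by differencing the gradient along direction v.
    eps = 1e-6
    return (grad(u + eps * v) - grad(u)) / eps

u0 = np.zeros(5)
# Second order information: Newton-CG driven by Hessian-vector products.
res_newton = minimize(cost, u0, jac=grad, hessp=hessp, method="Newton-CG")
# First order only: limited-memory quasi-Newton baseline.
res_bfgs = minimize(cost, u0, jac=grad, method="L-BFGS-B")
print(res_newton.fun, res_bfgs.fun)
```

The key structural point survives even in this toy: the optimizer never forms the full Hessian, only its action on user defined vectors, which is what keeps the approach feasible for large scale models.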
Reduced Order Modeling for Nonlinear PDE-constrained Optimization using Neural Networks
Nonlinear model predictive control (NMPC) often requires real-time solution
to optimization problems. However, in cases where the mathematical model is of
high dimension in the solution space, e.g. for solution of partial differential
equations (PDEs), black-box optimizers are rarely sufficient to get the
required online computational speed. In such cases one must resort to
customized solvers. This paper presents a new solver for nonlinear
time-dependent PDE-constrained optimization problems. It is composed of a
sequential quadratic programming (SQP) scheme to solve the PDE-constrained
problem in an offline phase, a proper orthogonal decomposition (POD) approach
to identify a lower dimensional solution space, and a neural network (NN) for
fast online evaluations. The proposed method is showcased on a regularized
least-square optimal control problem for the viscous Burgers' equation. It is
concluded that significant online speed-up is achieved, compared to
conventional methods using SQP and finite elements, at a cost of a prolonged
offline phase and reduced accuracy.
Comment: Accepted for publication at the 58th IEEE Conference on Decision and
Control, Nice, France, 11-13 December, https://cdc2019.ieeecss.org
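The POD stage of the pipeline described above, identifying a lower dimensional solution space from offline solves, can be sketched in a few lines. This is only the basis-construction step, not the SQP solver or the neural network surrogate, and the snapshot family below is an invented analytic stand-in for the Burgers' equation solutions of the paper.

```python
import numpy as np

# Offline snapshots from a toy parameterized solution family (hypothetical
# stand-in for PDE solves): u(x; mu) = exp(-mu*x) * sin(pi*x).
x = np.linspace(0.0, 1.0, 200)
mus = np.linspace(0.5, 5.0, 30)
S = np.stack([np.exp(-mu * x) * np.sin(np.pi * x) for mu in mus], axis=1)

# POD: the leading left singular vectors of the snapshot matrix form
# the reduced basis; the singular value decay guides the truncation.
U, s, _ = np.linalg.svd(S, full_matrices=False)
r = 5
basis = U[:, :r]

# Online check: project an unseen parameter's solution onto the basis.
u_new = np.exp(-2.3 * x) * np.sin(np.pi * x)
u_rec = basis @ (basis.T @ u_new)
rel_err = np.linalg.norm(u_rec - u_new) / np.linalg.norm(u_new)
print(rel_err)
```

In the paper's setting a neural network is then trained to map problem data to reduced coordinates, replacing the online projection-and-solve with a fast forward pass, at the cost of the prolonged offline phase the abstract mentions.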
Elimination of the spin supplementary condition in the effective field theory approach to the post-Newtonian approximation
The present paper addresses open questions regarding the handling of the spin
supplementary condition within the effective field theory approach to the
post-Newtonian approximation. In particular it is shown how the covariant spin
supplementary condition can be eliminated at the level of the potential (which
is subtle in various respects) and how the dynamics can be cast into a fully
reduced Hamiltonian form. Two different methods are used and compared, one
based on the well-known Dirac bracket and the other based on an action
principle. It is discussed how the latter approach can be used to improve the
Feynman rules by formulating them in terms of reduced canonical spin variables.
Comment: 42 pages, document changed to match published version, in press; Ann.
Phys. (N. Y.) (2012)