Asymptotic Preserving time-discretization of optimal control problems for the Goldstein-Taylor model
We consider the development of implicit-explicit time integration schemes for
optimal control problems governed by the Goldstein-Taylor model. In the
diffusive scaling this model is a hyperbolic approximation to the heat
equation. We investigate the relation of time integration schemes and the
formal Chapman-Enskog type limiting procedure. For the class of stiffly
accurate implicit-explicit Runge-Kutta methods (IMEX) the discrete optimality
system also provides a stable numerical method for optimal control problems
governed by the heat equation. Numerical examples illustrate the expected
behavior.
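As a sketch of the setting (the variable names ρ and j and the scaling parameter ε are conventional choices, not taken from the abstract), the Goldstein-Taylor model in diffusive scaling can be written as

```latex
\begin{aligned}
\partial_t \rho + \partial_x j &= 0,\\
\partial_t j + \frac{1}{\varepsilon^2}\,\partial_x \rho &= -\frac{1}{\varepsilon^2}\, j.
\end{aligned}
```

A formal Chapman-Enskog expansion of the second equation gives \(j = -\partial_x \rho + O(\varepsilon^2)\), so as \(\varepsilon \to 0\) the density formally satisfies the heat equation \(\partial_t \rho = \partial_{xx}\rho\); an asymptotic-preserving time discretization must reproduce this limit at the discrete level.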
Optimal mistuning for enhanced aeroelastic stability of transonic fans
An inverse design procedure was developed for the design of a mistuned rotor. The design requirements are that the stability margin of the eigenvalues of the aeroelastic system be greater than or equal to some minimum stability margin, and that the mass added to each blade be positive. The objective was to achieve these requirements with a minimal amount of mistuning. Hence, the problem was posed as a constrained optimization problem. The constrained minimization problem was solved by the technique of mathematical programming via augmented Lagrangians. The unconstrained minimization phase of this technique was solved by the variable metric method. The bladed disk was modelled as being composed of a rigid disk mounted on a rigid shaft. Each blade was modelled with a single torsional degree of freedom.
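The augmented-Lagrangian strategy can be sketched on a toy problem; everything below (the objective, the single scalar constraint, and the plain gradient-descent inner solve in place of the variable metric method) is an illustrative assumption, not the paper's aeroelastic model:

```python
# Augmented-Lagrangian sketch for a constrained minimization in the spirit of
# the inverse mistuning design: minimize a mistuning cost f(m) subject to an
# inequality constraint c(m) >= 0 (a stand-in for the stability-margin bound).
# Toy instance (hypothetical): minimize m^2 subject to m >= 1, optimum m = 1.

def minimize_aug_lagrangian(f_grad, c, c_grad, m0, lam=0.0, mu=10.0,
                            outer=20, inner=200, step=1e-2):
    m = m0
    for _ in range(outer):
        for _ in range(inner):
            # gradient of L(m) = f(m) + (1/(2*mu)) * (max(0, lam - mu*c(m))^2 - lam^2)
            t = max(0.0, lam - mu * c(m))
            g = f_grad(m) - t * c_grad(m)
            m -= step * g                    # inner unconstrained minimization
        lam = max(0.0, lam - mu * c(m))      # multiplier update
    return m

m_star = minimize_aug_lagrangian(
    f_grad=lambda m: 2.0 * m,    # f(m) = m^2
    c=lambda m: m - 1.0,         # constraint m - 1 >= 0
    c_grad=lambda m: 1.0,
    m0=0.0,
)
print(round(m_star, 3))  # converges toward the constrained optimum m = 1
```

The outer loop drives the multiplier toward the optimal value, so the inner unconstrained minimizers approach the constrained solution without sending the penalty parameter to infinity.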
Discrete mechanics and optimal control: An analysis
The optimal control of a mechanical system is of crucial importance in many application areas. Typical examples are the determination of a time-minimal path in vehicle dynamics, a minimal energy trajectory in space mission design, or optimal motion sequences in robotics and biomechanics. In most cases, some sort of discretization of the original, infinite-dimensional optimization problem has to be performed in order to make the problem amenable to computations. The approach proposed in this paper is to directly discretize the variational description of the system's motion. The resulting optimization algorithm lets the discrete solution directly inherit characteristic structural properties from the continuous one, like symmetries and integrals of the motion. We show that the DMOC (Discrete Mechanics and Optimal Control) approach is equivalent to a finite difference discretization of Hamilton's equations by a symplectic partitioned Runge-Kutta scheme and employ this fact in order to give a proof of convergence. The numerical performance of DMOC and its relationship to other existing optimal control methods are investigated.
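The structure preservation referred to above can be illustrated with the Verlet scheme, which is a classical variational integrator (and an instance of a symplectic partitioned Runge-Kutta method). The oscillator, step size, and tolerance below are illustrative choices, not taken from the paper:

```python
# Verlet (leapfrog) integration of a harmonic oscillator with V(q) = q^2 / 2.
# As a variational integrator it is symplectic: the energy error stays bounded
# over long times instead of drifting, as a non-symplectic method's would.
import math

def verlet(q0, p0, h, n, force):
    q, p = q0, p0
    traj = [(q, p)]
    for _ in range(n):
        p_half = p + 0.5 * h * force(q)   # half kick
        q = q + h * p_half                # drift
        p = p_half + 0.5 * h * force(q)   # half kick
        traj.append((q, p))
    return traj

force = lambda q: -q                      # harmonic restoring force
traj = verlet(1.0, 0.0, 0.01, 10000, force)
energies = [0.5 * p * p + 0.5 * q * q for q, p in traj]
drift = max(energies) - min(energies)
print(drift < 1e-3)                       # True: energy oscillates, no drift
```

DMOC applies the same idea at the level of the control problem: the discrete trajectory is a stationary point of a discretized action, so momentum maps and symplecticity carry over to the optimization.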
Implicit-Explicit Runge-Kutta schemes for numerical discretization of optimal control problems
Implicit-explicit (IMEX) Runge-Kutta methods play a major role in the
numerical treatment of differential systems governed by stiff and non-stiff
terms. This paper discusses order conditions and symplecticity properties of a
class of IMEX Runge-Kutta methods in the context of optimal control problems.
The analysis of the schemes is based on the continuous optimality system. Using
suitable transformations of the adjoint equation, order conditions up to order
three are proven, and the relation between adjoint schemes obtained
through different transformations is investigated. Conditions for the IMEX
Runge-Kutta methods to be symplectic are also derived. A numerical example
illustrating the theoretical properties is presented.
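The IMEX idea itself can be sketched with the simplest member of the class, first-order IMEX Euler (the higher-order schemes the paper analyzes refine this); the test equation, its stiff/non-stiff splitting, and the tolerances are assumptions for illustration:

```python
# IMEX Euler: treat the non-stiff term explicitly and the stiff term implicitly,
#   y_{k+1} = y_k + h * f_exp(t_k, y_k) + h * g_imp(t_{k+1}, y_{k+1}).
# Test problem: y' = -lam*(y - cos t) - sin t with y(0) = 1, exact y(t) = cos t.
import math

def imex_euler(y0, t0, t1, n, f_exp, g_imp_solve):
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        rhs = y + h * f_exp(t, y)        # explicit contribution
        t += h
        y = g_imp_solve(t, rhs, h)       # implicit solve (linear here, so direct)
    return y

lam = 1000.0
f_exp = lambda t, y: -math.sin(t)        # non-stiff part, explicit
# stiff part g(t, y) = -lam*(y - cos t); solving y = rhs + h*g(t, y) for y:
g_imp_solve = lambda t, rhs, h: (rhs + h * lam * math.cos(t)) / (1.0 + h * lam)

y = imex_euler(1.0, 0.0, 1.0, 100, f_exp, g_imp_solve)   # h = 0.01, lam*h = 10
print(abs(y - math.cos(1.0)) < 1e-2)     # True: stable although lam*h >> 1
```

A fully explicit method would require h on the order of 1/lam for stability; the IMEX splitting removes that restriction while keeping the implicit solve cheap.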
High order variational integrators in the optimal control of mechanical systems
In recent years, much effort in designing numerical methods for the
simulation and optimization of mechanical systems has been put into schemes
which are structure preserving. One particular class are variational
integrators which are momentum preserving and symplectic. In this article, we
develop two high order variational integrators which distinguish themselves in
the dimension of the underlying space of approximation, and we investigate their
application to finite-dimensional optimal control problems posed with
mechanical systems. The convergence of state and control variables of the
approximated problem is shown. Furthermore, by analyzing the adjoint systems of
the optimal control problem and its discretized counterpart, we prove that, for
these particular integrators, dualization and discretization commute.
Comment: 25 pages, 9 figures, 1 table, submitted to DCDS-
Linear multistep methods for optimal control problems and applications to hyperbolic relaxation systems
We are interested in high-order linear multistep schemes for time
discretization of adjoint equations arising within optimal control problems.
First we consider optimal control problems for ordinary differential equations
and show loss of accuracy for Adams-Moulton and Adams-Bashforth methods, whereas
BDF methods preserve high-order accuracy. Subsequently we extend these results
to semi-Lagrangian discretizations of hyperbolic relaxation systems.
Computational results illustrate the theoretical findings.
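As background for the BDF class discussed above, a minimal convergence check of BDF2 on a forward problem is sketched below (this illustrates the scheme itself, not the paper's adjoint analysis; the test equation, start-up choice, and thresholds are assumptions):

```python
# BDF2 for y' = lam * y:
#   y_{k+2} - (4/3) y_{k+1} + (1/3) y_k = (2h/3) * lam * y_{k+2},
# started with one implicit Euler step. Observed order should approach 2.
import math

def bdf2(lam, y0, h, n):
    ys = [y0, y0 / (1.0 - h * lam)]          # start-up: implicit Euler
    for _ in range(n - 1):
        y_new = ((4.0 / 3.0) * ys[-1] - (1.0 / 3.0) * ys[-2]) \
                / (1.0 - (2.0 * h / 3.0) * lam)
        ys.append(y_new)
    return ys[-1]                             # approximation at t = n*h

# errors at t = 1 for y' = -y, y(0) = 1, under successive halving of h
errs = [abs(bdf2(-1.0, 1.0, 0.1 / 2**k, 10 * 2**k) - math.exp(-1.0))
        for k in range(3)]
orders = [math.log2(errs[i] / errs[i + 1]) for i in range(2)]
print(all(o > 1.5 for o in orders))           # True: roughly second order
```

The paper's point is subtler: the forward scheme retains its order here, but the discrete adjoints of Adams-type methods do not, whereas the adjoints of BDF methods do.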
Space-time adaptive solution of inverse problems with the discrete adjoint method
Adaptivity in both space and time has become the norm for solving problems modeled by partial differential equations. The size of the discretized problem makes uniformly refined grids computationally prohibitive. Adaptive refinement of meshes and time steps makes it possible to capture the phenomena of interest while keeping the cost of a simulation tractable on current hardware. Many fields in science and engineering require the solution of inverse problems where parameters for a given model are estimated based on available measurement information. In contrast to forward (regular) simulations, inverse problems have not extensively benefited from adaptive solver technology. Previous research in inverse problems has focused mainly on the continuous approach to calculating sensitivities, and has typically employed fixed time and space meshes in the solution process. Inverse problem solvers that make exclusive use of uniform or static meshes avoid complications such as the differentiation of mesh motion equations, or inconsistencies in the sensitivity equations between subdomains with different refinement levels. However, this comes at the cost of low computational efficiency. More efficient computations are possible through judicious use of adaptive mesh refinement, adaptive time steps, and the discrete adjoint method.
This paper develops a framework for the construction and analysis of discrete adjoint sensitivities in the context of time dependent, adaptive grid, adaptive step models. Discrete adjoints are attractive in practice since they can be generated with low effort using automatic differentiation. However, this approach brings several important challenges. The adjoint of the forward numerical scheme may be inconsistent with the continuous adjoint equations. A reduction in accuracy of the discrete adjoint sensitivities may appear due to the intergrid transfer operators. Moreover, the optimization algorithm may need to accommodate state and gradient vectors whose dimensions change between iterations. This work shows that several of these potential issues can be avoided for the discontinuous Galerkin (DG) method. The adjoint model development is considerably simplified by decoupling the adaptive mesh refinement mechanism from the forward model solver, and by selectively applying automatic differentiation on individual algorithms.
In forward models, discontinuous Galerkin discretizations can efficiently handle high orders of accuracy, hp-refinement, and parallel computation. The analysis reveals that this approach, paired with Runge-Kutta time stepping, is well suited for the adaptive solution of inverse problems. The usefulness of discrete discontinuous Galerkin adjoints is illustrated on a two-dimensional adaptive data assimilation problem.
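The discrete-adjoint principle underlying this framework can be shown on a deliberately small example (forward Euler on a scalar ODE rather than the paper's adaptive DG/Runge-Kutta setting; all names and values are illustrative): differentiate the numerical scheme itself, stepping backward through the stored forward trajectory, and the result matches the derivative of the discrete cost to machine precision.

```python
# Discrete adjoint of forward Euler for y' = a*y, cost J = 0.5*(y_N - target)^2.
# The adjoint recursion is the transpose of the step Jacobian, run in reverse.

def forward(a, y0, h, n):
    ys = [y0]
    for _ in range(n):
        ys.append(ys[-1] + h * a * ys[-1])   # y_{k+1} = (1 + h*a) * y_k
    return ys

def grad_adjoint(a, y0, h, n, target):
    ys = forward(a, y0, h, n)
    lam = ys[-1] - target                    # dJ/dy_N
    g = 0.0
    for k in range(n - 1, -1, -1):
        g += lam * h * ys[k]                 # parameter sensitivity: dy_{k+1}/da = h*y_k
        lam = lam * (1.0 + h * a)            # adjoint step: lam_k = (dy_{k+1}/dy_k) * lam_{k+1}
    return g

a, y0, h, n, target = -0.5, 1.0, 0.01, 100, 0.0
g = grad_adjoint(a, y0, h, n, target)

# verify against a central finite difference of the same discrete cost
eps = 1e-6
Jp = 0.5 * (forward(a + eps, y0, h, n)[-1] - target) ** 2
Jm = 0.5 * (forward(a - eps, y0, h, n)[-1] - target) ** 2
print(abs(g - (Jp - Jm) / (2 * eps)) < 1e-6)   # True: exact gradient of the scheme
```

Because the adjoint differentiates the discrete scheme exactly, automatic differentiation can generate it mechanically; the complications discussed above arise when the forward scheme itself changes between iterations through mesh and step adaptivity.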