Galerkin approximations for the optimal control of nonlinear delay differential equations
Optimal control problems of nonlinear delay differential equations (DDEs) are
considered, for which we propose a general Galerkin approximation scheme built
from Koornwinder polynomials. Error estimates for the resulting
Galerkin-Koornwinder approximations to the optimal control and the value
function are derived for a broad class of cost functionals and nonlinear DDEs.
The approach is illustrated on a delayed logistic equation set not far from its
Hopf bifurcation point in parameter space. In this case, we show that
low-dimensional controls for a standard quadratic cost functional can be
efficiently computed from Galerkin-Koornwinder approximations to reduce, at a
nearly optimal cost, the oscillation amplitude displayed by the DDE's solution.
Optimal controls computed from Pontryagin's maximum principle (PMP) and from
the Hamilton-Jacobi-Bellman (HJB) equation associated with the corresponding
ODE systems are shown to be in good agreement. Finally, it is argued that the
value function computed from the corresponding reduced HJB equation provides a
good approximation of that obtained from the full HJB equation.
Comment: 29 pages. This is a sequel to the arXiv preprint arXiv:1704.0042
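The core reduction strategy above — projecting the DDE onto a finite basis so that standard ODE optimal-control machinery applies — can be illustrated schematically. The sketch below is not the paper's Galerkin-Koornwinder scheme: it substitutes a plain finite-difference discretization of the delay interval for the polynomial basis, and all parameter values (`tau`, `n_nodes`, the initial history) are illustrative.

```python
import numpy as np

def delayed_logistic_reduced(tau=0.5, n_nodes=20, dt=0.01, t_final=50.0,
                             u=lambda t: 0.0):
    """Reduce the delayed logistic DDE  x'(t) = x(t) * (1 - x(t - tau)) + u(t)
    to an ODE system by discretizing the history segment [t - tau, t]
    (a crude finite-difference stand-in for a Galerkin-Koornwinder basis)."""
    n = n_nodes
    c = n / tau                      # transport speed along the delay line
    y = np.full(n + 1, 0.5)         # y[0] = x(t), y[i] ~ x(t - i * tau / n)
    ts = np.arange(0.0, t_final, dt)
    traj = np.empty_like(ts)
    for k, t in enumerate(ts):
        traj[k] = y[0]
        dy = np.empty_like(y)
        dy[0] = y[0] * (1.0 - y[n]) + u(t)   # logistic term with delayed state
        dy[1:] = c * (y[:-1] - y[1:])        # upwind transport of the history
        y = y + dt * dy                      # explicit Euler step
    return ts, traj
```

With `tau = 0.5` the equilibrium `x = 1` of the uncontrolled equation is stable, so the reduced ODE trajectory settles there; closer to the Hopf bifurcation (larger `tau`) the solution oscillates, which is the regime the paper's low-dimensional controls are designed to damp.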
Optimal control of an Allen-Cahn equation with singular potentials and dynamic boundary condition
In this paper, we investigate optimal control problems for Allen-Cahn
equations with singular nonlinearities and a dynamic boundary condition
involving singular nonlinearities and the Laplace-Beltrami operator. The
approach covers both the cases of distributed controls and of boundary
controls. The cost functional is of standard tracking type, and box constraints
for the controls are prescribed. Parabolic problems with nonlinear dynamic
boundary conditions involving the Laplace-Beltrami operator have recently
drawn increasing attention due to their importance in applications, while their
optimal control was apparently never studied before. In this paper, we first
extend known well-posedness and regularity results for the state equation and
then show the existence of optimal controls and that the control-to-state
mapping is twice continuously Fr\'echet differentiable between appropriate
function spaces. Based on these results, we establish the first-order necessary
optimality conditions in terms of a variational inequality and the adjoint
state equation, and we prove second-order sufficient optimality conditions.
Comment: Key words: optimal control; parabolic problems; dynamic boundary
conditions; optimality condition
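The first-order necessary condition mentioned above typically takes the form of a variational inequality coupled with an adjoint equation. A generic sketch follows, with illustrative notation not taken from the paper (U_ad the box-constrained admissible set, p the adjoint state, \kappa > 0 the control cost weight):

```latex
% Generic first-order condition for a box-constrained tracking problem:
% the optimal control u^* satisfies, for every admissible control v,
\int_0^T \!\!\int_\Omega \big( p + \kappa\, u^* \big)\,
  \big( v - u^* \big)\, dx\, dt \;\ge\; 0
\qquad \forall\, v \in U_{\mathrm{ad}},
% where p solves the adjoint (backward-in-time) parabolic equation
% obtained by linearizing the state system at the optimal state.
```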
A study of the application of singular perturbation theory
A hierarchical real-time algorithm for optimal three-dimensional control of aircraft is described. Systematic methods are developed for real-time computation of nonlinear feedback controls by means of singular perturbation theory. The results are applied to a six-state, three-control-variable, point-mass model of an F-4 aircraft. Nonlinear feedback laws are presented for computing the optimal control of throttle, bank angle, and angle of attack. Real-time capability is assessed on a TI 9900 microcomputer. The breakdown of the singular perturbation approximation near the terminal point is examined, and continuation methods are investigated to obtain exact optimal trajectories starting from the singular perturbation solutions.
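The singular perturbation idea underlying such hierarchical controllers — replace the fast dynamics by their quasi-steady state and design feedback on the slow subsystem — can be shown on a toy two-time-scale system. This is purely an illustrative linear example, not the F-4 model; the system, gain `k`, and the value of `eps` are all assumptions.

```python
def simulate(eps, k=2.0, dt=1e-4, t_final=1.0):
    """Full two-time-scale system (illustrative):
        slow:   x' = -x + z
        fast:   eps * z' = -z + u,  with feedback u = -k * x
    Integrated with explicit Euler."""
    x, z = 1.0, 0.0
    for _ in range(int(t_final / dt)):
        u = -k * x
        x += dt * (-x + z)
        z += dt * (-z + u) / eps
    return x

def simulate_reduced(k=2.0, dt=1e-4, t_final=1.0):
    """Singular-perturbation reduction (eps -> 0): the fast state snaps to
    its quasi-steady value z = u = -k * x, so x' = -(1 + k) * x."""
    x = 1.0
    for _ in range(int(t_final / dt)):
        x += dt * (-(1.0 + k) * x)
    return x
```

As `eps` shrinks, the full trajectory approaches the reduced one; near a terminal point with boundary-layer behavior this agreement degrades, which is exactly the breakdown the abstract says is examined.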
Lipschitzian Regularity of the Minimizing Trajectories for Nonlinear Optimal Control Problems
We consider the Lagrange problem of optimal control with unrestricted
controls and address the question: under what conditions can we guarantee that
optimal controls are bounded? This question is related to that of Lipschitzian
regularity of optimal trajectories, and the answer to it is crucial for closing
the gap between the conditions arising in the existence theory and the
necessary optimality conditions. Rewriting the Lagrange problem in a parametric
form, we obtain a relation between the applicability conditions of the
Pontryagin maximum principle for the latter problem and the Lipschitzian
regularity conditions for the original problem. Under the standard coercivity
hypotheses of the existence theory, these conditions imply that the optimal
controls are essentially bounded, assuring the applicability of the classical
necessary optimality conditions, such as the Pontryagin maximum principle. The
result extends previous Lipschitzian regularity results to cover optimal
control problems with general nonlinear dynamics.
Comment: This research was partially presented, as an oral communication, at
the international conference EQUADIFF 10, Prague, August 27-31, 2001.
Accepted for publication in the journal Mathematics of Control, Signals, and
Systems (MCSS). See http://www.mat.ua.pt/delfim for other work
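The Lagrange problem and the coercivity condition referred to in this abstract have the following generic form (notation illustrative, not copied from the paper):

```latex
% Lagrange problem of optimal control:
\min\; J[x(\cdot), u(\cdot)] = \int_a^b L\big(t, x(t), u(t)\big)\, dt
\quad \text{subject to} \quad \dot{x}(t) = \varphi\big(t, x(t), u(t)\big).
% Tonelli-type coercivity: the Lagrangian grows superlinearly in u,
L(t, x, u) \;\ge\; \theta\big(\lVert u \rVert\big), \qquad
\frac{\theta(r)}{r} \to \infty \ \text{as}\ r \to \infty .
```

Coercivity of this type is what the existence theory assumes; the abstract's point is that, under such hypotheses, optimal controls are in fact essentially bounded, so the Pontryagin maximum principle legitimately applies to the minimizers whose existence the theory guarantees.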
Approximate Dynamic Programming with Gaussian Processes
In general, it is difficult to determine an optimal closed-loop policy in nonlinear control problems with continuous-valued state and control domains. Hence, approximations are often inevitable. The standard method of discretizing states and controls suffers from the curse of dimensionality and strongly depends on the chosen temporal sampling rate. In this paper, we introduce Gaussian process dynamic programming (GPDP) and determine an approximate globally optimal closed-loop policy. In GPDP, value functions in the Bellman recursion of the dynamic programming algorithm are modeled using Gaussian processes. GPDP returns an optimal state feedback for a finite set of states. Based on these outcomes, we learn a possibly discontinuous closed-loop policy on the entire state space by switching between two independently trained Gaussian processes, where a binary classifier selects the Gaussian process used to predict the optimal control signal. We show that GPDP is able to yield an almost optimal solution to an LQ problem using few sample points. Moreover, we successfully apply GPDP to the underpowered pendulum swing-up, a complex nonlinear control problem.
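The central idea of GPDP — carry the value function through the Bellman recursion as a Gaussian process fitted at a finite set of support states — can be sketched on a one-dimensional LQ-type problem. This is a toy version, not the paper's algorithm (which also handles learned dynamics and builds a switching policy from two GPs and a classifier); the system `x' = a x + b u`, the grids, the RBF hyperparameters, and the state clipping are all assumptions made for the sketch.

```python
import numpy as np

def gp_fit(X, y, ell=0.5, sigma=5.0, noise=1e-4):
    """Minimal GP regression with an RBF kernel (numpy only)."""
    K = sigma**2 * np.exp(-0.5 * (X[:, None] - X[None, :])**2 / ell**2)
    alpha = np.linalg.solve(K + noise * np.eye(len(X)), y)
    def predict(Xs):
        Ks = sigma**2 * np.exp(-0.5 * (Xs[:, None] - X[None, :])**2 / ell**2)
        return Ks @ alpha
    return predict

def gpdp_1d(a=1.1, b=1.0, r=0.1, gamma=0.9, iters=80):
    """GPDP-style value iteration for x_{k+1} = a x + b u, stage cost
    x^2 + r u^2.  Each Bellman backup evaluates a GP model of the value
    function at the successor states of a finite support set."""
    X = np.linspace(-2.0, 2.0, 21)          # support states
    U = np.linspace(-3.0, 3.0, 61)          # candidate controls
    V = np.zeros_like(X)
    for _ in range(iters):
        v_hat = gp_fit(X, V)                # GP model of the current value fn
        # successor states, clipped to the GP's support region
        Xn = np.clip(a * X[:, None] + b * U[None, :], -2.0, 2.0)
        Q = (X[:, None]**2 + r * U[None, :]**2
             + gamma * v_hat(Xn.ravel()).reshape(Xn.shape))
        V = Q.min(axis=1)                   # greedy Bellman backup
    return X, V, gp_fit(X, V)
```

The returned GP generalizes the value function off the support set, which is what lets GPDP get away with few sample points; the resulting value function is smallest near the origin and grows toward the edges of the state grid, as expected for a quadratic-cost problem.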
Analysis and optimal boundary control of a nonstandard system of phase field equations
We investigate a nonstandard phase field model of Cahn-Hilliard type. The
model describes two-species phase segregation and consists of a system of two
highly nonlinearly coupled PDEs. It has been studied recently in the papers
arXiv:1103.4585 and arXiv:1109.3303 for the case of homogeneous Neumann
boundary conditions. In this paper, we investigate the case that the boundary
condition for one of the unknowns of the system is of third kind and
nonhomogeneous. For the resulting system, we show well-posedness, and we study
optimal boundary control problems. Existence of optimal controls is shown, and
the first-order necessary optimality conditions are derived. Owing to the
strong nonlinear couplings in the PDE system, standard arguments of optimal
control theory do not apply directly, although the control constraints and the
cost functional are of standard type.
Comment: Key words: nonlinear phase field systems, Cahn-Hilliard systems,
parabolic systems, optimal boundary control, first-order necessary optimality
conditions. The interested reader can also see the preprint arXiv:1106.3668
where a distributed optimal control problem is studied for a similar system.
arXiv admin note: significant text overlap with arXiv:1106.366