
    Combining stochastic programming and optimal control to solve multistage stochastic optimization problems

    In this contribution we propose an approach for solving a multistage stochastic programming problem that yields a time and nodal decomposition of the original problem. This double decomposition is achieved by applying a discrete-time optimal control formulation to the original stochastic programming problem in arborescent form. Combining the arborescent formulation of the problem with the point of view of optimal control theory naturally yields, as a first result, the time decomposability of the optimality conditions, which can be organized, following the terminology and structure of a discrete-time optimal control problem, into the systems of equations for the state and adjoint variable dynamics and the optimality conditions for the generalized Hamiltonian. Moreover, owing to the arborescent formulation of the stochastic programming problem, these conditions decompose further with respect to the nodes of the event tree. The optimal solution is obtained by solving the small decomposed subproblems and combining them with a mean-value fixed-point iterative scheme. To enhance convergence we suggest an optimization step in which the weights are chosen optimally at each iteration.
    Keywords: stochastic programming, discrete-time control problem, decomposition methods, iterative scheme
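The combination step described above can be illustrated with a toy weighted fixed-point iteration. This is a minimal sketch, not the paper's algorithm: the scalar setting, the function `T`, and the weight grid are all illustrative assumptions, and the per-step weight choice here is a simple greedy residual minimization.

```python
import math

def weighted_fixed_point(T, x0, weights=(0.25, 0.5, 0.75, 1.0),
                         tol=1e-10, max_iter=1000):
    """Mean-value fixed-point iteration x_{k+1} = (1 - w) x_k + w T(x_k).
    At each step the weight w is picked from a small grid so as to
    minimize the new residual |T(x) - x| (a greedy toy stand-in for the
    'weights chosen in an optimal way' step in the abstract)."""
    x = x0
    for k in range(1, max_iter + 1):
        candidates = [(1 - w) * x + w * T(x) for w in weights]
        x = min(candidates, key=lambda y: abs(T(y) - y))
        if abs(T(x) - x) < tol:
            return x, k
    return x, max_iter

# Toy contraction: converge to the fixed point of cos(x).
root, iters = weighted_fixed_point(math.cos, 1.0)
```

In the actual method the subproblems deliver the candidate updates and the weights combine nodal solutions, but the contraction-plus-weighting structure is the same.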

    Discrete mechanics and optimal control: An analysis

    The optimal control of a mechanical system is of crucial importance in many application areas. Typical examples are the determination of a time-minimal path in vehicle dynamics, a minimal-energy trajectory in space mission design, or optimal motion sequences in robotics and biomechanics. In most cases, some sort of discretization of the original, infinite-dimensional optimization problem has to be performed to make the problem amenable to computation. The approach proposed in this paper is to discretize the variational description of the system's motion directly. The resulting optimization algorithm lets the discrete solution directly inherit characteristic structural properties of the continuous one, such as symmetries and integrals of the motion. We show that the DMOC (Discrete Mechanics and Optimal Control) approach is equivalent to a finite difference discretization of Hamilton's equations by a symplectic partitioned Runge-Kutta scheme and employ this fact to give a proof of convergence. The numerical performance of DMOC and its relationship to other existing optimal control methods are investigated.
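The symplectic partitioned Runge-Kutta connection can be illustrated with the Stoermer-Verlet scheme, the simplest member of that family. This is a hedged sketch on a harmonic oscillator, not the paper's DMOC code; the potential, step size, and step count are assumptions chosen for illustration.

```python
def verlet_step(q, p, h, grad_V):
    """One Stoermer-Verlet step for a Hamiltonian H(q, p) = p^2/2 + V(q),
    a basic symplectic partitioned Runge-Kutta scheme of the kind the
    paper relates DMOC discretizations to."""
    p_half = p - 0.5 * h * grad_V(q)
    q_new = q + h * p_half
    p_new = p_half - 0.5 * h * grad_V(q_new)
    return q_new, p_new

# Harmonic oscillator, V(q) = q^2 / 2, so grad_V(q) = q.
q, p, h = 1.0, 0.0, 0.01
E0 = 0.5 * (p * p + q * q)
for _ in range(10000):
    q, p = verlet_step(q, p, h, lambda x: x)
E = 0.5 * (p * p + q * q)
# Energy drift stays bounded (O(h^2)) even after 10000 steps,
# a structural property a non-symplectic scheme would not preserve.
```

This bounded energy behaviour is exactly the kind of inherited structure the abstract refers to.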

    Strong Stationarity Conditions for Optimal Control of Hybrid Systems

    We present necessary and sufficient optimality conditions for finite-time optimal control problems for a class of hybrid systems described by linear complementarity models. Although these optimal control problems are difficult in general due to the presence of complementarity constraints, we provide a set of structural assumptions ensuring that the tangent cone of the constraints possesses geometric regularity properties. These imply that the classical Karush-Kuhn-Tucker conditions of nonlinear programming theory are both necessary and sufficient for local optimality, which is not the case for general mathematical programs with complementarity constraints. We also present sufficient conditions for global optimality. We proceed to show that the dynamics of every continuous piecewise affine system can be written as the optimizer of a mathematical program, resulting in a linear complementarity model that satisfies our structural assumptions. Hence, our stationarity results apply to a large class of hybrid systems with piecewise affine dynamics. We present simulation results showing the substantial benefits possible from using a nonlinear programming approach to the optimal control problem with complementarity constraints instead of a more traditional mixed-integer formulation.
    Comment: 30 pages, 4 figures
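How a piecewise affine nonlinearity turns into a complementarity constraint can be sketched in scalar form. This is a toy illustration only, not the paper's construction; the function name and tolerance are assumptions.

```python
def relu_as_lcp(a, tol=1e-12):
    """Express max(a, 0) as the solution of a scalar linear complementarity
    problem: find z such that z >= 0, w = z - a >= 0, and z * w = 0.
    Each affine piece of the nonlinearity corresponds to one branch of
    the complementarity condition."""
    z = max(a, 0.0)   # closed-form solution of the scalar LCP
    w = z - a         # slack of the second inequality
    assert z >= 0.0 and w >= 0.0 and abs(z * w) <= tol
    return z

# a > 0  ->  z = a, w = 0;   a <= 0  ->  z = 0, w = -a.
```

Embedding such conditions in a system's dynamics yields a linear complementarity model, which an NLP solver can treat directly instead of branching over the pieces as a mixed-integer formulation would.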

    Reduced Order Optimal Control of the Convective FitzHugh-Nagumo Equation

    In this paper, we compare three model order reduction methods, namely the proper orthogonal decomposition (POD), the discrete empirical interpolation method (DEIM), and dynamic mode decomposition (DMD), for the optimal control of the convective FitzHugh-Nagumo (FHN) equations. The convective FHN equations consist of a semilinear activator equation and a linear inhibitor equation, modeling blood coagulation in moving excitable media. The semilinear activator equation leads to a non-convex optimal control problem (OCP). The most commonly used method in reduced optimal control is POD. We use DEIM and DMD to approximate the nonlinear terms in the reduced-order models efficiently. We compare the accuracy and computational times of the three reduced-order optimal control solutions with those of the full-order discontinuous Galerkin finite element solution of the convection-dominated FHN equations with terminal controls. Numerical results show that POD is the most accurate whereas POD-DMD is the fastest.
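The basic POD idea, extracting dominant modes from solution snapshots, can be sketched in pure Python. This is a toy power-iteration version computing only the first mode; practical codes compute all modes at once via an SVD of the snapshot matrix, and the snapshot data below is an assumption.

```python
def first_pod_mode(snapshots, iters=100):
    """Leading POD mode of a set of snapshot vectors, computed as the
    dominant eigenvector of the snapshot covariance C = S S^T by power
    iteration. A minimal sketch of the POD idea only."""
    n = len(snapshots[0])
    # Snapshot covariance matrix C = S S^T (n x n).
    C = [[sum(s[i] * s[j] for s in snapshots) for j in range(n)]
         for i in range(n)]
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Snapshots all aligned with (1, 0): the first mode recovers that direction.
mode = first_pod_mode([[1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
```

Projecting the governing equations onto a few such modes gives the reduced-order model; DEIM and DMD then cheapen the evaluation of the nonlinear terms in that model.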

    Optimal Control of Convective FitzHugh-Nagumo Equation

    We investigate smooth and sparse optimal control problems for the convective FitzHugh-Nagumo equation with travelling wave solutions in moving excitable media. The cost function includes distributed space-time and terminal observations or targets. The state and adjoint equations are discretized in space by the symmetric interior penalty Galerkin (SIPG) method and in time by the backward Euler method. Several numerical results are presented for the control of the travelling waves. We also show numerically the validity of the second-order optimality conditions for the local solutions of the sparse optimal control problem for a vanishing Tikhonov regularization parameter. Further, we estimate the distance between the discrete control and the associated local optimum numerically with the help of a perturbation method and the smallest eigenvalue of the reduced Hessian.
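The role of the smallest reduced-Hessian eigenvalue in second-order optimality can be sketched numerically. This is a toy computation on an assumed 2x2 matrix, not the paper's estimator: power iteration on a Gershgorin-shifted matrix recovers the smallest eigenvalue, whose positivity certifies positive definiteness.

```python
def smallest_eigenvalue(H, iters=500):
    """Smallest eigenvalue of a symmetric matrix H, via power iteration
    on the shifted matrix B = c I - H, with c a Gershgorin upper bound
    on the spectrum. For a local minimum the reduced Hessian should be
    positive definite, i.e. the returned value should be positive."""
    n = len(H)
    c = max(H[i][i] + sum(abs(H[i][j]) for j in range(n) if j != i)
            for i in range(n))
    B = [[(c if i == j else 0.0) - H[i][j] for j in range(n)]
         for i in range(n)]
    v = [float(i + 1) for i in range(n)]  # avoid a symmetric start vector
    lam = 0.0
    for _ in range(iters):
        w = [sum(B[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = (sum(wi * vi for wi, vi in zip(w, v))
               / sum(vi * vi for vi in v))      # Rayleigh quotient of B
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return c - lam

H = [[2.0, 1.0], [1.0, 2.0]]   # assumed toy Hessian, eigenvalues 1 and 3
lam_min = smallest_eigenvalue(H)
```

In a distance estimate of the kind the abstract mentions, this eigenvalue bounds the local curvature and hence scales the perturbation-based error bound.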