14 research outputs found

    A HJB-POD approach for the control of nonlinear PDEs on a tree structure

    The Dynamic Programming approach allows one to compute a feedback control for nonlinear problems, but it suffers from the curse of dimensionality. The computation of the control relies on the solution of a nonlinear PDE, the Hamilton-Jacobi-Bellman equation, of the same dimension as the original problem. Recently, a new numerical method to compute the value function on a tree structure has been introduced. The method works without a structured grid and avoids any interpolation. Here, we aim to test the algorithm for nonlinear two-dimensional PDEs. We apply model order reduction to decrease the computational complexity, since the tree structure algorithm requires the solution of many PDEs. Furthermore, we prove an error estimate which guarantees the convergence of the proposed method. Finally, we show the efficiency of the method through numerical tests.
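
    The tree-structure algorithm referred to above can be sketched in a few lines of Python: the tree is grown by applying every control of a finite set to every node, and the value function is then computed by a backward sweep over the tree nodes. The dynamics, costs, control set and horizon below are illustrative placeholders, not the test problems of the paper, and no pruning or model reduction is included.

        import numpy as np

        def f(x, u):                 # illustrative 2D dynamics (placeholder)
            return np.array([-x[1] + u, x[0]])

        def L(x, u):                 # running cost (placeholder)
            return x @ x + 0.1 * u**2

        def g(x):                    # final cost (placeholder)
            return x @ x

        U = [-1.0, 0.0, 1.0]         # finite control set
        dt, N = 0.05, 10             # time step and number of steps

        # Forward phase: grow the tree by applying every control to every node
        # (explicit Euler step; without pruning level n holds len(U)**n nodes).
        tree = [[np.array([1.0, 0.0])]]
        for n in range(N):
            tree.append([x + dt * f(x, u) for x in tree[n] for u in U])

        # Backward phase: dynamic programming on the tree nodes, with no space
        # grid and no interpolation (the child of node i under control j sits
        # at position len(U)*i + j of the next level).
        V = [g(x) for x in tree[N]]
        for n in range(N - 1, -1, -1):
            V = [min(dt * L(x, U[j]) + V[len(U) * i + j] for j in range(len(U)))
                 for i, x in enumerate(tree[n])]

        print("approximate value at the initial state:", V[0])

    In the setting of the paper each Euler step above is a time step of a semi-discretized PDE, which is why model order reduction of the dynamics pays off.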

    Error estimates for a tree structure algorithm solving finite horizon control problems

    In the Dynamic Programming approach to optimal control problems a crucial role is played by the value function, which is characterized as the unique viscosity solution of a Hamilton-Jacobi-Bellman (HJB) equation. It is well known that this approach suffers from the "curse of dimensionality", and this limitation has reduced its practical use in real-world applications. Here we analyze a dynamic programming algorithm based on a tree structure. The tree is built by the discrete-time dynamics, thus avoiding the use of a fixed space grid, which is the bottleneck for high-dimensional problems; this also removes the projection onto the grid in the approximation of the value function. We present some error estimates for a first-order approximation based on the tree-structure algorithm. Moreover, we analyze a pruning technique for the tree to reduce its complexity and minimize the computational effort. Finally, we present some numerical tests.
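
    The pruning technique mentioned in the abstract can be sketched as follows: while the tree is grown, a newly generated state that falls within a tolerance eps of a node already stored at the same level is merged with it instead of being added. The Euclidean criterion and the names below are chosen for illustration; the choice of the tolerance and its interplay with the error estimates are discussed in the paper.

        import numpy as np

        def grow_level(nodes, controls, f, dt, eps):
            """One forward step of the tree with pruning: a child that lies
            within eps of a node already stored at the new level is merged into
            it.  Returns the new level and, for every (parent, control) pair,
            the index of the child node it maps to, so that the backward sweep
            can follow the merged edges."""
            new_nodes, child_index = [], {}
            for i, x in enumerate(nodes):
                for j, u in enumerate(controls):
                    y = x + dt * f(x, u)                  # explicit Euler step
                    for k, z in enumerate(new_nodes):     # look for a nearby stored node
                        if np.linalg.norm(y - z) <= eps:
                            child_index[(i, j)] = k
                            break
                    else:
                        child_index[(i, j)] = len(new_nodes)
                        new_nodes.append(y)
            return new_nodes, child_index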

    Approximation of Optimal Control Problems for the Navier-Stokes equation via multilinear HJB-POD

    We consider the approximation of some optimal control problems for the Navier-Stokes equation via a Dynamic Programming approach. These control problems arise in many industrial applications and are very challenging from the numerical point of view, since the semi-discretization of the dynamics corresponds to an evolutive system of ordinary differential equations in very high dimension. The typical approach is based on the Pontryagin maximum principle and leads to a two-point boundary value problem. Here we present a different approach based on the value function and the solution of a Bellman equation, a challenging problem in high dimension. We mitigate the curse of dimensionality via a recent multilinear approximation of the dynamics coupled with a dynamic programming scheme on a tree structure. We discuss several aspects related to the implementation of this new approach and we present some numerical examples to illustrate the results on classical control problems studied in the literature.
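
    In compact form, the scheme applies the dynamic programming principle only on the nodes generated by the (reduced) discrete dynamics. A schematic version of the backward recursion, written here with a generic running cost L, final cost g and explicit Euler step of size \Delta t (the notation is illustrative, not the paper's exact scheme), reads

        V^{N}(x) = g(x),  x \in \mathcal{T}^{N},
        V^{n}(x) = \min_{u \in U} \{ \Delta t\, L(x,u) + V^{n+1}( x + \Delta t\, f(x,u) ) \},  x \in \mathcal{T}^{n},  n = N-1, \dots, 0,

    where \mathcal{T}^{n} denotes the set of tree nodes at time level n; since x + \Delta t\, f(x,u) is itself a node of \mathcal{T}^{n+1}, no space grid and no interpolation are needed.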

    Statistical Proper Orthogonal Decomposition for model reduction in feedback control

    Feedback control synthesis for nonlinear, parameter-dependent fluid flow control problems is considered. The optimal feedback law requires the solution of the Hamilton-Jacobi-Bellman (HJB) PDE, which suffers from the curse of dimensionality. This is mitigated by Model Order Reduction (MOR) techniques, where the system is projected onto a lower-dimensional subspace, over which the feedback synthesis becomes feasible. However, existing MOR methods give up some generality: they assume that the system is linear, or stable, or deterministic. We propose a MOR method called Statistical POD (SPOD), which is inspired by the Proper Orthogonal Decomposition (POD) but extends to more general systems. Random samples of the original dynamical system are drawn, treating time and initial condition as random variables, similarly to possible parameters in the model, and employing a stabilizing closed-loop control. The reduced subspace is chosen to minimize the empirical risk, which is shown to estimate the expected risk of the MOR solution with respect to the distribution of all possible outcomes of the controlled system. This reduced model is then used to compute a surrogate of the feedback control function in the Tensor Train (TT) format that is computationally fast to evaluate online. Using unstable Burgers' and Navier-Stokes equations, it is shown that the SPOD control is more accurate than the Linear Quadratic Regulator or the optimal control derived from a model reduced onto the standard POD basis, and faster than the direct optimal control of the original system.
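
    The sampling step described above can be sketched in a few lines: snapshots of the stabilized closed-loop system are drawn at random initial conditions, parameters and times, and the reduced basis is taken from their truncated SVD, which minimizes the empirical projection risk over the drawn samples. The function simulate, the state dimension dim and the sampling ranges below are placeholders, not the paper's setup.

        import numpy as np

        def statistical_pod_basis(simulate, dim, n_samples, r, seed=0):
            """Sketch of the sampling step: draw random initial conditions,
            parameters and time indices, collect the corresponding closed-loop
            snapshots and keep the leading r left singular vectors as reduced
            basis.  simulate, dim and the sampling ranges are placeholders."""
            rng = np.random.default_rng(seed)
            snapshots = []
            for _ in range(n_samples):
                x0 = rng.standard_normal(dim)     # random initial condition
                mu = rng.uniform(0.01, 0.1)       # random model parameter (e.g. a viscosity)
                X = simulate(x0, mu)              # closed-loop snapshots, shape (dim, n_t)
                k = rng.integers(X.shape[1])      # random time index
                snapshots.append(X[:, k])
            S = np.column_stack(snapshots)
            U, _, _ = np.linalg.svd(S, full_matrices=False)
            return U[:, :r]   # basis minimizing the empirical projection error

    Galerkin projection of the dynamics onto this basis yields the reduced model from which the TT surrogate of the feedback is then computed.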

    Feedback reconstruction techniques for optimal control problems on a tree structure

    The computation of feedback control via the Dynamic Programming equation is a difficult task due to the curse of dimensionality. The tree structure algorithm is one of the methods recently introduced to mitigate this problem. The method computes the value function using a discrete set of controls, avoiding the construction of a space grid and the need for interpolation techniques. However, the computed control is strictly linked to the control set chosen in the construction of the tree. Here, we extend and complete the method by selecting a finer control set in the computation of the feedback. This requires an interpolation method for scattered data, which allows us to reconstruct the value function at nodes not belonging to the tree. The effectiveness of the method is shown via a numerical example.
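
    A possible realization of the reconstruction step is sketched below, assuming the tree nodes and the values computed on them at a given time level are available as arrays; the scattered-data interpolant (here a thin-plate-spline RBF from SciPy) and the explicit Euler step are illustrative choices, not the paper's prescription.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        def make_feedback(nodes, values, f, L, dt, fine_controls):
            """Build a feedback law from tree data: nodes are the tree states at
            one time level (shape (P, d)), values the value function computed on
            them.  The value function is extended off the tree with a
            scattered-data RBF interpolant and the control is the minimizer of
            the one-step cost over a control set finer than the one used to
            grow the tree."""
            V = RBFInterpolator(nodes, values, kernel="thin_plate_spline")

            def feedback(x):
                def cost(u):
                    y = x + dt * f(x, u)              # this step may leave the tree
                    return dt * L(x, u) + V(y.reshape(1, -1))[0]
                return min(fine_controls, key=cost)

            return feedback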

    Separable approximations of optimal value functions under a decaying sensitivity assumption

    A new approach for the construction of separable approximations of optimal value functions from interconnected optimal control problems is presented. The approach is based on assuming decaying sensitivities between subsystems, enabling a curse-of-dimensionality-free approximation, for instance by deep neural networks.
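
    The separable ansatz can be written schematically as

        V(x_1, \dots, x_M) \approx \sum_{i=1}^{M} V_i( x_{\mathcal{N}_\rho(i)} ),  \mathcal{N}_\rho(i) := \{ j : \mathrm{dist}(i,j) \le \rho \},

    where each term V_i depends only on the states of subsystem i and of its neighbours up to a graph distance \rho (notation chosen here for illustration); the decay of the sensitivities between distant subsystems controls the truncation error, and each low-dimensional term V_i can then be approximated, for instance, by a deep neural network.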

    Error Estimates for a Tree Structure Algorithm Solving Finite Horizon Control Problems

    In the dynamic programming approach to optimal control problems a crucial role is played by the value function, which is characterized as the unique viscosity solution of a Hamilton-Jacobi-Bellman (HJB) equation. It is well known that this approach suffers from the “curse of dimensionality” and this limitation has reduced its use in real-world applications. Here, we analyze a dynamic programming algorithm based on a tree structure to mitigate the “curse of dimensionality”. The tree is built by the discrete-time dynamics, avoiding the use of a fixed space grid which is the bottleneck for high-dimensional problems; this also removes the projection onto the grid in the approximation of the value function. In this work, we present first-order error estimates for the approximation of the value function based on the tree-structure algorithm. The estimate turns out to have the same order of convergence as the numerical method used for the approximation of the dynamics. Furthermore, we analyze a pruning technique for the tree to reduce its complexity and minimize the computational effort. Finally, we present some numerical tests to show the theoretical results.
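
    Schematically, the first-order estimate has the form

        \max_{n=0,\dots,N} \max_{x \in \mathcal{T}^{n}} | v(x, t_n) - V^{n}(x) | \le C \, \Delta t,

    where v is the value function of the continuous problem, V^{n} its approximation on the tree nodes \mathcal{T}^{n} at time t_n, and C a constant independent of \Delta t. The form above is only a sketch, with constants and assumptions as in the paper; the first order in \Delta t matches the order of the time discretization used to build the tree.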

    Approximation of optimal control problems for the Navier-Stokes equation via multilinear HJB-POD

    We consider the approximation of some optimal control problems for the Navier-Stokes equation via a Dynamic Programming approach. These control problems arise in many industrial applications and are very challenging from the numerical point of view, since the semi-discretization of the dynamics corresponds to an evolutive system of ordinary differential equations in very high dimension. The typical approach is based on the Pontryagin maximum principle and leads to a two-point boundary value problem. Here we present a different approach based on the value function and the solution of a Bellman equation, a challenging problem in high dimension. We mitigate the curse of dimensionality via a recent multilinear approximation of the dynamics coupled with a dynamic programming scheme on a tree structure. We discuss several aspects related to the implementation of this new approach and we present some numerical examples to illustrate the results on classical control problems studied in the literature.
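
    The role of model reduction in this coupling can be illustrated, in simplified form, by the Galerkin-POD reduction of a system with a quadratic (bilinear) nonlinearity, which is the generic shape of a semi-discretized Navier-Stokes model. The operator names and the dense Kronecker representation below are placeholders for illustration; the paper's multilinear format is only sketched, not reproduced.

        import numpy as np

        def reduce_quadratic_model(A, H, B, Phi):
            """Galerkin-POD reduction of  y' = A y + H (y kron y) + B u,
            with A of shape (n, n), H of shape (n, n*n), B of shape (n, m) and a
            POD basis Phi of shape (n, r) with orthonormal columns.  The dense
            Kronecker products are used only for clarity; a practical code would
            exploit sparsity or tensor structure instead."""
            Ar = Phi.T @ A @ Phi                     # reduced linear operator, (r, r)
            Hr = Phi.T @ H @ np.kron(Phi, Phi)       # reduced quadratic operator, (r, r*r)
            Br = Phi.T @ B                           # reduced control operator, (r, m)

            def rhs(yr, u):                          # reduced right-hand side
                return Ar @ yr + Hr @ np.kron(yr, yr) + Br @ u

            return rhs

    Since the reduced operators are assembled once, the many trajectories required by the tree-structure scheme can be integrated at a cost that no longer depends on the full dimension n.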