    Fully discrete schemes for monotone optimal control problems

    In this article we study a finite horizon optimal control problem with monotone controls. We consider the associated Hamilton-Jacobi-Bellman (HJB) equation, which characterizes the value function. We then consider the fully discretized problem obtained by using the finite element method to approximate the state space $\Omega$. The resulting problem is equivalent to solving a finite sequence of stopping-time problems. We prove convergence orders for these approximations, which are in general $(h+\frac{k}{\sqrt{h}})^\gamma$, where $\gamma$ is the H\"older exponent of the value function $u$. A suitable choice of the relation between the parameters $h$ and $k$ yields convergence of order $k^{\frac{2}{3}\gamma}$, which is valid without semiconcavity hypotheses on the problem's data. We also present numerical results for an example.
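    The reduction to a sequence of stopping-time problems can be illustrated with a minimal sketch: a discrete optimal stopping problem on a 1-D grid, solved by fixed-point iteration on the dynamic programming equation $u = \min(\psi, \text{continuation})$. This is only an assumed toy model, not the paper's scheme; the grid, the stopping cost `psi`, the running cost `f`, and the parameters `lam`, `k` are all hypothetical choices for illustration.

    ```python
    import numpy as np

    def solve_stopping(psi, f, lam=1.0, k=0.01, tol=1e-10, max_iter=10_000):
        """Fixed-point iteration for a discrete optimal stopping problem.

        Solves u = min(psi, f*k + (1 - lam*k) * (average of grid neighbors)),
        a toy analogue of one stopping-time problem in the sequence.
        Periodic boundary conditions via np.roll, purely for simplicity.
        """
        u = psi.copy()  # start from the obstacle (stopping cost)
        for _ in range(max_iter):
            # continuation value: running cost plus discounted neighbor average
            cont = f * k + (1 - lam * k) * 0.5 * (np.roll(u, 1) + np.roll(u, -1))
            u_new = np.minimum(psi, cont)  # stop whenever stopping is cheaper
            if np.max(np.abs(u_new - u)) < tol:
                return u_new
            u = u_new
        return u

    x = np.linspace(0.0, 1.0, 101)
    psi = (x - 0.5) ** 2 + 0.1   # hypothetical stopping cost (obstacle)
    f = np.ones_like(x)          # hypothetical running cost
    u = solve_stopping(psi, f)
    ```

    Because the continuation operator is a contraction with factor $1-\lambda k < 1$, the iteration converges to the unique fixed point, and by construction the value $u$ never exceeds the obstacle $\psi$.
    
    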