18 research outputs found

    S-Regular Controlled Plants

    No full text

    Optimal control and the Dynamic Programming Principle

    No full text
    This entry illustrates the application of Bellman’s dynamic programming principle in the context of optimal control problems for continuous-time dynamical systems. The approach leads to a characterization of the optimal value of the cost functional, taken over all admissible trajectories from a given initial condition, in terms of a partial differential equation called the Hamilton–Jacobi–Bellman equation. Importantly, this characterization can be used to synthesize the corresponding optimal control input as a state-feedback law.
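
    A standard way to write this characterization (the notation below is assumed for illustration, not taken from the entry itself): for dynamics \dot{x}(s) = f(x(s), u(s)) on [t, T], running cost \ell, and terminal cost g, the value function

        V(t, x) = \inf_{u(\cdot)} \Big[ \int_t^T \ell(x(s), u(s)) \, ds + g(x(T)) \Big]

    satisfies the Hamilton–Jacobi–Bellman equation

        -\partial_t V(t, x) = \min_{u} \big[ \ell(x, u) + \nabla_x V(t, x) \cdot f(x, u) \big], \qquad V(T, x) = g(x),

    and a control u attaining the minimum on the right-hand side, evaluated at the current state x, yields the optimal state-feedback law u^*(t, x).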

    The Classical Homicidal Chauffeur Game

    No full text