
    Controller Synthesis for Discrete-Time Polynomial Systems via Occupation Measures

    In this paper, we design nonlinear state feedback controllers for discrete-time polynomial dynamical systems via the occupation measure approach. We propose a discrete-time controlled Liouville equation and use it to formulate the controller synthesis problem as an infinite-dimensional linear program on measures, which is then relaxed to finite-dimensional semidefinite programs on moments of measures and their duals on sums-of-squares polynomials. Nonlinear controllers can be extracted from the solutions of the relaxed problems. The advantage of the occupation measure approach is that we solve convex problems instead of generally non-convex ones, and the computational complexity grows only polynomially in the state and input dimensions, so the approach scales better. In addition, we show that the approach can be used to over-approximate the backward reachable set of discrete-time autonomous polynomial systems and the controllable set of discrete-time polynomial systems under known state feedback control laws. We illustrate the approach on several dynamical systems.
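    As a rough illustration of the objects behind this approach (not the paper's semidefinite-programming machinery), the sketch below propagates a particle approximation of a state distribution through a hypothetical polynomial map with a hypothetical polynomial feedback law, and accumulates truncated empirical moments of the resulting occupation measure; the dynamics f, the feedback k, the horizon, and the moment degree are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch, assuming a scalar polynomial system; it only illustrates the
# pushforward of measures that the discrete-time Liouville equation encodes,
# not the moment/SOS semidefinite relaxations used in the paper.

rng = np.random.default_rng(0)

def f(x, u):
    """Hypothetical polynomial dynamics: x+ = 0.5*x - 0.2*x**3 + u."""
    return 0.5 * x - 0.2 * x**3 + u

def k(x):
    """Hypothetical polynomial state feedback law."""
    return -0.3 * x + 0.1 * x**2

# Initial measure mu_0: empirical approximation by particles on [-1, 1].
particles = rng.uniform(-1.0, 1.0, size=10_000)

# Truncated empirical moments of the occupation measure over a finite horizon;
# the relaxations described above work with moment vectors like these rather
# than with particles.
horizon, max_degree = 20, 4
moments = np.zeros(max_degree + 1)
for _ in range(horizon):
    for d in range(max_degree + 1):
        moments[d] += np.mean(particles**d)
    particles = f(particles, k(particles))  # pushforward: mu_{t+1} = f#(mu_t)

print("empirical truncated moments of the occupation measure:", moments)
```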

    Robust Control of Uncertain Markov Decision Processes with Temporal Logic Specifications

    We present a method for designing robust controllers for dynamical systems with linear temporal logic specifications. We abstract the original system by a finite Markov Decision Process (MDP) whose transition probabilities lie in a specified uncertainty set. A robust control policy for the MDP is generated that maximizes the worst-case probability of satisfying the specification over all transition probabilities in the uncertainty set. To do this, we use a procedure from probabilistic model checking to combine the system model with an automaton representing the specification. This product MDP is then transformed into an equivalent form that satisfies the assumptions of stochastic shortest path dynamic programming. A robust version of dynamic programming allows us to compute an ϵ-suboptimal robust control policy with time complexity O(log(1/ϵ)) times that of the non-robust case. We then implement this control policy on the original dynamical system.
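    As a hedged sketch of the kind of computation described above, the following code runs robust value iteration on a small interval MDP: transition probabilities are only known to lie in per-entry intervals, an inner minimization picks the worst-case distribution for each state-action pair, and the outer maximization yields a policy maximizing the worst-case probability of reaching a target (accepting) state. The 3-state model, the interval bounds, and the reachability objective standing in for the product-automaton construction are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical 3-state, 2-action interval MDP; state 2 is the absorbing
# "accepting" state whose worst-case reachability probability we maximize.
n_states, n_actions, target = 3, 2, 2

# Interval bounds on transition probabilities: lo[s, a, s'] <= P(s'|s,a) <= hi[s, a, s'].
lo = np.array([
    [[0.1, 0.2, 0.1], [0.0, 0.3, 0.2]],
    [[0.2, 0.1, 0.3], [0.1, 0.1, 0.4]],
    [[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]],  # target is absorbing
])
hi = np.array([
    [[0.5, 0.6, 0.5], [0.4, 0.6, 0.6]],
    [[0.5, 0.4, 0.7], [0.4, 0.4, 0.8]],
    [[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]],
])

def worst_case_expectation(values, l, u):
    """Adversary picks p in [l, u] summing to 1 that minimizes p . values."""
    order = np.argsort(values)       # put spare mass on low-value successors first
    p, slack = l.copy(), 1.0 - l.sum()
    for j in order:
        add = min(u[j] - l[j], slack)
        p[j] += add
        slack -= add
    return p @ values

# Robust value iteration: max over actions of the worst-case expected value.
v = np.zeros(n_states)
v[target] = 1.0
for _ in range(200):
    v_new = v.copy()
    for s in range(n_states):
        if s == target:
            continue
        v_new[s] = max(worst_case_expectation(v, lo[s, a], hi[s, a])
                       for a in range(n_actions))
    if np.max(np.abs(v_new - v)) < 1e-9:
        break
    v = v_new

policy = [int(np.argmax([worst_case_expectation(v, lo[s, a], hi[s, a])
                         for a in range(n_actions)])) for s in range(n_states)]
print("worst-case satisfaction probabilities:", v)
print("robust policy:", policy)
```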