
    Symbolic Controller Synthesis for Büchi Specifications on Stochastic Systems

    We consider the policy synthesis problem for continuous-state controlled Markov processes evolving in discrete time, when the specification is given as a Büchi condition (visit a set of states infinitely often). We decompose computation of the maximal probability of satisfying the Büchi condition into two steps. The first step is to compute the maximal qualitative winning set, from where the Büchi condition can be enforced with probability one. The second step is to find the maximal probability of reaching the already computed qualitative winning set. In contrast with finite-state models, we show that such a computation only gives a lower bound on the maximal probability, where the gap can be non-zero. In this paper we focus on approximating the qualitative winning set, while pointing out that the existing approaches for unbounded reachability computation can solve the second step. We provide an abstraction-based technique to approximate the qualitative winning set by simultaneously using an over- and under-approximation of the probabilistic transition relation. Since we are interested in qualitative properties, the abstraction is non-probabilistic; instead, the probabilistic transitions are assumed to be under the control of a (fair) adversary. Thus, we reduce the original policy synthesis problem to a Büchi game under a fairness assumption and characterize upper and lower bounds on winning sets as nested fixed point expressions in the μ-calculus. This characterization immediately provides a symbolic algorithm scheme. Further, a winning strategy computed on the abstract game can be refined to a policy on the controlled Markov process. We describe a concrete abstraction procedure and demonstrate our algorithm on two case studies.
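
    The nested fixed point idea can be made concrete. Below is a minimal sketch, in Python on an explicit finite game graph, of the classical Büchi-game fixed point W = νY. μX. (B ∩ CPre(Y)) ∪ CPre(X). This is not the paper's algorithm: the paper works symbolically and additionally accounts for the fairness assumption on the adversary, which this sketch omits, and the toy game and function names are illustrative assumptions.

```python
# Minimal sketch: classical two-player Buchi game solved by a nested fixed point.
# The outer greatest fixed point shrinks the candidate winning set; the inner
# least fixed point collects states that can reach (B intersect CPre(Y)).

def cpre(target, edges, player0):
    """Controllable predecessor: states from which player 0 can force the next
    state into `target` (player-0 states need some edge into `target`,
    adversary states need all edges into `target`)."""
    result = set()
    for s, succs in edges.items():
        if s in player0:
            if any(t in target for t in succs):
                result.add(s)
        elif succs and all(t in target for t in succs):
            result.add(s)
    return result

def buchi_winning_set(states, edges, player0, buchi):
    """Greatest fixed point over Y of a least fixed point over X."""
    Y = set(states)
    while True:
        X = set()
        while True:  # inner least fixed point
            new_X = (buchi & cpre(Y, edges, player0)) | cpre(X, edges, player0)
            if new_X == X:
                break
            X = new_X
        if X == Y:
            return Y
        Y = X

# toy game: player-0 states {0, 1}, adversary state {2}; Buchi set {1}
states = {0, 1, 2}
edges = {0: {1, 2}, 1: {0}, 2: {0, 2}}
print(buchi_winning_set(states, edges, player0={0, 1}, buchi={1}))  # {0, 1}
```

    State 2 is excluded from the winning set because the adversary can loop there forever, mirroring how the abstract game only certifies states from which the Büchi set is visited infinitely often under all adversary behaviors.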

    A note on the policy iteration algorithm for discounted Markov decision processes for a class of semicontinuous models

    The standard version of the policy iteration (PI) algorithm fails for semicontinuous models, that is, for models with lower semicontinuous one-step costs and a weakly continuous transition law. This is due to the lack of continuity properties of the discounted cost for stationary policies, which gives rise to a measurability problem in the improvement step. The present work proposes an alternative version of the PI algorithm that performs a smoothing step to avoid the measurability problem. Assuming that the model satisfies a Lyapunov growth condition and some standard continuity-compactness properties, linear convergence of the policy iteration functions to the optimal value function is shown. Under strengthened continuity conditions, a second result shows that among the improvement policies there is one achieving the best possible improvement whose cost function is continuous.
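
    For reference, the sketch below shows the standard discounted policy iteration scheme that the paper modifies, written for a finite MDP where the measurability issue does not arise; the smoothing step of the proposed variant is not reproduced, and the array layout is an assumption made for the example.

```python
# Minimal sketch of standard policy iteration for a finite discounted MDP.
import numpy as np

def policy_iteration(P, c, gamma):
    """P[a] is the (n x n) transition matrix of action a, c[s, a] the one-step cost."""
    n_states, n_actions = c.shape
    policy = np.zeros(n_states, dtype=int)
    while True:
        # policy evaluation: solve (I - gamma * P_pi) V = c_pi
        P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
        c_pi = c[np.arange(n_states), policy]
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, c_pi)
        # policy improvement: greedy with respect to one-step lookahead costs
        Q = np.stack([c[:, a] + gamma * P[a] @ V for a in range(n_actions)], axis=1)
        new_policy = Q.argmin(axis=1)
        if np.array_equal(new_policy, policy):
            return policy, V
        policy = new_policy
```

    In the semicontinuous setting the greedy improvement step above is exactly where measurability can break down, which is what the paper's smoothing step is designed to repair.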

    Dynamic Programming for Positive Linear Systems with Linear Costs

    Recent work by Rantzer [Ran22] formulated a class of optimal control problems involving positive linear systems, linear stage costs, and linear constraints. It was shown that the associated Bellman equation can be characterized by a finite-dimensional nonlinear equation, which is solved by linear programming. In this work, we report complementary theories for the same class of problems. In particular, we provide conditions under which the solution is unique, investigate properties of the optimal policy, study the convergence of value iteration, policy iteration, and optimistic policy iteration applied to such problems, and analyze the boundedness of the solution to the associated linear program. Apart from a form of the Frobenius-Perron theorem, the majority of our results are built upon generic dynamic programming theory applicable to problems involving nonnegative stage costs.
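
    As a generic illustration of recovering a Bellman-type fixed point by linear programming, the sketch below sets up the standard LP characterization for a finite discounted MDP with costs. This is not the paper's finite-dimensional equation for positive linear systems, only the textbook analogue of "solve the Bellman equation by LP", with hypothetical inputs.

```python
# Minimal sketch: the largest V satisfying V <= TV (componentwise) is the
# Bellman fixed point of a finite discounted MDP, so it can be found by an LP.
import numpy as np
from scipy.optimize import linprog

def bellman_lp(P, c, gamma):
    """P[a] is the (n x n) transition matrix of action a, c[s, a] the stage cost.
    Returns V solving V(s) = min_a [ c(s, a) + gamma * P[a][s] @ V ]."""
    n_states, n_actions = c.shape
    A_ub, b_ub = [], []
    for a in range(n_actions):
        for s in range(n_states):
            row = -gamma * P[a][s].astype(float)
            row[s] += 1.0          # V(s) - gamma * P[a][s] @ V <= c(s, a)
            A_ub.append(row)
            b_ub.append(c[s, a])
    res = linprog(-np.ones(n_states),              # maximize sum of values
                  A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] * n_states)
    return res.x
```

    The boundedness question studied in the paper has a loose analogue here: the LP only has a bounded optimum when the underlying Bellman operator admits a finite fixed point.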

    Control of Finite-State, Finite-Memory Stochastic Systems

    A generalized problem of stochastic control is discussed in which multiple controllers with different databases are present. The vehicle for the investigation is the finite-state, finite-memory (FSFM) stochastic control problem. Optimality conditions are obtained by deriving an equivalent deterministic optimal control problem. An FSFM minimum principle is obtained via the equivalent deterministic problem. The minimum principle suggests the development of a numerical optimization algorithm, the min-H algorithm. The relationship between the sufficiency of the minimum principle and the informational properties of the problem is investigated. A problem of hypothesis testing with 1-bit memory is investigated to illustrate the application of control-theoretic techniques to information processing problems.
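
    To make the finite-state, finite-memory structure concrete, the following sketch simulates a 1-bit memory hypothesis test with a deterministic memory update map; the update rule, observation probabilities, and horizon are made-up illustrative choices, not the construction studied in the report.

```python
# Minimal sketch: a finite-state, finite-memory controller for binary hypothesis
# testing with one bit of memory. The controller is a memory update map
# delta(m, y) plus a terminal decision read off the memory state.
import random

def delta(m, y):
    # deterministic update: latch the bit once a 1 is observed (illustrative rule)
    return m | y

def run_one_bit_test(p_obs, n_steps, seed=None):
    """Under the true hypothesis each observation is 1 with probability p_obs.
    Returns the final decision: declare H1 iff the memory bit ends up set."""
    rng = random.Random(seed)
    m = 0                                   # the single bit of memory
    for _ in range(n_steps):
        y = 1 if rng.random() < p_obs else 0
        m = delta(m, y)                     # finite-memory update
    return m

# made-up observation rates: 1s occur w.p. 0.01 under H0 and 0.2 under H1
accept_h1 = sum(run_one_bit_test(0.2, 20, seed=i) for i in range(1000)) / 1000
false_alarm = sum(run_one_bit_test(0.01, 20, seed=i) for i in range(1000)) / 1000
print(accept_h1, false_alarm)
```

    The point of such an example is that the decision quality is limited not by the data but by the one-bit memory, which is exactly the kind of informational constraint the FSFM framework is built to analyze.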