    Computing the smallest fixed point of order-preserving nonexpansive mappings arising in positive stochastic games and static analysis of programs

    The problem of computing the smallest fixed point of an order-preserving map arises in the study of zero-sum positive stochastic games. It also arises in static analysis of programs by abstract interpretation; in this context, the discount rate may be negative. We characterize the minimality of a fixed point in terms of the nonlinear spectral radius of a certain semidifferential. We apply this characterization to design a policy iteration algorithm, which applies to the case of finite state and action spaces. The algorithm returns a locally minimal fixed point, which turns out to be globally minimal when the discount rate is nonnegative.
    Comment: 26 pages, 3 figures. We add new results, improvements, and two examples of positive stochastic games. Note that an initial version of this paper appeared in the proceedings of the Eighteenth International Symposium on Mathematical Theory of Networks and Systems (MTNS 2008), Blacksburg, Virginia, July 2008.
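    To make the policy iteration idea concrete, the following is a minimal sketch for a one-player, strictly discounted special case, where the map is f(x)_i = min_a ( c[i][a] + P[i][a] . x ); it is not the paper's two-player algorithm with possibly negative discount rates, and the names policy_iteration_min, P, and c are invented for this illustration.

import numpy as np

def policy_iteration_min(P, c, max_iters=100):
    # Sketch: smallest fixed point of f(x)_i = min_a ( c[i][a] + P[i][a] @ x )
    # for a one-player minimization problem.  Each P[i][a] is assumed to be a
    # strictly substochastic (discounted) numpy row vector, so that I - P_pi
    # is invertible in the evaluation step.
    n = len(c)
    policy = [0] * n                          # start from an arbitrary policy
    for _ in range(max_iters):
        # Policy evaluation: solve x = c_pi + P_pi x for the current policy.
        A = np.array([P[i][policy[i]] for i in range(n)])
        b = np.array([c[i][policy[i]] for i in range(n)])
        x = np.linalg.solve(np.eye(n) - A, b)
        # Policy improvement: in each state, pick an action minimizing the map.
        new_policy = [min(range(len(c[i])), key=lambda a, i=i: c[i][a] + P[i][a] @ x)
                      for i in range(n)]
        if new_policy == policy:              # no improvement: x is a fixed point
            return x, policy
        policy = new_policy
    return x, policy

    In the paper's setting the iteration additionally handles the second player, and local minimality of the returned fixed point is certified through the spectral-radius criterion mentioned in the abstract.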

    Polynomial Time Algorithms for Branching Markov Decision Processes and Probabilistic Min(Max) Polynomial Bellman Equations

    We show that one can approximate the least fixed point solution for a multivariate system of monotone probabilistic max(min) polynomial equations, referred to as maxPPSs (and minPPSs, respectively), in time polynomial in both the encoding size of the system of equations and in log(1/epsilon), where epsilon > 0 is the desired additive error bound of the solution. (The model of computation is the standard Turing machine model.) We establish this result using a generalization of Newton's method which applies to maxPPSs and minPPSs, even though the underlying functions are only piecewise-differentiable. This generalizes our recent work which provided a P-time algorithm for purely probabilistic PPSs. These equations form the Bellman optimality equations for several important classes of infinite-state Markov Decision Processes (MDPs). Thus, as a corollary, we obtain the first polynomial time algorithms for computing to within arbitrary desired precision the optimal value vector for several classes of infinite-state MDPs which arise as extensions of classic, and heavily studied, purely stochastic processes. These include both the problem of maximizing and minimizing the termination (extinction) probability of multi-type branching MDPs, stochastic context-free MDPs, and 1-exit Recursive MDPs. Furthermore, we also show that we can compute in P-time an epsilon-optimal policy for both maximizing and minimizing branching, context-free, and 1-exit-Recursive MDPs, for any given desired epsilon > 0. This is despite the fact that actually computing optimal strategies is Sqrt-Sum-hard and PosSLP-hard in this setting. We also derive, as an easy consequence of these results, an FNP upper bound on the complexity of computing the value (within arbitrary desired precision) of branching simple stochastic games (BSSGs).
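    As a rough illustration of the Newton iteration underlying these results, here is a sketch for the purely probabilistic case x = P(x) (the prerequisite the paper generalizes to maxPPSs and minPPSs); the names newton_pps, P, and J are hypothetical, and we assume I - P'(x_k) stays nonsingular, which the paper guarantees by suitably preprocessing the system.

import numpy as np

def newton_pps(P, J, n, iters=50):
    # Sketch: Newton's method for a purely probabilistic PPS x = P(x),
    # started at the zero vector; the iterates increase monotonically toward
    # the least fixed point.  P maps a vector x to P(x); J returns the
    # Jacobian P'(x).
    x = np.zeros(n)
    for _ in range(iters):
        x = x + np.linalg.solve(np.eye(n) - J(x), P(x) - x)
    return x

# Single-equation example: x = 0.5*x**2 + 0.1*x + 0.4 has least fixed point 0.8.
P = lambda x: np.array([0.5 * x[0]**2 + 0.1 * x[0] + 0.4])
J = lambda x: np.array([[x[0] + 0.1]])
print(newton_pps(P, J, n=1))   # approaches [0.8] from below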

    Approximative Methods for Monotone Systems of min-max-Polynomial Equations

    A monotone system of min-max-polynomial equations (min-max-MSPE) over the variables X1,..., Xn has for every i exactly one equation of the form Xi = fi(X1,..., Xn) where each fi(X1,..., Xn) is an expression built up from polynomials with non-negative coefficients, minimum- and maximum-operators. The question of computing least solutions of min-max-MSPEs arises naturally in the analysis of recursive stochastic games [5, 6, 14]. Min-max-MSPEs generalize MSPEs for which convergence speed results of Newton's method are established in [11, 3]. We present the first methods for approximatively computing least solutions of min-max-MSPEs which converge at least linearly. Whereas the first one converges faster, a single step of the second method is cheaper. Furthermore, we compute ε-optimal positional strategies for the player who wants to maximize the outcome in a recursive stochastic game.
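    To illustrate what "least solution" means here, the sketch below runs plain Kleene iteration x_{k+1} = f(x_k) from the zero vector on a hypothetical two-variable min-max-MSPE; this baseline converges monotonically to the least solution but, in general, more slowly than the paper's two Newton-like methods, and the system itself is invented for the example.

import numpy as np

def kleene_iteration(f, n, tol=1e-9, max_iters=100_000):
    # Sketch: Kleene (fixed-point) iteration from the zero vector.  For a
    # monotone min-max-MSPE the iterates increase monotonically toward the
    # least solution of x = f(x).
    x = np.zeros(n)
    for _ in range(max_iters):
        x_next = f(x)
        if np.max(np.abs(x_next - x)) < tol:
            return x_next
        x = x_next
    return x

# Hypothetical 2-variable min-max-MSPE:
#   X1 = min(0.4*X1*X2 + 0.6, 0.9*X2 + 0.1)
#   X2 = max(0.5*X1**2 + 0.3, 0.2*X1 + 0.4)
f = lambda x: np.array([min(0.4 * x[0] * x[1] + 0.6, 0.9 * x[1] + 0.1),
                        max(0.5 * x[0]**2 + 0.3, 0.2 * x[0] + 0.4)])
least = kleene_iteration(f, n=2)   # converges to the least solution, roughly (0.56, 0.51)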