
    General limit value in Dynamic Programming

    We consider a dynamic programming problem with arbitrary state space and bounded rewards. Is it possible to define in a unique way a limit value for the problem, where the "patience" of the decision-maker tends to infinity? We consider, for each evaluation $\theta$ (a probability distribution over the positive integers), the value function $v_\theta$ of the problem where the weight of any stage $t$ is given by $\theta_t$, and we investigate the uniform convergence of a sequence $(v_{\theta^k})_k$ when the "impatience" of the evaluations vanishes, in the sense that $\sum_{t} |\theta^k_{t}-\theta^k_{t+1}| \rightarrow 0$ as $k \to \infty$. We prove that this uniform convergence happens if and only if the metric space $\{v_{\theta^k},\, k\geq 1\}$ is totally bounded. Moreover, there exists a particular function $v^*$, independent of the chosen sequence $(\theta^k)_k$, such that any limit point of such a sequence of value functions is precisely $v^*$. Consequently, in the sense of uniform convergence of the value functions, $v^*$ may be considered as the unique possible limit when the patience of the decision-maker tends to infinity. The result applies in particular to discounted payoffs when the discount factor vanishes, as well as to average payoffs when the number of stages goes to infinity, and also to models with stochastic transitions. We present tractable corollaries, and we discuss counterexamples and a conjecture.
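
    As a quick illustration of the objects in this abstract, the sketch below computes the impatience $I(\theta)=\sum_t |\theta_t-\theta_{t+1}|$ of truncated discounted evaluations $\theta_t=\lambda(1-\lambda)^{t-1}$ and the corresponding value $v_\theta$ of a small deterministic MDP by backward induction. The two-state MDP and all numbers are invented for the example; only the definitions of $I(\theta)$ and $v_\theta$ come from the abstract.

```python
import numpy as np

def impatience(theta):
    """I(theta) = sum over t of |theta_{t+1} - theta_t|, with theta_t = 0 past the horizon."""
    theta = np.append(theta, 0.0)
    return float(np.abs(np.diff(theta)).sum())

def value(theta, transition, reward, state):
    """v_theta(state) = max over plays of sum_t theta_t * r(s_t, a_t),
    for a finite deterministic MDP, computed by backward induction."""
    V = np.zeros(reward.shape[0])
    for t in reversed(range(len(theta))):
        # V_t(s) = max_a [ theta_t * r(s, a) + V_{t+1}(transition(s, a)) ]
        V = np.max(theta[t] * reward + V[transition], axis=1)
    return V[state]

# Invented two-state, two-action MDP: transition[s, a] = next state.
transition = np.array([[0, 1], [1, 0]])
reward = np.array([[0.0, 1.0], [0.5, 0.0]])

for lam in (0.5, 0.1, 0.02):   # the decision-maker gets more patient as lam -> 0
    horizon = int(30 / lam)    # truncate once the tail weight is negligible
    theta = lam * (1.0 - lam) ** np.arange(horizon)
    print(f"lam={lam}: I(theta)={impatience(theta):.3f}, "
          f"v_theta(0)={value(theta, transition, reward, 0):.3f}")
```

    For this family, $I(\theta)=\lambda$, so the printed impatience vanishes as $\lambda \to 0$ while the printed values approach a common limit, matching the discounted-payoff corollary mentioned in the abstract.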

    The value of Repeated Games with an informed controller

    We consider the general model of zero-sum repeated games (or stochastic games with signals), and assume that one of the players is fully informed and controls the transitions of the state variable. We prove the existence of the uniform value, generalizing several results from the literature. A preliminary existence result is obtained for a certain class of stochastic games played with pure strategies.

    A distance for probability spaces, and long-term values in Markov Decision Processes and Repeated Games

    Given a finite set $K$, we denote by $X=\Delta(K)$ the set of probabilities on $K$ and by $Z=\Delta_f(X)$ the set of Borel probabilities on $X$ with finite support. Studying a Markov Decision Process with partial information on $K$ naturally leads to a Markov Decision Process with full information on $X$. We introduce a new metric $d_*$ on $Z$ such that the transitions become 1-Lipschitz from $(X, \|\cdot\|_1)$ to $(Z, d_*)$. In the first part of the article, we define and prove several properties of the metric $d_*$: in particular, $d_*$ satisfies a Kantorovich-Rubinstein-type duality formula and can be characterized using disintegrations. In the second part, we characterize the limit values in several classes of "compact non-expansive" Markov Decision Processes. In particular, we use the metric $d_*$ to characterize the limit value in Partial Observation MDPs with finitely many states and in Repeated Games with an informed controller with finite sets of states and actions. Moreover, in each case we prove the existence of a generalized notion of uniform value, where we consider not only the Cesàro mean when the number of stages is large enough but any evaluation function $\theta \in \Delta(\mathbb{N}^*)$ when the impatience $I(\theta)=\sum_{t\geq 1} |\theta_{t+1}-\theta_t|$ is small enough.
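
    A minimal sketch of the standard reduction mentioned in the abstract, from partial information on $K$ to full information on $X=\Delta(K)$: the unobserved state is replaced by a belief $p \in \Delta(K)$ updated by Bayes' rule after each signal. The kernels and numbers below are illustrative assumptions, not taken from the paper, and the metric $d_*$ itself is not reproduced here.

```python
import numpy as np

def bayes_update(p, a, s, trans, obs):
    """One step of the belief MDP on X = Delta(K).

    p        : current belief, a probability vector over K
    trans[a] : trans[a][k, k2] = P(next state k2 | state k, action a)
    obs[a]   : obs[a][k2, s]   = P(signal s | next state k2, action a)
    Returns the posterior belief after playing a and observing s.
    """
    predicted = p @ trans[a]              # law of the next state given p and a
    unnorm = predicted * obs[a][:, s]     # reweight by the signal likelihood
    return unnorm / unnorm.sum()          # Bayes normalization

# Illustrative kernels: K = {0, 1}, one action, two possible signals.
trans = [np.array([[0.9, 0.1],
                   [0.2, 0.8]])]
obs   = [np.array([[0.7, 0.3],
                   [0.4, 0.6]])]

p = np.array([0.5, 0.5])                  # initial belief in Delta(K)
for signal in (0, 0, 1):
    p = bayes_update(p, a=0, s=signal, trans=trans, obs=obs)
    print(p)
```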