8 research outputs found

    Deterministic Discrete Dynamic Programming with Discount Factor Greater than One: Structure of Optimal Policies

    The paper considers a deterministic dynamic-programming model with a discount factor greater than one. Possible applications are discussed. After a suitable optimization criterion is introduced, it is shown that stationary policies are not necessarily optimal and that optimal finite-horizon policies do not necessarily converge to an optimal infinite-horizon policy. These difficulties are circumvented by a special method, called asymptotic analysis, which allows inductive arguments on finite-horizon models. Asymptotic analysis yields the structure of optimal policies: an optimal policy will usually belong to a special class of history-remembering policies.
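    The horizon-dependence described above can be seen in a toy instance. The sketch below is an illustrative assumption (the states, rewards, and discount factor are invented, not from the paper): backward induction over finite horizons shows the optimal first action at a state flipping as the horizon grows, so no stationary policy is optimal for every horizon.

    ```python
    # Toy deterministic DP with discount factor BETA > 1 (hypothetical instance).
    # With BETA > 1 later rewards dominate, so for long horizons it pays to give
    # up immediate reward to reach the state with the higher per-period reward.
    BETA = 2.0

    # TRANSITIONS[state][action] = (reward, next_state); all numbers illustrative
    TRANSITIONS = {
        "s0": {"stay": (1.0, "s0"), "move": (0.0, "s1")},
        "s1": {"loop": (1.2, "s1")},
    }

    def backward_induction(horizon):
        """Return (values, first-stage policy) after `horizon` backward steps."""
        values = {s: 0.0 for s in TRANSITIONS}  # horizon-0 values are zero
        policy = {}
        for _ in range(horizon):
            new_values, policy = {}, {}
            for s, acts in TRANSITIONS.items():
                # pick the action maximizing reward + BETA * value(next state)
                act, (r, nxt) = max(
                    acts.items(), key=lambda kv: kv[1][0] + BETA * values[kv[1][1]]
                )
                policy[s] = act
                new_values[s] = r + BETA * values[nxt]
            values = new_values
        return values, policy

    for n in (1, 2, 3):
        _, pol = backward_induction(n)
        print(n, pol["s0"])  # prints: 1 stay / 2 stay / 3 move
    ```

    At horizon 1 staying (reward 1) beats moving (reward 0), but by horizon 3 the discounted stream from the 1.2-per-period state already dominates, so the optimal first action changes with the horizon.
    
    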

    An Extension of the Bierman-Hausman Model for Credit Granting

    This paper extends the Bierman-Hausman credit-granting model by proving that, although the distribution of the collection probability after a failure is no longer a conjugate Beta, it remains tractable; the model therefore need not be terminated after a failure. Further theorems on the structure of optimal policies, which considerably reduce the dynamic-programming state space, are then presented, and the paper concludes with a discussion of extensions and applications.
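    A minimal sketch of the one-period decision in the spirit of the Bierman-Hausman setup, under stated assumptions: the collection probability has a Beta(a, b) prior (conjugate after a successful collection, a -> a + 1), and credit is granted when expected profit is positive. The payoff numbers and the threshold rule are illustrative assumptions, not taken from the paper.

    ```python
    # One-period credit-granting sketch: Beta(a, b) prior on the probability p
    # of collecting. Grant credit iff expected profit E[p]*gain - (1-E[p])*loss
    # is positive. gain/loss values below are invented for illustration.

    def expected_profit(a, b, gain=2.0, loss=9.0):
        """Expected one-period profit of granting credit under a Beta(a, b) prior."""
        p = a / (a + b)  # prior mean of the collection probability
        return p * gain - (1.0 - p) * loss

    def grant(a, b, gain=2.0, loss=9.0):
        """Grant credit iff the expected one-period profit is positive."""
        return expected_profit(a, b, gain, loss) > 0.0

    # After a successful collection the update stays conjugate:
    # Beta(a, b) -> Beta(a + 1, b). The paper's contribution concerns the
    # non-conjugate (but still tractable) update after a failure.
    print(grant(9.0, 1.0))  # prior mean 0.9: expected profit 0.9 > 0 -> True
    print(grant(1.0, 1.0))  # prior mean 0.5: expected profit -3.5 -> False
    ```

    The dynamic model repeats this decision over periods, carrying the updated (a, b) as the state; the structural theorems mentioned in the abstract are what keep that state space small.
    
    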

    Networks with Gains in Discrete Dynamic Programming

    The simplex method is specialized to a class of networks with gains that arises in discounted deterministic Markov decision models.
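    The connection can be illustrated with the standard LP formulation of a discounted deterministic Markov decision model: its dual is a flow problem on arcs whose "gain" is the discount factor, which is the structure a specialized simplex method exploits. The sketch below solves the primal LP with a general-purpose solver on an invented two-state instance (states, rewards, and the discount factor are illustrative assumptions, not from the paper).

    ```python
    # LP formulation of a small discounted deterministic MDP:
    #   minimize  v(s0) + v(s1)
    #   s.t.      v(s) - BETA * v(s') >= r(s, a)   for every (state, action).
    from scipy.optimize import linprog

    BETA = 0.9
    # (state, action): (reward, next_state); states are indexed 0 and 1
    ACTIONS = {
        (0, "stay"): (1.0, 0),
        (0, "move"): (0.0, 1),
        (1, "loop"): (2.0, 1),
    }

    # Rewrite each constraint as -(v(s) - BETA * v(s')) <= -r for linprog's A_ub form.
    A_ub, b_ub = [], []
    for (s, _), (r, s_next) in ACTIONS.items():
        row = [0.0, 0.0]
        row[s] -= 1.0
        row[s_next] += BETA
        A_ub.append(row)
        b_ub.append(-r)

    res = linprog(c=[1.0, 1.0], A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2)
    print([round(v, 4) for v in res.x])  # optimal values [18.0, 20.0]
    ```

    Here v(s1) = 2 / (1 - 0.9) = 20 and v(s0) = 0.9 * 20 = 18 (moving beats looping on the reward-1 self-arc, which would only give 10). A simplex method specialized to the network-with-gains structure solves the same LP while operating directly on the arc incidence pattern rather than on a dense tableau.
    
    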