4 research outputs found

    On the Principle of Optimality for Nonstationary Deterministic Dynamic Programming

    This note studies a general nonstationary infinite-horizon optimization problem in discrete time. We allow the state space in each period to be an arbitrary set, and the return function in each period to be unbounded. We do not require discounting, and do not require the constraint correspondence in each period to be nonempty-valued. The objective function is defined as the limit superior or limit inferior of the finite sums of return functions. We show that the sequence of time-indexed value functions satisfies the Bellman equation if and only if its right-hand side is well defined, i.e., it does not involve −∞ + ∞.

    Keywords: Bellman equation, Dynamic programming, Principle of optimality, Value function
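    As a rough illustration of the setting described in this abstract (the notation here is assumed, not taken verbatim from the paper), the Bellman equation for a nonstationary problem relates each period's value function to the next:

    $$V_t(x) \;=\; \sup_{y \in \Gamma_t(x)} \bigl[\, u_t(x, y) + V_{t+1}(y) \,\bigr],$$

    where $\Gamma_t$ is the period-$t$ constraint correspondence and $u_t$ the period-$t$ return function. Since returns may be unbounded, the right-hand side is meaningful only when the sum $u_t(x, y) + V_{t+1}(y)$ never takes the indeterminate form $-\infty + \infty$, which is the well-definedness condition the note characterizes.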

    Pareto optimality in multiobjective Markov control processes

    This paper studies discrete-time multiobjective Markov control processes (MCPs) on Borel spaces and with unbounded costs. Under mild assumptions, it shows the existence of Pareto optimal control policies, which are also characterized as optimal policies for a certain class of single-objective (or "scalar") MCPs. A similar result is obtained for strong Pareto optimal policies, which are Pareto optimal policies whose cost vector is the closest, in the Euclidean norm, to the virtual minimum. To obtain these results, the basic idea is to transform the multiobjective MCP into an equivalent multiobjective measure problem (MMP). In addition, the MMP is restated as a primal multiobjective linear program, and it is shown that solving the scalarized MCPs is in fact the same as solving the dual of the MMP. A multiobjective LQ example illustrates the main results.
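    The scalarization idea mentioned in this abstract — that Pareto optimal policies can be found by minimizing a positively weighted sum of the objectives — can be sketched on a toy finite problem. This is a minimal illustration, not the paper's measure-theoretic construction; the data and function names are hypothetical.

    ```python
    def is_pareto_optimal(costs, i):
        """True if no other cost vector dominates costs[i], i.e. no vector
        is <= in every coordinate and strictly < in at least one."""
        ci = costs[i]
        for j, cj in enumerate(costs):
            if j != i and cj != ci and all(a <= b for a, b in zip(cj, ci)):
                return False
        return True

    def scalarize(costs, weights):
        """Index of the cost vector minimizing the positively weighted sum.
        With strictly positive weights, the minimizer is Pareto optimal."""
        return min(range(len(costs)),
                   key=lambda i: sum(w * c for w, c in zip(weights, costs[i])))

    # Toy cost vectors for four candidate policies; (3.0, 3.0) is dominated.
    costs = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)]
    best = scalarize(costs, (0.5, 0.5))  # picks (2.0, 2.0)
    assert is_pareto_optimal(costs, best)
    ```

    Sweeping over different positive weight vectors traces out (part of) the Pareto frontier, which is the intuition behind characterizing Pareto optimal policies via a family of scalar MCPs.
    
    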

    Fast approximation schemes for multi-criteria combinatorial optimization

    Cover title. Includes bibliographical references (p. 38-44). By Hershel M. Safer and James B. Orlin.