61 research outputs found

    Three Management Policies for a Resource with Partition Constraints

    This paper investigates effective control policies for a bufferless resource that operates under nonhomogeneous dynamic demand. The demand consists of requests of two different types, categorized by the number of resource units required for service. Management of the resource is subject t
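
    As a rough illustration of the kind of system described above, the sketch below simulates a bufferless resource of C units offered Poisson arrivals of two request types that need different numbers of units, admitting a request whenever enough units are free. The capacity, arrival rates, unit requirements, and the complete-sharing admission rule are illustrative assumptions; the three policies studied in the paper are not reproduced here.

```python
# Toy loss-system sketch (illustrative, not the paper's model): a bufferless
# resource with C units serving two request classes that need 1 and 2 units,
# respectively, under a plain complete-sharing admission rule.
import heapq
import random

def simulate(C=10, rates=(5.0, 2.0), units=(1, 2), mu=1.0, horizon=10_000.0, seed=0):
    rng = random.Random(seed)
    free = C                        # units currently available
    arrivals = [0, 0]
    blocked = [0, 0]
    departures = []                 # min-heap of (departure_time, units_to_release)
    nxt = [rng.expovariate(r) for r in rates]   # next Poisson arrival per class
    t = 0.0
    while t < horizon:
        # release every request whose service ends before the next arrival
        while departures and departures[0][0] <= min(nxt):
            _, u = heapq.heappop(departures)
            free += u
        k = 0 if nxt[0] <= nxt[1] else 1        # class of the next arrival
        t = nxt[k]
        arrivals[k] += 1
        if free >= units[k]:                    # complete sharing: admit if units are free
            free -= units[k]
            heapq.heappush(departures, (t + rng.expovariate(mu), units[k]))
        else:
            blocked[k] += 1                     # bufferless: blocked requests are lost
        nxt[k] = t + rng.expovariate(rates[k])
    return [b / max(a, 1) for b, a in zip(blocked, arrivals)]

print(simulate())                   # rough per-class blocking fractions
```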

    Analysis of some simple policies for dynamic resource allocation

    Complexity-performance trade-offs are investigated for dynamic resource allocation in load sharing networks with Erlang-type statistics. The emphasis is on the performance of simple allocation strategies that can be implemented on-line. The resource allocation problem is formulated as a stochastic optimal control problem. Variants of a simple least load routing policy are shown to lead to a fluid-type limit and to be asymptotically optimal. Either finite capacity constraints or migration of load can be incorporated into the setup.

    Three policies, namely optimal repacking, least load routing, and Bernoulli splitting, are examined in more detail. Large deviations principles are established for the three policies in a simple network of three consumer types and two resource locations and are used to identify the network overflow exponents. The overflow exponents for networks with arbitrary topologies are identified for the optimal repacking and Bernoulli splitting policies, and conjectured for the least load routing policy.

    Finally, a process-level large deviations principle is established for Markov processes in Euclidean space with a discontinuity in the transition mechanism along a hyperplane. The transition mechanism of the process is assumed to be continuous on one closed half-space and also continuous on the complementary open half-space. A similar result was recently obtained by Dupuis and Ellis for lattice-valued Markov processes satisfying a mild communication/controllability condition. The proof presented here relies on the work of Blinovskii and Dobrushin, which in turn is based on an earlier work of Dupuis and Ellis.
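
    A minimal sketch of two of the policies named above on a toy system: one Poisson stream of unit-size requests with exponential holding times offered to two resource locations of equal capacity. Least load routing sends each arrival to the currently less loaded location, while Bernoulli splitting flips a fair coin; the capacities, rates, and single-stream topology are illustrative assumptions rather than the network analyzed in the thesis.

```python
# Toy comparison of least load routing vs. Bernoulli splitting on two
# resource locations (illustrative parameters, not the thesis's network).
import random

def run(policy, capacity=20, lam=36.0, mu=1.0, horizon=20_000.0, seed=1):
    rng = random.Random(seed)
    load = [0, 0]                       # requests in service at each location
    arrivals = blocked = 0
    next_arrival = rng.expovariate(lam)
    next_departure = [float("inf"), float("inf")]
    t = 0.0
    while t < horizon:
        t = min(next_arrival, *next_departure)
        if t == next_arrival:
            arrivals += 1
            j = policy(load, rng)
            if load[j] < capacity:
                load[j] += 1
                # exponential holding times: by memorylessness, a location's
                # pooled departure clock can be resampled at rate load[j] * mu
                next_departure[j] = t + rng.expovariate(load[j] * mu)
            else:
                blocked += 1            # overflow: the chosen location is full
            next_arrival = t + rng.expovariate(lam)
        else:
            j = next_departure.index(t)
            load[j] -= 1
            next_departure[j] = (t + rng.expovariate(load[j] * mu)
                                 if load[j] else float("inf"))
    return blocked / arrivals

def least_load(load, rng):
    return load.index(min(load))        # route to the currently least-loaded location

def bernoulli(load, rng):
    return rng.randrange(2)             # ignore the load and flip a fair coin

print("least load routing :", run(least_load))
print("Bernoulli splitting:", run(bernoulli))
```

    With the offered load chosen close to the combined capacity, the least load policy should typically block a smaller fraction of requests than blind splitting, reflecting the trade-off between routing complexity and overflow performance discussed above.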

    A note on adjusted replicator dynamics in iterated games

    We establish how a rich collection of evolutionary games can arise as asymptotically exact descriptions of player strategies in iterated games. We consider arbitrary normal-form games that are iteratively played by players that observe their own payoffs after each round. Each player's strategy is assumed to depend only on the past actions and past payoffs of the player. We study a class of autonomous reinforcement-learning rules for such players and show that variants of the adjusted replicator dynamics are asymptotically exact approximations of player strategies for small values of a step-size parameter used in learning. We also obtain a convergence result that identifies when a stable equilibrium of the limit dynamics characterizes equilibrium behavior of player strategies.
    Keywords: Adjusted replicator dynamics; Reinforcement learning; Stochastic approximations
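
    The limit flow referred to above can be written out for a symmetric two-strategy game; a minimal sketch of one standard form of the adjusted replicator dynamics follows, with a hypothetical payoff matrix, starting point, and step size, and with the learning rule itself omitted.

```python
# Euler integration of the (adjusted) replicator dynamics for a symmetric
# 2x2 game: dx_i/dt = x_i * ((A x)_i - x.A.x) / (x.A.x).
# The payoff matrix, initial mix, and step size are illustrative assumptions.
import numpy as np

A = np.array([[3.0, 1.0],        # hypothetical payoff matrix (positive entries
              [2.0, 2.0]])       # keep the mean payoff in the denominator > 0)

def adjusted_replicator_step(x, dt=0.01):
    payoffs = A @ x              # expected payoff of each pure strategy
    mean = x @ payoffs           # population mean payoff
    dx = x * (payoffs - mean) / mean
    x = x + dt * dx
    return x / x.sum()           # renormalise against numerical drift

x = np.array([0.9, 0.1])
for _ in range(5000):
    x = adjusted_replicator_step(x)
print(x)                         # state after integrating the flow
```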