
    Lipschitz Bandits: Regret Lower Bounds and Optimal Algorithms

    We consider stochastic multi-armed bandit problems where the expected reward is a Lipschitz function of the arm, and where the set of arms is either discrete or continuous. For discrete Lipschitz bandits, we derive asymptotic problem-specific lower bounds on the regret of any algorithm, and propose OSLB and CKL-UCB, two algorithms that efficiently exploit the Lipschitz structure of the problem. In fact, we prove that OSLB is asymptotically optimal, as its asymptotic regret matches the lower bound. The regret analysis of our algorithms relies on a new concentration inequality for weighted sums of KL divergences between the empirical distributions of rewards and their true distributions. For continuous Lipschitz bandits, we propose to first discretize the action space and then apply OSLB or CKL-UCB, which provably exploit the structure efficiently. This approach is shown, through numerical experiments, to significantly outperform existing algorithms that deal directly with the continuous set of arms. Finally, the results and algorithms are extended to contextual bandits with similarities. Comment: COLT 201
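    To make the discretize-then-index approach concrete, here is a minimal Python sketch that grids the arm space and runs a standard KL-UCB rule on the resulting discrete bandit. This is not the paper's OSLB or CKL-UCB: the index, the grid size K, and the example reward function are illustrative assumptions only.

```python
import numpy as np

def kl_bernoulli(p, q, eps=1e-12):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def kl_ucb_index(mean, count, t):
    """Largest q with count * KL(mean, q) <= log(t), found by bisection."""
    lo, hi = mean, 1.0
    for _ in range(30):
        mid = (lo + hi) / 2
        if count * kl_bernoulli(mean, mid) <= np.log(max(t, 2)):
            lo = mid
        else:
            hi = mid
    return lo

def discretized_kl_ucb(mean_fn, lipschitz_L, horizon, rng):
    """Grid [0,1] and run KL-UCB on the resulting discrete bandit."""
    # Grid resolution trades discretization error (~ L / K) against the
    # per-arm exploration cost; this cube-root choice is only a heuristic.
    K = max(2, int((lipschitz_L * horizon) ** (1 / 3)))
    arms = np.linspace(0.0, 1.0, K)
    counts, sums, total = np.zeros(K), np.zeros(K), 0.0
    for t in range(1, horizon + 1):
        if t <= K:
            a = t - 1                                    # play each arm once
        else:
            ucbs = [kl_ucb_index(sums[i] / counts[i], counts[i], t)
                    for i in range(K)]
            a = int(np.argmax(ucbs))
        reward = float(rng.random() < mean_fn(arms[a]))  # Bernoulli draw
        counts[a] += 1
        sums[a] += reward
        total += reward
    return total

rng = np.random.default_rng(0)
# Example: a Lipschitz mean-reward function on [0,1] (constant <= 1).
print(discretized_kl_ucb(lambda x: 0.5 + 0.3 * np.sin(3 * x), 1.0, 5000, rng))
```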

    Restless bandit marginal productivity indices I: single-project case and optimal control of a make-to-stock M/G/1 queue

    This paper develops a framework based on convex optimization and economic ideas to formulate and solve, via an index policy, the problem of optimal dynamic effort allocation to a generic discrete-state restless bandit (i.e. binary-action: work/rest) project, elucidating a host of issues raised by Whittle's (1988) seminal work on the topic. Our contributions include: (i) a unifying definition of a project's marginal productivity index (MPI), characterizing optimal policies; (ii) a complete characterization of indexability (existence of the MPI) as satisfaction by the project of the law of diminishing returns (to effort); (iii) sufficient indexability conditions based on partial conservation laws (PCLs), extending previous results of the author from the finite to the countable state case; (iv) application to a semi-Markov project, including a new MPI for a mixed long-run-average (LRA)/bias criterion, which exists in relevant queueing control models where the index proposed by Whittle (1988) does not; and (v) optimal MPI policies for service-controlled make-to-order (MTO) and make-to-stock (MTS) M/G/1 queues with convex backorder and stock-holding cost rates, under discounted and LRA criteria.
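    The MPI generalizes the index Whittle (1988) defined through a subsidy for passivity. As a rough illustration of that definition (not the paper's PCL-based machinery), the sketch below finds, by bisection, the subsidy at which a given state is indifferent between working and resting in a discounted work/rest project. It assumes the project is indexable, and all transition and reward data are made up.

```python
import numpy as np

def q_values(P_act, P_pas, r_act, r_pas, subsidy, beta=0.95, iters=500):
    """Discounted value iteration for a work/rest project in which resting
    earns an extra subsidy (Whittle's relaxation); returns both Q-vectors."""
    V = np.zeros(len(r_act))
    for _ in range(iters):
        q_act = r_act + beta * (P_act @ V)
        q_pas = r_pas + subsidy + beta * (P_pas @ V)
        V = np.maximum(q_act, q_pas)
    return q_act, q_pas

def whittle_index(P_act, P_pas, r_act, r_pas, state, lo=-10.0, hi=10.0):
    """Bisect on the subsidy that makes `state` indifferent between working
    and resting. Assumes the project is indexable."""
    for _ in range(40):
        mid = (lo + hi) / 2
        q_act, q_pas = q_values(P_act, P_pas, r_act, r_pas, mid)
        if q_act[state] > q_pas[state]:
            lo = mid          # subsidy too small: state still prefers work
        else:
            hi = mid
    return (lo + hi) / 2

# Made-up 3-state project: working earns state-dependent rewards.
P_act = np.array([[0.7, 0.2, 0.1], [0.3, 0.5, 0.2], [0.2, 0.3, 0.5]])
P_pas = np.array([[0.9, 0.1, 0.0], [0.2, 0.7, 0.1], [0.1, 0.2, 0.7]])
r_act, r_pas = np.array([1.0, 2.0, 3.0]), np.zeros(3)
print([round(whittle_index(P_act, P_pas, r_act, r_pas, s), 3) for s in range(3)])
```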

    Two-stage index computation for bandits with switching penalties I: switching costs

    This paper addresses the multi-armed bandit problem with switching costs. Asawa and Teneketzis (1996) introduced an index that partly characterizes optimal policies, attaching to each bandit state a "continuation index" (its Gittins index) and a "switching index". They proposed to compute both jointly as the Gittins index of a bandit having 2n states (when the original bandit has n states), which results in an eight-fold increase in the O(n^3) arithmetic operations relative to computing the continuation index alone. This paper presents a more efficient, decoupled computation method, which in a first stage computes the continuation index and then, in a second stage, computes the switching index an order of magnitude faster, in at most n^2 + O(n) arithmetic operations. The paper exploits the fact that the Asawa and Teneketzis index is the Whittle, or marginal productivity, index of a classic bandit with switching costs in its restless reformulation, by deploying work-reward analysis and PCL-indexability methods introduced by the author. A computational study demonstrates the dramatic runtime savings achieved by the new algorithm, the near-optimality of the index policy, and its substantial gains over the benchmark Gittins index policy across a wide range of instances.
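    As an illustration of the two quantities involved (not of the paper's decoupled two-stage algorithm), the brute-force Python sketch below calibrates both indices via Whittle's retirement formulation: the continuation index is the equitable retirement rate (1 - beta) * M at indifference, and charging a lump switching cost upon starting gives one plausible rendering of the switching index. All numbers are made up, and this bisection is far less efficient than the method described above.

```python
import numpy as np

def play_values(P, r, M, beta=0.9, iters=500, start_cost=0.0):
    """Value of playing one step (after paying `start_cost`) in a bandit
    with retirement reward M, then continuing optimally."""
    V = np.full(len(r), M)
    for _ in range(iters):
        V = np.maximum(M, r + beta * (P @ V))   # may retire at any time
    return -start_cost + r + beta * (P @ V)

def index_by_calibration(P, r, state, start_cost=0.0, beta=0.9):
    """Bisect on M until `state` is indifferent between playing and
    retiring; the index is (1 - beta) * M at indifference."""
    lo = (r.min() - start_cost) / (1 - beta)
    hi = r.max() / (1 - beta)
    for _ in range(60):
        M = (lo + hi) / 2
        if play_values(P, r, M, beta=beta, start_cost=start_cost)[state] > M:
            lo = M            # playing still beats retiring: M too low
        else:
            hi = M
    return (1 - beta) * (lo + hi) / 2

# Made-up 2-state bandit; switching in costs 0.5.
P = np.array([[0.6, 0.4], [0.3, 0.7]])
r = np.array([1.0, 2.0])
continuation = [index_by_calibration(P, r, s) for s in range(2)]
switching = [index_by_calibration(P, r, s, start_cost=0.5) for s in range(2)]
print("continuation:", continuation, "switching:", switching)
```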

    Restless Bandits, Linear Programming Relaxations and a Primal-Dual Heuristic

    We propose a mathematical programming approach to the classical PSPACE-hard problem of n restless bandits in stochastic optimization. We introduce a series of n increasingly stronger linear programming relaxations, the last of which is exact and corresponds to the formulation of the problem as a Markov decision process of exponential size, while the other relaxations provide bounds and are efficiently solvable. We also propose a heuristic for solving the problem that arises naturally from the first of these relaxations and uses indices computed from optimal dual variables of that relaxation. In this way we obtain both a policy and a suboptimality guarantee. We report computational results suggesting that the value of the proposed heuristic policy is extremely close to the optimal value. Moreover, the second-order relaxation provides strong bounds on the optimal solution value.
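    A sketch of what the first-order relaxation can look like in occupation-measure form: per-project flow-balance constraints plus a single coupling constraint limiting the discounted activity across projects. The instance data, variable layout, and the use of scipy's linprog are illustrative assumptions, not the paper's formulation verbatim.

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: N restless projects, S states each, actions {0: rest, 1: work}.
rng = np.random.default_rng(1)
N, S, beta, m = 3, 4, 0.9, 1             # exactly m projects active per period

def random_stochastic(S, rng):
    M = rng.random((S, S))
    return M / M.sum(axis=1, keepdims=True)

P = {a: [random_stochastic(S, rng) for _ in range(N)] for a in (0, 1)}
r = {0: [np.zeros(S) for _ in range(N)],       # resting pays nothing
     1: [rng.random(S) for _ in range(N)]}     # working pays a state reward
alpha = np.full(S, 1.0 / S)                    # initial distribution, per project

def var(n, i, a):                              # flatten (project, state, action)
    return (n * S + i) * 2 + a

nvar = N * S * 2
c = np.zeros(nvar)
for n in range(N):
    for i in range(S):
        for a in (0, 1):
            c[var(n, i, a)] = -r[a][n][i]      # linprog minimizes, so negate

A_eq, b_eq = [], []
for n in range(N):                             # occupation-measure balance
    for j in range(S):
        row = np.zeros(nvar)
        for a in (0, 1):
            row[var(n, j, a)] += 1.0
            for i in range(S):
                row[var(n, i, a)] -= beta * P[a][n][i, j]
        A_eq.append(row)
        b_eq.append(alpha[j])
row = np.zeros(nvar)                           # relaxed coupling: m projects
for n in range(N):                             # active per period on average
    for i in range(S):
        row[var(n, i, 1)] = 1.0
A_eq.append(row)
b_eq.append(m / (1 - beta))

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=(0, None), method="highs")
print("first-order relaxation bound:", -res.fun)
# res.eqlin.marginals holds optimal duals; the primal-dual heuristic builds
# its priority indices from such dual information (reduced costs).
```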

    Dynamic priority allocation via restless bandit marginal productivity indices

    This paper surveys recent work by the author on the theoretical and algorithmic aspects of restless bandit indexation, as well as on its application to a variety of problems involving the dynamic allocation of priority to multiple stochastic projects. The main aim is to present ideas and methods in an accessible form that can be of use to researchers addressing problems of such a kind. Besides building on the rich literature on bandit problems, our approach draws on ideas from linear programming, economics, and multi-objective optimization. In particular, it was motivated by issues raised in the seminal work of Whittle (Restless bandits: activity allocation in a changing world. In: Gani J. (ed.) A Celebration of Applied Probability, J. Appl. Probab., vol. 25A, Applied Probability Trust, Sheffield, pp. 287-298, 1988), where he introduced the index for restless bandits that is the starting point of this work. Such an index, along with previously proposed indices and more recent extensions, is shown to be unified through the intuitive concept of a "marginal productivity index" (MPI), which measures the marginal productivity of work on a project at each of its states. In a multi-project setting, MPI policies are economically sound, as they dynamically allocate higher priority to those projects where work appears to be currently more productive. Besides being tractable and widely applicable, a growing body of computational evidence indicates that such index policies typically achieve near-optimal performance and substantially outperform benchmark policies derived from conventional approaches. Comment: 7 figures
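    An MPI policy itself is simple to state: at each period, engage the m projects whose current states carry the highest index. The sketch below simulates such a priority rule; the index table here is filled with placeholder numbers, whereas in practice it would come from an MPI computation such as the subsidy calibration sketched earlier.

```python
import numpy as np

rng = np.random.default_rng(2)
N, S, m, horizon = 4, 3, 2, 10_000   # 4 projects, 3 states, 2 worked at a time

def random_stochastic(S, rng):
    M = rng.random((S, S))
    return M / M.sum(axis=1, keepdims=True)

# Made-up dynamics/rewards per action, and a placeholder index table
# index[n][i] standing in for a real MPI.
P = {a: [random_stochastic(S, rng) for _ in range(N)] for a in (0, 1)}
r = {0: [np.zeros(S) for _ in range(N)], 1: [rng.random(S) for _ in range(N)]}
index = [rng.random(S) for _ in range(N)]

states, total = [0] * N, 0.0
for _ in range(horizon):
    prio = np.array([index[n][states[n]] for n in range(N)])
    active = set(np.argsort(prio)[-m:].tolist())     # top-m index projects
    for n in range(N):
        a = 1 if n in active else 0
        total += r[a][n][states[n]]
        states[n] = int(rng.choice(S, p=P[a][n][states[n]]))
print("average reward per period:", total / horizon)
```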
