    Fast two-stage computation of an index policy for multi-armed bandits with setup delays

    We consider the multi-armed bandit problem with switching penalties that include both setup delays and setup costs, extending the author's earlier results for the special case with no switching delays. A priority index for projects with setup delays that partly characterizes optimal policies was introduced by Asawa and Teneketzis in 1996, but no means of computing it was given. We present a fast two-stage index-computing method: the first stage computes the continuation index (which applies when the project is already set up) together with certain auxiliary quantities at cubic arithmetic-operation complexity in the number of project states; the second stage then computes the switching index (which applies when the project is not set up) at quadratic complexity. The approach rests on new methodological advances in restless bandit indexation, introduced and deployed herein, which are motivated by the limitations of previous results and exploit the fact that the aforementioned index is the Whittle index of the project in its restless reformulation. A numerical study demonstrates substantial runtime speed-ups of the new two-stage index algorithm over a general one-stage Whittle index algorithm, and further gives evidence that, in a multi-project setting, the index policy is consistently nearly optimal.
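    To make the notion of a Whittle index concrete, the following is a minimal sketch of a generic one-stage computation of the kind the abstract uses as its baseline, not the paper's two-stage algorithm. It finds, for each state of a small discounted restless project, the passive subsidy at which active and passive actions are equally attractive, by bisection over a subsidy-parametrized value iteration. All matrices, rewards, and the indexability assumption here are illustrative.

    ```python
    import numpy as np

    def whittle_index(P_active, P_passive, r_active, r_passive,
                      beta=0.9, lo=-10.0, hi=10.0, tol=1e-6):
        """Naive Whittle index sketch for a discounted restless project.

        For each state s, bisect on the passive subsidy lam until the
        state is indifferent between acting and resting under the
        lam-subsidy optimal value function.  Assumes indexability.
        """
        n = len(r_active)

        def q_values(lam):
            # Value iteration for the lam-subsidy single-project MDP.
            V = np.zeros(n)
            for _ in range(5000):
                q_a = r_active + beta * (P_active @ V)
                q_p = r_passive + lam + beta * (P_passive @ V)
                V_new = np.maximum(q_a, q_p)
                if np.max(np.abs(V_new - V)) < 1e-10:
                    V = V_new
                    break
                V = V_new
            q_a = r_active + beta * (P_active @ V)
            q_p = r_passive + lam + beta * (P_passive @ V)
            return q_a, q_p

        idx = np.empty(n)
        for s in range(n):
            a, b = lo, hi
            while b - a > tol:
                mid = 0.5 * (a + b)
                q_a, q_p = q_values(mid)
                if q_a[s] > q_p[s]:
                    a = mid   # subsidy too small: state still prefers active
                else:
                    b = mid   # subsidy large enough: state prefers passive
            idx[s] = 0.5 * (a + b)
        return idx
    ```

    Each bisection step re-solves the subsidy MDP, which is exactly the inefficiency the paper's two-stage method is designed to avoid.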

    Coordinated Multi-Agent Patrolling with History-Dependent Cost Rates -- Asymptotically Optimal Policies for Large-Scale Systems

    We study a large-scale patrol problem with history-dependent costs and multi-agent coordination, relaxing assumptions made in past patrol studies such as identical agents, submodular reward functions, and the ability to visit any location at any time. Given the complexity and uncertainty of practical patrolling situations, we model the problem as a discrete-time Markov decision process (MDP) consisting of a large number of parallel restless bandit processes, and aim to minimize the cumulative patrolling cost over a finite time horizon. The problem has an excessively large state space, whose size increases exponentially in the number of agents and in the size of the patrolled geographical region. We extend the Whittle relaxation and Lagrangian dynamic programming (DP) techniques to the patrolling case, where additional, non-trivial constraints needed to track the trajectories of all agents are inevitable and significantly complicate the analysis. Past results cannot ensure the existence of patrol policies with theoretically bounded performance degradation. We propose a patrol policy that is applicable and scalable to the large, complex problem described above. By invoking Freidlin's theorem, we prove that the performance deviation between the proposed policy and optimality diminishes exponentially in the problem size.

    Comment: 37 pages, 4 figures
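    Both abstracts ultimately deploy the same heuristic: at each decision epoch, every project (or patrol location) reports the index of its current state, and the limited activation budget goes to the highest indices. A minimal sketch of that dispatch rule, with illustrative function and variable names not taken from either paper:

    ```python
    import numpy as np

    def index_policy_action(indices_per_project, states, m):
        """Index-policy dispatch sketch.

        indices_per_project[p][s] is the (precomputed) index of project p
        in state s; states[p] is project p's current state; m is the
        activation budget.  Returns the m projects with the largest
        current indices (ties broken by argsort order, an assumption).
        """
        current = np.array([indices_per_project[p][s]
                            for p, s in enumerate(states)])
        # Activate the m projects whose current index is largest.
        return np.argsort(-current)[:m]
    ```

    The policy is trivially cheap to evaluate online; the entire computational burden, and the subject of both papers, lies in computing the index tables and in proving that this greedy dispatch is (asymptotically) near-optimal.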