15 research outputs found

    The Feedback Arc Set Problem with Triangle Inequality is a Vertex Cover Problem

    We consider the (precedence-constrained) Minimum Feedback Arc Set problem with triangle inequalities on the weights, which has important applications in ranking with inconsistent information. We present a surprising structural insight showing that the problem is a special case of minimum vertex cover in hypergraphs with edges of size at most 3. This result leads to combinatorial approximation algorithms for the problem and opens the way to studying it as a vertex cover problem.
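
    The vertex-cover view can be made concrete with a small sketch. Below is a minimal, illustrative Python sketch of that viewpoint: hyperedges of size 3 are built from directed triangles (whose arcs any feedback arc set must hit), and a standard local-ratio / primal-dual routine covers them with weight at most 3 times optimal. The input convention (a dict of arc weights) and the restriction to triangles are assumptions made here for illustration; the precise hypergraph construction and the feasibility argument under the triangle inequality are those of the paper and are not reproduced here.

```python
from itertools import permutations

def directed_triangle_hyperedges(weights):
    """Hyperedges of size 3: the arc sets of directed 3-cycles.

    `weights` maps arcs (u, v) to nonnegative weights (hypothetical
    input convention); an arc is present iff it is a key of `weights`.
    """
    vertices = {u for u, _ in weights} | {v for _, v in weights}
    edges = set()
    for a, b, c in permutations(vertices, 3):
        cycle = ((a, b), (b, c), (c, a))
        if all(arc in weights for arc in cycle):
            edges.add(frozenset(cycle))
    return edges

def local_ratio_cover(hyperedges, weights):
    """Cover every hyperedge by at least one arc (weighted vertex cover).

    Paying the cheapest residual weight of an uncovered hyperedge on all
    of its arcs yields a cover of weight at most 3 * OPT, since every
    hyperedge has at most 3 arcs.
    """
    residual = dict(weights)
    cover = set()
    for edge in hyperedges:
        if edge & cover:                      # already hit by a chosen arc
            continue
        eps = min(residual[arc] for arc in edge)
        for arc in edge:
            residual[arc] -= eps
            if residual[arc] == 0:
                cover.add(arc)
    return cover
```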

    How the structure of precedence constraints may change the complexity class of scheduling problems

    This survey aims to demonstrate that the structure of precedence constraints plays a tremendous role in the complexity of scheduling problems. Indeed, many problems are NP-hard under general precedence constraints, yet become polynomially solvable for particular classes of precedence constraints. We also show that there are still many exciting challenges in this research area.
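
    As one concrete illustration of this phenomenon (chosen here for exposition, not taken from the survey): scheduling unit-length jobs with arbitrary precedence constraints on identical machines is NP-hard, yet when the precedence graph is an in-tree, Hu's highest-level-first rule solves it optimally in polynomial time. A minimal Python sketch of that rule, with hypothetical input conventions:

```python
from collections import defaultdict

def hu_schedule(successor, num_machines):
    """Hu's level algorithm for P | in-tree prec, p_j = 1 | Cmax.

    `successor[j]` is the single immediate successor of job j (or None
    for a root of the in-tree).  Returns {job: start_time}.  Highest-
    level-first list scheduling is optimal for in-tree precedence with
    unit jobs, whereas the problem with arbitrary precedence constraints
    is NP-hard.
    """
    jobs = list(successor)

    # level(j) = number of jobs on the path from j to its root
    level = {}
    def compute_level(j):
        if j not in level:
            s = successor[j]
            level[j] = 1 if s is None else 1 + compute_level(s)
        return level[j]
    for j in jobs:
        compute_level(j)

    # a job becomes ready once all its predecessors have finished
    remaining_preds = defaultdict(int)
    for j in jobs:
        if successor[j] is not None:
            remaining_preds[successor[j]] += 1

    start = {}
    ready = [j for j in jobs if remaining_preds[j] == 0]
    t = 0
    while ready:
        ready.sort(key=lambda j: -level[j])       # highest level first
        batch, ready = ready[:num_machines], ready[num_machines:]
        for j in batch:
            start[j] = t                          # job j runs in [t, t+1)
            s = successor[j]
            if s is not None:
                remaining_preds[s] -= 1
                if remaining_preds[s] == 0:
                    ready.append(s)
        t += 1
    return start
```

    For instance, hu_schedule({1: 3, 2: 3, 3: None}, num_machines=2) starts jobs 1 and 2 at time 0 and job 3 at time 1, giving the optimal makespan 2.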

    Finding the bandit in a graph: Sequential search-and-stop

    We consider the problem where an agent wants to find a hidden object that is randomly located at some vertex of a directed acyclic graph (DAG) according to a fixed but possibly unknown distribution. The agent can only examine vertices whose in-neighbors have already been examined. In this paper, we address a learning setting where we allow the agent to stop before having found the object and restart searching on a new, independent instance of the same problem. Our goal is to maximize the total number of hidden objects found given a time budget. The agent can thus skip an instance after realizing that it would spend too much time on it. Our contributions are to both search theory and multi-armed bandits. If the distribution is known, we provide a quasi-optimal and efficient stationary strategy. If the distribution is unknown, we additionally show how to sequentially approximate it and, at the same time, act near-optimally in order to collect as many hidden objects as possible. (Comment: in International Conference on Artificial Intelligence and Statistics (AISTATS 2019), April 2019, Naha, Okinawa, Japan.)
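
    To make the setting concrete, here is a small Python sketch of a single search instance with a naive policy: among currently examinable vertices, greedily open the most likely one, and abandon the instance once the best available probability drops below a threshold. The data layout, the greedy rule, and the threshold stop are all assumptions made for illustration; they are a plausible baseline, not the quasi-optimal stationary strategy or the bandit algorithm of the paper.

```python
import random

def greedy_search_and_stop(in_neighbors, prob, stop_threshold, rng=random):
    """Run one search instance on a DAG with a simple greedy stop rule.

    `in_neighbors[v]` lists the vertices that must be examined before v
    (an empty list for sources), and `prob[v]` is the known probability
    that the object sits at v.  Returns (steps_spent, object_found).
    """
    # draw the hidden location according to the known distribution
    vertices = list(prob)
    hidden = rng.choices(vertices, weights=[prob[v] for v in vertices])[0]

    examined = set()
    steps = 0
    while True:
        available = [v for v in vertices
                     if v not in examined
                     and all(u in examined for u in in_neighbors[v])]
        if not available:
            return steps, False
        best = max(available, key=lambda v: prob[v])
        if prob[best] < stop_threshold:
            return steps, False            # skip the instance, save budget
        examined.add(best)
        steps += 1
        if best == hidden:
            return steps, True             # object found
```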
