
    A characterization of 2-player mechanisms for scheduling

    We study the mechanism design problem of scheduling unrelated machines and completely characterize the decisive truthful mechanisms for two players when the domain contains both positive and negative values. We show that the class of truthful mechanisms is very limited: a decisive truthful mechanism partitions the tasks into groups so that the tasks in each group are allocated independently of the other groups. Tasks in a group of size at least two are allocated by an affine minimizer, and tasks in singleton groups by a task-independent mechanism. This characterization covers all truthful mechanisms, including those with unbounded approximation ratio. A direct consequence is that the approximation ratio of mechanisms for two players is 2, even for two tasks. In fact, it follows that for two players, VCG is the unique mechanism with the optimal approximation ratio of 2. This characterization provides some support for the conjecture that any decisive truthful mechanism (for 3 or more players) partitions the tasks into groups, some of which are allocated by affine minimizers, while the rest are allocated by a threshold mechanism (in which a task is allocated to a player when its value is below a threshold that depends only on the values of the other players). We also show that the class of threshold mechanisms is identical to the class of additive mechanisms.
    Comment: 20 pages, 4 figures, ESA'0
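
    As a concrete illustration of the kind of mechanism the abstract discusses, the following is a minimal sketch (not taken from the paper) of per-task VCG for two players: each task goes to the player with the smaller reported cost, and the winner is paid the other player's cost, so the other player's value acts as the threshold. Function and variable names are ours.

        def vcg_two_players(costs1, costs2):
            """costs1[j], costs2[j]: reported costs of players 1 and 2 for task j."""
            allocation, payments = [], [0.0, 0.0]
            for c1, c2 in zip(costs1, costs2):
                if c1 <= c2:                  # player 1 is below player 2's threshold
                    allocation.append(1)
                    payments[0] += c2         # winner is paid the other player's cost
                else:
                    allocation.append(2)
                    payments[1] += c1
            return allocation, payments

        # Example: costs (1, 5) vs (4, 2): task 1 to player 1 (paid 4), task 2 to player 2 (paid 5).
        print(vcg_two_players([1, 5], [4, 2]))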

    Welfare Maximization and Truthfulness in Mechanism Design with Ordinal Preferences

    We study mechanism design problems in the {\em ordinal setting}, wherein the preferences of agents are described by orderings over outcomes rather than by specific numerical values associated with them. This setting is relevant when agents can compare outcomes but cannot evaluate precise utilities for them. Such situations arise in diverse contexts, including voting and matching markets. Our paper addresses two issues that arise in ordinal mechanism design. First, to design social-welfare-maximizing mechanisms, one needs a quantitative measure of the welfare of an outcome, which is not obvious in the ordinal setting. Second, since the impossibility results of Gibbard and Satterthwaite~\cite{Gibbard73,Satterthwaite75} force one to move to randomized mechanisms, one needs a more nuanced notion of truthfulness. We propose {\em rank approximation} as a metric for measuring the quality of an outcome, which allows us to evaluate mechanisms based on worst-case performance, and {\em lex-truthfulness} as a notion of truthfulness for randomized ordinal mechanisms. Lex-truthfulness is stronger than notions studied in the literature, yet flexible enough to admit a rich class of mechanisms {\em circumventing classical impossibility results}. We demonstrate the usefulness of these notions by devising lex-truthful mechanisms that achieve good rank-approximation factors, both in the general ordinal setting and in structured settings such as {\em (one-sided) matching markets} and their generalizations, {\em matroid} and {\em scheduling} markets.
    Comment: Some typos correcte
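
    To make the ordinal setting concrete, here is a minimal sketch of scoring outcomes by ranks alone: agents report only orderings, and an outcome is scored by the sum of the positions it occupies. This is an illustrative rank-based score of our own construction, not the paper's exact rank-approximation measure.

        def rank_score(outcome, preferences):
            """preferences: list of orderings, each a list of outcomes, best first."""
            return sum(order.index(outcome) for order in preferences)

        def best_by_rank(outcomes, preferences):
            # outcome with the smallest total rank (lower is better)
            return min(outcomes, key=lambda o: rank_score(o, preferences))

        prefs = [["a", "b", "c"], ["b", "a", "c"], ["a", "c", "b"]]
        print(best_by_rank(["a", "b", "c"], prefs))   # "a" minimizes the total rank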

    Interdependent Scheduling Games

    We propose a model of interdependent scheduling games in which each player controls a set of services that they schedule independently. A player is free to schedule his own services at any time; however, each of these services only begins to accrue reward for the player when all predecessor services, which may or may not be controlled by the same player, have been activated. This model, where players have interdependent services, is motivated by the problems faced in planning and coordinating large-scale infrastructures, e.g., restoring electricity and gas to residents after a natural disaster or providing medical care in a crisis when different agencies are responsible for the delivery of staff, equipment, and medicine. We undertake a game-theoretic analysis of this setting and in particular consider the issues of welfare maximization, computing best responses, Nash dynamics, and existence and computation of Nash equilibria.Comment: Accepted to IJCAI 201
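
    The reward structure described above can be sketched as follows: a service starts paying off only after it and all of its (direct) predecessor services have been activated, regardless of who owns them. This is a simplified illustration with names of our choosing, not code from the paper; the split of services among players is abstracted away.

        def accrued_reward(activation_time, predecessors, reward_rate, horizon):
            """activation_time[s]: time service s is activated by its owner.
            predecessors[s]: services that must be active before s earns reward.
            reward_rate[s]: reward per unit of time once s is effectively active."""
            total = 0.0
            for s, t in activation_time.items():
                # s becomes productive only when it and its direct predecessors are active
                start = max([t] + [activation_time[p] for p in predecessors.get(s, [])])
                total += reward_rate[s] * max(0.0, horizon - start)
            return total

        times = {"power": 1.0, "water": 2.0, "hospital": 0.0}
        preds = {"hospital": ["power", "water"]}
        rates = {"power": 1.0, "water": 1.0, "hospital": 5.0}
        print(accrued_reward(times, preds, rates, horizon=10.0))  # hospital only pays off from t=2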

    Designing Network Protocols for Good Equilibria

    Designing and deploying a network protocol determines the rules by which end users interact with each other and with the network. We consider the problem of designing a protocol to optimize the equilibrium behavior of a network with selfish users. We consider network cost-sharing games, where the set of Nash equilibria depends fundamentally on the choice of an edge cost-sharing protocol. Previous research focused on the Shapley protocol, in which the cost of each edge is shared equally among its users. We systematically study the design of optimal cost-sharing protocols for undirected and directed graphs, single-sink and multicommodity networks, and different measures of the inefficiency of equilibria. Our primary technical tool is a precise characterization of the cost-sharing protocols that induce only network games with pure-strategy Nash equilibria. We use this characterization to prove, among other results, that the Shapley protocol is optimal in directed graphs and that simple priority protocols are essentially optimal in undirected graphs.
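
    The Shapley protocol mentioned above is simple to state in code: every edge's cost is split evenly among the players whose chosen paths use it. The sketch below, with names of our choosing, computes each player's cost share from the players' chosen paths.

        from collections import defaultdict

        def shapley_shares(edge_cost, chosen_paths):
            """edge_cost: dict edge -> cost; chosen_paths: dict player -> set of edges."""
            users = defaultdict(list)
            for player, path in chosen_paths.items():
                for e in path:
                    users[e].append(player)
            shares = defaultdict(float)
            for e, players_on_e in users.items():
                for p in players_on_e:
                    shares[p] += edge_cost[e] / len(players_on_e)   # equal split per edge
            return dict(shares)

        cost = {("s", "t"): 6.0, ("s", "v"): 1.0, ("v", "t"): 1.0}
        paths = {1: {("s", "t")}, 2: {("s", "t")}}      # both players share the direct edge
        print(shapley_shares(cost, paths))               # each pays 3.0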

    Anticipatory Buffer Control and Quality Selection for Wireless Video Streaming

    As recent studies indicate, video streaming is in high demand among mobile users. In cellular networks, however, the unreliable wireless channel leads to two major problems. Poor channel states degrade video quality and interrupt playback when a user cannot keep its local playout buffer sufficiently filled: buffer underruns occur. In contrast, good channel conditions cause common greedy buffering schemes to pile up very long buffers; such over-buffering wastes expensive wireless channel capacity. To keep buffering in balance, we employ a novel approach. Assuming that data rates can be predicted, we plan the quality and download time of the video segments ahead of time. This anticipatory scheduling avoids buffer underruns by downloading a large number of segments before a channel outage occurs, without wasting wireless capacity through excessive buffering. We formalize this approach as an optimization problem and derive practical heuristics for segmented video streaming protocols (e.g., HLS or MPEG-DASH). Simulation results and testbed measurements show that our solution essentially eliminates playback interruptions without significantly decreasing video quality.
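
    A minimal sketch of the anticipatory idea, under strong simplifying assumptions of our own (fixed segment length, a single quality level for the whole horizon, perfect rate prediction): pick the highest bitrate for which the cumulative predicted capacity by each segment's playback deadline covers the cumulative bits needed, so good slots prefetch enough to bridge a predicted outage. This is not the paper's optimization formulation or heuristic.

        def highest_feasible_bitrate(predicted_rate, bitrates, seg_duration=1.0, startup_slots=2):
            """predicted_rate[t]: predicted throughput in slot t (bits/s); bitrates ascending."""
            capacity = [r * seg_duration for r in predicted_rate]      # bits downloadable per slot
            n = len(capacity)
            for b in sorted(bitrates, reverse=True):
                feasible = True
                for i in range(n):
                    needed = (i + 1) * b * seg_duration                # bits for segments 0..i
                    have = sum(capacity[:min(n, i + startup_slots)])   # prefetchable by segment i's deadline
                    if needed > have:
                        feasible = False
                        break
                if feasible:
                    return b
            return min(bitrates)

        rates = [2e6, 2e6, 0.0, 0.0, 2e6, 2e6]                 # predicted two-slot outage
        print(highest_feasible_bitrate(rates, [0.5e6, 1e6, 2e6]))   # 1e6: prefetching bridges the gap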

    Designing Networks with Good Equilibria under Uncertainty

    We consider the problem of designing network cost-sharing protocols with good equilibria under uncertainty. The underlying game is a multicast game in a rooted undirected graph with nonnegative edge costs. A set of k terminal vertices, or players, need to establish connectivity with the root; the social optimum is the minimum Steiner tree. We are interested in situations where the designer has incomplete information about the input. We propose two different models, the adversarial and the stochastic. In both models, the designer has prior knowledge of the underlying metric, but the requested subset of players is not known and is either activated in an adversarial manner (adversarial model) or drawn from a known probability distribution (stochastic model). In the adversarial model, the designer's goal is to choose a single, universal protocol that has low Price of Anarchy (PoA) for all possible requested subsets of players. The main question we address is: to what extent can prior knowledge of the underlying metric help in the design? We first demonstrate that there exist graphs (outerplanar) where knowledge of the underlying metric can dramatically improve the performance of the designed protocol. Then, in our main technical result, we show that there exist graph metrics for which knowing the underlying metric does not help and any universal protocol has PoA of $\Omega(\log k)$, which is tight. We attack this problem by developing new techniques that employ powerful tools from extremal combinatorics, specifically Ramsey theory in high-dimensional hypercubes. We then switch to the stochastic model, where each player is activated independently. We show that there exists a randomized ordered protocol that achieves constant PoA, and, using standard derandomization techniques, we obtain a deterministic ordered protocol with constant PoA.
    Comment: This version has additional results about stochastic inpu
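
    For intuition, here is a minimal sketch of an ordered (priority) cost-sharing rule of the kind this literature studies: players are ranked by a fixed order and each edge's cost is charged entirely to the highest-priority player using it; a randomized ordered protocol would draw the order at random. Names are ours and the rule is illustrative, not the protocol constructed in the paper.

        def ordered_protocol_shares(edge_cost, chosen_paths, priority):
            """edge_cost: dict edge -> cost; chosen_paths: dict player -> set of edges;
            priority: list of players, earlier entries have higher priority."""
            rank = {p: i for i, p in enumerate(priority)}
            shares = {p: 0.0 for p in chosen_paths}
            for e, c in edge_cost.items():
                users = [p for p, path in chosen_paths.items() if e in path]
                if users:
                    shares[min(users, key=rank.get)] += c   # first player in the order pays the edge
            return shares

        cost = {("r", "u"): 3.0, ("u", "a"): 1.0, ("u", "b"): 1.0}
        paths = {"a": {("r", "u"), ("u", "a")}, "b": {("r", "u"), ("u", "b")}}
        print(ordered_protocol_shares(cost, paths, priority=["a", "b"]))  # a pays 4.0, b pays 1.0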

    Average-case Approximation Ratio of Scheduling without Payments

    Apart from the principles and methodologies inherited from Economics and Game Theory, studies in Algorithmic Mechanism Design typically employ the worst-case analysis and approximation schemes of Theoretical Computer Science. For instance, the approximation ratio, the canonical measure of how well an incentive-compatible mechanism approximately optimizes the objective, is defined in the worst-case sense: it compares the performance of a truthful mechanism against that of the optimal mechanism over all possible inputs. In this paper, we take the average-case analysis approach and tackle one of the primary motivating problems in Algorithmic Mechanism Design -- the scheduling problem [Nisan and Ronen 1999]. A version of this problem that includes a verification component is studied by [Koutsoupias 2014], where it was shown that the problem has a tight approximation ratio bound of (n+1)/2 in the single-task setting, with n the number of machines. We show, however, that when the machines' costs for executing the task are drawn independently from an identical distribution, the average-case approximation ratio of the mechanism given in [Koutsoupias 2014] is upper bounded by a constant. This positive result asymptotically separates the average-case ratio from the worst-case ratio and indicates that the optimal mechanism for the problem actually works well on average, although in the worst case the expected cost of the mechanism is Theta(n) times the optimal cost.
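
    The following Monte Carlo sketch illustrates how such an average-case ratio can be estimated for the single-task setting with i.i.d. costs. The allocation rule used here (probability inversely proportional to the reported cost) is a hypothetical stand-in purely for illustration; it is not the mechanism from [Koutsoupias 2014], and the ratio-of-expected-costs convention is one common choice, not necessarily the paper's.

        import random

        def standin_mechanism_cost(costs):
            # hypothetical stand-in: allocate with probability proportional to 1/cost
            weights = [1.0 / c for c in costs]
            total = sum(weights)
            return sum(c * (w / total) for c, w in zip(costs, weights))  # expected cost of the lottery

        def estimate_avg_ratio(n_machines, trials=20_000, draw=lambda: random.uniform(0.1, 1.0)):
            alg, opt = 0.0, 0.0
            for _ in range(trials):
                costs = [draw() for _ in range(n_machines)]
                alg += standin_mechanism_cost(costs)
                opt += min(costs)                        # optimal single-task cost
            return alg / opt                             # ratio of average costs

        print(estimate_avg_ratio(10))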