21,063 research outputs found

    Multicast Network Design Game on a Ring

    In this paper we study quality measures of different solution concepts for the multicast network design game on a ring topology. We recall from the literature a lower bound of 4/3 and prove a matching upper bound for the price of stability, which is the ratio of the social cost of a best Nash equilibrium to that of an optimum. We thereby answer an open question posed by Fanelli et al. in [12]. We prove an upper bound of 2 for the ratio of the costs of a potential optimizer and of an optimum, provide a lower-bound construction, and give a computer-assisted argument that this lower bound approaches 2 to arbitrary precision. We then turn our attention to players arriving one by one and myopically playing their best response. We provide matching lower and upper bounds of 2 for the myopic sequential price of anarchy (achieved for a worst-case order of arrival of the players). We then initiate the study of the myopic sequential price of stability; for the multicast game on the ring we construct a lower bound of 4/3 and provide an upper bound of 26/19. Finally, we conjecture and argue that the right answer is 4/3. Comment: 12 pages, 4 figures
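    For readers comparing the quality measures above, the ratios can be written out explicitly; the notation below (C for social cost, NE for the set of Nash equilibria, s* for a social optimum) is introduced here for illustration and is not taken from the paper.

        \mathrm{PoS} \;=\; \frac{\min_{s \in NE} C(s)}{C(s^{*})},
        \qquad
        \mathrm{PoA} \;=\; \frac{\max_{s \in NE} C(s)}{C(s^{*})}

    The sequential variants studied above replace NE with the outcomes reachable when players arrive one by one and play myopic best responses, taking the worst arrival order for the sequential price of anarchy and the best order for the sequential price of stability.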

    Collocation Games and Their Application to Distributed Resource Management

    We introduce Collocation Games as the basis of a general framework for modeling, analyzing, and facilitating the interactions between the various stakeholders in distributed systems in general, and in cloud computing environments in particular. Cloud computing enables fixed-capacity (processing, communication, and storage) resources to be offered by infrastructure providers as commodities for sale at a fixed cost in an open marketplace to independent, rational parties (players) interested in setting up their own applications over the Internet. Virtualization technologies enable the partitioning of such fixed-capacity resources so as to allow each player to dynamically acquire appropriate fractions of the resources for unencumbered use. In such a paradigm, the resource management problem reduces to that of partitioning the entire set of applications (players) into subsets, each of which is assigned to fixed-capacity cloud resources. If the infrastructure and the various applications are under a single administrative domain, this partitioning reduces to an optimization problem whose objective is to minimize the overall deployment cost. In a marketplace, in which the infrastructure provider is interested in maximizing its own profit, and in which each player is interested in minimizing its own cost, it should be evident that a global optimization is precisely the wrong framework. Rather, in this paper we use a game-theoretic framework in which the assignment of players to fixed-capacity resources is the outcome of a strategic "Collocation Game". Although we show that determining the existence of an equilibrium for collocation games in general is NP-hard, we present a number of simplified, practically motivated variants of the collocation game for which we establish convergence to a Nash Equilibrium, and for which we derive convergence and price of anarchy bounds. In addition to these analytical results, we present an experimental evaluation of implementations of some of these variants for cloud infrastructures consisting of a collection of multidimensional resources of homogeneous or heterogeneous capacities. Experimental results using trace-driven simulations and synthetically generated datasets corroborate our analytical results and also illustrate how collocation games offer a feasible distributed resource management alternative for autonomic/self-organizing systems, in which the adoption of a global optimization approach (centralized or distributed) would be neither practical nor justifiable. Funding: NSF (CCF-0820138, CSR-0720604, EFRI-0735974, CNS-0524477, CNS-052016, CCR-0635102); Universidad Pontificia Bolivariana; COLCIENCIAS–Instituto Colombiano para el Desarrollo de la Ciencia y la Tecnología "Francisco José de Caldas"
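    As a rough illustration of the kind of dynamics analyzed for the simplified variants, the sketch below runs best-response dynamics on a toy collocation game. The cost model (each player pays an equal share of the fixed cost of the resource it occupies, subject to a hard capacity) is an assumption made here for illustration and is not the paper's exact collocation cost or utility model.

        # Illustrative best-response dynamics for a toy collocation game.
        # Assumptions made here (not the paper's model): each resource has a
        # fixed cost and a hard capacity, players have unit demand, and every
        # player pays an equal share of the fixed cost of its resource.

        def best_response_dynamics(num_players, resources, max_rounds=100):
            """resources: list of (fixed_cost, capacity) pairs."""
            assignment = [0] * num_players          # start everyone on resource 0

            def share(player, r):
                # Cost share for `player` if it sits on resource r.
                occupants = sum(1 for q in range(num_players)
                                if assignment[q] == r and q != player) + 1
                cost, capacity = resources[r]
                return cost / occupants if occupants <= capacity else float("inf")

            for _ in range(max_rounds):
                moved = False
                for p in range(num_players):
                    best = min(range(len(resources)), key=lambda r: share(p, r))
                    if share(p, best) < share(p, assignment[p]):
                        assignment[p] = best       # profitable unilateral deviation
                        moved = True
                if not moved:                      # no player wants to move:
                    break                          # a pure Nash equilibrium
            return assignment

        # Four players, three capacity-2 resources: converges to a stable split.
        print(best_response_dynamics(4, [(10, 2), (6, 2), (6, 2)]))

    If the loop reaches a round with no profitable deviation, the returned assignment is a pure Nash equilibrium of this toy game; the paper's contribution is to characterize when such convergence is guaranteed for its variants and how far the resulting equilibria can be from optimal.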

    Marginal Abatement Cost Curves in General Equilibrium: The Influence of World Energy Prices

    Marginal abatement cost curves (MACCs) are among the most widely used instruments for analyzing the impacts of implementing the Kyoto Protocol and emissions trading. As this paper shows, one important factor that influences MACCs is energy prices. This raises the question of how to define MACCs in a general equilibrium context, where the worldwide abatement level influences energy prices and thus national MACCs. We first discuss the mechanisms theoretically and then use the CGE model DART to quantify the effects. The result is that changes in energy prices resulting from different worldwide abatement levels do indeed affect the national MACCs. We also compare different ways of defining MACCs, some of which turn out to be robust to changes in energy prices while others vary considerably. Keywords: Climate change, Marginal abatement cost, Energy price, Computable general equilibrium model
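    The dependence described above can be sketched with a toy abatement cost function. The quadratic form and the way the energy price enters below are illustrative assumptions made here; they are not the DART model's cost structure.

        # Sketch: deriving a marginal abatement cost curve (MACC) from a toy
        # abatement cost function C(q) = (a + b * energy_price) * q**2, so that
        # MAC(q) = dC/dq = 2 * (a + b * energy_price) * q.  The functional form
        # and parameters are illustrative assumptions, not the DART model.

        def marginal_abatement_cost(abatement, energy_price, a=2.0, b=0.5):
            return 2.0 * (a + b * energy_price) * abatement

        # The same national abatement level maps to a different marginal cost
        # once world energy prices shift, which is the effect quantified in the paper.
        for price in (40.0, 60.0):
            mac_at_half = marginal_abatement_cost(0.5, price)
            print(f"energy price {price}: MAC at 50% abatement = {mac_at_half:.1f}")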

    Towards an understanding of tradeoffs between regional wealth, tightness of a common environmental constraint and the sharing rules

    Consider a country with two regions that have developed differently, so that their current levels of energy efficiency differ. Each region's production involves the emission of pollutants, on which a regulator might impose restrictions. The restrictions can be related to pollution standards that the regulator perceives as binding for the whole country (e.g., imposed by international agreements like the Kyoto Protocol). We observe that the pollution standards define a common constraint upon the joint strategy space of the regions. We propose a game theoretic model with a coupled constraints equilibrium as a solution to the regulator's problem of avoiding excessive pollution. The regulator can direct the regions to implement the solution by using political pressure, or compel them to employ it by using the coupled constraints' Lagrange multipliers as taxation coefficients. We specify a stylised model of the Belgian regions of Flanders and Wallonia that face a joint constraint, for which the regulator wants to develop a sharing rule. We analytically and numerically analyse the equilibrium regional production levels as a function of the pollution standards and of the sharing rules. We thus provide the regulator with an array of equilibria that he (or she) can select for implementation. For the computational results, we use NIRA, which is a piece of software designed to min-maximise the associated Nikaido-Isoda function.
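    The abstract refers to the Nikaido-Isoda function without stating it; its standard form is recalled below, with notation introduced here (phi_i is region i's payoff, x the current joint action, X the coupled constraint set).

        \Psi(x, y) \;=\; \sum_{i} \bigl[\, \phi_i(y_i, x_{-i}) - \phi_i(x_i, x_{-i}) \,\bigr],
        \qquad
        \max_{y \in X} \Psi(x^{*}, y) \;=\; 0

    A joint action x* in X satisfying the second condition is a coupled constraints (normalized) equilibrium: no region gains from a unilateral deviation that keeps the joint constraint satisfied. Finding such a point amounts to the min-maximization of Psi that the NIRA software carries out numerically.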

    Computing Stable Coalitions: Approximation Algorithms for Reward Sharing

    Consider a setting where selfish agents are to be assigned to coalitions or projects from a fixed set P. Each project k is characterized by a valuation function; v_k(S) is the value generated by a set S of agents working on project k. We study the following classic problem in this setting: "how should the agents divide the value that they collectively create?". One traditional approach in cooperative game theory is to study core stability with the implicit assumption that there are infinite copies of one project, and agents can partition themselves into any number of coalitions. In contrast, we consider a model with a finite number of non-identical projects; this makes computing both high-welfare solutions and core payments highly non-trivial. The main contribution of this paper is a black-box mechanism that reduces the problem of computing a near-optimal core stable solution to the purely algorithmic problem of welfare maximization; we apply this to compute an approximately core stable solution that extracts one-fourth of the optimal social welfare for the class of subadditive valuations. We also show much stronger results for several popular sub-classes: anonymous, fractionally subadditive, and submodular valuations, as well as provide new approximation algorithms for welfare maximization with anonymous functions. Finally, we establish a connection between our setting and the well-studied simultaneous auctions with item bidding; we adapt our results to compute approximate pure Nash equilibria for these auctions. Comment: Under Review
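    The core condition referenced above can be made concrete with a brute-force check. The blocking rule below is the classical one (a coalition blocks if some project would give it strictly more value than its current total payment); the paper's precise deviation model and its approximate relaxation may differ, so treat this as an illustrative sketch only.

        from itertools import combinations

        # Brute-force check of the classical core condition for the finite-project
        # setting sketched in the abstract (illustrative; the paper's precise
        # blocking rule and its approximate relaxation may differ).
        # payments[i]   : payoff currently given to agent i
        # valuations[k] : function mapping a frozenset of agents S to v_k(S)

        def is_core_stable(agents, payments, valuations):
            for v_k in valuations.values():
                for size in range(1, len(agents) + 1):
                    for S in combinations(agents, size):
                        # S blocks via project k if working on k alone would create
                        # strictly more value than S is paid in total right now.
                        if v_k(frozenset(S)) > sum(payments[i] for i in S) + 1e-9:
                            return False
            return True

        # Tiny example with two agents and two additive projects (made-up numbers).
        vals = {"k1": lambda S: 3.0 * len(S), "k2": lambda S: 2.0 * len(S)}
        print(is_core_stable(["a", "b"], {"a": 3.0, "b": 3.0}, vals))   # True
        print(is_core_stable(["a", "b"], {"a": 1.0, "b": 1.0}, vals))   # False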

    Uncertainty in Multi-Commodity Routing Networks: When does it help?

    We study the equilibrium behavior in a multi-commodity selfish routing game with many types of uncertain users, where each user over- or under-estimates their congestion costs by a multiplicative factor. Surprisingly, we find that uncertainties in different directions have qualitatively distinct impacts on equilibria. Namely, contrary to the usual notion that uncertainty increases inefficiencies, network congestion actually decreases when users over-estimate their costs. On the other hand, under-estimation of costs leads to increased congestion. We apply these results to urban transportation networks, where drivers have different estimates about the cost of congestion. In light of the dynamic pricing policies aimed at tackling congestion, our results indicate that users' perception of these prices can significantly impact the policy's efficacy, and "caution in the face of uncertainty" leads to favorable network conditions. Comment: Currently under review
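    A minimal two-link, Pigou-style example (a construction introduced here, not one taken from the paper) shows the direction of the effect: over-estimating the congestible link's cost pushes equilibrium flow toward the constant-latency link and lowers true total cost, while under-estimation does the opposite.

        # Minimal two-link (Pigou-style) example of the effect described above;
        # the network and numbers are introduced here, not taken from the paper.
        # Link 1 has latency x (congestible), link 2 has constant latency 0.8,
        # and one unit of traffic must be routed.  Users perceive link-1 latency
        # as beta * x: beta > 1 is over-estimation, beta < 1 under-estimation.

        def equilibrium_flow(beta, constant_latency=0.8):
            # Users equalize *perceived* latencies: beta * x = constant_latency,
            # capped at 1 when the congestible link still looks cheaper to all.
            return min(1.0, constant_latency / beta)

        def social_cost(x, constant_latency=0.8):
            # True total latency experienced by the traffic split (x, 1 - x).
            return x * x + (1.0 - x) * constant_latency

        for beta in (0.5, 1.0, 2.0):
            x = equilibrium_flow(beta)
            print(f"beta={beta}: flow on congestible link={x:.2f}, "
                  f"total cost={social_cost(x):.2f}")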

    Modeling Oligopolistic Price Adjustment in Micro Level Panel Data

    Consumer prices in many markets are persistently dispersed both across retail outlets and over time. While the cross-sectional distribution of prices is stable, individual stores change their position in the distribution over time. It is a challenge to model oligopolistic price adjustment in a way that captures these features of consumer markets. In belief-based models of price adjustment, stores react to expected profits; the expectations are based on the observed vector of market prices in previous periods. In a reinforcement model of price adjustment, a strategy that has proven fruitful in the past is apt to be the strategy relied upon in the present. We collect price data on a homogeneous consumer product in Israel and estimate the structural parameters of the models. We find that the reinforcement model describes the data better than the belief-based models. SUMMARY (Modeling Oligopolistic Price Adjustment in Micro Level Panel Data): Prices for many consumer goods are widely dispersed, both over time and across retail outlets. While the cross-sectional distribution of prices is stable, individual outlets change their position in the distribution over time. Modeling these features of consumer-goods markets is a challenge. In belief learning, the outlets form expectations about competitors' future price-setting behavior, based on the competitors' previous decisions. In reinforcement learning, successful strategies tend to be repeated. Price data on a homogeneous good in Israel are collected and the structural parameters of the models are estimated. Reinforcement learning describes the observed behavior better than belief learning. Keywords: Experiments; Information; Learning
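    One common way to formalize the reinforcement idea described above is a Roth-Erev-style propensity update, sketched below. The update rule, its parameters, and the price grid are generic illustrations chosen here; the paper's estimated structural model may be parameterized differently, and the belief-based alternative would instead best-respond to the empirical distribution of rivals' past prices.

        import random

        # Roth-Erev-style reinforcement sketch for a store choosing among a
        # discrete grid of prices.  The update rule and parameters are generic
        # illustrations; the paper's estimated structural model may differ.

        def choose_price_index(propensities):
            # Prices are chosen with probability proportional to propensity.
            return random.choices(range(len(propensities)), weights=propensities)[0]

        def reinforce(propensities, chosen, profit, recency=0.1):
            # Decay all propensities slightly, then credit the chosen price with
            # the realized profit: strategies that proved fruitful get repeated.
            for i in range(len(propensities)):
                propensities[i] *= (1.0 - recency)
            propensities[chosen] += max(profit, 0.0)
            return propensities

        # One illustrative round on a three-point price grid.
        prices, propensities = [8.0, 9.0, 10.0], [1.0, 1.0, 1.0]
        k = choose_price_index(propensities)
        reinforce(propensities, k, profit=prices[k] - 7.0)   # assumed unit cost 7
        print(prices[k], propensities)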