14 research outputs found

    Cost sharing over combinatorial domains: Complement-free cost functions and beyond

    We study mechanism design for combinatorial cost sharing models. Imagine that multiple items or services are available to be shared among a set of interested agents. The outcome of a mechanism in this setting consists of an assignment, determining for each item the set of players who are granted service, together with respective payments. Although there are several works studying specialized versions of such problems, there has been almost no progress for general combinatorial cost sharing domains until recently [7]. Still, many questions about the interplay between strategyproofness, cost recovery and economic efficiency remain unanswered. The main goal of our work is to further understand this interplay in terms of budget balance and social cost approximation. Towards this, we provide a refinement of cross-monotonicity (which we term trace-monotonicity) that is applicable to iterative mechanisms. The trace here refers to the order in which players become finalized. On top of this, we also provide two parameterizations (complementary to a certain extent) of cost functions which capture the behavior of their average cost-shares. Based on our trace-monotonicity property, we design a scheme of ascending cost sharing mechanisms which is applicable to the combinatorial cost sharing setting with symmetric submodular valuations. Using our first cost function parameterization, we identify conditions under which our mechanism is weakly group-strategyproof, O(1)-budget-balanced and O(Hn)-approximate with respect to the social cost. Further, we show that our mechanism is budget-balanced and Hn-approximate if both the valuations and the cost functions are symmetric submodular; given existing impossibility results, this is best possible. Finally, we consider general valuation functions and exploit our second parameterization to derive a more fine-grained analysis of the Sequential Mechanism introduced by Moulin. This mechanism is budget balanced by construction, but in general only guarantees a poor social cost approximation of n. We identify conditions under which the mechanism achieves improved social cost approximation guarantees. In particular, we derive improved mechanisms for fundamental cost sharing problems, including Vertex Cover and Set Cover.
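
    For context, the cross-monotonicity property that trace-monotonicity refines is the engine behind classical Moulin mechanisms: starting from all players, anyone whose current cost share exceeds their willingness to pay is dropped, and the shares are recomputed until everyone who remains accepts. The sketch below illustrates that classical scheme only; it is not the authors' ascending mechanism, and the cost function, equal-split cost shares, and bids are illustrative assumptions.

        # Minimal sketch of a classical Moulin mechanism driven by a
        # cross-monotonic cost-sharing scheme. It is NOT the paper's ascending
        # mechanism; the cost function and equal-split shares are illustrative.

        def moulin_mechanism(players, bids, cost, shares):
            """Iteratively drop players whose current cost share exceeds their bid."""
            served = set(players)
            while served:
                current = shares(served, cost)
                rejecting = {i for i in served if current[i] > bids[i]}
                if not rejecting:
                    return served, current        # serviced set and payments
                served -= rejecting
            return set(), {}

        # Illustrative submodular cost: a fixed facility cost plus a per-player cost.
        def cost(S):
            return 10 + 2 * len(S) if S else 0

        # Equal-split shares; for this cost function they are cross-monotonic:
        # a player's share can only grow as other players drop out.
        def shares(S, cost):
            return {i: cost(S) / len(S) for i in S}

        bids = {"a": 9.0, "b": 7.0, "c": 3.0}
        served, payments = moulin_mechanism(bids.keys(), bids, cost, shares)
        print(served, payments)                   # c is dropped; a and b each pay 7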

    Approximation algorithms for distributed and selfish agents

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 2005. Includes bibliographical references (p. 157-165). Many real-world systems involve distributed and selfish agents who optimize their own objective function. In these systems, we need to design efficient mechanisms so that the system-wide objective is optimized despite agents acting in their own self-interest. In this thesis, we develop approximation algorithms and decentralized mechanisms for various combinatorial optimization problems in such systems. First, we investigate the distributed caching problem and a general set of assignment problems. We develop an almost tight LP-based ... approximation algorithm and a local search ... approximation algorithm for these problems. We also design efficient decentralized mechanisms for these problems and study the convergence of the corresponding games. In the following chapters, we study the speed of convergence to high-quality solutions on (random) best-response paths of players. First, we study the average social value on best-response paths in basic-utility, market sharing, and cut games. Then, we introduce the sink equilibrium as a new equilibrium concept. We argue that, unlike Nash equilibria, the selfish behavior of players converges to sink equilibria and all strategic games have a sink equilibrium. To illustrate the use of this new concept, we study the social value of sink equilibria in weighted selfish routing (or weighted congestion) games and valid-utility (or submodular-utility) games. In these games, we bound the average social value on random best-response paths for sink equilibria. Finally, we study cross-monotonic cost-sharing schemes and group-strategyproof mechanisms. We study the limitations imposed by the cross-monotonicity property on cost-sharing schemes for several combinatorial optimization games, including set cover and metric facility location. We develop a novel technique based on the probabilistic method for proving upper bounds on the budget-balance factor of cross-monotonic cost-sharing schemes, deriving tight or nearly-tight bounds for these games. At the end, we extend some of these results to group-strategyproof mechanisms. By Vahab S. Mirrokni. Ph.D.
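
    As background for the convergence results above, a random best-response path simply lets a randomly chosen player switch to its best available strategy at each step while the social value of the visited states is tracked. The following minimal sketch shows such a path in a toy singleton congestion game; the game and all parameters are illustrative and not taken from the thesis.

        # Minimal sketch of a random best-response path in a toy singleton
        # congestion game: each player picks one machine and pays the load on
        # that machine. The game and its parameters are illustrative only; the
        # thesis analyzes such paths in richer games (cut, market-sharing,
        # valid-utility, weighted congestion).
        import random

        def best_response_path(num_players=6, num_machines=3, steps=50, seed=0):
            rng = random.Random(seed)
            choice = [rng.randrange(num_machines) for _ in range(num_players)]

            def load(m):
                return sum(1 for c in choice if c == m)

            def social_cost():
                # Each player pays the load of its machine, so the total cost
                # is the sum of squared machine loads.
                return sum(load(m) ** 2 for m in range(num_machines))

            path = [social_cost()]
            for _ in range(steps):
                i = rng.randrange(num_players)            # a random player moves
                costs = [load(m) + (0 if m == choice[i] else 1)
                         for m in range(num_machines)]    # cost of each option
                best = min(range(num_machines), key=lambda m: costs[m])
                if costs[best] < costs[choice[i]]:
                    choice[i] = best                      # strictly improving move
                path.append(social_cost())
            return path

        print(best_response_path())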

    Cost sharing over combinatorial domains

    We study the problem of designing cost-sharing mechanisms for combinatorial domains. Suppose that multiple items or services are available to be shared among a set of interested agents. The outcome of a mechanism in this setting consists of an assignment, determining for each item the set of players who are granted service, together with respective payments. Although there are several works studying specialized versions of such problems, there has been almost no progress for general combinatorial cost-sharing domains until recently [9]. Still, many questions about the interplay between strategyproofness, cost recovery, and economic efficiency remain unanswered. The main goal of our work is to further understand this interplay in terms of budget balance and social cost approximation. Towards this, we provide a refinement of cross-monotonicity (which we term trace-monotonicity) that is applicable to iterative mechanisms. The trace here refers to the order in which players become finalized. On top of this, we also provide two parameterizations (complementary to a certain extent) of cost functions, which capture the behavior of their average cost-shares. Based on our trace-monotonicity property, we design an Iterative Ascending Cost-Sharing Mechanism, which is applicable to the combinatorial cost-sharing setting with symmetric submodular valuations. Using our first cost function parameterization, we identify conditions under which our mechanism is weakly group-strategyproof, O(1)-budget-balanced, and O(Hn)-approximate with respect to the social cost. Furthermore, we show that our mechanism is budget-balanced and Hn-approximate if both the valuations and the cost functions are symmetric submodular; given existing impossibility results, this is best possible. Finally, we consider general valuation functions and exploit our second parameterization to derive a more fine-grained analysis of the Sequential Mechanism introduced by Moulin. This mechanism is budget balanced by construction, but in general, only guarantees a poor social cost approximation of n. We identify conditions under which the mechanism achieves improved social cost approximation guarantees. In particular, we derive improved mechanisms for fundamental cost-sharing problems, including Vertex Cover and Set Cover.
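
    The Sequential Mechanism mentioned above is commonly presented as follows: players are approached in a fixed order, each is offered the marginal cost of serving them on top of the players already accepted, and the payments therefore telescope to the total cost of the serviced set, which is why budget balance holds by construction. Below is a minimal sketch of that standard presentation; the ordering, cost function, and valuations are illustrative assumptions, and the sketch does not reproduce the paper's refined analysis.

        # Minimal sketch of the Sequential Mechanism in its standard
        # single-service presentation: players are approached in a fixed order
        # and each is quoted the marginal cost of serving them on top of the
        # players already accepted. Payments telescope to the cost of the final
        # serviced set, so budget balance holds by construction. The order,
        # cost function, and valuations are illustrative assumptions.
        import math

        def sequential_mechanism(order, values, cost):
            served, payments = [], {}
            for i in order:
                price = cost(served + [i]) - cost(served)   # marginal-cost offer
                if values[i] >= price:
                    served.append(i)
                    payments[i] = price
            return served, payments

        # Illustrative concave (hence submodular) cost: C(S) = 4 + 3*sqrt(|S|).
        def cost(S):
            return 0 if not S else 4 + 3 * math.sqrt(len(S))

        values = {"a": 8.0, "b": 3.0, "c": 2.0}
        served, payments = sequential_mechanism(["a", "b", "c"], values, cost)
        print(served, sum(payments.values()), cost(served))  # payments sum to C(served)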

    Resource Allocation and Survivability in Network Virtualization Environments

    Network virtualization can offer more flexibility and better manageability for the future Internet by allowing multiple heterogeneous virtual networks (VNs) to coexist on a shared infrastructure provider (InP) network. A major challenge in this respect is the VN embedding problem, which deals with the efficient mapping of virtual resources onto InP network resources. Previous research focused on heuristic algorithms for the VN embedding problem assuming that the InP network remains operational at all times. In this thesis, we remove that assumption by formulating the survivable virtual network embedding (SVNE) problem and developing baseline policy heuristics and an efficient hybrid policy heuristic to solve it. The hybrid policy is based on a fast re-routing strategy and utilizes a pre-reserved quota for backup on each physical link. Our evaluation results show that our proposed heuristic for SVNE outperforms baseline heuristics in terms of long-term business profit for the InP, acceptance ratio, bandwidth efficiency, and response time.
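
    To make the backup-quota idea concrete, the following minimal sketch pre-reserves a fraction of each substrate link's bandwidth for backup and, when a link fails, re-routes an affected virtual link over substrate links that still have enough unused backup capacity. It illustrates the general idea only, not the thesis's SVNE formulation; the topology, quota, and demand values are assumptions.

        # Minimal sketch of the backup-quota re-routing idea (not the thesis's
        # SVNE formulation): a fraction of each substrate link's bandwidth is
        # pre-reserved for backup, and a virtual link hit by a failure is
        # re-routed over links with enough unused backup. Data are illustrative.
        from collections import deque

        QUOTA = 0.2                    # fraction of capacity reserved for backup

        capacity = {("A", "B"): 100, ("B", "C"): 100, ("A", "D"): 100, ("D", "C"): 100}
        backup_free = {e: QUOTA * c for e, c in capacity.items()}   # unused backup

        def neighbors(node, demand, failed):
            for (u, v), free in backup_free.items():
                if (u, v) == failed or free < demand:
                    continue
                if u == node:
                    yield v, (u, v)
                elif v == node:
                    yield u, (u, v)

        def reroute(src, dst, demand, failed):
            """BFS for a backup path from src to dst that avoids the failed link."""
            parent = {src: None}
            queue = deque([src])
            while queue:
                node = queue.popleft()
                if node == dst:
                    break
                for nxt, edge in neighbors(node, demand, failed):
                    if nxt not in parent:
                        parent[nxt] = (node, edge)
                        queue.append(nxt)
            if dst not in parent:
                return None                    # no backup path with enough quota
            path, node = [], dst
            while parent[node] is not None:
                prev, edge = parent[node]
                path.append(edge)
                backup_free[edge] -= demand    # consume reserved backup bandwidth
                node = prev
            return list(reversed(path))

        # A virtual link of 10 units was mapped on ("A", "B"); that link fails.
        print(reroute("A", "B", demand=10, failed=("A", "B")))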

    Fault-tolerance and load management in a distributed stream processing system

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2006. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 187-199). Advances in monitoring technology (e.g., sensors) and an increased demand for online information processing have given rise to a new class of applications that require continuous, low-latency processing of large-volume data streams. These "stream processing applications" arise in many areas such as sensor-based environment monitoring, financial services, network monitoring, and military applications. Because traditional database management systems are ill-suited for high-volume, low-latency stream processing, new systems, called stream processing engines (SPEs), have been developed. Furthermore, because stream processing applications are inherently distributed, and because distribution can improve performance and scalability, researchers have also proposed and developed distributed SPEs. In this dissertation, we address two challenges faced by a distributed SPE: (1) fault-tolerant operation in the face of node failures, network failures, and network partitions, and (2) federated load management. For fault tolerance, we present a replication-based scheme, called Delay, Process, and Correct (DPC), that masks most node and network failures. When network partitions occur, DPC addresses the traditional availability-consistency trade-off by maintaining, when possible, a desired availability specified by the application or user, but eventually also delivering the correct results. While maintaining the desired availability bounds, DPC also strives to minimize the number of inaccurate results that must later be corrected. In contrast to previous proposals for fault tolerance in SPEs, DPC simultaneously supports a variety of applications that differ in their preferred trade-off between availability and consistency. For load management, we present a Bounded-Price Mechanism (BPM) that enables autonomous participants to collaboratively handle their load without individually owning the resources necessary for peak operation. BPM is based on contracts that participants negotiate offline. At runtime, participants move load only to partners with whom they have a contract and pay each other the contracted price. We show that BPM provides incentives that foster participation and leads to good system-wide load distribution. In contrast to earlier proposals based on computational economies, BPM is lightweight, enables participants to develop and exploit preferential relationships, and provides stability and predictability. Although motivated by stream processing, BPM is general and can be applied to any federated system. We have implemented both schemes in the Borealis distributed stream processing engine. They will be available with the next release of the system. By Magdalena Balazinska. Ph.D.
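
    To illustrate the contract-based idea behind BPM, the following minimal sketch lets an overloaded participant shed excess load only to partners with whom it holds a pre-negotiated contract, paying the contracted per-unit price, cheapest partner first. This is a toy illustration rather than the Borealis implementation; the capacities, loads, and prices are assumptions.

        # Minimal sketch of the bounded-price contract idea behind BPM (not the
        # Borealis implementation): at runtime an overloaded participant can
        # shed load only to contracted partners, paying the offline-negotiated
        # per-unit price. Capacities, loads, and prices are illustrative.

        class Participant:
            def __init__(self, name, capacity, load):
                self.name, self.capacity, self.load = name, capacity, load
                self.contracts = {}            # partner -> contracted unit price

            def spare(self):
                return max(0.0, self.capacity - self.load)

            def shed_load(self):
                """Move excess load to contracted partners, cheapest first."""
                payments = []
                excess = self.load - self.capacity
                for partner, price in sorted(self.contracts.items(),
                                             key=lambda kv: kv[1]):
                    if excess <= 0:
                        break
                    moved = min(excess, partner.spare())
                    if moved > 0:
                        partner.load += moved
                        self.load -= moved
                        excess -= moved
                        payments.append((partner.name, moved, moved * price))
                return payments

        a = Participant("A", capacity=100, load=140)
        b = Participant("B", capacity=100, load=60)
        c = Participant("C", capacity=100, load=90)
        a.contracts = {b: 1.0, c: 0.8}         # offline-negotiated unit prices
        print(a.shed_load())                   # fills C first (cheaper), then B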

    27th Annual European Symposium on Algorithms: ESA 2019, September 9-11, 2019, Munich/Garching, Germany


    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Group-Strategyproof Cost Sharing for Metric Fault Tolerant Facility Location

    No full text

    Markets, Elections, and Microbes: Data-driven Algorithms from Theory to Practice

    Many modern problems in algorithms and optimization are driven by data, which often carries with it an element of uncertainty. In this work, we conduct an investigation into algorithmic foundations and applications across three main areas. The first area is online matching algorithms for e-commerce applications such as online sales and advertising. The importance of e-commerce in modern business cannot be overstated, and even minor algorithmic improvements can have huge impacts. In online matching problems, we generally have a known offline set of goods or advertisements, while users arrive online and allocations must be made immediately and irrevocably when a user arrives. However, in the real world, there is also uncertainty about a user's true interests, and this can be modeled by considering matching problems in a graph with stochastic edges that only have a probability of existing. These edges can represent the probability of a user purchasing a product or clicking on an ad. Thus, we optimize over data which only provides an estimate of what types of users will arrive and what they will prefer. We survey a broad landscape of problems in this area, gain a deeper understanding of the algorithmic challenges, and present algorithms with improved worst-case performance. The second area is constrained clustering, where we explore classical clustering problems with additional constraints on which data points should be clustered together. Utilizing these constraints is important for many clustering problems because they can be used to ensure fairness, exploit expert advice, or capture natural properties of the data. In the simplest case, this can mean some pairs of points have "must-link" constraints requiring that they be clustered together. Moving into stochastic settings, we can describe more general pairwise constraints, such as bounding the probability that two points are separated into different clusters. This lets us introduce a new notion of fairness for clustering and address stochastic problems such as semi-supervised learning with advice from imperfect experts. Here, we introduce new models of constrained clustering, including new notions of fairness for clustering applications. Since these problems are NP-hard, we give approximation algorithms and in some cases conduct experiments to explore how the algorithms perform in practice. Finally, we look closely at the particular clustering problem of drawing election districts and show how constraining the clusters based on past voting data can interact with voter incentives. The third area is string algorithms for bioinformatics and, specifically, metagenomics, where the data deluge from next-generation sequencing drives the need for new algorithms that are both fast and accurate. For metagenomic analysis, we present a tool for clustering a microbial marker gene, the 16S ribosomal RNA gene. On the more theoretical side, we present a succinct application of the Method of the Four Russians to edit distance computation, as well as new algorithms and bounds for the maximum duo-preservation string mapping (MPSM) problem.
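
    For the online stochastic matching setting described above, a natural baseline is a greedy policy: when a user arrives, probe the still-available goods in decreasing order of expected value (probability times value) and commit to the first probe that succeeds. The sketch below implements only this baseline, not the improved algorithms from the thesis; the instance data are illustrative assumptions.

        # Minimal sketch of a greedy baseline for online stochastic matching
        # (not one of the thesis's improved algorithms): offline goods are
        # known, users arrive online, and each user-good edge exists only with
        # some probability (e.g., the chance of a purchase or a click). The
        # greedy policy probes an arriving user's available neighbors in
        # decreasing order of expected value and stops at the first success.
        import random

        def greedy_online_matching(goods, arrivals, seed=1):
            """goods: {good: value}; arrivals: list of {good: probability} dicts."""
            rng = random.Random(seed)
            available = set(goods)
            matching, total_value = [], 0.0
            for user, edges in enumerate(arrivals):
                candidates = sorted((g for g in edges if g in available),
                                    key=lambda g: edges[g] * goods[g],
                                    reverse=True)
                for g in candidates:
                    if rng.random() < edges[g]:    # probe: does the edge exist?
                        available.discard(g)
                        matching.append((user, g))
                        total_value += goods[g]
                        break                      # match is immediate and final
            return matching, total_value

        goods = {"ad1": 5.0, "ad2": 3.0, "ad3": 1.0}
        arrivals = [{"ad1": 0.3, "ad2": 0.9},
                    {"ad1": 0.5, "ad3": 0.8},
                    {"ad2": 0.7, "ad3": 0.6}]
        print(greedy_online_matching(goods, arrivals))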