
    Facility Location in Evolving Metrics

    Understanding the dynamics of evolving social or infrastructure networks is a challenge in applied areas such as epidemiology, viral marketing, or urban planning. During the past decade, data has been collected on such networks but has yet to be fully analyzed. We propose to use information on the dynamics of the data to find stable partitions of the network into groups. For that purpose, we introduce a time-dependent, dynamic version of the facility location problem that includes a switching cost when a client's assignment changes from one facility to another. This may provide a better representation of an evolving network, emphasizing abrupt changes in the relationships between subjects rather than the continuous evolution of the underlying network. We show that on realistic examples this model indeed yields better-fitting solutions than optimizing every snapshot independently. We present an $O(\log nT)$-approximation algorithm and a matching hardness result, where $n$ is the number of clients and $T$ the number of time steps. We also give another algorithm with approximation ratio $O(\log nT)$ for the variant where one pays at each time step (leasing) for each open facility.
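
    As a concrete illustration of the objective described above, the following Python sketch evaluates a candidate solution of such a dynamic facility location instance: it sums, over all time steps, the facility opening costs and the client-facility connection distances, and adds a switching penalty whenever a client's assigned facility changes between consecutive time steps. The uniform opening and switching costs are assumptions made for illustration, not the exact formulation of the paper.

        def dynamic_fl_cost(distances, assignment, opening_cost, switching_cost):
            """Evaluate a dynamic facility location solution.

            distances[t][j][i]  -- distance from client j to facility i at time step t
            assignment[t][j]    -- facility serving client j at time step t
            opening_cost        -- cost per open facility per time step (assumed uniform)
            switching_cost      -- penalty when a client changes facility (assumed uniform)
            """
            T = len(distances)
            total = 0.0
            for t in range(T):
                open_facilities = set(assignment[t])
                total += opening_cost * len(open_facilities)           # opening costs
                for j, i in enumerate(assignment[t]):
                    total += distances[t][j][i]                        # connection costs
                if t > 0:
                    switches = sum(1 for j in range(len(assignment[t]))
                                   if assignment[t][j] != assignment[t - 1][j])
                    total += switching_cost * switches                 # switching costs
            return total

    Setting switching_cost to 0 recovers the baseline of optimizing every snapshot independently, which the abstract compares against.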

    Enumerating Subgraph Instances Using Map-Reduce

    The theme of this paper is how to find all instances of a given "sample" graph in a larger "data graph," using a single round of map-reduce. For the simplest sample graph, the triangle, we improve upon the best known such algorithm. We then examine the general case, considering both the communication cost between mappers and reducers and the total computation cost at the reducers. To minimize communication cost, we exploit the techniques of (Afrati and Ullman, TKDE 2011) for computing multiway joins (evaluating conjunctive queries) in a single map-reduce round. Several methods are shown for translating sample graphs into a union of conjunctive queries with as few queries as possible. We also address the matter of optimizing computation cost. Many serial algorithms are shown to be "convertible," in the sense that it is possible to partition the data graph, explore each partition in a separate reducer, and have the total computation cost at the reducers be of the same order as the computation cost of the serial algorithm.
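
    The following Python sketch simulates the flavor of a one-round map-reduce triangle enumeration based on vertex partitioning, in the spirit of the multiway-join technique the abstract refers to; it is an illustrative baseline, not the improved algorithm of the paper. Each vertex is hashed to one of b buckets, every edge is replicated to all reducers whose (unordered) bucket triple covers the buckets of its endpoints, and a reducer reports only triangles whose own bucket triple matches its key, so each triangle is output exactly once.

        from collections import defaultdict

        def enumerate_triangles_one_round(edges, b=4):
            """Simulate one map-reduce round of triangle enumeration via vertex partitioning.

            edges : iterable of undirected edges (u, v), no self-loops assumed
            b     : number of hash buckets (controls the replication rate)
            """
            def h(v):
                return hash(v) % b

            # Map phase: send edge (u, v) to every reducer key that could host a
            # triangle containing u and v, i.e. sorted bucket triples (h(u), h(v), w).
            reducer_edges = defaultdict(set)
            for u, v in edges:
                for w in range(b):
                    key = tuple(sorted((h(u), h(v), w)))
                    reducer_edges[key].add(frozenset((u, v)))

            # Reduce phase: enumerate triangles locally; a reducer emits a triangle
            # only if the triangle's bucket triple equals its key (deduplication).
            triangles = set()
            for key, local in reducer_edges.items():
                adj = defaultdict(set)
                for e in local:
                    u, v = tuple(e)
                    adj[u].add(v)
                    adj[v].add(u)
                for u, v in (tuple(e) for e in local):
                    for w in adj[u] & adj[v]:
                        if tuple(sorted(h(x) for x in (u, v, w))) == key:
                            triangles.add(frozenset((u, v, w)))
            return triangles

    Each edge is replicated to at most b reducers, illustrating the communication cost that the paper seeks to minimize.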

    The Power of Verification for Greedy Mechanism Design

    Greedy algorithms are known to provide near-optimal approximation guarantees for Combinatorial Auctions (CAs) with multidimensional bidders, ignoring incentive compatibility. Borodin and Lucier [5], however, proved that truthful greedy-like mechanisms for CAs with multi-minded bidders do not achieve good approximation guarantees. In this work, we seek a deeper understanding of greedy mechanism design and investigate under which general assumptions we can have efficient and truthful greedy mechanisms for CAs. Towards this goal, we use the framework of priority algorithms together with weak and strong verification, where bidders are not allowed to overbid on their winning set or on any subset of this set, respectively. We provide a complete characterization of the power of weak verification, showing that it is sufficient and necessary for any fixed-priority greedy algorithm to become truthful, with or without money, depending on the ordering of the bids. Moreover, we show that strong verification is sufficient and necessary for the greedy algorithm of [20], which is 2-approximate for submodular CAs, to become truthful with money in finite bidding domains. Our proof is based on an interesting structural analysis of the strongly connected components of the declaration graph.
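
    For concreteness, here is a minimal Python sketch of a fixed-priority greedy allocation for a combinatorial auction: bids are considered in one fixed order (here, non-increasing declared value, an illustrative choice) and a bid is granted whenever its set of items is still fully available. It only illustrates what a greedy fixed-priority allocation rule looks like; the specific orderings, the verification rules, and the 2-approximate algorithm of [20] are not reproduced here.

        def greedy_fixed_priority(bids):
            """Greedy allocation for a combinatorial auction.

            bids : list of (bidder_id, item_set, declared_value)
            Bids are processed by a fixed priority (non-increasing declared value);
            a bid wins iff all of its items are still unallocated.
            """
            allocated_items = set()
            winners = {}
            for bidder, items, value in sorted(bids, key=lambda b: -b[2]):
                if bidder in winners:
                    continue                      # at most one winning bid per bidder
                if allocated_items.isdisjoint(items):
                    winners[bidder] = (frozenset(items), value)
                    allocated_items |= set(items)
            return winners

        # Example: three bidders competing for items {a, b, c}.
        example = [(1, {"a", "b"}, 10), (2, {"b", "c"}, 8), (3, {"c"}, 5)]
        print(greedy_fixed_priority(example))     # bidder 1 gets {a, b}, bidder 3 gets {c}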

    Scheduling MapReduce Jobs under Multi-Round Precedences

    We consider non-preemptive scheduling of MapReduce jobs with multiple tasks in the practical scenario where each job requires several map-reduce rounds. We seek to minimize the average weighted completion time and consider scheduling on identical and unrelated parallel processors. For identical processors, we present LP-based O(1)-approximation algorithms. For unrelated processors, the approximation ratio naturally depends on the maximum number of rounds of any job. Since the number of rounds per job in typical MapReduce algorithms is a small constant, our scheduling algorithms achieve a small approximation ratio in practice. For the single-round case, we substantially improve on the previously best known approximation guarantees for both identical and unrelated processors. Moreover, we conduct an experimental analysis and compare the performance of our algorithms against a fast heuristic and a lower bound on the optimal solution, thus demonstrating their promising practical performance.
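
    As a point of reference for the objective, the Python sketch below computes the total weighted completion time of a simple single-round schedule on identical processors, using the classic weighted-shortest-processing-time list-scheduling rule as a fast heuristic. This is an assumed baseline for illustration only; it is not the paper's LP-based algorithm and it ignores the map-before-reduce precedences.

        import heapq

        def wspt_list_schedule(jobs, m):
            """Schedule jobs non-preemptively on m identical processors.

            jobs : list of (weight, processing_time) pairs with positive weights,
                   one task per job (single-round case).
            Returns the total weighted completion time sum_j w_j * C_j.
            """
            # WSPT rule: non-increasing weight-to-processing-time ratio.
            order = sorted(jobs, key=lambda j: j[1] / j[0])
            machines = [0.0] * m                  # current finish time per machine
            heapq.heapify(machines)
            total = 0.0
            for w, p in order:
                start = heapq.heappop(machines)   # least-loaded machine
                finish = start + p
                total += w * finish
                heapq.heappush(machines, finish)
            return total

        # Example: 4 jobs on 2 machines.
        print(wspt_list_schedule([(3, 2.0), (1, 4.0), (2, 1.0), (5, 3.0)], m=2))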

    Bottleneck Routing Games with Low Price of Anarchy

    We study bottleneck routing games, where the social cost is determined by the worst congestion on any edge in the network. In the literature, bottleneck games assume player utility costs determined by the most congested edge in their paths. However, the Nash equilibria of such games are inefficient, since the price of anarchy can be very high and proportional to the size of the network. In order to obtain a smaller price of anarchy we introduce exponential bottleneck games, where the utility costs of the players are exponential functions of their congestions. We find that exponential bottleneck games are very efficient and give a poly-log bound on the price of anarchy: $O(\log L \cdot \log |E|)$, where $L$ is the largest path length in the players' strategy sets and $E$ is the set of edges in the graph. By adjusting the exponential utility costs with a logarithm we obtain games whose player costs are almost identical to those in regular bottleneck games, and at the same time have the good price of anarchy of exponential games.
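
    To make the cost definitions concrete, the Python sketch below computes, for a given strategy profile (one path per player), the congestion of every edge, the bottleneck social cost (worst congestion over all edges), the classical bottleneck player cost (worst congestion on the player's own path), and an exponential player cost, instantiated here as the sum of 2 raised to the congestion over the player's own edges. That last instantiation is an assumption for illustration; the exact exponential cost function used in the paper may differ.

        from collections import Counter

        def congestions(profile):
            """profile: dict player -> list of edges on the chosen path."""
            c = Counter()
            for path in profile.values():
                c.update(path)
            return c

        def social_cost(profile):
            # Bottleneck social cost: worst congestion on any edge.
            return max(congestions(profile).values())

        def bottleneck_player_cost(profile, player):
            # Classical bottleneck cost: worst congestion on the player's own path.
            c = congestions(profile)
            return max(c[e] for e in profile[player])

        def exponential_player_cost(profile, player, base=2):
            # One plausible exponential variant: sum of base**congestion over own edges.
            c = congestions(profile)
            return sum(base ** c[e] for e in profile[player])

        # Two players sharing edge "e2":
        profile = {"p1": ["e1", "e2"], "p2": ["e2", "e3"]}
        print(social_cost(profile))                     # 2
        print(bottleneck_player_cost(profile, "p1"))    # 2
        print(exponential_player_cost(profile, "p1"))   # 2**1 + 2**2 = 6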

    Dynamics of ripple formation on silicon surfaces by ultrashort laser pulses in sub-ablation conditions

    We investigate ultrashort pulsed laser induced surface modification under conditions that result in a superheated melted liquid layer and material evaporation. To describe the surface modification occurring after cooling and resolidification of the melted layer, and to understand the underlying fundamental physical mechanisms, a unified model is presented that accounts for crater and subwavelength ripple formation based on a synergy of electron excitation and capillary wave solidification. The proposed theoretical framework aims to address the laser-material interaction in sub-ablation conditions, and thus minimal mass removal, in combination with a hydrodynamics-based scenario of crater creation and ripple formation following surface irradiation with single and multiple pulses, respectively. The development of the periodic structures is attributed to the interference of the incident wave with a surface plasmon wave. Details of the surface morphology attained are elaborated as a function of the imposed conditions, and results are tested against experimental data.

    The sequential price of anarchy for atomic congestion games

    In situations without central coordination, the price of anarchy relates the quality of any Nash equilibrium to the quality of a global optimum. Instead of assuming that all players choose their actions simultaneously, here we consider games where players choose their actions sequentially. The sequential price of anarchy, recently introduced by Paes Leme, Syrgkanis, and Tardos, then relates the quality of any subgame perfect equilibrium to the quality of a global optimum. The effect of sequential decision making on the quality of equilibria, however, depends on the specific game under consideration. Here we analyze the sequential price of anarchy for atomic congestion games with affine cost functions. We derive several lower and upper bounds, showing that sequential decisions mitigate the worst-case outcomes known for the classical price of anarchy. Next to tight bounds on the sequential price of anarchy, a methodological contribution of our work is, among other things, a "factor revealing" integer linear programming approach that we use to solve the case of three players.
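
    The Python sketch below illustrates, for a small atomic congestion game with affine resource costs, how a subgame perfect outcome can be computed by brute-force backward induction over a fixed playing order: each player picks the strategy minimizing her own final cost, anticipating that all later players do the same. Tie-breaking is arbitrary here, whereas worst-case tie-breaking matters for the exact sequential price of anarchy; this is an illustrative sketch, not the paper's factor-revealing ILP.

        from collections import Counter

        def resource_cost(a, b, load):
            return a * load + b                  # affine per-player cost a*x + b

        def player_cost(choices, strategies, costs, i):
            load = Counter()
            for j, s in enumerate(choices):
                load.update(strategies[j][s])
            return sum(resource_cost(*costs[r], load[r]) for r in strategies[i][choices[i]])

        def subgame_perfect(strategies, costs, chosen=()):
            """strategies[i] = list of strategies (resource lists) of player i;
            costs[r] = (a, b) coefficients of resource r.  Returns a full profile."""
            i = len(chosen)
            if i == len(strategies):
                return chosen
            best = None
            for s in range(len(strategies[i])):
                outcome = subgame_perfect(strategies, costs, chosen + (s,))
                c = player_cost(outcome, strategies, costs, i)
                if best is None or c < best[0]:
                    best = (c, outcome)
            return best[1]

        # Two players, a congestible link with per-player cost 2x and a constant-cost link.
        strategies = [[["fast"], ["slow"]], [["fast"], ["slow"]]]
        costs = {"fast": (2, 0), "slow": (0, 3)}
        print(subgame_perfect(strategies, costs))
        # (0, 1): the first mover takes the cheap link, the second avoids the congestion.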

    Resource Competition on Integral Polymatroids

    We study competitive resource allocation problems in which players distribute their demands integrally over a set of resources, subject to player-specific submodular capacity constraints. Each player has to pay, for each unit of demand, a cost that is a nondecreasing and convex function of the total allocation of that resource. This general model of resource allocation generalizes both singleton congestion games with integer-splittable demands and matroid congestion games with player-specific costs. As our main result, we show that a pure Nash equilibrium is guaranteed to exist in such general resource allocation problems by giving a pseudo-polynomial algorithm that computes one.
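
    The Python sketch below makes the cost structure concrete: a player with an integer-splittable demand places her units on resources whose per-unit cost is a nondecreasing convex function of the total load, using a simple greedy heuristic that repeatedly assigns the next unit wherever the player's cost increases least. This is a simplified singleton-style placement under plain capacity bounds, offered only to illustrate the setting; it is neither the paper's pseudo-polynomial equilibrium computation nor an exact best response, and general polymatroid constraints are not modeled.

        def per_unit_cost(resource, load):
            # Nondecreasing convex per-unit cost of a resource as a function of its
            # total load; a quadratic is used here as an illustrative choice.
            return load ** 2

        def player_payment(my_units, other_load):
            # The player pays, for each of her units on r, the per-unit cost at the
            # total allocation of r (her units plus everyone else's).
            return sum(x * per_unit_cost(r, x + other_load[r])
                       for r, x in my_units.items())

        def greedy_placement(demand, resources, capacity, other_load):
            """Place `demand` units one at a time (demand <= total capacity assumed),
            each time choosing a resource with the smallest cost increase."""
            my_units = {r: 0 for r in resources}
            for _ in range(demand):
                current = player_payment(my_units, other_load)
                best_r, best_delta = None, None
                for r in resources:
                    if my_units[r] >= capacity[r]:
                        continue
                    my_units[r] += 1
                    delta = player_payment(my_units, other_load) - current
                    my_units[r] -= 1
                    if best_delta is None or delta < best_delta:
                        best_r, best_delta = r, delta
                my_units[best_r] += 1
            return my_units

        # Example: demand 3, two resources, one already loaded by other players.
        print(greedy_placement(3, ["r1", "r2"], {"r1": 5, "r2": 5}, {"r1": 2, "r2": 0}))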

    A Survey on Approximation Mechanism Design without Money for Facility Games

    In a facility game, one or more facilities are placed in a metric space to serve a set of selfish agents whose addresses are their private information. In a classical facility game, each agent wants to be as close to a facility as possible, and the cost of an agent can be defined as the distance between her location and the closest facility. In an obnoxious facility game, each agent wants to be far away from all facilities, and her utility is the distance from her location to the facility set. The objective of each agent is to minimize her cost or maximize her utility. An agent may lie if, by doing so, more benefit can be obtained. We are interested in social choice mechanisms that do not utilize payments. The game designer aims at a mechanism that is strategy-proof, in the sense that no agent can benefit by misreporting her address, or, even better, group strategy-proof, in the sense that no coalition of agents can all benefit by lying. Meanwhile, it is desirable for the mechanism to be approximately optimal with respect to a chosen objective function. Several models for such approximation mechanism design without money for facility games have been proposed. In this paper we briefly review these models and related results for both deterministic and randomized mechanisms, and we present a general framework for approximation mechanism design without money for facility games.
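
    A canonical example from this line of work is the classical single-facility game on a line: placing the facility at a median of the reported locations minimizes the social cost (the sum of distances) and is known to be strategy-proof, since an agent can only push the median away from herself by misreporting. A minimal Python sketch:

        def median_mechanism(reported_locations):
            """Single facility on a line: place the facility at a (lower) median.

            Minimizes the sum of distances and is strategy-proof: an agent to the
            left of the median can only move the median further right by
            overreporting, and symmetrically for agents on the right.
            """
            xs = sorted(reported_locations)
            return xs[(len(xs) - 1) // 2]

        def social_cost(locations, facility):
            return sum(abs(x - facility) for x in locations)

        locations = [0.0, 1.0, 4.0, 9.0, 10.0]
        f = median_mechanism(locations)
        print(f, social_cost(locations, f))       # facility at 4.0, social cost 18.0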

    Wear Minimization for Cuckoo Hashing: How Not to Throw a Lot of Eggs into One Basket

    We study wear-leveling techniques for cuckoo hashing, showing that it is possible to achieve a memory wear bound of $\log\log n + O(1)$ after the insertion of $n$ items into a table of size $Cn$, for a suitable constant $C$, using cuckoo hashing. Moreover, we study our cuckoo hashing method empirically, showing that it significantly improves on the memory wear performance of classic cuckoo hashing and linear probing in practice. To appear at the 13th Symposium on Experimental Algorithms (SEA 2014).
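
    For background, the Python sketch below implements textbook cuckoo hashing with two hash functions and tracks, for each table cell, how many times it has been written; the maximum of these counters is the memory wear that the paper's wear-leveling techniques aim to bound. The hash functions and table sizing are illustrative choices and no wear-leveling is applied, so this is only the classic baseline the paper compares against.

        import random

        class CuckooHash:
            """Classic cuckoo hashing with wear counters (writes per cell)."""

            def __init__(self, size):
                self.size = size
                self.table = [None] * size
                self.wear = [0] * size             # number of writes to each cell
                self.salts = (random.randrange(1 << 30), random.randrange(1 << 30))

            def _slots(self, key):
                return tuple(hash((salt, key)) % self.size for salt in self.salts)

            def insert(self, key, max_kicks=200):
                s0, s1 = self._slots(key)
                if self.table[s0] == key or self.table[s1] == key:
                    return True                    # already present
                slot = s0 if self.table[s0] is None else s1
                for _ in range(max_kicks):
                    if self.table[slot] is None:
                        self.table[slot] = key
                        self.wear[slot] += 1
                        return True
                    # Evict the occupant, write the new key, and move the evicted
                    # key to its other candidate slot.
                    key, self.table[slot] = self.table[slot], key
                    self.wear[slot] += 1
                    a, b = self._slots(key)
                    slot = b if slot == a else a
                return False                       # a full implementation would rehash

            def max_wear(self):
                return max(self.wear)

        t = CuckooHash(size=64)                    # table of size Cn for roughly n = 30 keys
        for k in range(30):
            t.insert(k)
        print(t.max_wear())                        # wear of the most-rewritten cell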