
    A note on the data-driven capacity of P2P networks

    We consider two capacity problems in P2P networks. In the first, the nodes have an infinite amount of data to send, and the goal is to allocate their uplink bandwidths optimally so that every peer's demand in terms of receiving data rate is met. We solve this problem through a mapping from a node-weighted graph featuring two labels per node to a max-flow problem on an edge-weighted bipartite graph. In the second problem, the resource allocation is driven by the availability of the data that the peers are interested in sharing: a node cannot allocate its uplink resources unless it first has data to transmit. The uplink bandwidth allocation problem is then equivalent to constructing a set of directed trees in the overlay such that the number of nodes receiving the data is maximized while the uplink capacities of the peers are not exceeded. We show that this problem is NP-complete, and provide a linear programming decomposition that decouples it into a master problem and multiple slave subproblems solvable in polynomial time. We also design a heuristic algorithm to compute a suboptimal solution in reasonable time. The heuristic requires only local knowledge at each node, so it lends itself to distributed implementations. We analyze both problems through a series of simulation experiments featuring different network sizes and densities. On large networks, we compare our heuristic and its variants against a genetic algorithm and show that our heuristic computes the better resource allocation. On smaller networks, we contrast these performances with those of the exact algorithm and show that resource allocations satisfying a large fraction of the peers' demands can be found, even for hard configurations where no resources are in excess.
    Comment: 10 pages, technical report assisting a submission
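    The first problem admits a compact feasibility check once it is cast as a max-flow instance. The sketch below is a reconstruction under the assumption of a fully connected overlay (the paper's exact construction may differ): each peer is split into a sender copy capped by its uplink and a receiver copy capped by its demand, and the demands are satisfiable exactly when the max flow saturates every receiver-side edge.

```python
# Minimal max-flow feasibility sketch for the first capacity problem,
# assuming a fully connected overlay; `uplink` and `demand` stand in for
# the paper's two per-node labels (the names are ours, not the paper's).
import networkx as nx

def demands_feasible(uplink, demand):
    """uplink, demand: dicts mapping peer id -> rate (same units)."""
    G = nx.DiGraph()
    for i, cap in uplink.items():
        G.add_edge("src", ("out", i), capacity=cap)   # sender i's uplink budget
    for j, dem in demand.items():
        G.add_edge(("in", j), "sink", capacity=dem)   # receiver j's target rate
    for i in uplink:
        for j in demand:
            if i != j:                                # a peer does not serve itself
                G.add_edge(("out", i), ("in", j))     # uncapacitated overlay edge
    value, _ = nx.maximum_flow(G, "src", "sink")
    return value >= sum(demand.values()) - 1e-9       # feasible iff every demand is saturated

# Three peers with total uplink 12 against total demand 9: feasible.
print(demands_feasible({1: 5, 2: 4, 3: 3}, {1: 3, 2: 3, 3: 3}))  # True
```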

    CliqueStream: an efficient and fault-resilient live streaming network on a clustered peer-to-peer overlay

    Several overlay-based live multimedia streaming platforms have been proposed in the recent peer-to-peer streaming literature. In most cases, the overlay neighbors are chosen randomly for robustness of the overlay. However, this causes nodes that are distant in the underlying physical network to become neighbors, so data travels unnecessary distances before reaching its destination. For efficient bulk data transmission such as multimedia streaming, the overlay neighborhood should resemble the proximity structure of the underlying network. In this paper, we exploit the proximity and redundancy properties of a recently proposed clique-based clustered overlay network, named eQuus, to build overlays for multimedia stream dissemination that are both efficient and robust. To combine the efficiency of content pushing over tree-structured overlays with the robustness of data-driven mesh overlays, higher-capacity stable nodes are organized in a tree structure to carry the long-haul traffic, while less stable nodes with intermittent presence are organized in localized meshes. The overlay construction and fault-recovery procedures are explained in detail. A simulation study demonstrates the good locality properties of the platform, and our analysis shows that the outage time and control overhead induced by the failure-recovery mechanism are minimal.
    Comment: 10 pages
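    The hybrid structure is straightforward to prototype: stable, high-capacity nodes form the dissemination tree, and the remaining nodes mesh around their nearest stable node. The sketch below is a loose illustration, not eQuus or CliqueStream itself; the stability threshold, fan-out bound, and Euclidean "nearest" notion are all our assumptions.

```python
# Toy construction of a tree-plus-localized-mesh overlay. Thresholds and
# the distance metric are illustrative assumptions, not the paper's values.
from collections import deque

def build_overlay(nodes, stable_threshold=0.8, fanout=4):
    """nodes: list of dicts with 'id', 'stability' in [0, 1], 'coord' = (x, y)."""
    stable = sorted((n for n in nodes if n["stability"] >= stable_threshold),
                    key=lambda n: -n["stability"])
    unstable = [n for n in nodes if n["stability"] < stable_threshold]
    if not stable:
        raise ValueError("need at least one stable node to root the tree")

    # Tree over stable nodes, filled breadth-first with bounded fan-out,
    # so the most stable nodes sit closest to the root.
    tree = {stable[0]["id"]: []}
    frontier = deque([stable[0]])
    for n in stable[1:]:
        parent = frontier[0]
        tree[parent["id"]].append(n["id"])
        tree[n["id"]] = []
        frontier.append(n)
        if len(tree[parent["id"]]) >= fanout:
            frontier.popleft()                 # parent's child slots are exhausted

    # Localized meshes: each unstable node attaches near its closest stable node.
    def d2(a, b):
        return (a["coord"][0] - b["coord"][0]) ** 2 + (a["coord"][1] - b["coord"][1]) ** 2
    mesh = {n["id"]: min(stable, key=lambda s: d2(n, s))["id"] for n in unstable}
    return tree, mesh

peers = [{"id": i, "stability": s, "coord": c} for i, (s, c) in
         enumerate([(0.95, (0, 0)), (0.90, (5, 5)), (0.30, (1, 1)), (0.20, (6, 4))])]
print(build_overlay(peers, fanout=2))  # ({0: [1], 1: []}, {2: 0, 3: 1})
```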

    Overlay networks for smart grids

    Collocation Games and Their Application to Distributed Resource Management

    We introduce Collocation Games as the basis of a general framework for modeling, analyzing, and facilitating the interactions between the various stakeholders in distributed systems in general, and in cloud computing environments in particular. Cloud computing enables fixed-capacity (processing, communication, and storage) resources to be offered by infrastructure providers as commodities for sale at a fixed cost in an open marketplace to independent, rational parties (players) interested in setting up their own applications over the Internet. Virtualization technologies enable the partitioning of such fixed-capacity resources so as to allow each player to dynamically acquire appropriate fractions of the resources for unencumbered use. In such a paradigm, the resource management problem reduces to that of partitioning the entire set of applications (players) into subsets, each of which is assigned to fixed-capacity cloud resources. If the infrastructure and the various applications are under a single administrative domain, this partitioning reduces to an optimization problem whose objective is to minimize the overall deployment cost. In a marketplace, in which the infrastructure provider is interested in maximizing its own profit, and in which each player is interested in minimizing its own cost, it should be evident that a global optimization is precisely the wrong framework. Rather, in this paper we use a game-theoretic framework in which the assignment of players to fixed-capacity resources is the outcome of a strategic "Collocation Game". Although we show that determining the existence of an equilibrium for collocation games in general is NP-hard, we present a number of simplified, practically-motivated variants of the collocation game for which we establish convergence to a Nash Equilibrium, and for which we derive convergence and price of anarchy bounds. In addition to these analytical results, we present an experimental evaluation of implementations of some of these variants for cloud infrastructures consisting of a collection of multidimensional resources of homogeneous or heterogeneous capacities. Experimental results using trace-driven simulations and synthetically generated datasets corroborate our analytical results and also illustrate how collocation games offer a feasible distributed resource management alternative for autonomic/self-organizing systems, in which the adoption of a global optimization approach (centralized or distributed) would be neither practical nor justifiable.
    Funding: NSF (CCF-0820138, CSR-0720604, EFRI-0735974, CNS-0524477, CNS-052016, CCR-0635102); Universidad Pontificia Bolivariana; COLCIENCIAS–Instituto Colombiano para el Desarrollo de la Ciencia y la Tecnología "Francisco José de Caldas"
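    For intuition on how such games reach equilibrium, consider a drastically simplified variant: hosts of capacity C with a fixed price P shared equally among their occupants, and players who relocate greedily whenever that lowers their own share. Equal cost sharing makes this a potential game, so best-response dynamics terminate. Everything in the sketch below (the sharing rule, single-dimensional demands, one spare host per player) is our simplification, not the paper's general formulation.

```python
# Best-response dynamics for a toy collocation game: price P per host,
# shared equally among occupants; capacity C caps total load per host.
# Assumes every individual demand fits in one host (demand <= C).
def best_response(demands, C=10.0, P=1.0, max_rounds=1000):
    """demands: list of per-player loads; returns a host index per player."""
    hosts = list(range(len(demands)))         # one spare host per player: always feasible
    assign = list(range(len(demands)))        # start with each player on its own host
    load = {h: 0.0 for h in hosts}
    occ = {h: 0 for h in hosts}
    for p, d in enumerate(demands):
        load[p] += d
        occ[p] += 1
    for _ in range(max_rounds):
        moved = False
        for p, d in enumerate(demands):
            cur = assign[p]
            cur_cost = P / occ[cur]
            for h in hosts:                   # move wherever the shared price drops
                if h != cur and load[h] + d <= C and P / (occ[h] + 1) < cur_cost:
                    load[cur] -= d; occ[cur] -= 1
                    load[h] += d; occ[h] += 1
                    assign[p] = h
                    cur, cur_cost = h, P / occ[h]
                    moved = True
        if not moved:
            return assign                     # no profitable deviation: a Nash equilibrium
    return assign

print(best_response([3.0, 3.0, 4.0, 6.0]))   # -> [1, 1, 1, 3]: three players share a host
```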

    QuickCast: Fast and Efficient Inter-Datacenter Transfers using Forwarding Tree Cohorts

    Large inter-datacenter transfers are crucial for cloud service efficiency and are increasingly used by organizations that have dedicated wide area networks between datacenters. A recent work uses multicast forwarding trees to reduce the bandwidth needs and improve completion times of point-to-multipoint transfers. Using a single forwarding tree per transfer, however, leads to poor performance because the slowest receiver dictates the completion time for all receivers. Using multiple forwarding trees per transfer alleviates this concern, since the average receiver can finish early; however, if done naively, bandwidth usage also increases, and it is a priori unclear how best to partition receivers, how to construct the multiple trees, and how to determine the rate and schedule of flows on these trees. This paper presents QuickCast, a first solution to these problems. Using simulations on real-world network topologies, we see that QuickCast can speed up the average receiver's completion time by as much as 10× while using only 1.04× more bandwidth; further, the completion time for all receivers also improves by as much as 1.6× at high loads.
    Comment: [Extended Version] Accepted for presentation at IEEE INFOCOM 2018, Honolulu, HI
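    A back-of-the-envelope calculation shows why cohorts help: with one tree, every receiver waits for the slowest member, while a rate-based split lets the fast group finish early. The sketch below is purely illustrative; the contiguous sorted split and the two-group choice are our assumptions, and QuickCast's actual partitioning, tree construction, and scheduling are more involved (and also account for the extra bandwidth).

```python
# Mean completion time of a transfer delivered over one or more trees;
# each tree finishes at the rate of its slowest member. The numbers are
# made-up illustrations, not results from the paper.
def mean_completion(rates, volume, groups):
    """rates: per-receiver bottleneck rates; groups: a partition of `rates`."""
    times = []
    for g in groups:
        t = volume / min(g)                  # slowest member dictates the tree
        times.extend([t] * len(g))
    return sum(times) / len(rates)

rates = [100.0, 90.0, 80.0, 10.0]            # Mb/s, with one straggler
volume = 800.0                               # Mb to deliver to every receiver

print(mean_completion(rates, volume, [rates]))                 # 80.0 s: one tree
print(mean_completion(rates, volume, [rates[:3], rates[3:]]))  # 27.5 s: two cohorts
```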