
    Achieving the Optimal Streaming Capacity and Delay Using Random Regular Digraphs in P2P Networks

    In earlier work, we showed that it is possible to achieve O(log N) streaming delay with high probability in a peer-to-peer network in which each peer has as few as four neighbors, while achieving any arbitrary fraction of the maximum possible streaming rate. However, the constant in the O(log N) delay term becomes rather large as we get closer to the maximum streaming rate. In this paper, we design an alternative pairing and chunk dissemination algorithm that allows us to transmit at the maximum streaming rate while ensuring that all but a negligible fraction of the peers receive the data stream with O(log N) delay with high probability. The result is established by examining the properties of the graph formed by the union of two or more random 1-regular digraphs, i.e., directed graphs in which each node has incoming and outgoing degree both equal to one.
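    The graph model in this abstract can be illustrated with a small, hedged sketch: a random 1-regular digraph on N nodes is just a uniformly random permutation (each peer points to exactly one successor and is pointed to by exactly one predecessor), and the union of d independent permutations gives every node in- and out-degree d. The code below illustrates that graph model only, not the paper's pairing or chunk-dissemination algorithm; the function names and parameters are ours.

import random

def random_one_regular_digraph(n, rng=random):
    # A random 1-regular digraph on nodes 0..n-1: a uniformly random permutation,
    # so every node has exactly one outgoing and one incoming edge.
    # (Self-loops i -> i can occur; a more careful construction could resample them.)
    targets = list(range(n))
    rng.shuffle(targets)
    return [(i, targets[i]) for i in range(n)]

def union_of_random_one_regular_digraphs(n, d, rng=random):
    # Union of d independent random 1-regular digraphs (a multigraph):
    # every node ends up with in-degree d and out-degree d.
    edges = []
    for _ in range(d):
        edges.extend(random_one_regular_digraph(n, rng))
    return edges

if __name__ == "__main__":
    n, d = 1000, 2
    edges = union_of_random_one_regular_digraphs(n, d)
    in_deg, out_deg = [0] * n, [0] * n
    for u, v in edges:
        out_deg[u] += 1
        in_deg[v] += 1
    assert all(x == d for x in in_deg) and all(x == d for x in out_deg)
    print(f"{n} peers, each with in-degree and out-degree {d}")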

    Methods for improving resilience in communication networks and P2P overlays

    Resilience to failures and deliberate attacks is becoming an essential requirement in most communication networks today. This also applies to P2P overlays, which on the one hand are created on top of communication infrastructures, and are therefore equally affected by failures of the underlying infrastructure, but which on the other hand introduce new possibilities such as the creation of arbitrary links within the overlay. In this article, we present a survey of strategies to improve resilience in communication networks as well as in P2P overlay networks. Furthermore, our intention is to point out differences and similarities in the resilience-enhancing measures for both types of networks. By reviewing some basic concepts from graph theory, we show that many concepts for communication networks are based on well-known graph-theoretical problems. In particular, some methods for constructing protection paths in advance of a failure rest on very hard problems; many of them are NP-hard and can only be solved heuristically or on certain topologies. P2P overlay networks evidently benefit from resilience-enhancing strategies in the underlying communication infrastructure, but beyond that, their specific properties call for more sophisticated mechanisms. The dynamic nature of peers requires precautions such as estimating the reliability of peers, storing information redundantly, and provisioning reliable routing.

    Atum: Scalable Group Communication Using Volatile Groups

    This paper presents Atum, a group communication middleware for a large, dynamic, and hostile environment. At the heart of Atum lies the novel concept of volatile groups: small, dynamic groups of nodes, each executing a state machine replication protocol, organized in a flexible overlay. Using volatile groups, Atum scatters faulty nodes evenly among groups, and then masks each individual fault inside its group. To broadcast messages among volatile groups, Atum runs a gossip protocol across the overlay. We report on our synchronous and asynchronous (eventually synchronous) implementations of Atum, as well as on three representative applications that we built on top of it: a publish/subscribe platform, a file sharing service, and a data streaming system. We show that (a) Atum can grow at an exponential rate beyond 1000 nodes and disseminate messages in polylogarithmic time (conveying good scalability); (b) it smoothly copes with 18% of nodes churning every minute; and (c) it is impervious to arbitrary faults, suffering no performance decay despite 5.8% Byzantine nodes in a system of 850 nodes.
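    The polylogarithmic dissemination claim rests on gossip-style forwarding across groups. The sketch below simulates a generic push-gossip round structure, not Atum's actual protocol (group count, fanout, and function names are illustrative), to show why the number of rounds grows roughly like log N.

import math
import random

def push_gossip_rounds(n_groups, fanout=3, seed=0):
    # Generic push gossip: in each round, every informed group forwards the
    # message to `fanout` groups chosen uniformly at random.
    rng = random.Random(seed)
    informed = {0}            # group 0 originates the broadcast
    rounds = 0
    while len(informed) < n_groups:
        newly_reached = set()
        for _ in informed:
            newly_reached.update(rng.randrange(n_groups) for _ in range(fanout))
        informed |= newly_reached
        rounds += 1
    return rounds

if __name__ == "__main__":
    for n in (100, 1000, 10000):
        print(f"{n:6d} groups: {push_gossip_rounds(n)} rounds (log2 N ~ {math.log2(n):.1f})")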

    Fast-Mesh: A Low-Delay High-Bandwidth Mesh for Peer-to-Peer Live Streaming


    Mathematical analysis of scheduling policies in peer-to-peer video streaming networks

    Peer-to-peer networks are self-managed virtual communities, built at the application layer on top of the Internet infrastructure, in which users (called peers) share resources (bandwidth, memory, processing) to achieve a common goal. Video distribution is the most challenging application, given the bandwidth limitations. There are essentially three video services. The simplest is download, where a set of servers holds the original content and users must download it completely before playback. A second service is video on demand, where peers join a virtual network whenever they request a video item and start a progressive online download. The last service is live video, where the video content is generated, distributed, and viewed simultaneously. This thesis studies design aspects of live and on-demand video distribution. It presents a mathematical analysis of the stability and capacity of hybrid, peer-assisted on-demand distribution architectures. Peers start concurrent downloads of multiple contents and disconnect whenever they wish. The expected evolution of the system is predicted with a deterministic fluid model, assuming Poisson arrivals and exponential departures. A sub-model with sequential (non-simultaneous) downloads is globally and structurally stable, independently of the network parameters. Using Little's law, the mean residence time of users in a stationary sequential on-demand system is determined. It is shown theoretically that the hybrid peer-cooperation approach always outperforms pure client-server technology.
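    The Little's-law step mentioned above can be written in generic notation (ours, not necessarily the thesis's): if the stationary sequential on-demand system holds an average of N-bar peers and peers arrive as a Poisson process of rate lambda, the mean residence time follows directly.

% Generic Little's-law relation (notation is illustrative):
%   \bar{N}  -- mean number of peers in the system in steady state
%   \lambda  -- Poisson arrival rate of peers
%   \bar{T}  -- mean residence time of a peer
\[
  \bar{N} = \lambda\,\bar{T}
  \quad\Longrightarrow\quad
  \bar{T} = \frac{\bar{N}}{\lambda},
\]
% so once the fluid model yields the stationary population \bar{N},
% the mean residence time is obtained by dividing by the arrival rate.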

    Algorithms for interactive, distributed and networked systems

    In recent years, massive growth in internet usage has spurred the emergence of complex large-scale networking systems to serve growing user bases, bandwidth, and computation requirements. For example, data center facilities -- the workhorses of today's internet -- have evolved to house upward of several hundred thousand servers; content distribution networks with high capacity and wide coverage have emerged as a de facto content dissemination modality; and peer-to-peer applications with hundreds of thousands of users are increasingly popular. At these scales, it becomes critical to operate at high efficiency, as the price of idling resources can be significant. In particular, the interaction between agents (servers, peers, etc.) is a defining factor of efficiency in these systems -- applications are often communication intensive, whereas agents share links of only limited bandwidth. This necessitates the use of principled algorithms, as efficient communication depends to a large extent on the interaction protocols. We study data center networks and peer-to-peer networks as canonical examples of modern-day large-scale networking systems. Server-to-server interaction is an integral part of a data center's operation, and the latency of these interactions is often a significant bottleneck for overall job completion times. We study complementary approaches to reducing this latency: (i) the design of computation algorithms that minimize interaction, and (ii) optimal scheduling algorithms that maximally utilize the network fabric. We also consider peer-to-peer networks as an emerging mode of content distribution and sharing. Unlike data centers, these networks are flexible in their network structure and also scale well, but they require decentralized algorithms for control. Of central importance here is the design of a network topology that enables efficient peer interactions for optimal application performance. We propose novel topology designs for two popular applications: (i) multimedia streaming and (ii) anonymity in Bitcoin's peer-to-peer network.

    Information Width: A Way for the Second Law to Increase Complexity

    SFI Working Papers contain accounts of scientific work of the author(s) and do not necessarily represent the views of the Santa Fe Institute. We accept papers intended for publication in peer-reviewed journals or proceedings volumes, but not papers that have already appeared in print. Except for papers by our external faculty, papers must be based on work done at SFI, inspired by an invited visit to or collaboration at SFI, or funded by an SFI grant. NOTICE: This working paper is included by permission of the contributing author(s) as a means to ensure timely distribution of scholarly and technical work on a non-commercial basis. Copyright and all rights therein are maintained by the author(s). It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may be reposted only with the explicit permission of the copyright holder.