
    Adaptive Replication in Distributed Content Delivery Networks

    We address the problem of content replication in large distributed content delivery networks composed of a data center assisted by many small servers with limited capabilities, located at the edge of the network. The objective is to optimize the placement of contents on the servers so as to offload the data center as much as possible. We model the system of small servers as a loss network, each loss corresponding to a request forwarded to the data center. Based on large-system/large-storage asymptotics, we obtain an asymptotic formula for the optimal replication of contents and propose adaptive schemes related to those encountered in cache networks, but reacting here to loss events, as well as faster algorithms that generate virtual events at a higher rate while keeping the same target replication. We show through simulations that our adaptive schemes significantly outperform standard replication strategies, both in terms of loss rates and adaptation speed.
    Comment: 10 pages, 5 figures
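    As a rough illustration of the loss-driven adaptation idea described above, the following Python sketch simulates edge servers holding replica slots and reacts to each loss (a request that finds all replicas of a content busy) by shifting one replica slot toward the lost content. The popularity law, the slot-stealing rule, and all parameter values are illustrative assumptions, not the paper's asymptotically optimal scheme.

```python
import random
from collections import defaultdict

N_CONTENTS, TOTAL_SLOTS, STEPS = 50, 200, 100_000
popularity = [1.0 / (k + 1) for k in range(N_CONTENTS)]   # Zipf(1), assumed
total_pop = sum(popularity)
replicas = defaultdict(lambda: TOTAL_SLOTS // N_CONTENTS) # uniform start
busy = defaultdict(int)
losses = 0

def draw_content():
    """Sample a requested content according to the popularity law."""
    r = random.uniform(0, total_pop)
    for c, p in enumerate(popularity):
        r -= p
        if r <= 0:
            return c
    return N_CONTENTS - 1

for _ in range(STEPS):
    c = draw_content()                     # one request arrival
    if busy[c] < replicas[c]:
        busy[c] += 1                       # served at the edge
    else:
        losses += 1                        # loss: served by the data center
        # adaptation on loss: steal an idle replica slot from another content
        donors = [d for d in range(N_CONTENTS)
                  if d != c and replicas[d] > max(busy[d], 1)]
        if donors:
            d = random.choice(donors)
            replicas[d] -= 1
            replicas[c] += 1
    # crude service completions: each busy replica finishes w.p. 0.1 per step
    for d in list(busy):
        busy[d] = sum(1 for _ in range(busy[d]) if random.random() > 0.1)

print(f"loss rate over the run ≈ {losses / STEPS:.3f}")
```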

    Cloud-based Content Distribution on a Budget

    To leverage the elastic nature of cloud computing, a solution provider must be able to accurately gauge demand for its offering. For applications that involve swarm-to-cloud interactions, gauging such demand is not straightforward. In this paper, we propose a general framework, analyze a mathematical model, and present a prototype implementation of a canonical swarm-to-cloud application, namely peer-assisted content delivery. Our system – called Cyclops – dynamically adjusts the off-cloud bandwidth consumed by content servers (which represents the bulk of the provider's cost) to feed a set of swarming clients, based on a feedback signal that gauges the real-time health of the swarm. Our extensive evaluation of Cyclops in a variety of settings – including controlled PlanetLab and live Internet experiments involving thousands of users – shows a significant reduction in content distribution costs (by as much as two orders of magnitude) compared to non-feedback-based swarming solutions, with minor impact on content delivery times.
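    The feedback loop at the heart of such a design can be sketched as below: the provider measures a swarm-health signal and adjusts the committed server bandwidth to keep it near a setpoint. The proportional update rule, the particular health signal, and all gains and bounds are illustrative assumptions, not Cyclops' actual controller.

```python
# One step of a proportional controller on a swarm-health signal
# (fraction of clients meeting a target download rate) -- an assumed
# signal and update rule, for illustration only.
def adjust_server_bandwidth(current_bw, healthy_fraction,
                            setpoint=0.95, gain=0.5,
                            min_bw=0.0, max_bw=1_000.0):
    """Return the off-cloud bandwidth (Mbit/s) for the next epoch.

    current_bw       -- server upload bandwidth currently committed
    healthy_fraction -- fraction of swarm clients at/above target rate
    """
    error = setpoint - healthy_fraction          # > 0: swarm is starving
    new_bw = current_bw * (1.0 + gain * error)   # multiplicative update
    return max(min_bw, min(max_bw, new_bw))

# Example: as the swarm gets healthier, the server backs off and saves cost.
bw = 100.0
for healthy in (0.70, 0.90, 0.99, 1.00):
    bw = adjust_server_bandwidth(bw, healthy)
    print(f"health={healthy:.2f} -> server bandwidth {bw:.1f} Mbit/s")
```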

    Fluid Approximations for Stochastic Models in Telecommunications

    When modeling systems for performance evaluation, a privileged tool is some form of Markov process, thanks to the rich set of associated results and algorithms. The drawback is that the process sometimes has a huge number of states, or an infinite state space. In these situations, since analytical results are rare, simulation is almost always the only available way to analyze the models. In this thesis we explore another possibility, called fluid limits, where a sequence of models is built with some parameter N associated with the individual model's size, in such a way that the performance of the Nth model approaches that of the original system as N goes to infinity. We consider three families of systems/models and explore this approach, obtaining results focused on understanding the meaning of this convergence phenomenon and on the properties of the limiting models.
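    As a toy example of the kind of convergence studied here, the sketch below simulates a density-dependent Markov chain (an M/M/∞-like queue with arrival rate N·λ and per-customer service rate μ) and compares the scaled state X_N(T)/N against the fluid ODE x'(t) = λ − μx(t). The model and parameter values are illustrative, not one of the thesis's specific systems.

```python
import random
import math

lam, mu, T = 1.0, 0.5, 10.0   # arbitrary illustrative parameters

def simulate_scaled(N, seed=0):
    """Gillespie simulation of X_N; returns the scaled state X_N(T)/N."""
    rng = random.Random(seed)
    t, x = 0.0, 0
    while True:
        rate = N * lam + mu * x                  # total event rate
        t += rng.expovariate(rate)
        if t > T:
            return x / N
        if rng.random() < (N * lam) / rate:
            x += 1                               # arrival
        else:
            x -= 1                               # service completion

# Fluid limit from x(0) = 0: x(t) = (lam/mu) * (1 - exp(-mu*t))
fluid = (lam / mu) * (1 - math.exp(-mu * T))
for N in (10, 100, 1000, 10000):
    print(f"N={N:>5}: X_N(T)/N = {simulate_scaled(N):.3f}"
          f"  (fluid limit {fluid:.3f})")
```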

    Mathematical analysis of scheduling policies in peer-to-peer video streaming networks

    Peer-to-peer networks are self-managed virtual communities, developed at the application layer over the Internet infrastructure, where users (called peers) share resources (bandwidth, storage, processing) to achieve a common goal. Video distribution is the most challenging application, given its bandwidth requirements. There are basically three video services. The simplest is file download, where a set of servers holds the original content and users must fully download it before playback. The second is video on demand, where peers join a virtual network whenever they request a video content and start a progressive online download. The last is live video, where the content is generated, distributed, and viewed simultaneously. This thesis studies design aspects of live and on-demand video distribution. We present a mathematical analysis of the stability and capacity of hybrid, peer-assisted on-demand distribution architectures. Peers start concurrent downloads of multiple contents and disconnect whenever they wish. Assuming Poisson arrivals and exponential departures, the expected evolution of the system is predicted by a deterministic fluid model. A sub-model with sequential (non-simultaneous) downloads is globally and structurally stable, independently of the network parameters. Using Little's law, we determine the mean residence time of users in a stationary sequential on-demand system. We prove theoretically that the hybrid peer-assisted philosophy always performs better than pure client-server technology.
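    The flavor of such a deterministic fluid model, together with the Little's-law residence-time computation mentioned above, can be sketched with a simple ODE close to the classic Qiu-Srikant peer-assisted model. This is an assumed stand-in with illustrative parameters, not the thesis's exact equations.

```python
# x(t): expected number of concurrent downloaders of one content.
# Peers arrive at rate lam and leave once the file (size F) is downloaded
# using server capacity s plus the upload u contributed by each downloader.
lam = 2.0    # peer arrival rate (peers/s)      -- assumed value
F   = 100.0  # file size (Mbit)                 -- assumed value
s   = 50.0   # server upload capacity (Mbit/s)  -- assumed value
u   = 1.0    # per-downloader upload (Mbit/s)   -- assumed value

# Euler integration of the fluid ODE x'(t) = lam - (s + u*x(t)) / F
x, t, dt, T = 0.0, 0.0, 0.01, 1000.0
while t < T:
    x += dt * (lam - (s + u * x) / F)
    t += dt

x_star = (lam * F - s) / u          # equilibrium solves lam = (s + u*x*)/F
print(f"x(T) = {x:.1f} downloaders; fluid equilibrium x* = {x_star:.1f}")
# Little's law in the stationary regime: mean residence time W = L / lambda
print(f"mean residence time W ≈ {x_star / lam:.1f} s")
```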

    French Roadmap for complex Systems 2008-2009

    This second issue of the French Complex Systems Roadmap is the outcome of the Entretiens de Cargèse 2008, an interdisciplinary brainstorming session organized over one week in 2008, jointly by RNSC, ISC-PIF and IXXI. It builds on the first roadmap and gathers the contributions of more than 70 scientists from major French institutions. The aim of this roadmap is to foster the coordination of the complex systems community on focused topics and questions, as well as to present contributions and challenges in the complex systems sciences and complexity science to the public, political and industrial spheres.

    Distributed Selfish Coaching

    Although cooperation generally increases the amount of resources available to a community of nodes, thus improving individual and collective performance, it also allows for the appearance of potential mistreatment problems through the exposure of one node's resources to others. We study such concerns by considering a group of independent, rational, self-aware nodes that cooperate using on-line caching algorithms, where the exposed resource is the storage at each node. Motivated by content networking applications -- including web caching, CDNs, and P2P -- this paper extends our previous work on the on-line version of the problem, which was conducted under a game-theoretic framework and limited to object replication. We identify and investigate two causes of mistreatment: (1) cache state interactions (due to the cooperative servicing of requests) and (2) the adoption of a common scheme for cache management policies. Using analytic models, numerical solutions of these models, and simulation experiments, we show that on-line cooperation schemes using caching are fairly robust to mistreatment caused by state interactions. For mistreatment to appear in a substantial manner, the interaction through the exchange of miss-streams has to be very intense, which makes it feasible for the mistreated nodes to detect and react to the exploitation. This robustness ceases to exist when nodes fetch and store objects in response to remote requests, i.e., when they operate as Level-2 caches (or proxies) for other nodes. Regarding mistreatment due to a common scheme, we show that it can easily take place when the "outlier" characteristics of some of the nodes are overlooked. This finding underscores the importance of allowing cooperative caching nodes the flexibility of choosing from a diverse set of schemes to fit the peculiarities of individual nodes. To that end, we outline an emulation-based framework for the development of mistreatment-resilient distributed selfish caching schemes. Our framework uses a simple control-theoretic approach to dynamically parameterize the cache management scheme. We present performance evaluation results that quantify the benefits of instantiating such a framework, which can be substantial under skewed demand profiles.
    National Science Foundation (CNS Cybertrust 0524477, CNS NeTS 0520166, CNS ITR 0205294, EIA RI 0202067); EU IST (CASCADAS and E-NEXT); Marie Curie Outgoing International Fellowship of the EU (MOIF-CT-2005-007230).
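    The control-theoretic idea behind such an emulation-based framework can be sketched as follows: a node runs a virtual ("emulated") cache fed only by its local request stream, compares its real hit rate against the emulated stand-alone hit rate, and uses the gap to tune a cooperation knob. The specific signal, the knob p (probability of caching objects fetched for remote peers), and the gain are illustrative assumptions rather than the paper's exact controller.

```python
# Proportional update of p, the probability of caching objects fetched on
# behalf of remote peers (i.e., of acting as a Level-2 cache). All names
# and gains here are hypothetical, for illustration only.
def update_remote_caching_prob(p, real_hit_rate, emulated_hit_rate,
                               gain=0.5, floor=0.0, ceil=1.0):
    """Shrink p when cooperation hurts the local hit rate.

    real_hit_rate     -- hit rate of the actual (cooperating) cache
    emulated_hit_rate -- hit rate of the virtual stand-alone cache
    """
    mistreatment = emulated_hit_rate - real_hit_rate   # > 0: we are losing
    p -= gain * mistreatment
    return max(floor, min(ceil, p))

# Example: growing mistreatment progressively disables Level-2 behavior.
p = 1.0
for real, emu in [(0.40, 0.40), (0.35, 0.40), (0.25, 0.40)]:
    p = update_remote_caching_prob(p, real, emu)
    print(f"real={real:.2f} emulated={emu:.2f} -> p={p:.2f}")
```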

    Reducing the Download Time in Stochastic P2P Content Delivery Networks by Improving Peer Selection

    Peer-to-peer (P2P) applications have become a popular method for obtaining digital content. Recent research has shown that the amount of time spent downloading from a poor-performing peer affects the total download duration. Current peer selection strategies attempt to limit the amount of time spent downloading from a poor-performing peer, but they do not use both advanced knowledge and service capacity measured after the connection has been made to aid in peer selection. Advanced knowledge has traditionally been obtained through methods that add overhead to the P2P network, such as polling peers for service capacity information, using round-trip-time techniques to estimate the distance between peers, and relying on tracker peers. This work investigated a new download strategy that replaces the random selection of peers with a method that selects server peers based on historic service capacity and ISP, in order to further reduce the time needed to complete a download session. The results of this historic-based peer selection strategy show that there are benefits in using advanced knowledge to select peers and in replacing only the worst-performing peers. The new approach showed an average download duration improvement of 16.6% in the single-client simulation and an average cross-ISP traffic reduction of 55.17% when ISPs were performing cross-ISP throttling. In the multiple-client simulation, it showed an average download duration improvement of 53.31% and an average cross-ISP traffic reduction of 88.83% when ISPs were performing cross-ISP throttling. The new approach also significantly improved the consistency of the download duration between download sessions, allowing more accurate prediction of download times.
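    A minimal sketch of this selection policy appears below, with hypothetical data structures and weights; the thesis's exact scoring is not reproduced here. Candidates are ranked by historically observed service capacity, same-ISP peers are favored to reduce cross-ISP traffic, and only the worst-performing active connection is replaced.

```python
def select_peers(candidates, history, my_isp, k, isp_bonus=1.5):
    """Pick k server peers by historic capacity, favoring same-ISP peers.

    candidates -- list of (peer_id, isp) tuples
    history    -- dict peer_id -> observed service capacity (e.g. KB/s)
    isp_bonus  -- assumed multiplicative weight for same-ISP peers
    """
    def score(peer):
        peer_id, isp = peer
        capacity = history.get(peer_id, 0.0)   # unknown peers score zero
        return capacity * (isp_bonus if isp == my_isp else 1.0)
    return sorted(candidates, key=score, reverse=True)[:k]

def replace_worst(active, rates, replacement):
    """Swap out only the slowest active peer, keeping the rest connected."""
    worst = min(active, key=lambda p: rates[p])
    return [replacement if p == worst else p for p in active]

# Example usage with toy data:
peers = [("a", "isp1"), ("b", "isp2"), ("c", "isp1"), ("d", "isp3")]
hist = {"a": 120.0, "b": 300.0, "c": 150.0}
print(select_peers(peers, hist, my_isp="isp1", k=2))   # -> b and c
```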