
    Adaptive Streaming in P2P Live Video Systems: A Distributed Rate Control Approach

    Dynamic Adaptive Streaming over HTTP (DASH) is a recently proposed standard that offers different versions of the same media content, so that delivery over the Internet can adapt to bandwidth fluctuations and to different user device capabilities. The peer-to-peer (P2P) paradigm for video streaming makes it possible to leverage cooperation among peers, serving every video request with increased scalability and reduced cost. We propose to combine these two approaches in a P2P-DASH architecture that exploits the potential of both. The new platform consists of several swarms, with a different DASH representation streamed within each of them; unlike client-server DASH architectures, where each client autonomously selects which version to download according to current network conditions and its device resources, we put forth a new rate control strategy implemented at the peer site to maintain good viewing quality for the local user while simultaneously guaranteeing the successful operation of the P2P swarms. The effectiveness of the solution is demonstrated through simulation, which indicates that the P2P-DASH platform provides its users with very good performance, considerably better than in a conventional P2P environment where DASH is not employed. Through a comparison with a reference DASH system modeled via Integer Linear Programming (ILP), the new system is shown to outperform this reference architecture. To further validate the proposal, both in terms of robustness and scalability, system behavior is investigated under the critical condition of a flash crowd, showing that a strong upsurge of new users can be successfully detected and gradually accommodated. Comment: 12 pages, 17 figures; this work has been submitted to the IEEE Journal on Selected Areas in Communications.
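    The peer-site rate control can be pictured as a small decision rule evaluated before each segment request. The Python sketch below is only an assumed illustration (the representation ladder, thresholds, and back-off rule are hypothetical, not the controller proposed in the paper):

```python
# Hypothetical sketch of a peer-side DASH representation selector.
# Names, thresholds, and the control rule are illustrative assumptions,
# not the rate control strategy proposed in the paper.

REPRESENTATION_KBPS = [400, 800, 1600, 3200]  # available DASH versions (assumed)

def select_representation(buffer_s, swarm_rate_kbps,
                          low_buffer_s=5.0, high_buffer_s=15.0):
    """Pick the highest representation the swarm can sustain,
    backing off when the local playout buffer runs low."""
    # Only consider versions the measured swarm throughput can sustain.
    sustainable = [r for r in REPRESENTATION_KBPS if r <= swarm_rate_kbps]
    if not sustainable:
        return REPRESENTATION_KBPS[0]          # fall back to the lowest version
    if buffer_s < low_buffer_s:
        return REPRESENTATION_KBPS[0]          # protect against stalling
    if buffer_s > high_buffer_s:
        return sustainable[-1]                 # buffer is healthy: go as high as possible
    return sustainable[max(0, len(sustainable) - 2)]  # otherwise stay one step conservative

print(select_representation(buffer_s=8.0, swarm_rate_kbps=2000))  # -> 800
```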

    A note on the data-driven capacity of P2P networks

    We consider two capacity problems in P2P networks. In the first, the nodes have an infinite amount of data to send and the goal is to allocate their uplink bandwidths optimally so that the receiving-rate demand of every peer is met. We solve this problem through a mapping from a node-weighted graph featuring two labels per node to a max-flow problem on an edge-weighted bipartite graph. In the second problem, the resource allocation is driven by the availability of the data that the peers are interested in sharing; that is, a node cannot allocate its uplink resources unless it first has data to transmit. The uplink bandwidth allocation problem is then equivalent to constructing a set of directed trees in the overlay such that the number of nodes receiving the data is maximized while the uplink capacities of the peers are not exceeded. We show that the problem is NP-complete, and we provide a linear programming decomposition that decouples it into a master problem and multiple slave subproblems, each solvable in polynomial time. We also design a heuristic algorithm to compute a suboptimal solution in reasonable time. This algorithm requires only local knowledge at each node, so it lends itself to distributed implementations. We analyze both problems through a series of simulation experiments featuring different network sizes and densities. On large networks, we compare our heuristic and its variants with a genetic algorithm and show that our heuristic computes the better resource allocation. On smaller networks, we contrast these performances with those of the exact algorithm and show that resource allocations fulfilling a large part of the peer demands can be found, even for hard configurations where no resources are in excess. Comment: 10 pages, technical report assisting a submission.
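    The first problem's reduction can be illustrated with a small max-flow construction. The sketch below is an assumed rendering of that idea using networkx (the node names, the uncapacitated overlay edges, and the satisfiability test are illustrative, not the paper's exact mapping):

```python
# A minimal sketch (assumed construction, not the paper's exact reduction) of checking
# whether every peer's download demand can be met from the peers' uplink capacities,
# via a max-flow computation on a bipartite graph. Requires networkx.
import networkx as nx

def demands_satisfiable(uplink, demand, overlay_edges):
    """uplink[p], demand[p]: the two per-node labels; overlay_edges: iterable of (u, v) links."""
    g = nx.DiGraph()
    for p, cap in uplink.items():
        g.add_edge("src", ("up", p), capacity=cap)       # source -> uploader side
    for p, dem in demand.items():
        g.add_edge(("down", p), "sink", capacity=dem)    # downloader side -> sink
    for u, v in overlay_edges:
        g.add_edge(("up", u), ("down", v))               # uncapacitated overlay link
    flow_value, _ = nx.maximum_flow(g, "src", "sink")
    return flow_value >= sum(demand.values())

# Two peers exchanging data over a full mesh:
print(demands_satisfiable({"a": 2.0, "b": 1.0}, {"a": 1.0, "b": 2.0},
                          [("a", "b"), ("b", "a")]))     # -> True
```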

    Decision Strategies for a P2P Computing System

    Peer-to-Peer (P2P) computing (also called 'public-resource computing') is an effective approach to performing large computational tasks. Currently used P2P computing systems (e.g., BOINC) are most often centrally managed, i.e., the final result of a computation is assembled at a central node from partial results, which may be inefficient when numerous participants want to download that final result. In this paper, we propose a novel approach to P2P computing systems. We assume that results can be delivered to all peers in a distributed way using three types of network flows: unicast, Peer-to-Peer, and anycast. We describe our concept of the system architecture with a special focus on the decision strategies developed for system participants. Using our discrete real-time simulator, we evaluate the proposed strategies in various scenarios and present a comprehensive analysis of the results. According to these results, the Peer-to-Peer flow yields a lower operational cost for the computing system than the unicast and anycast flows. Moreover, an appropriate choice of decision strategy can significantly reduce the operational cost.
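    To make the cost comparison concrete, here is a deliberately simplified cost model, an assumption for illustration only and not the report's simulator: a unicast strategy ships the full result from the central node to every interested peer, while a Peer-to-Peer strategy seeds one copy and lets peers redistribute the rest over cheaper uplinks.

```python
# Illustrative cost model (assumed, not the paper's simulator): deliver a result of size R
# to n interested peers, charging `unit_cost` per unit sent on the central node's link and
# `peer_cost` per unit sent on peer uplinks.

def unicast_cost(n, result_size, unit_cost):
    # The central node sends the full result to every peer separately.
    return n * result_size * unit_cost

def p2p_cost(n, result_size, unit_cost, peer_cost):
    # The central node seeds one copy; peers redistribute the remaining n-1 copies.
    return result_size * unit_cost + (n - 1) * result_size * peer_cost

if __name__ == "__main__":
    n, size = 100, 1.0
    print(unicast_cost(n, size, unit_cost=1.0))             # 100.0
    print(p2p_cost(n, size, unit_cost=1.0, peer_cost=0.2))  # 20.8, cheaper when peer uplinks are cheap
```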

    Quality-centered design for Peer-to-Peer live video streaming

    The use of Peer-to-Peer (P2P) networks is a scalable way to offer video services over the Internet. This document focuses on the definition, development, and evaluation of a P2P architecture for distributing live video. The overall design of the network is guided by the quality of experience (QoE), whose main component in this case is the video quality perceived by end users, rather than by the traditional quality of service (QoS) criteria used in most systems. To measure the perceived video quality automatically and in real time, we extended the recently proposed Pseudo-Subjective Quality Assessment (PSQA) methodology. Two main lines of research are developed. First, we propose a multi-source video distribution technique that can be optimized to maximize perceived quality under frequent failures and that requires very little signaling (unlike existing systems). We develop a PSQA-based methodology that gives us fine-grained control over how the video signal is split into pieces and how much redundancy is added, as a function of the dynamics of the network's users. In this way the robustness of the system can be improved as much as desired, subject to the communication capacity limit. Second, we present a structured mechanism to control the network topology. The selection of which users serve which others is important for the robustness of the network, especially when users are heterogeneous in their capacities and connection times. Our design maximizes the expected global quality (evaluated using PSQA) by selecting a topology that improves the robustness of the system. We also study how to extend the network with two complementary services: Video on Demand (VoD) and a MyTV service. The challenge in these services is how to perform efficient searches over the video library, given the highly dynamic nature of the content. We present a caching strategy for searches in these services that maximizes the total number of correct answers to queries, considering a particular content dynamic and bandwidth constraints. Our overall design considers realistic scenarios, where the test cases and configuration parameters come from real data of a reference service in production. Our prototype is fully functional, free to use, and based on well-proven open-source technologies.
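    The redundancy-versus-churn trade-off mentioned above can be sketched as follows. This is a hypothetical illustration assuming an MDS-style k-out-of-n splitting and independent piece losses; it is not the PSQA-driven tuning developed in the thesis:

```python
# Hypothetical sketch: split each video frame into n pieces such that any k suffice to
# reconstruct it, and pick n for a given peer-failure probability. The independence
# assumption and the target are illustrative only.
from math import comb

def reconstruction_prob(n, k, p_fail):
    """Probability that at least k of n independently sent pieces arrive."""
    p_ok = 1.0 - p_fail
    return sum(comb(n, i) * p_ok**i * p_fail**(n - i) for i in range(k, n + 1))

def pieces_needed(k, p_fail, target=0.999, n_max=64):
    """Smallest n >= k whose reconstruction probability reaches the target."""
    for n in range(k, n_max + 1):
        if reconstruction_prob(n, k, p_fail) >= target:
            return n
    return n_max

print(pieces_needed(k=8, p_fail=0.1))  # -> 13 under these assumptions
```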

    Quality-aware segment transmission scheduling in peer-to-peer streaming systems


    Community computation

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Materials Science and Engineering, 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. 171-186). In this thesis we lay the foundations for a distributed, community-based computing environment that taps the resources of a community to better perform tasks, whether computationally hard, economically prohibitive, or physically inconvenient, that one individual is unable to accomplish efficiently. We introduce community coding, where information systems meet social networks, to tackle some of the challenges in this new paradigm of community computation. We design algorithms and protocols and build system prototypes to demonstrate the power of community computation to better deal with reliability, scalability and security issues, which are the main challenges in many emerging community-computing environments, in several application scenarios such as community storage, community sensing and community security. For example, we develop a community storage system based upon a distributed P2P (peer-to-peer) storage paradigm, in which an array of small, periodically accessible, individual computers/peer nodes is turned into a secure, reliable and large distributed storage system. The goal is for each of them to act as if it had immediate access to a pool of information larger than it could hold itself, and into which it can contribute new content in a manner that is both open and secure. Such a contributory and self-scaling community storage system is particularly useful where reliable infrastructure is not readily available, since it facilitates easy ad-hoc construction and easy portability. In another application scenario, we develop a novel framework of community sensing with a group of image sensors. The goal is to present a set of novel tools in which software, rather than humans, examines the collection of images sensed by a group of image sensors to determine what is happening in the field of view. We also present several design principles concerning community security. In one application example, we present a community-based email spam detection approach to deal with email spam more efficiently. by Fulu Li. Ph.D.
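    As a minimal illustration of the contributory storage idea, and under assumptions of our own (fixed-size chunks, round-robin replica placement, hypothetical peer names), a community storage layout could look like this; it is not the scheme designed in the thesis:

```python
# Assumed illustration of contributory community storage: each file is split into
# fixed-size chunks and every chunk is replicated on `replicas` distinct peers,
# so data stays retrievable while some peers are offline.
from itertools import cycle

def place_chunks(data: bytes, peers, chunk_size=4, replicas=2):
    """Return {peer: [(chunk_index, chunk_bytes), ...]} with round-robin replica placement."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    placement = {p: [] for p in peers}
    ring = cycle(peers)
    for idx, chunk in enumerate(chunks):
        for _ in range(replicas):
            placement[next(ring)].append((idx, chunk))
    return placement

layout = place_chunks(b"community-storage", ["p1", "p2", "p3"], chunk_size=4, replicas=2)
for peer, held in layout.items():
    print(peer, [i for i, _ in held])   # which chunk indices each peer stores
```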

    Peer-Assisted Social Media Streaming With Social Reciprocity


    Optimal QoE Scheduling in MPEG-DASH Video Streaming

    DASH is a popular technology for video streaming over the Internet. However, the quality of experience (QoE), a measure of humans' perceived satisfaction with the quality of streamed video, is subjective and therefore difficult to evaluate. Previous studies considered only network-based indices and focused on providing smooth video playback rather than improving the true QoE experienced by humans. In this study, we designed a series of click density experiments to verify whether different resolutions affect the QoE differently for different video scenes. We observed that, within a single video segment, different scenes with the same resolution could affect the viewer's QoE differently. The user's satisfaction when watching high-resolution video segments is indeed always greater than when watching low-resolution segments of the same scenes. The most important observation, however, is that low-resolution segments yield a higher viewing QoE gain in slow-motion scenes than in fast-motion scenes. Thus, including more high-resolution segments in fast-motion scenes and more low-resolution segments in slow-motion scenes would be expected to maximize the user's viewing QoE. To evaluate the user's true experience, we convert the viewing QoE into a satisfaction quality score, termed the Q-score, for scenes with different resolutions in each video segment. Additionally, we developed an optimal segment assignment (OSA) algorithm that optimizes the Q-score in environments with constrained network bandwidth. Our experimental results show that applying the OSA algorithm to the playback schedule significantly improves users' viewing satisfaction.
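    The intuition behind the segment assignment can be sketched as a budgeted upgrade problem: start every segment at low resolution and spend the remaining bandwidth on the upgrades with the largest Q-score gain per extra bit. The numbers, field names, and the greedy rule below are illustrative assumptions, not the published OSA algorithm:

```python
# Hedged sketch of Q-score-driven segment assignment under a bandwidth budget.
# Segment sizes, Q-scores, and the greedy rule are assumptions for illustration.

def schedule_segments(segments, budget_kbit):
    """segments: list of dicts with low/high sizes (kbit) and Q-scores.
    Returns the chosen resolution per segment."""
    choice = ["low"] * len(segments)
    used = sum(s["size_low"] for s in segments)
    # Rank candidate upgrades by Q-score gain per extra kbit (fast-motion scenes win).
    upgrades = sorted(
        range(len(segments)),
        key=lambda i: (segments[i]["q_high"] - segments[i]["q_low"])
        / (segments[i]["size_high"] - segments[i]["size_low"]),
        reverse=True,
    )
    for i in upgrades:
        extra = segments[i]["size_high"] - segments[i]["size_low"]
        if used + extra <= budget_kbit:
            choice[i] = "high"
            used += extra
    return choice

segs = [  # one fast-motion and one slow-motion scene (hypothetical numbers)
    {"size_low": 300, "size_high": 900, "q_low": 2.0, "q_high": 4.5},  # fast motion
    {"size_low": 300, "size_high": 900, "q_low": 3.5, "q_high": 4.0},  # slow motion
]
print(schedule_segments(segs, budget_kbit=1300))  # -> ['high', 'low']
```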

    A Comprehensive Analysis of Swarming-based Live Streaming to Leverage Client Heterogeneity

    Due to missing IP multicast support on an Internet scale, over-the-top media streams are delivered with the help of overlays, as used by content delivery networks and their peer-to-peer (P2P) extensions. In this context, mesh/pull-based swarming plays an important role, either as a pure streaming approach or in combination with tree/push mechanisms. However, the impact of realistic client populations with heterogeneous resources is not yet fully understood. In this technical report, we contribute to closing this gap by mathematically analysing the most basic scheduling mechanisms, latest deadline first (LDF) and earliest deadline first (EDF), in a continuous-time Markov chain framework and combining them into a simple yet powerful mixed strategy that leverages inherent differences in client resources. The main contributions are twofold: (1) a mathematical framework for swarming on random graphs is proposed, with a focus on LDF and EDF strategies in heterogeneous scenarios; (2) a mixed strategy, named SchedMix, is proposed that leverages peer heterogeneity. SchedMix is shown to outperform the other two strategies using different abstractions: a mean-field theoretic analysis of buffer probabilities, simulations of a stochastic model on random graphs, and a full-stack implementation of a P2P streaming system. Comment: Technical report and supplementary material to http://ieeexplore.ieee.org/document/7497234
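    A toy rendering of the three chunk-selection policies may help fix ideas. The threshold rule below is an assumption for illustration; the actual SchedMix policy is derived from the mean-field analysis in the report:

```python
# Simplified sketch (assumptions, not the exact SchedMix policy): EDF requests the missing
# chunk closest to its playout deadline, LDF requests the newest missing chunk, and the
# mixed strategy lets well-provisioned peers behave LDF-like while weak peers behave EDF-like.

def edf(missing_chunks):
    return min(missing_chunks)            # earliest deadline first: most urgent chunk

def ldf(missing_chunks):
    return max(missing_chunks)            # latest deadline first: newest chunk

def sched_mix(missing_chunks, upload_capacity, threshold=1.0):
    """Strong peers fetch new chunks (helping to spread them); weak peers fill urgent gaps."""
    return ldf(missing_chunks) if upload_capacity >= threshold else edf(missing_chunks)

missing = [12, 15, 19]                    # chunk indices still absent from the buffer
print(sched_mix(missing, upload_capacity=2.0))  # strong peer -> 19 (LDF-like)
print(sched_mix(missing, upload_capacity=0.5))  # weak peer   -> 12 (EDF-like)
```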