
    A parallel downloading algorithm for redundant networks

    In this paper, we study the downloading mechanism of BitTorrent (BT), a popular and convenient P2P-based parallel downloading tool, point out some of its limitations, and propose an algorithm to improve its performance. In particular, we address BT's limitations by using neighbours in P2P networks to resolve the redundant-copies problem and to optimise the downloading speed. Our preliminary experiments show that the proposed enhancement algorithm works well.
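    The following minimal sketch illustrates the kind of parallel, piece-wise downloading from redundant neighbours described above; the peer names and the fetch_piece stub are hypothetical stand-ins, not the authors' implementation.

```python
# A minimal sketch of parallel piece downloading from multiple peers,
# in the spirit of the BitTorrent-style scheme described above.
# Peer names and fetch_piece are illustrative assumptions.
from concurrent.futures import ThreadPoolExecutor

PIECES = list(range(8))               # piece indices of the file
PEERS = ["peerA", "peerB", "peerC"]   # neighbours holding redundant copies

def fetch_piece(peer: str, piece: int) -> bytes:
    # Placeholder for a real network fetch from `peer`.
    return f"{peer}:{piece}".encode()

def parallel_download(pieces, peers):
    # Spread pieces round-robin across peers and fetch them concurrently,
    # so no two peers waste bandwidth serving the same redundant copy.
    with ThreadPoolExecutor(max_workers=len(peers)) as pool:
        futures = {
            pool.submit(fetch_piece, peers[i % len(peers)], p): p
            for i, p in enumerate(pieces)
        }
        return {futures[f]: f.result() for f in futures}

print(parallel_download(PIECES, PEERS))
```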

    Content-access QoS in peer-to-peer networks using a fast MDS erasure code

    This paper describes an enhancement of content-access Quality of Service (QoS) in peer-to-peer (P2P) networks. The main idea is to use an erasure code to distribute the information over the peers. This distribution increases the users' choice among the disseminated encoded data and therefore statistically enhances the overall throughput of the transfer. A performance evaluation is presented, based on an original model that uses the results of a measurement campaign of sequential and parallel downloads in a real P2P network over the Internet. Based on a bandwidth distribution, statistical content-access QoS is guaranteed as a function of both the content replication level in the network and the file dissemination strategies. A simple application in the context of media streaming is proposed. Finally, the constraints that the proposed system places on the erasure code are analysed, and a new fast MDS erasure code is proposed, implemented and evaluated.
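    As a rough illustration of the dissemination idea, the sketch below uses a single-parity (n, k) = (k + 1, k) XOR code, which is MDS in the sense that any k of the n blocks reconstruct the file; it stands in for, and is far simpler than, the fast MDS code the paper proposes.

```python
# Toy (n, k) = (k+1, k) MDS erasure code: k data blocks plus one XOR
# parity block; any k of the n blocks recover the original file.
from functools import reduce

def xor_blocks(blocks):
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def encode(blocks):
    # Append the XOR parity of all k data blocks, giving n = k + 1 blocks.
    return blocks + [xor_blocks(blocks)]

def decode(received, missing_index, k):
    # Any k survivors suffice: XOR them to rebuild a lost data block.
    if missing_index == k:            # only the parity block was lost
        return received
    received.insert(missing_index, xor_blocks(received))
    return received[:k]

data = [b"abcd", b"efgh", b"ijkl"]             # k = 3 equal-sized blocks
stored = encode(data)                          # n = 4 blocks, one per peer
survivors = [stored[0], stored[2], stored[3]]  # peer holding block 1 is gone
assert decode(survivors, missing_index=1, k=3) == data
```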

    TOFEC: Achieving Optimal Throughput-Delay Trade-off of Cloud Storage Using Erasure Codes

    Our paper presents solutions that combine erasure coding, parallel connections to the storage cloud and limited chunking (i.e., dividing the object into a few smaller segments) to significantly improve the delay performance of uploading and downloading data in and out of cloud storage. TOFEC is a strategy that helps a front-end proxy adapt to the level of workload by treating scalable cloud storage (e.g. Amazon S3) as a shared resource requiring admission control. Under light workloads, TOFEC creates more, smaller chunks and uses more parallel connections per file, minimizing service delay. Under heavy workloads, TOFEC automatically reduces the level of chunking (fewer chunks of larger size) and uses fewer parallel connections to reduce overhead, resulting in higher throughput and preventing queueing delay. Our trace-driven simulation results show that TOFEC's adaptation mechanism converges to an appropriate code that provides the optimal delay-throughput trade-off without reducing system capacity. Compared to a non-adaptive strategy optimized for throughput, TOFEC delivers 2.5x lower latency under light workloads; compared to a non-adaptive strategy optimized for latency, TOFEC can scale to support over 3x as many requests.
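    A hedged sketch of the adaptation logic described above: many small chunks and connections under light load, fewer and larger chunks under heavy load. The queue-length signal and the thresholds are illustrative assumptions, not the paper's actual controller.

```python
# TOFEC-style workload adaptation (illustrative thresholds only):
# light load -> many small chunks to cut service delay;
# heavy load -> fewer, larger chunks to cut per-chunk overhead.
def choose_chunking(queue_length: int, max_chunks: int = 8) -> int:
    if queue_length <= 2:        # light workload: minimize service delay
        return max_chunks
    if queue_length <= 8:        # moderate workload: middle ground
        return max_chunks // 2
    return 1                     # heavy workload: maximize throughput

for q in (0, 4, 20):
    print(f"queue={q:2d} -> chunks per file: {choose_chunking(q)}")
```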

    When Queueing Meets Coding: Optimal-Latency Data Retrieving Scheme in Storage Clouds

    In this paper, we study the problem of reducing the delay of downloading data from cloud storage systems by leveraging multiple parallel threads, assuming that the data has been encoded and stored in the clouds using fixed-rate forward error correction (FEC) codes with parameters (n, k). That is, each file is divided into k equal-sized chunks, which are then expanded into n chunks such that any k out of the n are sufficient to successfully restore the original file. The model can be depicted as a multiple-server queue with arrivals of data-retrieval requests and a server corresponding to a thread. However, this is not a typical queueing model, because a server can terminate its operation depending on when other servers complete their service (due to the redundancy that is spread across the threads). Hence, to the best of our knowledge, the analysis of this queueing model remains quite uncharted. Recent traces from Amazon S3 show that the time to retrieve a fixed-size chunk is random and can be approximated as a constant delay plus an i.i.d. exponentially distributed random variable. For the tractability of the theoretical analysis, we assume that the chunk downloading time is i.i.d. exponentially distributed. Under this assumption, we show that any work-conserving scheme is delay-optimal among all on-line scheduling schemes when k = 1. When k > 1, we find that a simple greedy scheme, which allocates all available threads to the head-of-line request, is delay-optimal among all on-line scheduling schemes. We also provide some numerical results that point to the limitations of the exponential assumption and suggest further research directions. Comment: Originally accepted by IEEE Infocom 2014, 9 pages. Some statements in the Infocom paper are corrected.
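    Under the exponential assumption above, and ignoring queueing, the service time of a single request that forks onto n threads is the k-th order statistic of n i.i.d. exponentials, with mean E[T] = (H_n - H_{n-k}) / mu, where H_m is the m-th harmonic number. The snippet below evaluates this for made-up parameters to show how extra redundancy (larger n at fixed k) shortens retrieval.

```python
# Expected time to collect any k of n i.i.d. Exp(mu) chunk downloads:
# E[T] = (H_n - H_{n-k}) / mu, the mean k-th order statistic.
def harmonic(m: int) -> float:
    return sum(1.0 / i for i in range(1, m + 1))

def expected_retrieval_time(n: int, k: int, mu: float) -> float:
    return (harmonic(n) - harmonic(n - k)) / mu

mu = 2.0                                  # chunks per second, per thread (made up)
for n in (4, 6, 8):
    print(f"(n={n}, k=4): E[T] = {expected_retrieval_time(n, 4, mu):.3f} s")
```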

    Minimizing the impact of delay on live SVC-based HTTP adaptive streaming services

    HTTP Adaptive Streaming (HAS) is becoming the de facto standard for over-the-top video streaming services. Video content is temporally split into segments, which are offered at multiple qualities to the clients. These clients autonomously select the quality level matching the current state of the network through a quality selection heuristic. Recently, academia and industry have begun evaluating the feasibility of adopting layered video coding for HAS. Instead of downloading one file for a certain quality level, scalable video streaming requires downloading several interdependent layers to obtain the same quality. This implies that the base layer is always downloaded and available for playout, even when throughput fluctuates and enhancement layers cannot be downloaded in time. This layered approach can help provide better service quality assurance for video streaming. However, adopting scalable video coding for HAS also raises other issues, since requesting multiple files over HTTP increases the impact of the end-to-end delay and thus degrades the service provided to the client. This is even worse in a live TV scenario, where the drift from the live signal should be minimized, requiring smaller segment and buffer sizes. In this paper, we characterize the impact of delay on several measurement-based heuristics. Furthermore, we propose several ways to overcome the end-to-end delay issues, such as parallel and pipelined downloading of segment layers, to provide a higher quality for the video service.
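    The sketch below illustrates the pipelined-download idea mentioned above: the request for the next layer is issued before the current one is consumed, so a request is always in flight and one round-trip per request is hidden. fetch_layer is a stand-in for a real HTTP GET of one SVC layer; all names are illustrative, not the paper's implementation.

```python
# Pipelined SVC layer downloading: keep the next request in flight
# while the current response is being consumed.
from concurrent.futures import ThreadPoolExecutor

def fetch_layer(segment: int, layer: int) -> bytes:
    return f"seg{segment}-layer{layer}".encode()  # placeholder network call

def pipelined_download(num_segments: int, num_layers: int):
    order = [(s, l) for s in range(num_segments) for l in range(num_layers)]
    results = []
    with ThreadPoolExecutor(max_workers=2) as pool:
        pending = pool.submit(fetch_layer, *order[0])
        for nxt in order[1:]:
            nxt_future = pool.submit(fetch_layer, *nxt)  # pipeline next request
            results.append(pending.result())             # consume current one
            pending = nxt_future
        results.append(pending.result())
    return results

print([b.decode() for b in pipelined_download(2, 3)])
```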

    Reliable downloading algorithms for BitTorrent-like systems

    In this paper we study a reliable downloading algorithm for BitTorrent-like systems and validate it mathematically. BitTorrent-like systems have become immensely popular peer-to-peer file distribution tools on the Internet in recent years. We analyze them in theory, point out some of their limitations, especially in reliability, and propose an algorithm that resolves these problems by using the redundant copies held by neighbors in P2P networks, and that can further optimize the downloading speed under some conditions. Our preliminary simulations show that the proposed reliable algorithm works well; the improved BitTorrent-like systems are very stable and reliable.
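    The following toy example illustrates the reliability idea in this abstract: when the peer serving a piece fails, the client falls back to another neighbor holding a redundant copy. The peer layout and the simulated failures are assumptions for illustration only, not the authors' algorithm.

```python
# Reliable piece fetching with fallback to redundant neighbor copies.
import random

random.seed(1)                    # deterministic failure pattern for the demo

NEIGHBOURS = {                    # piece index -> neighbors holding a copy
    0: ["peerA", "peerB"],
    1: ["peerB", "peerC"],
    2: ["peerA", "peerC"],
}

def try_fetch(peer: str, piece: int) -> bytes | None:
    if random.random() < 0.3:     # simulated peer failure
        return None
    return f"{peer}:{piece}".encode()

def reliable_fetch(piece: int) -> bytes:
    for peer in NEIGHBOURS[piece]:   # fall back across redundant copies
        data = try_fetch(peer, piece)
        if data is not None:
            return data
    raise RuntimeError(f"all neighbours failed for piece {piece}")

print([reliable_fetch(p) for p in NEIGHBOURS])
```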

    Architecture for Cooperative Prefetching in P2P Video-on-Demand Systems

    Most P2P VoD schemes have focused on service architectures and overlay optimization without considering segment rarity and the performance of prefetching strategies. As a result, they cannot adequately support VCR-oriented services in heterogeneous environments with clients using free VCR controls. Despite the remarkable popularity of VoD systems, there exists no prior work that studies the performance gap between different prefetching strategies. In this paper, we analyze and compare the performance of different prefetching strategies. Our analytical characterization brings not only a better understanding of several fundamental trade-offs in prefetching strategies, but also important insights for the design of P2P VoD systems. On the basis of this analysis, we finally propose a cooperative prefetching strategy called "cooching", in which the segments requested by VCR interactions are prefetched into the session beforehand, using information collected through gossips. We evaluate our strategy through extensive simulations; the results indicate that the proposed strategy outperforms existing prefetching mechanisms. Comment: 13 Pages, IJCN
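    A speculative sketch of gossip-informed prefetching in the spirit of "cooching": peers gossip which segments recent VCR jumps landed on, and each session prefetches the most frequently reported ones. The counters and the prefetch budget are illustrative assumptions, not the paper's protocol.

```python
# Gossip-informed prefetch selection: prefer segments that neighbors
# report as frequent VCR jump targets.
from collections import Counter

gossip_reports = Counter()        # segment id -> times seen in gossip

def on_gossip(jump_targets):
    # Called whenever a gossip message arrives from a neighbor.
    gossip_reports.update(jump_targets)

def segments_to_prefetch(budget: int):
    # Prefetch the segments most often requested after VCR seeks.
    return [seg for seg, _ in gossip_reports.most_common(budget)]

on_gossip([42, 42, 97, 13])       # reports received from neighbors
on_gossip([42, 97])
print(segments_to_prefetch(budget=2))   # -> [42, 97]
```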

    Towards SVC-based adaptive streaming in information centric networks

    HTTP Adaptive Streaming (HAS) is becoming the de facto standard for video streaming services. In HAS, each video is segmented and stored in different qualities. The client can dynamically select the most appropriate quality level to download, allowing it to adapt to varying network conditions. As the Internet was not designed to deliver such applications, optimal support for multimedia delivery is still missing. Information-Centric Networking (ICN) is a recently proposed disruptive architecture that could solve this issue, as the focus is placed on the content rather than on end-to-end connectivity. Due to the bandwidth unpredictability typical of ICN, standard AVC-based HAS performs quality selection sub-optimally, leading to a poor Quality of Experience (QoE). In this article, we propose to overcome this inefficiency by using Scalable Video Coding (SVC) instead. We identify the main advantages of SVC-based HAS over ICN and outline, both theoretically and via simulation, the research challenges to be addressed to optimize the delivered QoE.
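    As a toy illustration of SVC-based quality selection under uncertain bandwidth, the sketch below always fetches the base layer and adds enhancement layers only while the estimated per-segment budget allows. Layer sizes and bandwidth figures are made up, not taken from the article.

```python
# SVC quality selection: base layer is mandatory; enhancement layers
# are added greedily while the bandwidth budget for the segment holds.
LAYER_BITS = [300, 400, 600]      # base + two enhancement layers (kbit, made up)

def layers_to_fetch(est_bandwidth_kbps: float, segment_s: float) -> int:
    budget = est_bandwidth_kbps * segment_s
    spent, count = 0, 0
    for size in LAYER_BITS:
        if spent + size > budget and count > 0:
            break                 # stop adding layers, but keep the base layer
        spent += size
        count += 1
    return count

for bw in (200, 500, 1500):
    print(f"{bw:4d} kbps -> fetch {layers_to_fetch(bw, 1.0)} layer(s)")
```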