
    An autonomic delivery framework for HTTP adaptive streaming in multicast-enabled multimedia access networks

    The consumption of multimedia services over HTTP-based delivery mechanisms has recently gained popularity due to their increased flexibility and reliability. Traditional broadcast TV channels are now offered over the Internet in order to support Live TV for a broad range of consumer devices. Moreover, service providers can greatly benefit from offering external live content (e.g., YouTube, Hulu) in a managed way. Recently, HTTP Adaptive Streaming (HAS) techniques have been proposed, in which video clients dynamically adapt their requested video quality level based on the current network and device state. Unlike linear TV, traditional HTTP- and HAS-based video streaming services depend on unicast sessions, leading to a network traffic load proportional to the number of multimedia consumers. In this paper we propose a novel HAS-based video delivery architecture, which features intelligent multicasting and caching in order to decrease the required bandwidth considerably in a Live TV scenario. Furthermore, we discuss the autonomic selection of multicasted content to support Video on Demand (VoD) sessions. Experiments were conducted on a large-scale, realistic emulation environment and compared with a traditional HAS-based media delivery setup using only unicast connections.
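    The delivery decision described above can be sketched in a few lines of Python (a minimal toy, all names hypothetical and not code from the paper): an edge node serves a requested HAS segment from its cache, from an active multicast group carrying the channel, or via a per-client unicast fetch that also populates the cache for later VoD reuse.

        class EdgeNode:
            """Illustrative access-network edge node for HAS delivery."""

            def __init__(self, multicast_channels):
                self.cache = {}                            # (channel, quality, seg) -> bytes
                self.multicast = set(multicast_channels)   # channels carried on multicast groups

            def handle_request(self, channel, quality, seg):
                key = (channel, quality, seg)
                if key in self.cache:
                    return self.cache[key]                 # cache hit: no upstream traffic
                if channel in self.multicast:
                    data = self._from_multicast(key)       # one upstream copy shared by all clients
                else:
                    data = self._from_unicast(key)         # per-client fallback to the origin
                self.cache[key] = data                     # keep for later VoD / catch-up sessions
                return data

            def _from_multicast(self, key):
                return b"segment"                          # placeholder: read from the joined group

            def _from_unicast(self, key):
                return b"segment"                          # placeholder: HTTP GET to the origin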

    Optimal Proxy Cache Allocation for Efficient Streaming Media Distribution

    In this paper, we address the problem of efficiently streaming a set of heterogeneous videos from a remote server through a proxy to multiple asynchronous clients so that they can experience playback with low startup delays. We develop a technique to analytically determine the optimal proxy prefix cache allocation to the videos that minimizes the aggregate network bandwidth cost. We integrate proxy caching with traditional server-based reactive transmission schemes such as batching, patching, and stream merging to develop a set of proxy-assisted delivery schemes. We quantitatively explore the impact of the choice of transmission scheme, cache allocation policy, proxy cache size, and availability of unicast versus multicast capability on the resultant transmission cost. Our evaluations show that even a relatively small prefix cache (10%-20% of the video repository) is sufficient to realize substantial savings in transmission cost. We find that carefully designed proxy-assisted reactive transmission schemes can produce significant cost savings even in predominantly unicast environments such as the Internet.
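    As a rough illustration of prefix cache allocation (a greedy stand-in under an assumed diminishing-returns saving model, not the paper's analytic solution), the sketch below repeatedly grants the next unit of cache to the video with the largest marginal bandwidth saving:

        import heapq

        def allocate_prefix_cache(videos, cache_units):
            """videos: list of (name, request_rate, length_units). Greedy allocation."""
            alloc = {name: 0 for name, _, _ in videos}

            def marginal(rate, have, length):
                # assumed model: saving per extra cached unit shrinks as more is cached
                return rate * (1.0 - have / length) if have < length else 0.0

            heap = [(-marginal(rate, 0, length), name, rate, length)
                    for name, rate, length in videos]
            heapq.heapify(heap)
            for _ in range(cache_units):
                neg_gain, name, rate, length = heapq.heappop(heap)
                if neg_gain >= 0.0:
                    break                      # no video benefits from more cache
                alloc[name] += 1
                heapq.heappush(heap, (-marginal(rate, alloc[name], length),
                                      name, rate, length))
            return alloc

        # e.g. allocate_prefix_cache([("news", 5.0, 30), ("movie", 1.0, 120)], 20)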

    Big Data Meets Telcos: A Proactive Caching Perspective

    Mobile cellular networks are becoming increasingly complex to manage, while classical deployment/optimization techniques and current solutions (i.e., cell densification, acquiring more spectrum, etc.) are cost-ineffective and thus seen as stopgaps. This calls for the development of novel approaches that leverage recent advances in storage/memory, context-awareness, and edge/cloud computing, and falls into the framework of big data. However, big data is itself yet another complex phenomenon to handle and comes with its notorious four V's: velocity, veracity, volume, and variety. In this work, we address these issues in the optimization of 5G wireless networks via the notion of proactive caching at the base stations. In particular, we investigate the gains of proactive caching in terms of backhaul offloading and request satisfaction, while tackling the large amount of available data for content popularity estimation. In order to estimate the content popularity, we first collect users' mobile traffic data from several base stations of a Turkish telecom operator over a time interval of several hours. Then, an analysis is carried out locally on a big data platform and the gains of proactive caching at the base stations are investigated via numerical simulations. It turns out that several gains are possible depending on the level of available information and storage size. For instance, with 10% of content ratings and 15.4 Gbyte of storage size (87% of the total catalog size), proactive caching achieves 100% request satisfaction and offloads 98% of the backhaul when considering 16 base stations.
    Comment: 8 pages, 5 figures
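    A minimal sketch of the evaluation loop (a plain frequency-count popularity estimate standing in for the paper's estimator; all names are illustrative): fill a base station's storage with the most popular items seen in a training trace, then measure request satisfaction and backhaul offload on a test trace.

        from collections import Counter

        def proactive_cache_gains(train, test, sizes, capacity):
            """train/test: lists of requested item ids; sizes: id -> size; capacity: storage."""
            popularity = Counter(train)             # estimated popularity from past requests
            cached, used = set(), 0.0
            for item, _ in popularity.most_common():
                if used + sizes[item] <= capacity:  # greedily cache the most popular items
                    cached.add(item)
                    used += sizes[item]
            satisfied = sum(1 for r in test if r in cached)
            offloaded = sum(sizes[r] for r in test if r in cached)
            total = sum(sizes[r] for r in test)
            return satisfied / len(test), offloaded / total

        # e.g. proactive_cache_gains(["a", "a", "b"], ["a", "c"], {"a": 1, "b": 2, "c": 1}, 2)
        # -> (0.5, 0.5): half the test requests are served locally, halving backhaul traffic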

    Fundamental Limits of Caching in Wireless D2D Networks

    We consider a wireless Device-to-Device (D2D) network where communication is restricted to be single-hop. Users make arbitrary requests from a finite library of files and have pre-cached information on their devices, subject to a per-node storage capacity constraint. A similar problem has already been considered in an "infrastructure" setting, where all users receive a common multicast (coded) message from a single omniscient server (e.g., a base station holding all the files in the library) through a shared bottleneck link. In this work, we consider a D2D "infrastructure-less" version of the problem. We propose a caching strategy based on deterministic assignment of subpackets of the library files, and a coded delivery strategy where the users send linearly coded messages to each other in order to collectively satisfy their demands. We also consider a random caching strategy, which is more suitable to a fully decentralized implementation. Under certain conditions, both approaches can achieve the information-theoretic outer bound within a constant multiplicative factor. In our previous work, we showed that a caching D2D wireless network with one-hop communication, random caching, and uncoded delivery achieves the same throughput scaling law as the infrastructure-based coded multicasting scheme, in the regime of a large number of users and files in the library. This shows that the spatial reuse gain of the D2D network is order-equivalent to the coded multicasting gain of single base station transmission. It is therefore natural to ask whether these two gains are cumulative, i.e., whether a D2D network with both local communication (spatial reuse) and coded multicasting can provide an improved scaling law. Somewhat counterintuitively, we show that these gains do not cumulate (in terms of throughput scaling law).
    Comment: 45 pages, 5 figures. Submitted to IEEE Transactions on Information Theory. This is the extended version of the conference (ITW) paper arXiv:1304.585
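    To make the coded-exchange idea concrete, here is a hand-built toy instance (Maddah-Ali-Niesen-style deterministic placement with a D2D delivery round, for K = 3 users, N = 3 files, per-user cache M = 2 files; an illustration, not the paper's general construction). Each file is split into parts indexed by 2-subsets of users, and each user sends one XOR packet that is simultaneously useful to the other two:

        import os

        def xor(a, b):
            return bytes(x ^ y for x, y in zip(a, b))

        H = 4                                    # bytes per half-part in this demo
        users = (1, 2, 3)
        subsets = [frozenset(s) for s in ((1, 2), (1, 3), (2, 3))]

        # Placement: file X splits into parts X_S, one per 2-subset S of users;
        # user u caches every part whose index contains u (2/3 of each file).
        files = {n: {S: os.urandom(2 * H) for S in subsets} for n in "ABC"}
        cache = {u: {(n, S): p for n, parts in files.items()
                     for S, p in parts.items() if u in S} for u in users}

        want = {1: "A", 2: "B", 3: "C"}          # distinct demands
        miss = {u: frozenset(set(users) - {u}) for u in users}  # index of u's missing part

        def half(part, S, sender):
            lo, hi = sorted(S)                   # each of the part's two cachers owns one half
            return part[:H] if sender == lo else part[H:]

        # Delivery: each user t sends ONE coded packet serving both other users.
        tx = {}
        for t in users:
            r1, r2 = sorted(set(users) - {t})
            p1 = cache[t][(want[r1], miss[r1])]  # r1's missing part (t caches it)
            p2 = cache[t][(want[r2], miss[r2])]  # r2's missing part (t caches it)
            tx[t] = xor(half(p1, miss[r1], t), half(p2, miss[r2], t))

        # Decoding: each user strips the interfering half using its own cache.
        for u in users:
            got = {}
            for t in sorted(miss[u]):
                other = (miss[u] - {t}).pop()    # the packet's other intended receiver
                side = cache[u][(want[other], miss[other])]
                got[t] = xor(tx[t], half(side, miss[other], t))
            lo, hi = sorted(miss[u])
            assert got[lo] + got[hi] == files[want[u]][miss[u]]
        print("all demands satisfied with 3 coded half-part transmissions")

    In this toy, three coded half-parts cross the air, whereas uncoded D2D exchange of the three full missing parts would take twice as many bytes; that factor-of-two saving is a small-scale instance of the coded multicasting gain.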