    Cellular-Broadcast Service Convergence through Caching for CoMP Cloud RANs

    Cellular and broadcast services have traditionally been treated independently due to different market requirements, resulting in different business models and orthogonal frequency allocations. However, with the advent of cheap memory and smart caching, this traditional paradigm can converge into a single system that provides both services efficiently. This paper focuses on multimedia delivery through an integrated network, including both a cellular (also known as unicast or broadband) and a broadcast last mile operating over shared spectrum. The subscribers of the network are equipped with a cache which can effectively create zero perceived latency for multimedia delivery, assuming that the content has been proactively and intelligently cached. The main objective of this work is to establish analytically the optimal content popularity threshold, based on an intuitive cost function; in other words, the aim is to derive which content should be broadcast and which should be unicast. To facilitate this, Cooperative Multi-Point (CoMP) joint processing algorithms are employed for the unicast and broadcast PHY transmissions. For practical implementation, the integrated network controller is assumed to have access to traffic statistics in terms of content popularity. Simulation results are provided to assess the gain in terms of total spectral efficiency. A conventional system, where the two networks operate independently, is used as a benchmark.
    Comment: Submitted to IEEE PIMRC 201
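    The broadcast/unicast split described above reduces to a popularity threshold when content is ranked by request probability: broadcasting a title costs a fixed amount of spectrum regardless of audience size, while unicast cost grows with the number of expected requests. A minimal sketch of that decision rule follows; the Zipf popularity model and the specific cost values are illustrative assumptions, not the paper's actual cost function.

```python
import numpy as np

def zipf_popularity(n_contents, alpha=0.8):
    """Zipf-distributed request probabilities for a ranked content catalogue."""
    ranks = np.arange(1, n_contents + 1)
    p = ranks ** -alpha
    return p / p.sum()

def broadcast_threshold(popularity, c_bc, c_uc, n_users):
    """Number of top-ranked titles for which broadcasting is cheaper.

    Broadcasting title i costs c_bc once, independent of audience size;
    unicasting costs c_uc per expected request (n_users * p_i). With
    titles sorted by descending popularity the optimal rule is a simple
    threshold: broadcast while the aggregate unicast cost exceeds c_bc.
    """
    expected_unicast_cost = n_users * popularity * c_uc
    return int(np.sum(expected_unicast_cost > c_bc))

p = zipf_popularity(1000)
k = broadcast_threshold(p, c_bc=5.0, c_uc=1.0, n_users=200)
```

    With more subscribers, more titles cross the threshold and the broadcast share grows, which is the intuition behind converging the two networks.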

    End-to-end QoE optimization through overlay network deployment

    In this paper, an overlay network for end-to-end QoE management is presented. The goal of this infrastructure is QoE optimization by routing around failures in the IP network and optimizing the bandwidth usage on the last mile to the client. The overlay network consists of components located both in the core and at the edge of the network. A number of overlay servers perform end-to-end QoS monitoring and maintain an overlay topology, allowing them to route around link failures and congestion. Overlay access components situated at the edge of the network are responsible for determining whether packets are sent to the overlay network, while proxy components manage the bandwidth on the last mile. This paper gives a detailed overview of the end-to-end architecture together with representative experimental results which comprehensively demonstrate the overlay network's ability to optimize the QoE.
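    The routing step the overlay servers perform can be pictured as a shortest-path search over the costs they measure (latency, loss, or a combined QoS score): when the direct IP path degrades, a detour through an overlay server wins. A minimal sketch, assuming a simple per-link cost dictionary rather than the paper's actual monitoring infrastructure:

```python
import heapq

def best_overlay_path(links, src, dst):
    """Dijkstra over measured link costs maintained by the overlay servers.

    links maps a node to a list of (neighbor, cost) pairs; returns
    (total_cost, path) or (inf, []) if dst is unreachable.
    """
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        if u == dst:
            break
        for v, w in links.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if dst not in done:
        return float("inf"), []
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return dist[dst], path[::-1]

# direct link A->B is congested (cost 10); the overlay server O offers a detour
measured = {"A": [("B", 10.0), ("O", 1.0)], "O": [("B", 1.0)]}
cost, path = best_overlay_path(measured, "A", "B")  # -> (2.0, ['A', 'O', 'B'])
```

    The overlay access components then decide per flow whether to inject packets into this detour or leave them on the default IP route.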

    Cooperative announcement-based caching for video-on-demand streaming

    Recently, video-on-demand (VoD) streaming services like Netflix and Hulu have gained a lot of popularity. This has led to a strong increase in bandwidth capacity requirements in the network. To reduce this network load, the design of appropriate caching strategies is of utmost importance. Based on the fact that, typically, a video stream is temporally segmented into smaller chunks that can be accessed and decoded independently, cache replacement strategies have been developed that take advantage of this temporal structure in the video. In this paper, two caching strategies are proposed that additionally take advantage of the phenomenon of binge watching, where users stream multiple consecutive episodes of the same series, which recent user behavior studies report is becoming everyday behavior. Taking this information into account allows us to predict future segment requests, even before the video playout has started. Two strategies are proposed, each with a different level of coordination between the caches in the network. Using a VoD request trace based on binge-watching user characteristics, the presented algorithms have been thoroughly evaluated in multiple network topologies with different characteristics, showing their general applicability. It was shown that in a realistic scenario, the proposed election-based caching strategy can outperform the state-of-the-art by 20% in terms of cache hit ratio while using 4% less network bandwidth.
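    The core mechanism, predicting segment requests from binge-watching behavior, can be sketched as an LRU cache that "pins" announced future segments (for example, the next episode once a user is partway through the current one) so that ordinary traffic cannot evict them. This is an illustrative simplification under assumed semantics, not the paper's election-based algorithm:

```python
from collections import OrderedDict

class AnnouncementCache:
    """LRU cache that pins segments announced as likely future requests."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # segment id -> pinned flag
        self.hits = 0
        self.requests = 0

    def announce(self, segment):
        # Pin an expected future request (e.g. next episode of a binge
        # session) so that it survives eviction until it is consumed.
        if segment in self.store:
            self.store[segment] = True
        else:
            self._insert(segment, pinned=True)

    def request(self, segment):
        self.requests += 1
        if segment in self.store:
            self.hits += 1
            self.store.move_to_end(segment)
            self.store[segment] = False  # consumed: unpin
            return True
        self._insert(segment, pinned=False)
        return False

    def _insert(self, segment, pinned):
        if len(self.store) >= self.capacity:
            # evict the least recently used unpinned entry
            for key, is_pinned in self.store.items():
                if not is_pinned:
                    del self.store[key]
                    break
            else:
                return  # everything is pinned: skip the insert
        self.store[segment] = pinned
        self.store.move_to_end(segment)
```

    For example, with capacity 2: a miss on segment s1, an announcement of s2, then a miss on s3 evicts s1 (not the pinned s2), so the later request for s2 is a hit. The cooperative variants in the paper additionally coordinate which cache in the network holds each announced segment.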