
    Fundamental Limits of Caching

    Caching is a technique to reduce peak traffic rates by prefetching popular content into memories at the end users. Conventionally, these memories are used to deliver requested content in part from a locally cached copy rather than through the network. The gain offered by this approach, which we term local caching gain, depends on the local cache size (i.e., the memory available at each individual user). In this paper, we introduce and exploit a second, global, caching gain not utilized by conventional caching schemes. This gain depends on the aggregate global cache size (i.e., the cumulative memory available at all users), even though there is no cooperation among the users. To evaluate and isolate these two gains, we introduce an information-theoretic formulation of the caching problem focusing on its basic structure. For this setting, we propose a novel coded caching scheme that exploits both local and global caching gains, leading to a multiplicative improvement in the peak rate compared to previously known schemes. In particular, the improvement can be on the order of the number of users in the network. Moreover, we argue that the performance of the proposed scheme is within a constant factor of the information-theoretic optimum for all values of the problem parameters. Comment: To appear in IEEE Transactions on Information Theory
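    To make the distinction between the local and global caching gains concrete, the sketch below compares the peak rates of an uncoded scheme (local gain only) and a coded scheme (local and global gains). The closed-form rate expressions are the ones commonly quoted for this coded-caching setting and are an assumption here, since the abstract itself does not state them.

```python
# Minimal sketch comparing conventional (uncoded) and coded caching peak rates
# for N files, K users, and a per-user cache of M files. The formulas below
# are the standard expressions associated with this setting (an assumption,
# not taken verbatim from the abstract).

def uncoded_rate(K, M, N):
    """Conventional caching: each user still needs the uncached
    fraction (1 - M/N) of its requested file (local gain only)."""
    return K * (1 - M / N) * min(1, N / K)

def coded_rate(K, M, N):
    """Coded caching: the local gain (1 - M/N) is multiplied by a
    global gain 1 / (1 + K*M/N) that grows with the aggregate cache."""
    return K * (1 - M / N) / (1 + K * M / N)

if __name__ == "__main__":
    N, K = 100, 100                 # library size and number of users
    for M in (10, 25, 50):          # per-user cache size, in files
        print(f"M={M:3d}  uncoded={uncoded_rate(K, M, N):7.2f}  "
              f"coded={coded_rate(K, M, N):7.2f}")
```

    For these parameters the coded rate is roughly a factor K*M/N smaller than the uncoded rate, which illustrates the multiplicative, order-of-the-number-of-users improvement claimed in the abstract.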

    Service Migration from Cloud to Multi-tier Fog Nodes for Multimedia Dissemination with QoE Support.

    A wide range of multimedia services is expected to be offered to mobile users via various wireless access networks. Even the integration of Cloud Computing into such networks does not support an adequate Quality of Experience (QoE) in areas with high demand for multimedia content. Fog computing has been conceptualized to facilitate the deployment of new services that cloud computing cannot provide, particularly those demanding QoE guarantees. These services are provided by fog nodes located at the network edge, which are capable of virtualizing their functions/applications. Service migration from the cloud to fog nodes can be triggered by request patterns and timing issues. To the best of our knowledge, existing works on fog computing focus on architecture and fog node deployment issues. In this article, we describe the operational impacts and benefits associated with service migration from the cloud to multi-tier fog computing for video distribution with QoE support. In addition, we evaluate such service migration for video services. Finally, we present potential research challenges and trends
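    To illustrate what "migration triggered by request patterns and timing" could look like in practice, here is a hypothetical, minimal decision rule: migrate a video service toward the fog tier once local demand within a time window crosses a threshold. The class name, threshold, and window are illustrative assumptions, not the mechanism evaluated in the article.

```python
# Hypothetical request-pattern-driven migration trigger (illustrative only).
from collections import deque
import time

class MigrationTrigger:
    def __init__(self, rate_threshold=50, window_s=60):
        self.rate_threshold = rate_threshold   # requests per window that justify an edge copy
        self.window_s = window_s               # observation window, in seconds
        self.requests = deque()                # timestamps of recent requests from this edge region

    def record_request(self):
        now = time.time()
        self.requests.append(now)
        # drop requests that fell out of the observation window
        while self.requests and now - self.requests[0] > self.window_s:
            self.requests.popleft()

    def should_migrate(self):
        """Suggest moving the video service from the cloud to the fog tier
        once local demand within the window exceeds the threshold."""
        return len(self.requests) >= self.rate_threshold
```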

    Improving BitTorrent's Peer Selection For Multimedia Content On-Demand Delivery

    The great efficiency achieved by the BitTorrent protocol for the distribution of large amounts of data inspired its adoption for on-demand multimedia content delivery over the Internet. Since the protocol was not designed for this purpose, several adjustments have been proposed to meet the related QoS requirements, such as low startup delay and smooth playback continuity. Accordingly, this paper introduces a BitTorrent-like proposal named Quota-Based Peer Selection (QBPS), which mainly adapts the original peer-selection policy of the BitTorrent protocol. It is validated by means of simulations and competitive analysis. The results show that QBPS outperforms other recent proposals in the literature; for instance, it achieves a throughput improvement of up to 48.0% in low-provision-capacity scenarios where users are highly interactive. Comment: International Journal of Computer Networks & Communications (IJCNC) Vol.7, No.6, November 201
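    A quota-based adaptation of BitTorrent's peer selection can be pictured as splitting the upload slots between playback-urgent requests and the classic rate-based ordering. The sketch below is a hypothetical illustration of that idea; the quota split, scoring, and field names are assumptions, not the exact QBPS policy.

```python
# Hypothetical quota-based peer selection: reserve part of the unchoke slots
# for peers requesting near-playback pieces, fill the rest by upload rate.

def select_peers(peers, slots=4, urgent_quota=0.5):
    """peers: list of dicts with 'id', 'urgency' (seconds until playback of
    the requested piece) and 'upload_rate' (tit-for-tat contribution)."""
    urgent_slots = int(slots * urgent_quota)
    # Most urgent requests first (closest to their playback deadline).
    by_urgency = sorted(peers, key=lambda p: p["urgency"])
    chosen = by_urgency[:urgent_slots]
    # Remaining slots follow the classic rate-based (tit-for-tat) ordering.
    rest = [p for p in peers if p not in chosen]
    by_rate = sorted(rest, key=lambda p: p["upload_rate"], reverse=True)
    chosen += by_rate[:slots - len(chosen)]
    return [p["id"] for p in chosen]

if __name__ == "__main__":
    swarm = [
        {"id": "A", "urgency": 2.0,  "upload_rate": 30},
        {"id": "B", "urgency": 45.0, "upload_rate": 80},
        {"id": "C", "urgency": 5.0,  "upload_rate": 10},
        {"id": "D", "urgency": 60.0, "upload_rate": 50},
        {"id": "E", "urgency": 20.0, "upload_rate": 70},
    ]
    print(select_peers(swarm))  # -> ['A', 'C', 'B', 'E']
```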

    Social-aware Opportunistic Routing Protocol based on User's Interactions and Interests

    Nowadays, routing proposals must deal with a panoply of heterogeneous devices, intermittent connectivity, and the users' constant need for communication, even in rather challenging networking scenarios. Thus, we propose a Social-aware Content-based Opportunistic Routing Protocol, SCORP, that considers the users' social interactions and their interests to improve data delivery in dense urban scenarios. Through simulations, using scenarios based on synthetic mobility and human traces, we compare the performance of our solution against two other social-aware solutions, dLife and Bubble Rap, and the social-oblivious Spray and Wait, in order to show that the combination of social awareness and content knowledge can be beneficial when disseminating data in challenging networks
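    A social- and interest-aware forwarding decision of the kind described here can be sketched as follows: replicate a message to an encountered node if that node is interested in the content itself, or if its social ties to interested nodes are strong enough. The weights, threshold, and data layout are illustrative assumptions, not SCORP's actual metric.

```python
# Hypothetical social/interest-aware forwarding rule (illustrative only).

def forward_score(encounter, message, social_weight=0.6, interest_weight=0.4):
    """encounter: dict with 'interests' (set of content types) and
    'tie_strength' (dict: node id -> social tie strength in [0, 1]).
    message: dict with 'content_type' and 'interested_nodes'."""
    direct = 1.0 if message["content_type"] in encounter["interests"] else 0.0
    ties = [encounter["tie_strength"].get(n, 0.0) for n in message["interested_nodes"]]
    social = max(ties) if ties else 0.0
    return interest_weight * direct + social_weight * social

def should_forward(encounter, message, threshold=0.3):
    """Replicate the message to the encountered node only if the combined
    interest/social score clears the threshold."""
    return forward_score(encounter, message) >= threshold
```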

    Design and analysis of a beacon-less routing protocol for large volume content dissemination in vehicular ad hoc networks

    Large volume content dissemination is pursued by the growing number of high-quality applications for Vehicular Ad hoc NETworks (VANETs), e.g., the live road surveillance service and the video-based overtaking assistant service. Given the highly dynamic vehicular network topology, beacon-less routing protocols have proven efficient at balancing system performance and control overhead. However, to the authors' best knowledge, routing design for large volume content has not been well considered in previous work, and it introduces new challenges, e.g., the enhanced connectivity requirement for a radio link. In this paper, a link Lifetime-aware Beacon-less Routing Protocol (LBRP) is designed for large volume content delivery in VANETs. Each vehicle makes its forwarding decision based on the message header information and its current state, including speed and position. A semi-Markov process analytical model is proposed to evaluate the expected delay in constructing a routing path with LBRP. Simulations show that the proposed LBRP scheme outperforms traditional dissemination protocols in providing a low end-to-end delay. The analytical model also exhibits a good match with Monte Carlo simulations on the delay estimation
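    To give a feel for the kind of beacon-less, header-driven forwarding decision described above, the sketch below has each receiver score itself from the sender's position and speed carried in the message header plus its own state, so that better-placed vehicles with longer-lived links answer first. The constants, the 1-D road model, and the scoring function are illustrative assumptions, not LBRP's actual rules.

```python
# Hypothetical timer-based, beacon-less forwarding decision (illustrative only).
import math

COMM_RANGE_M = 250.0     # assumed radio range
MAX_WAIT_S = 0.05        # maximum contention delay before forwarding

def link_lifetime(sender_pos, sender_vel, my_pos, my_vel, r=COMM_RANGE_M):
    """Rough estimate of how long the two vehicles stay within range r,
    assuming straight-line motion at constant speed (1-D road model)."""
    dx = my_pos - sender_pos
    dv = my_vel - sender_vel
    if dv == 0:
        return math.inf if abs(dx) <= r else 0.0
    # time until the relative position dx + dv*t reaches the range boundary
    t = (math.copysign(r, dv) - dx) / dv
    return max(t, 0.0)

def contention_delay(progress_m, lifetime_s, r=COMM_RANGE_M):
    """Shorter waiting time for receivers that make more forward progress
    toward the destination and whose link to the sender lives longer,
    so they win the contention and forward first."""
    progress_term = 1.0 - min(max(progress_m / r, 0.0), 1.0)
    lifetime_term = 1.0 / (1.0 + lifetime_s)
    return MAX_WAIT_S * 0.5 * (progress_term + lifetime_term)
```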