559 research outputs found

    Shrinking VOD Traffic via Rényi-Entropic Optimal Transport

    Get PDF
    In response to the exponential surge in Internet Video on Demand (VOD) traffic, numerous research endeavors have concentrated on optimizing and enhancing infrastructure efficiency. In contrast, this paper explores whether users’ demand patterns can be shaped to reduce the pressure on infrastructure. Our main idea is to design a mechanism that alters the distribution of user requests to another distribution which is much more cache-efficient, but still remains ‘close enough’ (in the sense of cost) to fulfil each individual user’s preference. To quantify the cache footprint of VOD traffic, we propose a novel application of Rényi entropy as its proxy, capturing the ‘richness’ (the number of distinct videos or cache size) and the ‘evenness’ (the relative popularity of video accesses) of the on-demand video distribution. We then demonstrate how to decrease this metric by formulating a problem drawing on the mathematical theory of optimal transport (OT). Additionally, we establish a key equivalence theorem: minimizing Rényi entropy corresponds to maximizing soft cache hit ratio (SCHR) — a variant of cache hit ratio allowing similarity-based video substitutions. Evaluation on a real-world, city-scale video viewing dataset reveals a remarkable 83% reduction in cache size (associated with VOD caching traffic). Crucially, in alignment with the above-mentioned equivalence theorem, our approach yields a significant uplift to SCHR, achieving close to 100%.
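
    As a minimal sketch (not the paper's implementation), the snippet below illustrates how Rényi entropy can act as a proxy for cache footprint: exp(H_alpha) behaves like an 'effective catalogue size', so a skewed, cache-friendly popularity profile yields a much smaller value than a uniform one. The entropy order alpha = 2 and the Zipf-like popularity profile are illustrative assumptions, not taken from the abstract.

        import numpy as np

        def renyi_entropy(p, alpha=2.0):
            """Rényi entropy H_alpha(p) of a discrete distribution p (alpha != 1)."""
            p = np.asarray(p, dtype=float)
            p = p / p.sum()                      # normalise request counts to a distribution
            return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

        # Toy popularity profiles over a catalogue of 1000 videos (hypothetical numbers).
        uniform = np.ones(1000)                          # every video equally popular
        zipf = 1.0 / np.arange(1, 1001) ** 1.2           # skewed, cache-friendly profile

        for name, p in [("uniform", uniform), ("zipf", zipf)]:
            h = renyi_entropy(p)
            print(f"{name:8s}  H_2 = {h:5.2f}   effective cache size ~ {np.exp(h):8.1f}")

    A lower Rényi entropy thus corresponds to a smaller effective cache, which is the quantity the optimal-transport reshaping of user requests aims to reduce.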

    Proactive content caching in future generation communication networks: Energy and security considerations

    Get PDF
    The proliferation of hand-held devices and Internet of Things (IoT) applications has heightened demand for popular content download. A high volume of content streaming/downloading services during peak hours can cause network congestion. Proactive content caching has emerged as a prospective solution to tackle this congestion problem. In proactive content caching, data storage units are used to store popular content in helper nodes at the network edge. This contributes to a reduction of peak traffic load and network congestion. However, data storage units require additional energy, which poses a challenge for researchers who aim to reduce energy consumption by up to 90% in next-generation networks. This thesis presents proactive content caching techniques to reduce grid energy consumption by utilizing renewable energy sources to power up data storage units in helper nodes. The integration of renewable energy sources with proactive caching is a significant challenge due to the intermittent nature of renewable energy sources and investment costs. In this thesis, this challenge is tackled by introducing strategies to determine the optimal time of the day for content caching and optimal scheduling of caching nodes. The proposed strategies consider not only the availability of renewable energy but also temporal changes in network traffic to reduce associated energy costs. While proactive caching can facilitate the reduction of peak traffic load and the integration of renewable energy, cached content objects at helper nodes are often more vulnerable to malicious attacks due to less stringent security at edge nodes. Potential content leakage can lead to catastrophic consequences, particularly for cache-equipped Industrial Internet of Things (IIoT) applications. In this thesis, the concept of ‘trusted caching nodes’ (TCNs) is introduced. TCNs cache popular content objects and provide security services to connected links. The proposed study optimally allocates TCNs and selects the most suitable content forwarding paths. Furthermore, a caching strategy is designed for mobile edge computing systems to support IoT task offloading. The strategy optimally assigns security resources to offloaded tasks while satisfying their individual requirements. However, security measures often contribute to overheads in terms of both energy consumption and delay. Consequently, in this thesis, caching techniques have been designed to investigate the trade-off between energy consumption and probable security breaches. Overall, this thesis contributes to the current literature by simultaneously investigating energy and security aspects of caching systems whilst introducing solutions to relevant research problems.
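
    As a toy illustration of the scheduling idea (not the thesis's actual formulation), the sketch below picks the hour of day at which prefetching popular content draws the least grid energy: cache when renewable supply is high and background traffic is low. All numbers and the cost model are hypothetical.

        # Hypothetical hourly renewable supply at a helper node and relative traffic load.
        renewable = [0, 0, 0, 0, 1, 2, 4, 6, 7, 8, 8, 7, 6, 5, 4, 3, 2, 1, 0, 0, 0, 0, 0, 0]
        traffic   = [2, 1, 1, 1, 1, 2, 3, 5, 7, 8, 8, 7, 6, 6, 6, 7, 8, 9, 9, 8, 6, 5, 4, 3]

        CACHING_ENERGY = 5.0      # energy needed to prefetch one content batch (hypothetical)
        GRID_PRICE = 1.0          # cost per unit of grid energy (hypothetical)
        CONGESTION_WEIGHT = 0.1   # penalty for prefetching during busy hours (hypothetical)

        def grid_cost(hour):
            # Energy not covered by renewables must come from the grid; the traffic term
            # discourages prefetching during peak hours.
            shortfall = max(0.0, CACHING_ENERGY - renewable[hour])
            return GRID_PRICE * shortfall + CONGESTION_WEIGHT * traffic[hour]

        best_hour = min(range(24), key=grid_cost)
        print(f"prefetch at hour {best_hour:02d}:00, cost {grid_cost(best_hour):.2f}")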

    Fairness in Network-Friendly Recommendations

    Full text link
    As mobile traffic is dominated by content services (e.g., video), which typically use recommendation systems, the paradigm of network-friendly recommendations (NFR) has been proposed recently to boost network performance by promoting content that can be efficiently delivered (e.g., cached at the edge). NFR improves network performance, however, at the cost of being unfair towards certain contents compared to standard recommendations. This unfairness is a side effect of NFR that has not been studied in the literature. Nevertheless, retaining fairness among contents is a key operational requirement for content providers. This paper is the first to study fairness in NFR and to design fair-NFR. Specifically, we use a set of metrics that capture different notions of fairness, and study the unfairness created by existing NFR schemes. Our analysis reveals that NFR can be significantly unfair. We identify an inherent trade-off between the network gains achieved by NFR and the resulting unfairness, and derive bounds for this trade-off. We show that existing NFR schemes frequently operate far from the bounds, i.e., there is room for improvement. To this end, we formulate the design of Fair-NFR (i.e., NFR with fairness guarantees compared to the baseline recommendations) as a linear optimization problem. Our results show that Fair-NFR can achieve high network gains (similar to non-fair NFR) with little unfairness. Comment: IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM), 202
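
    As one illustrative measurement of the unfairness the paper studies (the paper uses a set of fairness metrics and a linear program for Fair-NFR, neither of which is reproduced here), the sketch below applies Jain's fairness index to per-content recommendation exposure; the exposure counts are hypothetical.

        import numpy as np

        def jain_index(x):
            """Jain's fairness index: 1 for perfectly even exposure, 1/n for maximal skew."""
            x = np.asarray(x, dtype=float)
            return x.sum() ** 2 / (len(x) * np.sum(x ** 2))

        # Hypothetical number of times each of 8 contents is recommended.
        baseline_exposure = np.array([10, 9, 8, 8, 7, 7, 6, 5])    # standard recommender
        nfr_exposure      = np.array([25, 20, 10, 3, 1, 1, 0, 0])  # cached items promoted heavily

        print("baseline fairness:", round(jain_index(baseline_exposure), 3))
        print("NFR fairness     :", round(jain_index(nfr_exposure), 3))

    A drop in the index from near 1 towards 1/n indicates that exposure has concentrated on a few (cache-friendly) contents, which is the kind of side effect Fair-NFR is designed to bound.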

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Full text link
    Future wireless networks have substantial potential to support a broad range of complex, compelling applications in both military and civilian fields, in which users can enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in the compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so that they can be invoked for hitherto unexplored services as well as scenarios of future wireless networks. Comment: 46 pages, 22 fig