
    Low-Latency and Fresh Content Provision in Information-Centric Vehicular Networks

    In this paper, the content service provision of information-centric vehicular networks (ICVNs) is investigated from the aspect of mobile edge caching, considering dynamic driving-related context information. To provide up-to-date information with low latency, two schemes are designed for cache update and content delivery at the roadside units (RSUs). The roadside unit centric (RSUC) scheme decouples cache update and content delivery through bandwidth splitting, where the cached content items are updated regularly in a round-robin manner. The request adaptive (ReA) scheme updates the cached content items upon user requests with certain probabilities. The performance of both proposed schemes is analyzed, and the average age of information (AoI) and service latency are derived in closed form. Surprisingly, the AoI-latency trade-off does not always exist, and frequent cache updates can degrade both metrics. Thus, the RSUC and ReA schemes are further optimized to balance the AoI and latency. Extensive simulations are conducted on the SUMO and OMNeT++ simulators, and the results show that the proposed schemes can reduce service latency by up to 80% while guaranteeing content freshness in heavily loaded ICVNs.
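
    As a rough illustration of the request adaptive (ReA) idea described above, the sketch below refreshes a cached item with a fixed probability whenever it is requested. This is a minimal Python sketch under assumed names (RSUCache, update_prob, fetch_fresh), not the paper's implementation; the round-robin RSUC update could be sketched analogously.

    ```python
    import random

    class RSUCache:
        """Sketch of the request adaptive (ReA) idea: on each request, the RSU
        refreshes the cached item from the content source with a fixed
        probability before serving it."""

        def __init__(self, update_prob, fetch_fresh):
            self.update_prob = update_prob   # per-request refresh probability (assumed parameter)
            self.fetch_fresh = fetch_fresh   # callable: item_id -> fresh content (assumed interface)
            self.store = {}                  # item_id -> cached content

        def serve(self, item_id):
            # Refresh on a cache miss, or with probability update_prob on a hit.
            if item_id not in self.store or random.random() < self.update_prob:
                self.store[item_id] = self.fetch_fresh(item_id)
            return self.store[item_id]

    # Hypothetical usage: refresh a cached traffic report on 20% of requests.
    cache = RSUCache(update_prob=0.2, fetch_fresh=lambda item_id: f"fresh copy of {item_id}")
    print(cache.serve("traffic-report"))
    ```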

    User Dynamics-Aware Edge Caching and Computing for Mobile Virtual Reality

    In this paper, we present a novel content caching and delivery approach for mobile virtual reality (VR) video streaming. The proposed approach aims to maximize VR video streaming performance, i.e., to minimize the video frame missing rate, by proactively caching popular VR video chunks and adaptively scheduling computing resources at an edge server based on user and network dynamics. First, we design a scalable content placement scheme for deciding which video chunks to cache at the edge server based on the tradeoff between computing and caching resource consumption. Second, we propose a machine learning-assisted VR video delivery scheme, which allocates computing resources at the edge server to satisfy video delivery requests from multiple VR headsets. A Whittle index-based method is adopted to reduce the video frame missing rate by identifying network and user dynamics with low signaling overhead. Simulation results demonstrate that the proposed approach can significantly improve VR video streaming performance over conventional caching and computing resource scheduling strategies. (38 pages, 13 figures, single column, double spaced; published in the IEEE Journal of Selected Topics in Signal Processing.)
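
    The scheduling step can be pictured as index-based priority allocation: each pending request is assigned a priority index, and the edge server's computing slots go to the highest-index requests. The Python sketch below shows only that generic structure; index_fn, the slack_ms field, and the example requests are hypothetical placeholders, and the paper's actual Whittle index computation is not reproduced here.

    ```python
    def schedule_by_index(requests, capacity, index_fn):
        """Schematic index-based scheduling: rank pending VR frame requests by a
        per-request priority index and grant the available computing slots to the
        highest-index requests. index_fn stands in for the Whittle index."""
        ranked = sorted(requests, key=index_fn, reverse=True)
        return ranked[:capacity]

    # Hypothetical usage: prioritise the headset closest to missing its frame deadline.
    requests = [{"headset": 1, "slack_ms": 4}, {"headset": 2, "slack_ms": 9}]
    served = schedule_by_index(requests, capacity=1, index_fn=lambda r: -r["slack_ms"])
    print(served)  # [{'headset': 1, 'slack_ms': 4}]
    ```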

    Randomised Geographic Caching and its Applications in Wireless Networks

    Randomised (or probabilistic) geographic caching is a proactive content placement strategy that has attracted a lot of attention because it can greatly simplify cache-management problems at the wireless edge. It diversifies content placement over caches and applies to scenarios where a request can potentially be served by multiple cache memories. Its simplicity and strength are due to randomisation, which allows one to formulate continuous optimisation problems for content placement over large homogeneous geographic areas. These can be solved to optimality by standard convex methods and can even yield closed-form solutions in specific cases. In this way the algorithmic obstacles from NP-hardness are avoided and optimal solutions can be derived with low computational cost. Randomised caching has a large spectrum of applications in real-world wireless problems, including femto-caching, multi-tier networks, device-to-device communications, mobility, mm-wave, security, UAVs, and more. In this chapter we formally present the main policy with its applications in various wireless scenarios. We further introduce some very useful extensions related to unequal file sizes and content placement with neighbourhood dependence.
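
    To make the policy concrete: each cache stores file i with a prescribed probability p_i, and every realised placement must respect the cache capacity. The Python sketch below illustrates one known construction from the randomised-caching literature for drawing such a placement when file sizes are equal and the probabilities sum to the cache size; the function name and interface are assumptions for illustration only.

    ```python
    import random

    def sample_cache_placement(probs, cache_size):
        """Draw one cache placement so that file f is cached with probability
        probs[f]. Assumes unit file sizes, each probs[f] <= 1, and
        sum(probs.values()) == cache_size. The probabilities are laid end-to-end
        over [0, cache_size); a single uniform offset u is drawn, and the files
        covering the points u, u+1, ..., u+cache_size-1 are cached, which yields
        exactly cache_size distinct files with the right marginal probabilities."""
        assert abs(sum(probs.values()) - cache_size) < 1e-9
        intervals, acc = [], 0.0
        for f, p in probs.items():
            intervals.append((acc, acc + p, f))
            acc += p
        u = random.random()                      # one shared uniform offset in [0, 1)
        placement = []
        for k in range(cache_size):
            point = u + k                        # one sample point per unit of capacity
            for lo, hi, f in intervals:
                if lo <= point < hi:
                    placement.append(f)
                    break
        return placement

    # Example: cache of size 2, four files with caching probabilities summing to 2.
    print(sample_cache_placement({"a": 0.9, "b": 0.6, "c": 0.3, "d": 0.2}, cache_size=2))
    ```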

    The Design of Dynamic Probabilistic Caching with Time-Varying Content Popularity

    In this paper, we design dynamic probabilistic caching for the scenario in which the instantaneous content popularity may vary with time while the average content popularity over a time window can be predicted. Based on the average content popularity, optimal content caching probabilities can be found, e.g., by solving optimization problems, and existing results in the literature can implement the optimal caching probabilities via static content placement. The objective of this work is to design dynamic probabilistic caching that: i) converges (in distribution) to the optimal content caching probabilities under time-invariant content popularity, and ii) adapts to the instantaneous content popularity when the popularity is time-varying. Achieving this objective requires a novel design of dynamic content replacement, because static caching cannot adapt to varying content popularity, while classic dynamic replacement policies, such as LRU, cannot converge to target caching probabilities (as they do not exploit any content popularity information). We model the design of the dynamic probabilistic replacement policy as the problem of finding the state transition probability matrix of a Markov chain and propose a method to generate and refine the transition probability matrix. Extensive numerical results are provided to validate the effectiveness of the proposed design.
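
    To illustrate the Markov-chain view: each cache state is the set of currently cached files, a replacement policy is a transition matrix over these states, and the per-file caching probabilities are read off the chain's stationary distribution. The Python sketch below is a small verification utility under those assumptions, useful for checking a candidate transition matrix against target probabilities; it is not the generation and refinement method proposed in the paper, and it assumes the chain is irreducible.

    ```python
    import itertools
    import numpy as np

    def content_caching_probs(transition_matrix, states, num_files):
        """Given a replacement policy expressed as a transition matrix over cache
        states (each state is a tuple of cached file ids), compute the stationary
        distribution of the Markov chain and the resulting per-file caching
        probabilities. Assumes the chain is irreducible."""
        P = np.asarray(transition_matrix, dtype=float)
        evals, evecs = np.linalg.eig(P.T)        # stationary distribution = left eigenvector for eigenvalue 1
        pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
        pi = pi / pi.sum()
        probs = np.zeros(num_files)
        for s, state in enumerate(states):
            for f in state:
                probs[f] += pi[s]                # total mass of states containing file f
        return probs

    # Example: 3 files, cache size 2, and a uniform transition matrix over the 2-subsets.
    states = list(itertools.combinations(range(3), 2))
    uniform = np.full((len(states), len(states)), 1.0 / len(states))
    print(content_caching_probs(uniform, states, num_files=3))  # approximately [2/3, 2/3, 2/3]
    ```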