
    Intelligent cache and buffer optimization for mobile VR adaptive transmission in 5G edge computing networks

    Virtual Reality (VR) is a key industry for the future development of the digital economy. Mobile VR offers advantages in mobility, light weight, and cost-effectiveness, and has gradually become the mainstream implementation of VR. This paper proposes a mobile VR video adaptive transmission mechanism based on an intelligent caching and hierarchical buffering strategy in Mobile Edge Computing (MEC)-equipped 5G networks, targeting the low-latency requirements of mobile VR services and flexible buffer management for adaptive VR video transmission. To support proactive caching of VR content and intelligent buffer management, users' behavioral similarity and head-movement trajectories are jointly used for viewpoint prediction. First, tile-based content is proactively cached at the MEC nodes according to the popularity of the VR content. Second, a hierarchical buffer-based adaptive update algorithm is presented, which jointly considers bandwidth, buffer, and predicted viewpoint status to update tile chunks in the client buffer. Then the buffer update problem is decomposed and modeled as an optimization problem, and corresponding solution algorithms are presented. Finally, simulation results show that the adaptive caching algorithm based on the 5G intelligent edge and the hierarchical buffer strategy improves the user experience under bandwidth fluctuations, and that the proposed viewpoint prediction method improves viewpoint prediction accuracy by 15%.
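    The popularity-driven proactive caching step described above can be sketched as follows. This is a minimal, hypothetical simplification: the MEC node counts tile requests and proactively keeps the k most popular tiles cached; all names and the capacity policy are illustrative assumptions, not the paper's exact scheme.

```python
from collections import Counter

class MECTileCache:
    """Toy popularity-based proactive cache for VR tiles (illustrative only)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.requests = Counter()  # tile_id -> observed request count (popularity)
        self.cached = set()

    def record_request(self, tile_id):
        # Track how often each tile is requested.
        self.requests[tile_id] += 1

    def refresh(self):
        # Proactively cache the k most popular tiles seen so far.
        top = self.requests.most_common(self.capacity)
        self.cached = {tile_id for tile_id, _ in top}

    def is_hit(self, tile_id):
        return tile_id in self.cached
```

    In use, the node would periodically call `refresh()` so the cache tracks shifting content popularity; the paper's viewpoint prediction would additionally bias which tiles are counted.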

    Using Grouped Linear Prediction and Accelerated Reinforcement Learning for Online Content Caching

    Proactive caching is an effective way to alleviate peak-hour traffic congestion by prefetching popular contents at the wireless network edge. Maximizing the caching efficiency requires knowledge of the content popularity profile, which however is often unavailable in advance. In this paper, we first propose a new linear prediction model, named the grouped linear model (GLM), to estimate future content requests based on historical data. Unlike many existing works that assume a static content popularity profile, our model can adapt to the temporal variation of content popularity in practical systems caused by the arrival of new contents and the dynamics of user preference. Based on the predicted content requests, we then propose a reinforcement learning approach with model-free acceleration (RLMA) for online cache replacement, taking into account both the cache hits and the replacement cost. This approach accelerates the learning process in a non-stationary environment by generating imaginary samples for Q-value updates. Numerical results based on real-world traces show that the proposed prediction and learning based online caching policy outperforms all considered existing schemes.
    Comment: 6 pages, 4 figures, ICC 2018 workshop
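    The acceleration idea in this abstract, replaying stored transitions as extra "imaginary" Q-value updates during cache replacement, can be sketched in a toy form. Everything below is a hedged illustration under strong simplifying assumptions (a reward that only charges the replacement cost, slot indices as actions); it is not the paper's RLMA algorithm.

```python
import random
from collections import defaultdict

def q_cache_replacement(request_trace, cache_size, alpha=0.2, gamma=0.9,
                        eps=0.1, replacement_cost=0.5, imaginary=5, seed=0):
    """Toy Q-learning cache replacement with imagined replay (illustrative)."""
    rng = random.Random(seed)
    Q = defaultdict(float)   # (state, slot) -> estimated value
    memory = []              # stored transitions for imagined replay
    cache, hits = [], 0
    actions = range(cache_size)

    for item in request_trace:
        state = tuple(sorted(cache))
        if item in cache:
            hits += 1
            continue
        if len(cache) < cache_size:
            cache.append(item)
            continue
        # Epsilon-greedy choice of which cache slot to evict.
        if rng.random() < eps:
            a = rng.randrange(cache_size)
        else:
            a = max(actions, key=lambda x: Q[(state, x)])
        cache[a] = item
        next_state = tuple(sorted(cache))
        memory.append((state, a, -replacement_cost, next_state))
        # One real update plus several imagined updates replayed from memory.
        for s, act, r, s2 in [memory[-1]] + rng.choices(memory, k=imaginary):
            best_next = max(Q[(s2, x)] for x in actions)
            Q[(s, act)] += alpha * (r + gamma * best_next - Q[(s, act)])
    return hits, cache
```

    The imagined replays reuse past transitions to perform extra Q-updates per step, which is the acceleration mechanism the abstract alludes to, here in a deliberately minimal form.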

    Content Popularity Prediction Towards Location-Aware Mobile Edge Caching

    Mobile edge caching enables content delivery within the radio access network, which effectively alleviates the backhaul burden and reduces response time. To fully exploit edge storage resources, the most popular contents should be identified and cached. Observing that user demands for certain contents vary greatly across locations, this paper devises location-customized caching schemes to maximize the total content hit rate. Specifically, a linear model is used to estimate the future content hit rate. For the case where the model noise is zero-mean, a ridge regression based online algorithm with positive perturbation is proposed. Regret analysis indicates that the proposed algorithm asymptotically approaches the optimal caching strategy in the long run. When the noise structure is unknown, an H∞ filter based online algorithm is further proposed, taking a prescribed threshold as input, which guarantees prediction accuracy even under the worst-case noise process. Both online algorithms require no training phases and are hence robust to time-varying user demands. The underlying causes of the estimation errors of both algorithms are numerically analyzed. Moreover, extensive experiments on a real-world dataset are conducted to validate the applicability of the proposed algorithms. It is demonstrated that these algorithms can be applied to scenarios with different noise features and are able to make adaptive caching decisions, achieving a content hit rate comparable to that of the hindsight optimal strategy.
    Comment: to appear in IEEE Trans. Multimedia
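    The ridge regression based online estimation idea can be illustrated with a deliberately minimal one-feature sketch: regress the next slot's request count on the current slot's count, updating the running sums online, and cache the contents with the highest predictions. The single-feature model and the regularization value are illustrative assumptions, not the paper's exact formulation.

```python
def online_ridge(lam=1.0):
    """One-feature online ridge estimator (illustrative sketch).

    Returns (update, predict): update(x, y) folds in one observation
    and returns the current coefficient; predict(x) applies it.
    """
    sxx = lam   # running sum of x^2, seeded with the ridge penalty lam
    sxy = 0.0   # running sum of x * y

    def update(x, y):
        nonlocal sxx, sxy
        sxx += x * x
        sxy += x * y
        return sxy / sxx  # closed-form ridge coefficient w

    def predict(x):
        return (sxy / sxx) * x

    return update, predict
```

    Because the sums are updated incrementally, no training phase is needed, which mirrors the abstract's point that the online algorithms adapt directly to time-varying demand.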