
    Using Grouped Linear Prediction and Accelerated Reinforcement Learning for Online Content Caching

    Proactive caching is an effective way to alleviate peak-hour traffic congestion by prefetching popular contents at the wireless network edge. Maximizing caching efficiency requires knowledge of the content popularity profile, which, however, is often unavailable in advance. In this paper, we first propose a new linear prediction model, named the grouped linear model (GLM), to estimate future content requests based on historical data. Unlike many existing works that assume a static content popularity profile, our model adapts to the temporal variation of content popularity in practical systems caused by the arrival of new contents and the dynamics of user preferences. Based on the predicted content requests, we then propose a reinforcement learning approach with model-free acceleration (RLMA) for online cache replacement that takes into account both cache hits and replacement cost. This approach accelerates the learning process in a non-stationary environment by generating imaginary samples for Q-value updates. Numerical results based on real-world traces show that the proposed prediction and learning based online caching policy outperforms all considered existing schemes. Comment: 6 pages, 4 figures, ICC 2018 workshop.
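
    For intuition only, the sketch below shows a Dyna-style tabular Q-learning loop in which each real transition also triggers imaginary updates drawn from a learned model; the class, state/action encoding, and hyperparameters are illustrative assumptions, not the paper's exact GLM/RLMA specification.

```python
# Minimal sketch (assumptions: Dyna-style acceleration, tabular Q-values,
# hypothetical state/action encoding; not the paper's exact RLMA algorithm).
import random
from collections import defaultdict

class AcceleratedQLearner:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1, n_imaginary=10):
        self.Q = defaultdict(float)   # Q[(state, action)]
        self.model = {}               # learned model: (s, a) -> (reward, next_state)
        self.alpha, self.gamma = alpha, gamma
        self.epsilon, self.n_imaginary = epsilon, n_imaginary

    def act(self, state, actions):
        # epsilon-greedy choice among candidate cache-replacement actions
        if random.random() < self.epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: self.Q[(state, a)])

    def _update(self, s, a, r, s_next, next_actions):
        best_next = max((self.Q[(s_next, a2)] for a2 in next_actions), default=0.0)
        self.Q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.Q[(s, a)])

    def learn(self, s, a, r, s_next, next_actions):
        # real experience: reward trades off cache hits against replacement cost
        self._update(s, a, r, s_next, next_actions)
        self.model[(s, a)] = (r, s_next)
        # accelerated phase: replay imaginary samples drawn from the learned model
        for _ in range(min(self.n_imaginary, len(self.model))):
            (si, ai), (ri, sni) = random.choice(list(self.model.items()))
            self._update(si, ai, ri, sni, next_actions)
```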

    Content Popularity Prediction Towards Location-Aware Mobile Edge Caching

    Mobile edge caching enables content delivery within the radio access network, which effectively alleviates the backhaul burden and reduces response time. To fully exploit edge storage resources, the most popular contents should be identified and cached. Observing that user demands for certain contents vary greatly across locations, this paper devises location-customized caching schemes to maximize the total content hit rate. Specifically, a linear model is used to estimate the future content hit rate. For the case where the model noise is zero-mean, a ridge regression based online algorithm with positive perturbation is proposed. Regret analysis indicates that the proposed algorithm asymptotically approaches the optimal caching strategy in the long run. When the noise structure is unknown, an H∞ filter based online algorithm is further proposed that takes a prescribed threshold as input, which guarantees prediction accuracy even under the worst-case noise process. Both online algorithms require no training phase and are hence robust to time-varying user demands. The underlying causes of the estimation errors of both algorithms are numerically analyzed. Moreover, extensive experiments on a real-world dataset are conducted to validate the applicability of the proposed algorithms. It is demonstrated that these algorithms can be applied to scenarios with different noise features and are able to make adaptive caching decisions, achieving a content hit rate comparable to that of the hindsight optimal strategy. Comment: to appear in IEEE Trans. Multimedia.
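
    As a rough illustration of the zero-mean-noise case, the following is a minimal sketch of an online ridge regression predictor with an optimistic positive perturbation, here assumed to take the form alpha * sqrt(x^T A^{-1} x); the feature encoding and parameter names are assumptions rather than the paper's exact construction.

```python
# Minimal sketch (assumptions: one feature vector per content, hypothetical
# perturbation alpha * sqrt(x^T A^{-1} x); not the paper's exact scheme).
import numpy as np

class OnlineRidgePredictor:
    def __init__(self, dim, lam=1.0, alpha=0.5):
        self.A = lam * np.eye(dim)   # regularized Gram matrix
        self.b = np.zeros(dim)
        self.alpha = alpha           # weight of the positive perturbation

    def predict(self, x):
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b
        # optimistic (positively perturbed) estimate of the future hit rate
        return float(x @ theta + self.alpha * np.sqrt(x @ A_inv @ x))

    def update(self, x, hits_observed):
        # rank-one update with the observed demand; no offline training phase
        self.A += np.outer(x, x)
        self.b += hits_observed * x

# usage idea: rank contents by predicted hit rate each slot and cache the top-k
predictor = OnlineRidgePredictor(dim=8)
```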

    Spatio-temporal Edge Service Placement: A Bandit Learning Approach

    Shared edge computing platforms deployed at the radio access network are expected to significantly improve the quality of service delivered by Application Service Providers (ASPs) in a flexible and economical way. However, placing an edge service at every possible edge site is practically infeasible for an ASP due to the prohibitive budget this would require. In this paper, we investigate the edge service placement problem of an ASP under a limited budget, where the ASP dynamically rents computing/storage resources in edge sites to host its applications in close proximity to end users. Since the benefit of placing an edge service at a specific site is usually unknown to the ASP a priori, optimal placement decisions must be made while learning this benefit. We pose this problem as a novel combinatorial contextual bandit learning problem. It is "combinatorial" because only a limited number of edge sites can be rented to provide the edge service given the ASP's budget. It is "contextual" because we utilize user context information to enable finer-grained learning and decision making. To solve this problem and optimize the edge computing performance, we propose SEEN, a Spatial-temporal Edge sErvice placemeNt algorithm. Furthermore, SEEN is extended to scenarios with overlapping service coverage by incorporating a disjunctively constrained knapsack problem. In both cases, we prove that our algorithm achieves a sublinear regret bound compared to an oracle algorithm that knows the exact benefit information. Simulations are carried out on a real-world dataset, and the results show that SEEN significantly outperforms benchmark solutions.
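
    To convey the general shape of such a combinatorial contextual bandit, the sketch below rents the k edge sites with the highest UCB-style indices for the currently observed context bucket; the index form, context discretization, and names are assumptions, not the SEEN algorithm itself.

```python
# Minimal sketch (assumptions: discretized context buckets, UCB-style index,
# fixed per-round budget of k sites; not the paper's exact SEEN algorithm).
import math
from collections import defaultdict

class CombinatorialContextualUCB:
    def __init__(self, n_sites, k):
        self.n_sites, self.k = n_sites, k
        self.counts = defaultdict(int)    # (site, context) -> #observations
        self.means = defaultdict(float)   # (site, context) -> mean benefit
        self.t = 0

    def select(self, contexts):
        # contexts[i]: discretized user-context bucket observed at site i
        self.t += 1
        def ucb(i):
            key = (i, contexts[i])
            if self.counts[key] == 0:
                return float("inf")       # explore unseen (site, context) pairs first
            bonus = math.sqrt(2 * math.log(self.t) / self.counts[key])
            return self.means[key] + bonus
        # combinatorial step: rent the k sites with the highest indices
        return sorted(range(self.n_sites), key=ucb, reverse=True)[:self.k]

    def update(self, site, context, benefit):
        key = (site, context)
        self.counts[key] += 1
        self.means[key] += (benefit - self.means[key]) / self.counts[key]
```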

    Budget-constrained Edge Service Provisioning with Demand Estimation via Bandit Learning

    Shared edge computing platforms, which enable Application Service Providers (ASPs) to deploy applications in close proximity to mobile users, are providing ultra-low latency and location awareness to a rich portfolio of services. Though ubiquitous edge service provisioning, i.e., deploying the application at all possible edge sites, is always preferable, it is impractical due to the often limited operational budgets of ASPs. In this case, an ASP has to cautiously decide where to deploy the edge service and how much budget it is willing to use. A central issue here is that the service demand received by each edge site, which is the key factor in the deployment benefit, is unknown to the ASP a priori. Further complicating matters, this demand pattern varies temporally and spatially across geographically distributed edge sites. In this paper, we investigate an edge resource rental problem where the ASP learns the service demand patterns of individual edge sites while renting computation resources at these sites to host its applications for edge service provisioning. An online algorithm, called Context-aware Online Edge Resource Rental (COERR), is proposed based on the framework of Contextual Combinatorial Multi-armed Bandit (CC-MAB). COERR observes side information (context) to learn the demand patterns of edge sites and makes rental decisions (where to rent computation resources and how much to rent) to maximize the ASP's utility given a limited budget. COERR provides provable performance, achieving sublinear regret compared to an oracle algorithm that knows the expected service demand of each edge site exactly. Experiments are carried out on a real-world dataset, and the results show that COERR significantly outperforms other benchmarks.
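
    For a rough illustration of the budget-constrained CC-MAB setting, the sketch below greedily packs rental options under a per-round budget using UCB-style demand estimates per context bucket; the option encoding, index, and greedy packing rule are assumptions and do not reproduce the COERR policy.

```python
# Minimal sketch (assumptions: one rental "option" = (site, amount, price) tuple,
# demand estimated per context bucket; not the paper's exact COERR algorithm).
import math
from collections import defaultdict

class BudgetedRentalBandit:
    def __init__(self, options, budget):
        self.options = options            # list of (site, amount, price) tuples
        self.budget = budget              # per-round rental budget
        self.counts = defaultdict(int)    # (option, context) -> #observations
        self.means = defaultdict(float)   # (option, context) -> mean utility
        self.t = 0

    def decide(self, context):
        self.t += 1
        def index(opt):
            key = (opt, context)
            if self.counts[key] == 0:
                return float("inf")       # try unseen (option, context) pairs first
            return self.means[key] + math.sqrt(2 * math.log(self.t) / self.counts[key])
        # greedy utility-per-cost packing of rental options under the budget
        chosen, spent = [], 0.0
        for opt in sorted(self.options, key=lambda o: index(o) / o[2], reverse=True):
            if spent + opt[2] <= self.budget:
                chosen.append(opt)
                spent += opt[2]
        return chosen

    def update(self, opt, context, utility):
        key = (opt, context)
        self.counts[key] += 1
        self.means[key] += (utility - self.means[key]) / self.counts[key]
```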