
    Big Data Meets Telcos: A Proactive Caching Perspective

    Mobile cellular networks are becoming increasingly complex to manage, while classical deployment/optimization techniques and current solutions (i.e., cell densification, acquiring more spectrum, etc.) are cost-ineffective and thus seen as stopgaps. This calls for the development of novel approaches that leverage recent advances in storage/memory, context-awareness, and edge/cloud computing, and fall into the framework of big data. However, big data is itself another complex phenomenon to handle, and comes with its notorious 4Vs: velocity, veracity, volume, and variety. In this work, we address these issues in the optimization of 5G wireless networks via the notion of proactive caching at the base stations. In particular, we investigate the gains of proactive caching in terms of backhaul offloading and request satisfaction, while tackling the large amount of data available for content popularity estimation. To estimate content popularity, we first collect users' mobile traffic data from several base stations of a Turkish telecom operator over time intervals of hours. An analysis is then carried out locally on a big data platform, and the gains of proactive caching at the base stations are investigated via numerical simulations. It turns out that several gains are possible depending on the level of available information and the storage size. For instance, with 10% of content ratings and 15.4 Gbyte of storage size (87% of the total catalog size), proactive caching achieves 100% request satisfaction and offloads 98% of the backhaul when considering 16 base stations.
    Comment: 8 pages, 5 figures
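
    To make the reported gains concrete, below is a minimal Python simulation sketch in the spirit of this abstract: content popularity is estimated from a partial sample of a request trace, the cache is filled proactively with the most requested contents, and request satisfaction and backhaul offload are measured. The catalog size, Zipf exponent, content sizes, and storage budget are all illustrative assumptions, not the paper's dataset or exact method.

import numpy as np

rng = np.random.default_rng(42)

# Illustrative catalog: 1,000 contents with a Zipf-like popularity profile and
# random sizes. All parameters here are assumptions for the sketch, not values
# taken from the paper or its dataset.
N = 1000
sizes_mb = rng.uniform(10.0, 200.0, N)             # content sizes (Mbyte)
pop = 1.0 / np.arange(1, N + 1) ** 0.8             # Zipf(0.8) popularity
pop /= pop.sum()

requests = rng.choice(N, size=50_000, p=pop)       # simulated request trace

# Estimate popularity from a partial sample (e.g., 10% of observed requests).
sample = requests[: len(requests) // 10]
est = np.bincount(sample, minlength=N).astype(float)

# Proactively fill the cache with the most requested contents until the
# storage budget is exhausted.
budget_mb = 0.2 * sizes_mb.sum()                   # cache 20% of the catalog
cached = np.zeros(N, dtype=bool)
used = 0.0
for c in np.argsort(-est):
    if used + sizes_mb[c] <= budget_mb:
        cached[c] = True
        used += sizes_mb[c]

# Gains: fraction of requests served locally (request satisfaction) and
# fraction of traffic kept off the backhaul (offload).
hit = cached[requests]
satisfaction = hit.mean()
offload = sizes_mb[requests][hit].sum() / sizes_mb[requests].sum()
print(f"request satisfaction: {satisfaction:.1%}, backhaul offload: {offload:.1%}")

    Sweeping the storage budget and the sample fraction in this sketch reproduces the qualitative trade-off the abstract describes: gains grow with both the amount of available popularity information and the cache size.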

    Using Grouped Linear Prediction and Accelerated Reinforcement Learning for Online Content Caching

    Proactive caching is an effective way to alleviate peak-hour traffic congestion by prefetching popular contents at the wireless network edge. Maximizing the caching efficiency requires knowledge of the content popularity profile, which, however, is often unavailable in advance. In this paper, we first propose a new linear prediction model, named the grouped linear model (GLM), to estimate future content requests based on historical data. Unlike many existing works that assume a static content popularity profile, our model can adapt to the temporal variation of content popularity in practical systems, caused by the arrival of new contents and the dynamics of user preference. Based on the predicted content requests, we then propose a reinforcement learning approach with model-free acceleration (RLMA) for online cache replacement, taking into account both cache hits and replacement cost. This approach accelerates the learning process in non-stationary environments by generating imaginary samples for Q-value updates. Numerical results based on real-world traces show that the proposed prediction- and learning-based online caching policy outperforms all considered existing schemes.
    Comment: 6 pages, 4 figures, ICC 2018 workshop
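
    A minimal sketch of the learning part follows, assuming a Dyna-style reading of "imaginary samples for Q-value updates": real cache-replacement transitions are stored in a simple model and replayed for extra Q-updates between real ones. The state encoding, the reward (a hit bonus minus a replacement cost), and all constants are illustrative assumptions, not the authors' exact RLMA formulation; the GLM prediction step is omitted.

import random
from collections import defaultdict

# Dyna-Q-style cache replacement sketch (illustrative, not the paper's RLMA).
CACHE_SIZE = 4
ALPHA, GAMMA, EPS = 0.2, 0.9, 0.1
PLANNING_STEPS = 20              # imaginary Q-updates per real transition
REPLACE_COST = 0.2               # cost charged for fetching over the backhaul

Q = defaultdict(float)           # Q[(state, action)] -> value
model = {}                       # learned model: (state, action) -> (r, s')

def act(state, n_actions):
    # Epsilon-greedy choice of which cache slot to evict.
    if random.random() < EPS:
        return random.randrange(n_actions)
    return max(range(n_actions), key=lambda a: Q[(state, a)])

def q_update(s, a, r, s2, n_actions):
    best = max(Q[(s2, a2)] for a2 in range(n_actions))
    Q[(s, a)] += ALPHA * (r + GAMMA * best - Q[(s, a)])

def run(trace):
    cache, hits = [], 0
    for t in range(len(trace) - 1):
        req = trace[t]
        if req in cache:
            hits += 1
            continue
        if len(cache) < CACHE_SIZE:              # cold start: fill freely
            cache.append(req)
            continue
        s = (req, tuple(sorted(cache)))
        a = act(s, CACHE_SIZE)                   # slot to evict
        cache[a] = req                           # replace, paying the cost
        r = (1.0 if trace[t + 1] in cache else 0.0) - REPLACE_COST
        s2 = (trace[t + 1], tuple(sorted(cache)))
        q_update(s, a, r, s2, CACHE_SIZE)        # real update
        model[(s, a)] = (r, s2)
        # "Imaginary" planning sweeps over remembered transitions (Dyna-style).
        for _ in range(min(PLANNING_STEPS, len(model))):
            (ps, pa), (pr, ps2) = random.choice(list(model.items()))
            q_update(ps, pa, pr, ps2, CACHE_SIZE)
    return hits / len(trace)

# Usage: a synthetic Zipf-like request trace over 20 contents.
trace = random.choices(range(20), weights=[1 / (i + 1) for i in range(20)], k=5000)
print(f"hit rate: {run(trace):.1%}")

    The planning loop is what "accelerates" learning here: each real transition is reused many times, which helps the Q-values track a non-stationary popularity profile faster than pure model-free updates.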