
    Cost-optimal caching for D2D networks with user mobility: Modeling, analysis, and computational approaches

    Caching popular files at user equipments (UEs) provides an effective way to alleviate the burden on the backhaul networks. In general, popularity-based caching is not a system-wide optimal strategy, especially in user mobility scenarios. Motivated by this observation, we consider optimal caching in the presence of mobility. A cost-optimal caching problem (COCP) for device-to-device (D2D) networks is modelled, in which the impact of user mobility, cache size, and the total number of encoded segments are all accounted for. Compared with related studies, our investigation guarantees that the collected segments are non-overlapping, takes into account the cost of downloading from the network, and provides a rigorous problem complexity analysis. The hardness of the problem is proved via a reduction from the satisfiability problem. Next, a lower-bounding function of the objective function is derived. Based on this function, an approximation of COCP (ACOCP) achieving linearization is obtained, which features two advantages. First, the ACOCP approach can use an off-the-shelf integer linear programming algorithm to obtain the global optimal solution, and it can effectively deliver solutions for small-scale and medium-scale system scenarios. Second, and more importantly, based on the ACOCP approach, one can derive a lower bound on the global optimum of COCP, thus enabling performance benchmarking of any suboptimal algorithm. To tackle large scenarios with low complexity, we first prove that the optimal caching placement of one user, given the other users' caching placements, can be derived in polynomial time. Then, based on this result, a mobility-aware user-by-user (MAUU) algorithm is developed. Simulation results verify the effectiveness of the two approaches by comparing them to the lower bound on the global optimum and to conventional caching algorithms.
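
    A minimal sketch (in Python) of the user-by-user idea described in the abstract, not the authors' exact MAUU algorithm or cost model: each user's cache is re-optimized while the other users' placements are held fixed, and the sweeps repeat until the expected delivery cost stops improving. The request probabilities, contact probabilities, and unit costs below are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        N, F, C = 6, 20, 3            # users, files, cache slots per user (illustrative)
        COST_D2D, COST_BH = 1.0, 5.0  # assumed unit costs of D2D and backhaul delivery

        # Zipf-like request probabilities per user and pairwise contact probabilities.
        pop = (1.0 / np.arange(1, F + 1)) ** 0.8
        q = np.tile(pop / pop.sum(), (N, 1))
        contact = rng.uniform(0.1, 0.6, size=(N, N))
        np.fill_diagonal(contact, 0.0)

        x = np.zeros((N, F), dtype=bool)  # x[u, f] = True if user u caches file f

        def expected_cost(x):
            # miss[u, f] = probability that no contacted helper of user u caches file f
            miss = np.ones((N, F))
            for v in range(N):
                miss *= 1.0 - contact[:, [v]] * x[v]
            per_request = np.where(x, 0.0, (1.0 - miss) * COST_D2D + miss * COST_BH)
            return float((q * per_request).sum())

        # User-by-user sweeps: re-optimize one user's cache with the others held fixed.
        # Given the other users' placements, the cost separates across files, so the
        # single-user optimum is simply the C files with the largest cost reductions.
        best = expected_cost(x)
        improved = True
        while improved:
            improved = False
            for u in range(N):
                gains = np.empty(F)
                for f in range(F):
                    x[u, :] = False
                    without_f = expected_cost(x)
                    x[u, f] = True
                    gains[f] = without_f - expected_cost(x)
                x[u, :] = False
                x[u, np.argsort(gains)[-C:]] = True
                current = expected_cost(x)
                if current < best - 1e-9:
                    best, improved = current, True
        print("expected delivery cost after the sweeps:", round(best, 3))

    The single-user step here exploits the same observation made in the abstract: with the other users' placements fixed, one user's optimal placement can be computed in polynomial time.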

    Generalized Sparse and Low-Rank Optimization for Ultra-Dense Networks

    The ultra-dense network (UDN) is a promising technology for further evolving wireless networks and meeting the diverse performance requirements of 5G. With abundant access points, each with communication, computation and storage resources, UDNs bring unprecedented benefits, including significant improvements in network spectral efficiency and energy efficiency, greatly reduced latency that enables novel mobile applications, and the capability of providing massive access for Internet of Things (IoT) devices. However, such great promise comes with formidable research challenges. To design and operate such complex networks with various types of resources, efficient and innovative methodologies are needed. This motivates the recent introduction of highly structured and generalizable models for network optimization. In this article, we present some recently proposed large-scale sparse and low-rank frameworks for optimizing UDNs, supported by various motivating applications. Special attention is paid to algorithmic approaches that deal with nonconvex objective functions and constraints, as well as with computational scalability. Comment: This paper has been accepted by the IEEE Communications Magazine, Special Issue on Heterogeneous Ultra Dense Networks.
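
    One common instance of such sparse formulations, used here purely for illustration rather than as the specific frameworks of the article, is group-sparse regularization: driving a whole block of coefficients (e.g., the beamforming weights of one access point) to zero corresponds to switching that access point off. A minimal proximal-gradient sketch in Python, on an assumed toy least-squares objective:

        import numpy as np

        rng = np.random.default_rng(1)
        # Toy group-sparse least squares: each group of columns plays the role of one
        # access point's coefficient block; a zero block ~ an AP that can be switched off.
        n_groups, group_size, m = 8, 4, 40
        n = n_groups * group_size
        A = rng.standard_normal((m, n))
        w_true = np.zeros(n)
        w_true[:2 * group_size] = rng.standard_normal(2 * group_size)  # 2 active groups
        y = A @ w_true + 0.01 * rng.standard_normal(m)

        lam = 2.0
        step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant of the gradient
        w = np.zeros(n)

        def prox_group_l2(v, t):
            # Block soft-thresholding: proximal operator of t * sum_g ||v_g||_2.
            out = v.copy()
            for g in range(n_groups):
                sl = slice(g * group_size, (g + 1) * group_size)
                norm = np.linalg.norm(v[sl])
                out[sl] = 0.0 if norm <= t else (1.0 - t / norm) * v[sl]
            return out

        for _ in range(500):                        # proximal gradient (ISTA) iterations
            grad = A.T @ (A @ w - y)
            w = prox_group_l2(w - step * grad, step * lam)

        active = [g for g in range(n_groups)
                  if np.linalg.norm(w[g * group_size:(g + 1) * group_size]) > 1e-6]
        print("active groups:", active)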

    Five Disruptive Technology Directions for 5G

    New research directions will lead to fundamental changes in the design of future fifth-generation (5G) cellular networks. This paper describes five technologies that could lead to both architectural and component disruptive design changes: device-centric architectures, millimeter wave, massive MIMO, smarter devices, and native support for machine-to-machine communications. The key ideas for each technology are described, along with their potential impact on 5G and the research challenges that remain.

    Living in a PIT-less World: A Case Against Stateful Forwarding in Content-Centric Networking

    Information-Centric Networking (ICN) is a recent paradigm that claims to mitigate some limitations of the current IP-based Internet architecture. The centerpiece of ICN is named and addressable content, rather than hosts or interfaces. Content-Centric Networking (CCN) is a prominent ICN instance that shares its fundamental architectural design with its equally popular academic sibling, Named-Data Networking (NDN). CCN eschews source addresses and creates one-time virtual circuits for every content request (called an interest). As an interest is forwarded, it creates state in intervening routers, and the requested content is delivered back over the reverse path using that state. Although a stateful forwarding plane might be beneficial in terms of efficiency and resilience to certain types of attacks, this has not been decisively proven via realistic experiments. Since keeping per-interest state complicates router operations and makes the infrastructure susceptible to router state exhaustion attacks (e.g., there is currently no effective defense against interest flooding attacks), the value of the stateful forwarding plane in CCN should be re-examined. In this paper, we explore the supposed benefits and various problems of the stateful forwarding plane. We then argue that its benefits are uncertain at best and that it should not be a mandatory CCN feature. To this end, we propose a new stateless architecture for CCN that provides nearly all the functionality of the stateful design without its headaches. We analyze the performance and resource requirements of the proposed architecture via experiments. Comment: 10 pages, 6 figures.
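
    A toy contrast (in Python) between PIT-style stateful forwarding and a stateless alternative in which the interest itself carries the information needed to route the data back; this only illustrates the general idea, not the specific architecture proposed in the paper.

        from dataclasses import dataclass, field

        @dataclass
        class StatefulRouter:
            name: str
            pit: dict = field(default_factory=dict)   # content name -> ingress face

            def forward_interest(self, content, ingress_face):
                self.pit[content] = ingress_face       # per-interest state in the router

            def forward_data(self, content):
                return self.pit.pop(content, None)     # consume the PIT entry

        @dataclass
        class StatelessRouter:
            name: str                                  # no per-interest state at all

            def forward_interest(self, content, ingress_face, reverse_path):
                # The interest accumulates a breadcrumb trail inside the packet itself,
                # so the data packet can be source-routed back without any PIT.
                return reverse_path + [(self.name, ingress_face)]

        # Stateless round trip over a three-hop path.
        path = []
        for hop, face in [("r1", "faceA"), ("r2", "faceB"), ("r3", "faceC")]:
            path = StatelessRouter(hop).forward_interest("/video/clip1", face, path)
        print("reverse path carried by the data packet:", list(reversed(path)))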

    Stochastic Design and Analysis of Wireless Cloud Caching Networks

    This paper develops a stochastic geometry-based approach for the modeling, analysis, and optimization of wireless cloud caching networks comprised of multiple-antenna radio units (RUs) inside clouds. We consider a Matérn cluster process to model the RUs and probabilistic content placement to cache files in the RUs. Accordingly, we study the exact hit probability for a user of interest under two strategies: closest selection, where the user is served by the closest RU that has its requested file, and best selection, where the serving RU holding the requested file is the one providing the maximum instantaneous received power at the user. As key steps of the analysis, the Laplace transform of the out-of-cloud interference, the desired-link distance distribution for closest selection, and the desired-link received power distribution for best selection are derived. We also approximate the derived exact hit probabilities for both closest and best selection in such a way that the resulting objective functions for the content caching design lead to tractable concave optimization problems. Solving these optimization problems, we propose algorithms to efficiently find the optimal content placements. Finally, we investigate the impact of different parameters, such as the number of antennas and the cache memory size, on the caching performance.
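
    A simplified numerical sketch (Python) of the probabilistic-placement idea: with a Poisson number of RUs per cloud and each RU caching file f independently with probability p_f, the in-cloud hit probability has a concave closed form that can be optimized under a cache-size budget, and a Monte Carlo run can check it. The parameters are assumed, and the sketch ignores the interference and antenna aspects analyzed in the paper.

        import numpy as np

        rng = np.random.default_rng(2)
        F, M, c = 30, 4, 5.0              # files, cache size per RU, mean RUs per cloud
        q = (1.0 / np.arange(1, F + 1)) ** 1.0
        q /= q.sum()                      # Zipf request probabilities

        def hit_prob(p):
            # With Poisson(c) RUs per cloud, each caching file f with probability p[f],
            # the chance that no in-cloud RU holds f is exp(-c * p[f]).
            return float(np.sum(q * (1.0 - np.exp(-c * p))))

        # Optimal placement: maximize the concave hit probability over
        # {0 <= p_f <= 1, sum_f p_f = M}.  The KKT condition gives
        # p_f = clip(log(c * q_f / mu) / c, 0, 1); bisect on the multiplier mu.
        lo, hi = 1e-12, c * q.max()
        for _ in range(100):
            mu = 0.5 * (lo + hi)
            p = np.clip(np.log(c * q / mu) / c, 0.0, 1.0)
            lo, hi = (mu, hi) if p.sum() > M else (lo, mu)

        # Monte Carlo check of the in-cloud hit probability under this placement.
        trials, hits = 20000, 0
        for _ in range(trials):
            f = rng.choice(F, p=q)        # requested file
            k = rng.poisson(c)            # number of RUs in the user's cloud
            hits += bool((rng.random(k) < p[f]).any())
        print("analytic:", round(hit_prob(p), 3), " monte carlo:", round(hits / trials, 3))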

    Caching at the Wireless Edge: Design Aspects, Challenges and Future Directions

    Caching at the wireless edge is a promising way of boosting spectral efficiency and reducing energy consumption in wireless systems. These improvements are rooted in the fact that popular contents are reused, asynchronously, by many users. In this article, we first introduce methods to predict popularity distributions and user preferences, and discuss the impact of erroneous information. We then discuss the two aspects of caching systems, namely content placement and delivery. We expound the key differences between wired and wireless caching, and outline the differences in the system arising from where the caching takes place, e.g., at base stations or on the wireless devices themselves. Special attention is paid to the essential limitations of wireless caching and to possible tradeoffs between spectral efficiency, energy efficiency, and cache size. Comment: Published in the IEEE Communications Magazine.
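
    A minimal illustration (Python) of popularity prediction and of the impact of estimating it from limited data, under an assumed Zipf request model; it is not one of the specific prediction methods surveyed in the article.

        import numpy as np

        rng = np.random.default_rng(3)
        # Requests follow a Zipf law with an unknown exponent; the cache decides what
        # to store from an empirical estimate of that law.
        F, C, alpha_true, n_req = 200, 20, 0.9, 5000
        p_true = (1.0 / np.arange(1, F + 1)) ** alpha_true
        p_true /= p_true.sum()

        counts = np.bincount(rng.choice(F, size=n_req, p=p_true), minlength=F)

        # Least-squares fit of log(count) vs log(rank) on observed files estimates the exponent.
        ranks = np.argsort(-counts)
        obs = counts[ranks] > 0
        slope, _ = np.polyfit(np.log(np.arange(1, F + 1)[obs]),
                              np.log(counts[ranks][obs]), 1)
        print("estimated Zipf exponent:", round(-slope, 2))

        # Cache-hit ratio when the top-C files are chosen from the empirical ranking.
        print("hit ratio (empirical top-C):", round(p_true[ranks[:C]].sum(), 3))
        print("hit ratio (true top-C):     ", round(p_true[:C].sum(), 3))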

    Echo State Networks for Proactive Caching in Cloud-Based Radio Access Networks with Mobile Users

    In this paper, the problem of proactive caching is studied for cloud radio access networks (CRANs). In the studied model, the baseband units (BBUs) can predict the content request distribution and mobility pattern of each user, and determine which contents to cache at the remote radio heads (RRHs) and the BBUs. This problem is formulated as an optimization problem that jointly incorporates the backhaul and fronthaul loads and content caching. To solve this problem, an algorithm that combines the machine learning framework of echo state networks with sublinear algorithms is proposed. Using echo state networks (ESNs), the BBUs can predict each user's content request distribution and mobility pattern while having only limited information on the network's and users' state. In order to predict each user's periodic mobility pattern with minimal complexity, the memory capacity of the corresponding ESN is derived for a periodic input. This memory capacity is shown to be able to record the maximum amount of user information for the proposed ESN model. Then, a sublinear algorithm is proposed to determine which contents to cache while using only a limited number of content request distribution samples. Simulation results using real data from Youku and the Beijing University of Posts and Telecommunications show that the proposed approach yields significant gains in terms of sum effective capacity, reaching up to 27.8% and 30.7%, respectively, compared to random caching with clustering and random caching without clustering. Comment: Accepted in the IEEE Transactions on Wireless Communications.
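
    A minimal echo state network sketch (Python): a fixed random reservoir plus a ridge-regression readout, trained to one-step-predict a periodic sequence standing in for a user's periodic mobility pattern. The reservoir size, spectral radius, and input signal are illustrative assumptions, not the paper's exact model.

        import numpy as np

        rng = np.random.default_rng(4)
        n_res, spectral_radius, ridge = 100, 0.9, 1e-6
        W_in = rng.uniform(-0.5, 0.5, size=n_res)
        W = rng.standard_normal((n_res, n_res))
        W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))   # echo state scaling

        T = 600
        t_axis = np.arange(T)
        u = np.sin(2 * np.pi * t_axis / 50) + 0.5 * np.sin(2 * np.pi * t_axis / 25)

        # Drive the reservoir and collect its states: states[t] encodes u[0..t-1].
        states = np.zeros((T, n_res))
        x = np.zeros(n_res)
        for t in range(T - 1):
            x = np.tanh(W @ x + W_in * u[t])
            states[t + 1] = x

        # Ridge-regression readout trained to predict u[t] from states[t].
        washout, split = 50, 500
        X_tr, y_tr = states[washout:split], u[washout:split]
        W_out = np.linalg.solve(X_tr.T @ X_tr + ridge * np.eye(n_res), X_tr.T @ y_tr)

        pred = states[split:] @ W_out
        rmse = np.sqrt(np.mean((pred - u[split:]) ** 2))
        print("one-step prediction RMSE on held-out steps:", round(rmse, 4))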

    Cache-enabled Device-to-Device Communications: Offloading Gain and Energy Cost

    By caching files at users, content delivery traffic can be offloaded via device-to-device (D2D) links if a helper user is willing to transmit a cached file to the user who requests it. In practice, user devices have limited battery capacity and may terminate the D2D connection when little energy is left. Thus, accounting for the battery consumption that helper users allow for supporting D2D reduces the possible amount of offloading. In this paper, we investigate the relationship between the offloading gain of the system and the energy cost of each helper user. To this end, we introduce a user-centric protocol to control the energy cost incurred by a helper user in transmitting a file. We then optimize the proactive caching policy to maximize the offloading opportunity, and optimize the transmit power at each helper to maximize the offloading probability. Finally, we evaluate the overall amount of traffic offloaded to D2D links and the average energy consumption at each helper under the optimized caching policy and transmit power. Simulations show that a significant amount of traffic can be offloaded even when the energy cost is kept low. Comment: A part of this work was published in IEEE WCNC 2016 under the title "Energy Costs for Traffic Offloading by Cache-enabled D2D Communications".
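
    A toy numerical sketch (Python) of the offloading-versus-energy trade-off, under assumed parameters and a deliberately simple model (Poisson-distributed helpers, a fixed D2D range, a most-popular-files caching policy, and a battery-driven willingness probability); it is not the protocol or optimization developed in the paper.

        import numpy as np

        F, M = 50, 5
        q = (1.0 / np.arange(1, F + 1)) ** 0.8
        q /= q.sum()                           # Zipf request probabilities
        p = np.zeros(F)
        p[:M] = 1.0                            # cache the M most popular files (simple policy)

        lam, r = 1e-4, 50.0                    # helpers per m^2, D2D radius in m
        rho = 5e-4                             # requests per m^2 per time slot
        e_tx = 0.2                             # Joules per D2D file transmission (assumed)

        area = np.pi * r ** 2
        for w in (0.2, 0.5, 0.8, 1.0):         # helper willingness to spend battery on D2D
            # A request for file f is offloaded if at least one willing helper caching f
            # lies within range: 1 - exp(-lam * area * w * p[f]) for Poisson helpers.
            p_off = float(np.sum(q * (1.0 - np.exp(-lam * area * w * p))))
            energy_per_helper = rho * p_off / lam * e_tx   # mean J per helper per slot
            print(f"w={w:.1f}  offloading={p_off:.3f}  energy/helper={energy_per_helper:.4f} J")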

    Application of Machine Learning in Wireless Networks: Key Techniques and Open Issues

    As a key technique for enabling artificial intelligence, machine learning (ML) is capable of solving complex problems without explicit programming. Motivated by its successful applications to many practical tasks, such as image recognition, both industry and the research community have advocated applying ML in wireless communication. This paper comprehensively surveys recent advances in the applications of ML in wireless communication, classified as: resource management in the MAC layer, networking and mobility management in the network layer, and localization in the application layer. The applications in resource management include power control, spectrum management, backhaul management, cache management, beamformer design, and computation resource management, while ML-based networking focuses on applications in clustering, base station switching control, user association, and routing. Moreover, the literature on each aspect is organized according to the adopted ML techniques. In addition, several conditions for applying ML to wireless communication are identified, to help readers decide whether to use ML and which kind of ML techniques to use; traditional approaches are also summarized, together with a performance comparison with ML-based approaches, to clarify the motivation of the surveyed works for adopting ML. Given the extensiveness of the research area, challenges and unresolved issues are presented to facilitate future studies, including ML-based network slicing, infrastructure updates to support ML-based paradigms, open data sets and platforms for researchers, and theoretical guidance for ML implementation. Comment: 34 pages, 8 figures.

    Mobile Edge Caching: An Optimal Auction Approach

    With the explosive growth of wireless data, the sheer volume of mobile traffic is challenging the capacity of current wireless systems. To tackle this challenge, mobile edge caching has recently emerged as a promising paradigm, in which service providers (SPs) prefetch popular contents in advance and cache them locally at the network edge. When requested, locally cached contents can be delivered directly to users with low latency, thus alleviating the traffic load over backhaul channels during peak hours and enhancing the quality of experience (QoE) of users at the same time. Due to the limited available cache space, it makes sense for the SP to cache the most profitable contents. Nevertheless, users' true valuations of contents are their private knowledge, unknown to the SP in general. This information asymmetry poses a significant challenge for effective caching at the SP side. Further, the cached contents can be delivered with different quality levels, which need to be chosen judiciously to balance delivery costs and user satisfaction. To tackle these difficulties, in this paper we propose an optimal auction mechanism from the perspective of the SP. In the auction, the SP determines the cache space allocation over contents and the user payments based on the users' (possibly untruthful) reports of their valuations, so that the SP's expected revenue is maximized. The mechanism is designed to elicit true valuations from the users (incentive compatibility) and to incentivize user participation (individual rationality). In addition, we devise a computationally efficient method for calculating the optimal cache space allocation and user payments. We further examine the optimal choice of the content delivery quality for the case with a large number of users and derive a closed-form solution for the optimal delivery quality.
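
    A minimal sketch (Python) of the optimal-auction machinery referenced above, for the much simpler case of a single cache slot and bidders with i.i.d. Uniform[0, 1] valuations: allocate to the highest non-negative virtual valuation and charge the threshold payment. It illustrates revenue maximization in the Myerson sense, not the paper's multi-content cache-space mechanism.

        from typing import List, Optional, Tuple

        def virtual_value(v: float) -> float:
            # For Uniform[0, 1]: phi(v) = v - (1 - F(v)) / f(v) = 2 * v - 1.
            return 2.0 * v - 1.0

        def optimal_single_slot_auction(bids: List[float]) -> Tuple[Optional[int], float]:
            """Return (winner index or None, payment).  Allocate to the highest
            non-negative virtual valuation; charge the smallest bid that would still
            win (the second-highest bid or the reserve 0.5, whichever is larger)."""
            phis = [virtual_value(b) for b in bids]
            best = max(range(len(bids)), key=lambda i: phis[i])
            if phis[best] < 0.0:
                return None, 0.0                       # keep the slot: reserve not met
            reserve = 0.5                              # phi(reserve) = 0 for Uniform[0, 1]
            others = [b for i, b in enumerate(bids) if i != best]
            payment = max(reserve, max(others)) if others else reserve
            return best, payment

        print(optimal_single_slot_auction([0.3, 0.75, 0.6]))   # -> (1, 0.6)
        print(optimal_single_slot_auction([0.2, 0.4]))         # -> (None, 0.0)

    Charging the threshold payment rather than the winner's own bid is what makes truthful reporting optimal, which is the incentive-compatibility property the abstract refers to.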