
    Mobility-Aware Content Placement for Device-to-Device Caching Systems

    User mobility has a large effect on optimal content placement in D2D caching networks. Since a typical user can communicate with neighboring users that stay within its D2D communication area, the optimal content placement should adapt to user mobility. Accounting for the randomness of incoming and outgoing users, we formulate an optimization problem that minimizes the average data load of a base station (BS). We prove that minimizing the average BS data load can be transformed into maximizing a monotone submodular function under a matroid constraint, for which a greedy algorithm finds near-optimal solutions. Moreover, when the motions of neighboring users are rapid, the optimal content placement is derived in closed form, aided by reasonable approximation and relaxation. In this high-mobility regime, the optimal content placement is shown to cache partial amounts of the most popular contents.
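
    A minimal sketch of the greedy step this abstract refers to, assuming a partition matroid that caps each user's cache at cache_size files and a caller-supplied load_reduction oracle for the (monotone submodular) reduction in average BS load; the function and parameter names are illustrative, not the paper's notation. For a monotone submodular objective, this greedy rule is the classical 1/2-approximation under a matroid constraint.

```python
import itertools

def greedy_matroid_placement(users, files, cache_size, load_reduction):
    """Greedy maximization of a monotone submodular objective under a
    partition-matroid constraint: each user caches at most cache_size files.

    load_reduction(placement) is assumed to return the reduction in the
    average BS data load achieved by `placement`, a set of (user, file) pairs.
    """
    placement = set()
    ground_set = set(itertools.product(users, files))
    while True:
        best_gain, best_pair = 0.0, None
        for (u, f) in ground_set - placement:
            # Skip elements that would violate the per-user cache capacity.
            if sum(1 for (v, _) in placement if v == u) >= cache_size:
                continue
            gain = load_reduction(placement | {(u, f)}) - load_reduction(placement)
            if gain > best_gain:
                best_gain, best_pair = gain, (u, f)
        if best_pair is None:   # no feasible element gives a positive gain
            break
        placement.add(best_pair)
    return placement
```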

    Exploiting Mobility in Cache-Assisted D2D Networks: Performance Analysis and Optimization

    Caching popular content at mobile devices, combined with device-to-device (D2D) communications, is a promising technology for effective mobile content delivery. User mobility is an important factor when investigating such networks, but it was largely ignored in most previous works. Preliminary studies have been carried out, yet the effect of mobility on caching performance is not fully understood. In this paper, by explicitly modeling users' contact and inter-contact durations via an alternating renewal process, we first investigate the effect of mobility for a given cache placement. A tractable expression for the data offloading ratio, i.e., the proportion of requested data that can be delivered via D2D links, is derived and proved to be increasing in the user moving speed. The analytical results are then used to develop an effective mobility-aware caching strategy that maximizes the data offloading ratio. Simulation results confirm the accuracy of the analysis and validate the effect of user mobility. Performance gains of the proposed mobility-aware caching strategy are demonstrated with both stochastic models and real-life data sets. It is observed that information about the contact durations is critical for cache placement design, especially when they are relatively short or comparable to the inter-contact durations. Comment: 31 pages, 9 figures, to appear in IEEE Transactions on Wireless Communications.
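
    As a rough companion to this analysis, the sketch below estimates the data offloading ratio for a single user-helper pair by Monte Carlo, assuming exponential contact and inter-contact durations, a fixed D2D rate, and a delivery deadline; these modelling choices and all parameter names are assumptions for illustration, not the paper's closed-form derivation.

```python
import random

def offloading_ratio_mc(rate_d2d, file_size, deadline,
                        mean_contact, mean_intercontact, n_trials=10_000):
    """Monte Carlo estimate of the fraction of a requested file that can be
    delivered over D2D before `deadline`, when contacts with a caching helper
    follow an alternating renewal process with exponential contact and
    inter-contact durations (illustrative assumption only)."""
    total = 0.0
    for _ in range(n_trials):
        t, delivered = 0.0, 0.0
        # Start the request in a random phase of the renewal cycle.
        in_contact = random.random() < mean_contact / (mean_contact + mean_intercontact)
        while t < deadline and delivered < file_size:
            mean = mean_contact if in_contact else mean_intercontact
            dur = min(random.expovariate(1.0 / mean), deadline - t)
            if in_contact:
                delivered += rate_d2d * dur   # data flows only during contacts
            t += dur
            in_contact = not in_contact
        total += min(delivered, file_size) / file_size
    return total / n_trials
```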

    Incentive Mechanism Design for Cache-Assisted D2D Communications: A Mobility-Aware Approach

    Caching popular contents at mobile devices, assisted by device-to-device (D2D) communications, is considered a promising technique for mobile content delivery. It can effectively reduce backhaul traffic and service cost, as well as improve spectrum efficiency. However, due to the selfishness of mobile users, incentive mechanisms are needed to motivate device caching. In this paper, we investigate incentive mechanism design in cache-assisted D2D networks, taking advantage of user mobility information. An inter-contact model is adopted to capture the average time between two consecutive contacts of each device pair. A Stackelberg game is formulated, where each user plays as a follower aiming to maximize its own utility and the mobile network operator (MNO) plays as the leader aiming to minimize its cost. Assuming that user responses can be predicted by the MNO, a cost minimization problem is formulated. Since this problem is NP-hard, we reformulate it as a non-negative submodular maximization problem and develop a (1/(4+ε))-approximation local search algorithm to solve it. In the simulations, we demonstrate that the local search algorithm provides near-optimal performance. By comparing with other caching strategies, we validate the effectiveness of the proposed incentive-based mobility-aware caching strategy. Comment: 5 pages, 3 figures, accepted to IEEE SPAWC, Sapporo, Japan, July 2017.
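
    The local-search flavour mentioned above can be sketched as follows, assuming a non-negative set function value() and a simple cardinality cap in place of the paper's exact constraint; the (1 + eps/n^2) improvement threshold is the usual device for bounding the number of iterations, and nothing here is claimed to reproduce the paper's (1/(4+ε)) guarantee.

```python
def local_search(elements, value, capacity, eps=0.1):
    """Local-search heuristic for non-negative submodular maximization under
    a cardinality cap (a simplified stand-in for the paper's constraint).

    value(S) returns the non-negative objective of a frozenset S. A move
    (add, drop, or swap) is accepted only if it improves the objective by a
    (1 + eps/n^2) factor, which keeps the number of moves polynomial."""
    current = frozenset()
    threshold = 1.0 + eps / max(1, len(elements)) ** 2
    improved = True
    while improved:
        improved = False
        base = value(current)
        moves = []
        moves += [current | {e} for e in elements
                  if e not in current and len(current) < capacity]   # add
        moves += [current - {e} for e in current]                    # drop
        moves += [(current - {o}) | {e} for o in current
                  for e in elements if e not in current]             # swap
        for cand in moves:
            if value(cand) > threshold * max(base, 1e-12):
                current, improved = cand, True
                break
    return current
```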

    A Survey on Mobile Edge Networks: Convergence of Computing, Caching and Communications

    With the explosive growth of smart devices and the advent of many new applications, traffic volume has been growing exponentially. The traditional centralized network architecture cannot accommodate such user demands due to the heavy burden on backhaul links and long latency. Therefore, new architectures that bring network functions and contents to the network edge, i.e., mobile edge computing and caching, have been proposed. Mobile edge networks provide cloud computing and caching capabilities at the edge of cellular networks. In this survey, we make an exhaustive review of the state-of-the-art research efforts on mobile edge networks. We first give an overview of mobile edge networks, including their definition, architecture, and advantages. Next, a comprehensive survey of computing, caching, and communication techniques at the network edge is presented. The applications and use cases of mobile edge networks are then discussed. Subsequently, key enablers of mobile edge networks, such as cloud technology, SDN/NFV, and smart devices, are examined. Finally, open research challenges and future directions are presented as well.

    Mobility-Aware Caching for Content-Centric Wireless Networks: Modeling and Methodology

    As mobile services shift from "connection-centric" to "content-centric" communications, content-centric wireless networking emerges as a promising paradigm to evolve the current network architecture. Caching popular content at the wireless edge, including base stations (BSs) and user terminals (UTs), provides an effective approach to alleviate the heavy burden on backhaul links, as well as to lower delays and deployment costs. In contrast to wired networks, a unique characteristic of content-centric wireless networks (CCWNs) is the mobility of users. While it has rarely been considered in existing caching designs, user mobility contains various helpful side information that can be exploited to improve caching efficiency at both BSs and UTs. In this paper, we present a general framework for mobility-aware caching in CCWNs. Key properties of user mobility patterns that are useful for content caching are first identified, and different design methodologies for mobility-aware caching are then proposed. Moreover, two design examples are provided to illustrate the proposed framework in detail, and interesting future research directions are identified. Comment: 16 pages, 5 figures, to appear in IEEE Communications Magazine.

    Cost-optimal caching for D2D networks with user mobility: Modeling, analysis, and computational approaches

    Caching popular files at user equipments (UEs) provides an effective way to alleviate the burden on backhaul networks. In general, popularity-based caching is not a system-wide optimal strategy, especially in user mobility scenarios. Motivated by this observation, we consider optimal caching in the presence of mobility. A cost-optimal caching problem (COCP) for device-to-device (D2D) networks is modelled, in which the impacts of user mobility, cache size, and the total number of encoded segments are all accounted for. Compared with related studies, our investigation guarantees that the collected segments are non-overlapping, takes into account the cost of downloading from the network, and provides a rigorous problem complexity analysis. The hardness of the problem is proved via a reduction from the satisfiability problem. Next, a lower-bounding function of the objective function is derived. Using this function, an approximation of COCP (ACOCP) achieving linearization is obtained, which features two advantages. First, the ACOCP approach can use an off-the-shelf integer linear programming algorithm to obtain the global optimal solution, and it can effectively deliver solutions for small-scale and medium-scale system scenarios. Second, and more importantly, based on the ACOCP approach, one can derive a lower bound on the global optimum of COCP, thus enabling performance benchmarking of any suboptimal algorithm. To tackle large scenarios with low complexity, we first prove that the optimal caching placement of one user, given the other users' caching placements, can be derived in polynomial time. Based on this result, a mobility-aware user-by-user (MAUU) algorithm is developed, as sketched below. Simulation results verify the effectiveness of the two approaches by comparing them to the lower bound on the global optimum and to conventional caching algorithms.
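
    The user-by-user idea can be pictured with a block-coordinate improvement loop, assuming a caller-supplied cost(placement) that maps a {user: cached files} dict to the total delivery cost; the per-user step here is a plain greedy refill rather than the paper's exact polynomial-time subroutine, so this is only a sketch of the MAUU structure.

```python
def user_by_user_caching(users, files, cache_size, cost, max_rounds=20):
    """Iteratively re-optimize one user's cache at a time, keeping the other
    users' placements fixed, until no user's change lowers the total cost."""
    placement = {u: set() for u in users}
    for _ in range(max_rounds):
        changed = False
        for u in users:
            current_cost = cost(placement)
            trial = {v: set(c) for v, c in placement.items()}
            trial[u] = set()
            # Greedily refill user u's cache, one file at a time.
            for _ in range(cache_size):
                best_f, best_cost = None, cost(trial)
                for f in files:
                    if f in trial[u]:
                        continue
                    trial[u].add(f)
                    c = cost(trial)
                    if c < best_cost:
                        best_f, best_cost = f, c
                    trial[u].remove(f)
                if best_f is None:
                    break
                trial[u].add(best_f)
            if cost(trial) < current_cost:
                placement[u] = trial[u]
                changed = True
        if not changed:          # no user can improve: local optimum reached
            break
    return placement
```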

    Recent Advances in Fog Radio Access Networks: Performance Analysis and Radio Resource Allocation

    As a promising paradigm for the fifth generation (5G) wireless communication system, the fog radio access network (F-RAN) has been proposed as an advanced socially aware mobile networking architecture that provides high spectral efficiency (SE) while maintaining high energy efficiency (EE) and low latency. Recent efforts have been devoted to performance analysis and radio resource allocation, both of which are fundamental issues for the successful rollout of F-RANs. This article comprehensively summarizes recent advances in performance analysis and radio resource allocation in F-RANs. In particular, advanced edge caching and adaptive mode selection schemes are presented to improve SE and EE while maintaining a low latency level. Radio resource allocation strategies to optimize SE and EE in F-RANs are then proposed. A few open issues in terms of the F-RAN based 5G architecture and the social-awareness technique are identified as well.

    A Personalized Preference Learning Framework for Caching in Mobile Networks

    This paper comprehensively studies a content-centric mobile network built on a preference learning framework, where each mobile user is equipped with a finite-size cache. We consider a practical scenario in which each user requests content files according to its own preferences, motivated by the heterogeneity of file preferences among different users. Under this model, we consider a single-hop device-to-device (D2D) content delivery protocol and characterize the average hit ratio for two file preference cases: personalized file preferences and common file preferences. Since model parameters such as user activity levels, user file preferences, and file popularity are unknown and thus need to be inferred, we present a collaborative filtering (CF)-based approach to learn them. We then reformulate the hit ratio maximization problems as submodular function maximization and propose two computationally efficient algorithms, including a greedy approach, to solve the cache allocation problems. We analyze the computational complexity of each algorithm, as well as the approximation guarantee that the greedy algorithm achieves relative to the optimal solution. Using a real-world dataset, we demonstrate that the proposed framework employing personalized file preferences brings substantial gains over its counterpart for various system parameters. Comment: 21 pages, 10 figures, 1 table, to appear in the IEEE Transactions on Mobile Computing.
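
    The CF step can be illustrated with a small matrix-factorization sketch that infers user-file preference scores from sparse request counts via SGD; the update rule, hyper-parameters, and names below are assumptions for illustration rather than the paper's learning procedure.

```python
import numpy as np

def factorize_preferences(requests, rank=8, lr=0.01, reg=0.1, epochs=50, seed=0):
    """requests: dict {(user_idx, file_idx): observed request count}.
    Returns an (n_users x n_files) matrix of estimated preference scores,
    obtained by low-rank factorization of the observed entries."""
    n_users = 1 + max(u for u, _ in requests)
    n_files = 1 + max(f for _, f in requests)
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n_users, rank))   # user latent factors
    V = 0.1 * rng.standard_normal((n_files, rank))   # file latent factors
    for _ in range(epochs):
        for (u, f), r in requests.items():
            err = r - U[u] @ V[f]
            u_row = U[u].copy()                      # pre-update copy for V's step
            U[u] += lr * (err * V[f] - reg * U[u])
            V[f] += lr * (err * u_row - reg * V[f])
    return U @ V.T
```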

    A Survey on Low Latency Towards 5G: RAN, Core Network and Caching Solutions

    The fifth generation (5G) wireless network technology is to be standardized by 2020, with the main goals of improving capacity, reliability, and energy efficiency while reducing latency and massively increasing connection density. An integral part of 5G is the capability to support touch-perception-type real-time communications, empowered by applicable robotics and haptics equipment at the network edge. In this regard, drastic changes in the network architecture, including the core and radio access network (RAN), are needed to achieve end-to-end latency on the order of 1 ms. In this paper, we present a detailed survey of the emerging technologies for achieving low-latency communications, considering three solution domains: RAN, core network, and caching. We also present a general overview of 5G cellular networks built on software-defined networking (SDN), network function virtualization (NFV), caching, and mobile edge computing (MEC), capable of meeting latency and other 5G requirements. Comment: Accepted in IEEE Communications Surveys and Tutorials.

    Caching at the Wireless Edge: Design Aspects, Challenges and Future Directions

    Caching at the wireless edge is a promising way of boosting spectral efficiency and reducing the energy consumption of wireless systems. These improvements are rooted in the fact that popular contents are reused, asynchronously, by many users. In this article, we first introduce methods to predict popularity distributions and user preferences, and discuss the impact of erroneous information. We then discuss the two aspects of caching systems, namely content placement and delivery. We expound the key differences between wired and wireless caching, and outline how the system differs depending on where the caching takes place, e.g., at base stations or on the wireless devices themselves. Special attention is paid to the essential limitations of wireless caching, and to possible tradeoffs between spectral efficiency, energy efficiency, and cache size. Comment: Published in IEEE Communications Magazine.