
    Joint Pushing and Caching with a Finite Receiver Buffer: Optimal Policies and Throughput Analysis

    Pushing and caching hold the promise of significantly increasing the throughput of content-centric wireless networks. However, the throughput gain of these techniques is limited by the buffer size of the receiver. To overcome this limitation, this paper presents a Joint Pushing and Caching (JPC) method that jointly determines the contents to be pushed to, and to be removed from, the receiver buffer in each timeslot. One offline and two online JPC policies are proposed, based respectively on noncausal, statistical, and causal content Request Delay Information (RDI), which predicts when a user will request a given content. It is shown that the effective throughput of JPC increases with the receiver buffer size and the pushing channel capacity. Furthermore, causal feedback of user requests is found to greatly enhance the performance of online JPC without incurring much signalling overhead in practice. (Comment: 6 pages, 4 figures)
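As an illustration of the per-timeslot decision the abstract describes, here is a minimal Python sketch of one JPC step under the assumption that the RDI gives each content a predicted request time; the greedy earliest-request-first rule, the function name jpc_timeslot, and all parameters are illustrative placeholders, not the paper's optimal policies.

```python
import heapq

def jpc_timeslot(buffer, candidates, predicted_request, buffer_size, push_capacity):
    """One illustrative JPC timeslot: push the contents predicted to be requested
    soonest, evicting the least urgent cached contents when the finite receiver
    buffer is full (hypothetical greedy rule, not the paper's optimal policy)."""
    # Rank push candidates by urgency, i.e. earliest predicted request first.
    to_push = heapq.nsmallest(
        push_capacity,
        (c for c in candidates if c not in buffer),
        key=lambda c: predicted_request[c],
    )
    pushed, evicted = [], []
    for c in to_push:
        if len(buffer) >= buffer_size:
            # Candidate eviction: the cached content requested furthest in the future.
            victim = max(buffer, key=lambda x: predicted_request[x])
            if predicted_request[victim] <= predicted_request[c]:
                continue  # everything cached is more urgent; skip this push
            buffer.remove(victim)
            evicted.append(victim)
        buffer.add(c)
        pushed.append(c)
    return pushed, evicted
```

With noncausal RDI the predicted request times would be exact and the rule becomes an offline schedule; with statistical or causal RDI they would be estimates refined from request feedback, mirroring the offline/online split in the abstract.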

    Optimized mobile thin clients through a MPEG-4 BiFS semantic remote display framework

    According to the thin client computing principle, the user interface is physically separated from the application logic. In practice, only a viewer component is executed on the client device, rendering the display updates received from the distant application server and capturing the user interaction. Existing remote display frameworks are not optimized to encode the complex scenes of modern applications, which are composed of objects with very diverse graphical characteristics. To tackle this challenge, we propose to transfer to the client, in addition to the binary encoded objects, semantic information about the characteristics of each object. Through this semantic knowledge, the client can react autonomously to user input and does not have to wait for the display update from the server. Because this reduces the interaction latency and mitigates the bursty remote display traffic pattern, the presented framework is of particular interest in a wireless context, where bandwidth is limited and expensive. In this paper, we describe a generic architecture of a semantic remote display framework. Furthermore, we have developed a prototype using the MPEG-4 Binary Format for Scenes (BiFS) to convey the semantic information to the client. We experimentally compare the bandwidth consumption of MPEG-4 BiFS with existing, non-semantic remote display frameworks. In a text editing scenario, we realize an average reduction of 23% in the data peaks observed in remote display protocol traffic.
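The client-side behaviour the abstract describes can be pictured with a small Python sketch: scene objects arrive with a semantic type, and the viewer handles input locally whenever that type allows it, falling back to the server otherwise. The class names, the set of locally handled semantics, and the event format are assumptions for illustration, not part of the MPEG-4 BiFS framework itself.

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    object_id: int
    semantic_type: str   # e.g. "text_field", "button", "bitmap"
    state: str = ""

class SemanticThinClient:
    """Illustrative viewer: objects carrying semantic metadata are updated
    locally, avoiding a server round trip for every keystroke."""

    LOCALLY_HANDLED = {"text_field"}  # semantics this hypothetical client renders itself

    def __init__(self, scene):
        self.scene = {obj.object_id: obj for obj in scene}
        self.pending_server_events = []

    def on_user_input(self, object_id, event):
        obj = self.scene[object_id]
        if obj.semantic_type in self.LOCALLY_HANDLED and event.get("type") == "key":
            obj.state += event["char"]          # react autonomously: echo the character
        else:
            self.pending_server_events.append((object_id, event))  # classic thin-client path
```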

    Energy Minimization in D2D-Assisted Cache-Enabled Internet of Things: A Deep Reinforcement Learning Approach

    Mobile edge caching (MEC) and device-to-device (D2D) communications are two potential technologies to resolve traffic overload problems in the Internet of Things. Previous works usually investigate them separately, using MEC for traffic offloading and D2D for information transmission. In this article, a joint framework consisting of MEC and cache-enabled D2D communications is proposed to minimize the energy cost of systematic traffic transmission, where file popularity and user preference are the critical criteria for small base stations (SBSs) and user devices, respectively. Under this framework, we propose a novel caching strategy in which a Markov decision process is applied to model the requesting behaviors. A novel scheme based on reinforcement learning (RL) is proposed to reveal the popularity of files as well as users' preferences. In particular, a Q-learning algorithm and a deep Q-network (DQN) algorithm are applied to the user devices and the SBS, respectively, owing to the different complexities of their states. To save the energy cost of systematic traffic transmission, users acquire part of the traffic through D2D communications based on the cached contents and the user distribution. Taking memory limits, D2D-available files, and state changes into consideration, the proposed RL algorithm enables user devices and the SBS to prefetch the optimal files while learning, which can reduce the energy cost significantly. Simulation results demonstrate the superior energy-saving performance of the proposed RL-based algorithm over other existing methods under various conditions.
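The device-side learning step can be sketched in a few lines of Python: a tabular Q-learning agent that picks which file to prefetch and updates its value estimates from an energy-based reward. The state/action encoding, the hyperparameters, and the reward definition are placeholders rather than the article's exact formulation (the SBS side would use a deep Q-network instead of a table).

```python
import random
from collections import defaultdict

class CacheQLearner:
    """Illustrative tabular Q-learning agent for a user device choosing which
    file to prefetch into its limited cache (placeholder formulation)."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # (state, action) -> estimated value
        self.actions = actions        # candidate files to prefetch
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:
            return random.choice(self.actions)                       # explore
        return max(self.actions, key=lambda a: self.q[(state, a)])   # exploit

    def update(self, state, action, reward, next_state):
        # Q-learning target: reward (e.g. negative energy cost of serving the
        # next requests from cache/D2D rather than the SBS) plus the discounted
        # value of the best next action.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_error = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td_error
```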

    Reinforcement learning for proactive content caching in wireless networks

    Proactive content caching (PC) at the edge of wireless networks, that is, at the base stations (BSs) and/or user equipments (UEs), is a promising strategy to handle the ever-growing mobile data traffic and to improve the quality of service for content delivery over wireless networks. However, factors such as limited storage capacity and time variations in both the wireless channel conditions and the content demand profile pose challenges that need to be addressed in order to realise the benefits of PC at the wireless edge. This thesis aims to develop PC solutions that address these challenges. We consider PC directly at UEs equipped with finite-capacity cache memories. This is done within the framework of a dynamic system, where mobile users randomly request contents from a non-stationary content library; new contents are added to the library over time, and each content may remain in the library for a random lifetime within which it may be requested. Contents are delivered through wireless channels with time-varying quality, and whenever contents are transmitted, the system incurs a transmission cost that depends on the number of bits downloaded and the channel quality of the receiving user(s) at that time. We formulate each considered problem as a Markov decision process with the objective of minimising the long-term expected average cost to the system. We then use reinforcement learning (RL) to solve this highly challenging problem with prohibitively large state and action spaces. In particular, we employ policy approximation techniques for compact representation of complex policy structures, and policy gradient RL methods to train the system. In the single-user problem setting, we show the optimality of a threshold-based PC scheme that adapts to the system dynamics. We use this result to characterise and design a multicast-aware PC scheme, based on a deep RL framework, for the multi-user problem setting. We perform extensive numerical simulations of the proposed schemes. Our results show not only significant improvements over state-of-the-art reactive content delivery approaches, but also the near-optimality of the proposed RL solutions, based on comparisons with some lower bounds.
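The threshold structure mentioned in the abstract can be illustrated with a small Python sketch: in each slot, a chunk of a content is prefetched only if the channel quality exceeds a threshold, since the per-bit transmission cost is lower in good channel states. The function name, the threshold, and the chunk size are hypothetical stand-ins for the quantities the thesis's learned policies adapt to the system dynamics.

```python
def proactive_prefetch_bits(channel_quality, remaining_bits, threshold, chunk_size):
    """Illustrative threshold-based proactive caching rule (sketch only):
    prefetch a chunk of a predicted-to-be-requested content when the current
    channel quality is good enough, otherwise wait for a better channel state."""
    if remaining_bits > 0 and channel_quality >= threshold:
        return min(chunk_size, remaining_bits)   # bits to download this slot
    return 0                                     # defer: transmission too costly now
```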