225 research outputs found
Joint content placement and storage allocation based on federated learning in F-RANs
Funding: This work was supported in part by Innovation Project of the Common Key Technology of Chongqing Science and Technology Industry (cstc2018jcyjAX0383), the special fund of Chongqing key laboratory (CSTC), and the Funding of CQUPT (A2016-83, GJJY19-2-23, A2020-270). Peer reviewed. Publisher PDF.
Mobility-Aware Cooperative Caching in Vehicular Edge Computing Based on Asynchronous Federated and Deep Reinforcement Learning
Vehicular edge computing (VEC) can cache contents in different RSUs at the network edge to support real-time vehicular applications. In VEC, owing to the high mobility of vehicles, it is necessary to cache contents in advance and learn the most popular and interesting contents for vehicular users. Since user data usually contains private information, users are reluctant to share their data with others. To solve this problem, traditional federated learning (FL) updates the global model synchronously by aggregating all users' local models, thereby protecting users' privacy. However, vehicles may frequently drive out of the coverage area of the VEC before they complete their local model training, so the local models cannot be uploaded as expected, which would reduce the accuracy of the global model. In addition, the caching capacity of the local RSU is limited and the popular contents are diverse, so the size of the predicted popular contents usually exceeds the cache capacity of the local RSU. Hence, the VEC should cache the predicted popular contents in different RSUs while considering the content transmission delay. In this paper, we consider the mobility of vehicles and propose a cooperative Caching scheme for the VEC based on Asynchronous Federated and deep Reinforcement learning (CAFR). We first propose a mobility-aware asynchronous FL algorithm to obtain an accurate global model, and then propose an algorithm to predict the popular contents based on the global model. In addition, we propose a mobility-aware deep reinforcement learning algorithm to obtain the optimal cooperative caching locations for the predicted popular contents in order to minimize the content transmission delay. Extensive experimental results demonstrate that the CAFR scheme outperforms other baseline caching schemes.
Comment: This paper has been submitted to IEEE Journal of Selected Topics in Signal Processing.
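The core idea of asynchronous FL in this setting is that the server merges each local model into the global model as soon as it arrives, rather than waiting for all vehicles. A minimal sketch of such an update is shown below; the staleness-discounted weighting rule, function names, and learning rate are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def async_update(global_model, local_model, staleness, base_lr=0.5):
    """Merge one vehicle's local model into the global model on arrival.
    Staler updates (trained on an older global model) get a smaller
    weight; this discount rule is a hypothetical choice for illustration."""
    alpha = base_lr / (1.0 + staleness)
    return {k: (1 - alpha) * global_model[k] + alpha * local_model[k]
            for k in global_model}

# Example: one parameter tensor, a vehicle's update one round stale.
g = {"w": np.ones(3)}
local = {"w": np.zeros(3)}
g = async_update(g, local, staleness=1)  # alpha = 0.25
```

A vehicle that leaves RSU coverage mid-round simply never uploads; unlike synchronous FL, no other participant is blocked waiting for it.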
Federated Learning Based Proactive Content Caching in Edge Computing
This is the author accepted manuscript; the final version is available from IEEE via the DOI in this record.
Content caching is a promising approach in edge computing to cope with the explosive growth of mobile data on 5G networks, where contents are typically placed in local caches for fast and repetitive data access. Due to the limited capacity of caches, it is essential to predict the popularity of files and cache the popular ones. However, the fluctuating popularity of files makes prediction a highly challenging task. To tackle this challenge, many recent works propose learning-based approaches that gather the users' data centrally for training, but they raise a significant issue: users may not trust the central server and thus hesitate to upload their private data. To address this issue, we propose a Federated learning based Proactive Content Caching (FPCC) scheme, which does not require gathering users' data centrally for training. FPCC is based on a hierarchical architecture in which the server aggregates the users' updates using federated averaging, while each user performs training on its local data using hybrid filtering on stacked autoencoders. The experimental results demonstrate that, without gathering users' private data, our scheme still outperforms other learning-based caching algorithms such as m-epsilon-greedy and Thompson sampling in terms of cache efficiency.
Funding: Engineering and Physical Sciences Research Council (EPSRC); National Key Research and Development Program of China; National Natural Science Foundation of China; European Union Seventh Framework Programme.
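The server-side aggregation named in the abstract is standard federated averaging: each client's parameters are weighted by its local data size. A minimal sketch, with illustrative variable names (the clients' models and data sizes are made up for the example):

```python
import numpy as np

def federated_averaging(client_models, data_sizes):
    """Server-side FedAvg: average each parameter across clients,
    weighting client k by n_k / sum(n)."""
    total = sum(data_sizes)
    return {key: sum((n / total) * model[key]
                     for model, n in zip(client_models, data_sizes))
            for key in client_models[0]}

# Two clients; the second holds 3x as much local data.
clients = [{"w": np.array([1.0, 2.0])}, {"w": np.array([3.0, 4.0])}]
avg = federated_averaging(clients, data_sizes=[1, 3])
# weights 0.25 and 0.75 -> avg["w"] is [2.5, 3.5]
```

Only these model updates travel to the server; the users' raw interaction data used to train the stacked autoencoders never leaves the device.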
Asynchronous Federated Learning Based Mobility-aware Caching in Vehicular Edge Computing
Vehicular edge computing (VEC) is a promising technology that supports real-time applications by caching contents in roadside units (RSUs), so that vehicles can fetch the contents requested by vehicular users (VUs) from the RSU within a short time. The capacity of the RSU is limited and the contents requested by VUs change frequently due to the high mobility of vehicles, so it is essential to predict the most popular contents and cache them in the RSU in advance. The RSU can train a model based on the VUs' data to effectively predict the popular contents. However, VUs are often reluctant to share their data with others due to personal privacy concerns. Federated learning (FL) allows each vehicle to train a local model on the VUs' data and upload the local model, instead of the data, to the RSU to update the global model, so the VUs' private information is protected. Traditional synchronous FL must wait for all vehicles to complete training and upload their local models before updating the global model, which leads to a long training time for the global model. Asynchronous FL updates the global model as soon as a vehicle's local model is received. However, vehicles with different staying times contribute differently to the accuracy of the global model. In this paper, we consider vehicle mobility and propose an Asynchronous FL based Mobility-aware Edge Caching (AFMC) scheme to obtain an accurate global model, and then propose an algorithm to predict the popular contents based on the global model. Experimental results show that AFMC outperforms other baseline caching schemes.
Comment: This paper has been submitted to The 14th International Conference on Wireless Communications and Signal Processing (WCSP 2022).
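Once the global model yields per-content popularity scores, the capacity-limited caching step described above reduces to selecting the highest-scoring contents that fit in the RSU. A hypothetical greedy sketch (scores and capacity units are illustrative; the paper's prediction algorithm itself is not reproduced here):

```python
def select_cache(predicted_popularity, capacity):
    """Pick the top-`capacity` contents by predicted popularity to
    pre-cache in the RSU. Assumes unit-sized contents for simplicity."""
    ranked = sorted(predicted_popularity,
                    key=predicted_popularity.get, reverse=True)
    return ranked[:capacity]

# Illustrative scores from a global popularity model.
scores = {"a": 0.9, "b": 0.2, "c": 0.7, "d": 0.5}
cached = select_cache(scores, capacity=2)  # ['a', 'c']
```

With heterogeneous content sizes the same step becomes a knapsack-style selection rather than a simple top-k.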
Mobility-Aware Proactive Edge Caching for Connected Vehicles Using Federated Learning
Content caching at the edge of vehicular networks has been considered a promising technology to satisfy the increasing demands of computation-intensive and latency-sensitive vehicular applications for intelligent transportation. Existing content caching schemes, when used in vehicular networks, face two distinct challenges: 1) vehicles connected to an edge server keep moving, making content popularity time-varying and hard to predict; 2) cached content easily becomes out of date, since each connected vehicle stays in the area of an edge server for only a short duration. To address these challenges, we propose a Mobility-aware Proactive edge Caching scheme based on Federated learning (MPCF). This new scheme enables multiple vehicles to collaboratively learn a global model for predicting content popularity with the private training data distributed on local vehicles. MPCF also employs a Context-aware Adversarial AutoEncoder to predict the highly dynamic content popularity. Besides, MPCF integrates a mobility-aware cache replacement policy, which allows the network edges to add/evict contents in response to the mobility patterns and preferences of vehicles. MPCF can greatly improve cache performance, effectively protect users' privacy and significantly reduce communication costs. Experimental results demonstrate that MPCF outperforms other baseline caching schemes in terms of cache hit ratio in vehicular edge networks.
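A mobility-aware replacement policy like the one the abstract describes must down-weight content whose requesters are about to leave the edge server's coverage. The scoring rule below is a hypothetical sketch (the abstract does not give MPCF's actual formula); it values each cached item by popularity scaled by the expected residence time of its requesting vehicles:

```python
def mobility_aware_evict(cache, popularity, residence_time):
    """Return the cached content to evict: the one with the lowest
    mobility-weighted value. Popularity and residence times would come
    from the popularity model and vehicle trajectories; here they are
    illustrative inputs."""
    value = {c: popularity[c] * residence_time[c] for c in cache}
    return min(cache, key=value.get)

cache = ["news", "map", "video"]
pop = {"news": 0.8, "map": 0.5, "video": 0.9}
stay = {"news": 2.0, "map": 10.0, "video": 1.0}  # seconds remaining
victim = mobility_aware_evict(cache, pop, stay)
# "video" is popular but its requesters leave soonest (0.9 * 1.0 = 0.9)
```

Note that a purely popularity-based policy would instead evict "map", even though its requesters will stay long enough to benefit from the cached copy.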