A Survey of Deep Learning for Data Caching in Edge Network
The concept of edge caching in emerging 5G and beyond mobile networks is a
promising way to deal with both the traffic congestion problem in the core
network and the latency of accessing popular content. In that respect, end-user
demand for popular content can be satisfied by proactively caching it at the
network edge, i.e., in close proximity to the users. In addition to model-based
caching schemes, learning-based edge caching optimization has recently
attracted significant attention, and the aim hereafter is to capture these
recent advances in both model-based and data-driven techniques for proactive
caching. This paper summarizes the use of deep learning for data caching in
edge networks. We first outline the typical research topics in content caching
and formulate a taxonomy based on the network's hierarchical structure. Then, a
number of key types of deep learning algorithms are presented, ranging from
supervised and unsupervised learning to reinforcement learning. Furthermore, a
comparison of the state-of-the-art literature is provided in terms of caching
topics and deep learning methods. Finally, we discuss research challenges and
future directions for applying deep learning to caching.
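As a concrete illustration of the proactive idea the survey covers, here is a
minimal sketch: an edge node fills its cache with the top-K contents ranked by
predicted popularity rather than reacting to misses. The content identifiers
and popularity figures below are hypothetical, and the helper name is our own.

```python
def proactive_cache(predicted_popularity, capacity):
    """Fill the edge cache with the top-K contents by predicted popularity,
    where K is the cache capacity. A minimal sketch of proactive placement."""
    ranked = sorted(predicted_popularity, key=predicted_popularity.get,
                    reverse=True)
    return set(ranked[:capacity])

# Hypothetical predicted request probabilities per content id.
predicted = {"a": 0.40, "b": 0.25, "c": 0.15, "d": 0.12, "e": 0.08}
cache = proactive_cache(predicted, capacity=2)
print(sorted(cache))  # ['a', 'b']
```

In a real deployment the `predicted_popularity` map would come from a learned
model (supervised, unsupervised, or reinforcement-based, as the survey
discusses), and the placement would be refreshed as predictions change.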
ViT-CAT: Parallel Vision Transformers with Cross Attention Fusion for Popularity Prediction in MEC Networks
Mobile Edge Caching (MEC) is a revolutionary technology for the Sixth
Generation (6G) of wireless networks with the promise to significantly reduce
users' latency via offering storage capacities at the edge of the network. The
efficiency of the MEC network, however, critically depends on its ability to
dynamically predict/update the storage of caching nodes with the top-K popular
contents. Conventional statistical caching schemes are not robust to the
time-variant nature of the underlying pattern of content requests, resulting in
a surge of interest in using Deep Neural Networks (DNNs) for time-series
popularity prediction in MEC networks. However, existing DNN models within the
context of MEC fail to simultaneously capture both temporal correlations of
historical request patterns and the dependencies between multiple contents.
This gap motivates the design of a new popularity prediction architecture,
which this paper addresses by proposing a novel hybrid caching framework based
on the attention mechanism. Referred to as parallel Vision Transformers with
Cross Attention (ViT-CAT) Fusion, the proposed architecture consists of two
parallel ViT networks, one capturing temporal correlations and the other
capturing dependencies between different contents. The two branches feed a
Cross Attention (CA) module acting as the Fusion Center (FC), which lets
ViT-CAT learn the mutual information between temporal and spatial
correlations, improving classification accuracy while reducing the model's
complexity by a factor of about 8. Based on simulation results, the proposed
ViT-CAT architecture outperforms its counterparts in classification accuracy,
complexity, and cache-hit ratio.
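The cross-attention fusion the abstract describes can be sketched in plain
NumPy. This is a generic scaled dot-product cross attention between two token
sets, not the authors' exact ViT-CAT implementation; all dimensions below are
illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product cross attention: queries come from one branch,
    keys/values from the other, so each branch attends to the other's tokens."""
    d_k = keys.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)  # (n_q, n_kv)
    weights = softmax(scores, axis=-1)        # rows sum to 1
    return weights @ values                   # (n_q, d_v)

rng = np.random.default_rng(0)
temporal = rng.normal(size=(16, 32))  # 16 time-step tokens, 32-dim embeddings
content = rng.normal(size=(10, 32))   # 10 content tokens, 32-dim embeddings

# Temporal tokens query the content branch; the result fuses both views.
fused = cross_attention(temporal, content, content)
print(fused.shape)  # (16, 32)
```

In the full architecture each branch's output would be produced by a ViT
encoder and the fused representation would feed a classification head.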
A proactive mobile edge cache policy based on the prediction by partial matching
Proactive caching is an emerging approach to cost-effectively boost network capacity and reduce access latency, but its performance relies heavily on accurate content prediction. In this paper, a proactive cache policy is therefore proposed in a distributed manner, using predictions of content popularity and user location to minimise latency and maximise the cache hit rate. A backpropagation neural network is applied to predict content popularity, and prediction by partial matching is chosen to predict user location. The simulation results reveal that the proposed cache policy improves the cache hit ratio by around 27%-60% and reduces the average latency by 14%-60%, compared with two conventional reactive policies, i.e., the LFU and LRU policies.
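The two reactive baselines mentioned, LRU and LFU, can be sketched as follows
to measure cache-hit ratio on a synthetic trace. The Zipf-like workload,
catalogue size, and cache capacity are our own assumptions, not the paper's
experimental setup.

```python
import random
from collections import Counter, OrderedDict

class LRUCache:
    """Least Recently Used: evict the item that was accessed longest ago."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
    def request(self, item):
        hit = item in self.store
        if hit:
            self.store.move_to_end(item)       # mark as most recently used
        else:
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)  # evict least recently used
            self.store[item] = True
        return hit

class LFUCache:
    """Least Frequently Used: evict the item with the fewest requests so far."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = set()
        self.freq = Counter()
    def request(self, item):
        self.freq[item] += 1
        hit = item in self.store
        if not hit:
            if len(self.store) >= self.capacity:
                victim = min(self.store, key=lambda x: self.freq[x])
                self.store.discard(victim)
            self.store.add(item)
        return hit

def hit_ratio(cache, trace):
    hits = sum(cache.request(x) for x in trace)
    return hits / len(trace)

# Zipf-like synthetic request trace (hypothetical workload).
random.seed(1)
catalogue = list(range(100))
weights = [1 / (rank + 1) for rank in catalogue]
trace = random.choices(catalogue, weights=weights, k=5000)

print(hit_ratio(LRUCache(10), trace), hit_ratio(LFUCache(10), trace))
```

A proactive policy like the paper's would instead pre-fill the cache from a
popularity predictor, which is why it can beat both reactive baselines.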
From Traditional Adaptive Data Caching to Adaptive Context Caching: A Survey
Context data is in demand more than ever with the rapid increase in the
development of many context-aware Internet of Things applications. Research in
context and context-awareness is being conducted to broaden its applicability
in light of many practical and technical challenges. One of the challenges is
improving performance when responding to a large number of context queries.
Context Management Platforms that infer and deliver context to applications
measure this problem using Quality of Service (QoS) parameters. Although
caching is a proven way to improve QoS, the transiency of context, together
with features such as the variability and heterogeneity of context queries,
poses an additional real-time
cost management problem. This paper presents a critical survey of
state-of-the-art in adaptive data caching with the objective of developing a
body of knowledge in cost- and performance-efficient adaptive caching
strategies. We comprehensively survey a large number of research publications
and evaluate, compare, and contrast different techniques, policies, approaches,
and schemes in adaptive caching. Our critical analysis is motivated by the
focus on adaptively caching context as a core research problem. A formal
definition for adaptive context caching is then proposed, followed by
identified features and requirements of a well-designed, objectively optimal
adaptive context caching strategy.
Comment: This paper is under review with ACM Computing Surveys at the time of
publishing on arXiv.
Optimization of vehicular networks in smart cities: from agile optimization to learnheuristics and simheuristics
Vehicular ad hoc networks (VANETs) are a fundamental component of intelligent transportation systems in smart cities. With the support of open and real-time data, these networks of inter-connected vehicles constitute an ‘Internet of vehicles’ with the potential to significantly enhance citizens’ mobility and last-mile delivery in urban, peri-urban, and metropolitan areas. However, the proper coordination and logistics of VANETs raise a number of optimization challenges that need to be solved. After reviewing the state of the art on VANET optimization and open data in smart cities, this paper discusses some of the most relevant optimization challenges in this area. Since most of these optimization problems involve the need for real-time solutions or the consideration of uncertainty and dynamic environments, the paper also discusses how some VANET challenges can be addressed with agile optimization algorithms and the combination of metaheuristics with simulation and machine learning methods. The paper also offers a numerical analysis that measures the impact of using these optimization techniques on some related problems. Our numerical analysis, based on real data from Open Data Barcelona, demonstrates that the constructive heuristic outperforms the random scenario in the capacitated dispersion problem (CDP) combined with vehicular networks, maximizing the minimum distance between facilities while meeting capacity requirements with the fewest facilities.
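A constructive heuristic for a capacitated dispersion-style problem can be
sketched as a greedy procedure: open facilities one at a time, always picking
the candidate farthest from the already-open set, until the opened capacity
covers the demand. This is a generic illustration of the idea, not the
authors' algorithm, and the instance data below is synthetic.

```python
import math
import random

def greedy_capacitated_dispersion(points, capacities, demand):
    """Greedy constructive heuristic (a sketch): maximize the minimum
    pairwise distance among opened facilities while accumulating enough
    capacity to cover the demand."""
    def dist(a, b):
        return math.dist(a, b)

    # Seed with the two mutually farthest points.
    n = len(points)
    i, j = max(((i, j) for i in range(n) for j in range(i + 1, n)),
               key=lambda p: dist(points[p[0]], points[p[1]]))
    chosen = [i, j]

    # Keep adding the candidate whose nearest open facility is farthest away.
    while sum(capacities[k] for k in chosen) < demand:
        candidates = [k for k in range(n) if k not in chosen]
        best = max(candidates,
                   key=lambda k: min(dist(points[k], points[c])
                                     for c in chosen))
        chosen.append(best)
    return chosen

random.seed(42)
points = [(random.random(), random.random()) for _ in range(30)]
capacities = [random.randint(5, 15) for _ in range(30)]
solution = greedy_capacitated_dispersion(points, capacities, demand=60)
print(solution)
```

Learnheuristic and simheuristic variants would replace the deterministic
distance criterion with scores informed by a learned model or by simulation
under uncertainty.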
Edge Computing for Internet of Things
The Internet of Things is becoming an established technology, with devices being deployed in homes, workplaces, and public areas at an increasingly rapid rate. IoT devices are the core technology of smart homes, smart cities, and intelligent transport systems, and they promise to optimise travel, reduce energy usage, and improve quality of life. With the prevalence of IoT, the problem of how to manage the vast volume, wide variety, and erratic generation patterns of the data produced is becoming increasingly clear and challenging. This Special Issue focuses on solving this problem through the use of edge computing. Edge computing offers a solution to managing IoT data by processing it close to the location where it is generated. Edge computing allows computation to be performed locally, thus reducing the volume of data that needs to be transmitted to remote data centres and Cloud storage. It also allows decisions to be made locally without having to wait for Cloud servers to respond.
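The local-processing idea can be illustrated with a toy sketch: summarize raw
sensor readings at the edge and ship only per-window summaries to the cloud.
The window size, the chosen summary statistics, and the readings themselves
are assumptions for illustration only.

```python
def edge_aggregate(readings, window):
    """Summarize raw sensor readings at the edge in fixed-size windows,
    so only one small summary per window is transmitted upstream
    instead of every raw sample."""
    summaries = []
    for start in range(0, len(readings), window):
        chunk = readings[start:start + window]
        summaries.append({
            "count": len(chunk),
            "mean": sum(chunk) / len(chunk),
            "max": max(chunk),
        })
    return summaries

raw = [21.0, 21.5, 22.0, 40.1, 21.2, 21.3]  # hypothetical temperature samples
print(edge_aggregate(raw, window=3))
```

Here six raw samples collapse to two summary records, and an anomalous spike
(40.1) is still visible in the window maximum, so a local decision (e.g. an
alert) can be made without a round trip to the cloud.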