Edge Computing for Extreme Reliability and Scalability
The massive number of Internet of Things (IoT) devices and their continuous data collection will lead to a rapid increase in the scale of collected data. Processing all of these data at a central cloud server is inefficient, and may even be infeasible or unnecessary. Hence, the task of processing the data is pushed to the network edges, introducing the concept of Edge Computing. Processing information closer to the source of the data (e.g., on gateways and on edge micro-servers) not only reduces the heavy workload of the central cloud but also decreases the latency of real-time applications by avoiding the unreliable and unpredictable network delay of communicating with the central cloud.
Efficient handover mechanism for radio access network slicing by exploiting distributed learning
Network slicing is identified as a fundamental architectural technology for future mobile networks since it can logically separate networks into multiple slices and provide tailored quality of service (QoS). However, the introduction of network slicing into radio access networks (RAN) can greatly increase user handover complexity in cellular networks. Specifically, both physical resource constraints on base stations (BSs) and logical connection constraints on network slices (NSs) should be considered when making a handover decision. Moreover, various service types call for an intelligent handover scheme to guarantee the diversified QoS requirements. As such, in this paper, a multi-agent reinforcement LEarning based Smart handover Scheme, named LESS, is proposed, with the purpose of minimizing handover cost while maintaining user QoS. Due to the large action space introduced by multiple users and the data sparsity caused by user mobility, conventional reinforcement learning algorithms cannot be applied directly. To overcome these difficulties, LESS exploits the unique characteristics of slicing in designing two algorithms: 1) LESS-DL, a distributed Q-learning algorithm that makes handover decisions with a reduced action space but without compromising handover performance; 2) LESS-QVU, a modified Q-value update algorithm that exploits slice traffic similarity to improve the accuracy of Q-value evaluation with limited data. Thus, LESS uses LESS-DL to choose the target BS and NS when a handover occurs, while Q-values are updated using LESS-QVU. The convergence of LESS is theoretically proved in this paper. Simulation results show that LESS can significantly improve network performance. In more detail, the number of handovers, handover cost and outage probability are reduced by around 50%, 65%, and 45%, respectively, when compared with traditional methods.
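The abstract's LESS-DL and LESS-QVU algorithms are not spelled out here, but the core decision loop they build on, per-user tabular Q-learning over (base station, network slice) handover targets with the action space restricted to feasible pairs, can be sketched as follows. This is a minimal illustration under assumed state/action encodings and reward, not the paper's actual algorithm:

```python
import random
from collections import defaultdict

class HandoverAgent:
    """Per-user tabular Q-learning over (BS, NS) handover targets.

    A simplified sketch of the idea behind LESS-DL: restricting each
    decision to the currently feasible (base station, network slice)
    pairs shrinks the action space. State encoding, reward, and the
    feasibility filter are hypothetical.
    """
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # Q[(state, action)] -> value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state, feasible_actions):
        # Epsilon-greedy selection over only the feasible (BS, NS) pairs.
        if random.random() < self.epsilon:
            return random.choice(feasible_actions)
        return max(feasible_actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state, next_feasible):
        # Standard Q-learning target; LESS-QVU would instead pool
        # statistics across similar slices to cope with sparse data.
        best_next = max(self.q[(next_state, a)] for a in next_feasible)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
```

A handover event would then call `choose` with the user's current state and the feasible targets, trigger the handover, and feed the observed cost back through `update`.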
A Vision and Framework for the High Altitude Platform Station (HAPS) Networks of the Future
A High Altitude Platform Station (HAPS) is a network node that operates in
the stratosphere at an altitude of around 20 km and is instrumental in
providing communication services. Precipitated by technological innovations in
the areas of autonomous avionics, array antennas, solar panel efficiency
levels, and battery energy densities, and fueled by flourishing industry
ecosystems, the HAPS has emerged as an indispensable component of
next-generation wireless networks. In this article, we provide a vision and
framework for the HAPS networks of the future supported by a comprehensive and
state-of-the-art literature review. We highlight the unrealized potential of
HAPS systems and elaborate on their unique ability to serve metropolitan areas.
The latest advancements and promising technologies in the HAPS energy and
payload systems are discussed. The integration of the emerging Reconfigurable
Smart Surface (RSS) technology in the communications payload of HAPS systems
for providing a cost-effective deployment is proposed. A detailed overview of
the radio resource management in HAPS systems is presented along with
synergistic physical layer techniques, including Faster-Than-Nyquist (FTN)
signaling. Numerous aspects of handoff management in HAPS systems are
described. The notable contributions of Artificial Intelligence (AI) in HAPS,
including machine learning in the design, topology management, handoff, and
resource allocation aspects are emphasized. The extensive overview of the
literature we provide is crucial for substantiating our vision that depicts the
expected deployment opportunities and challenges in the next 10 years
(next-generation networks), as well as in the subsequent 10 years
(next-next-generation networks).
Comment: To appear in IEEE Communications Surveys & Tutorials
Self-Evolving Integrated Vertical Heterogeneous Networks
6G and beyond networks tend towards fully intelligent and adaptive design in
order to provide better operational agility in maintaining universal wireless
access and supporting a wide range of services and use cases while dealing with
network complexity efficiently. Such enhanced network agility will require
developing a self-evolving capability in designing both the network
architecture and resource management to intelligently utilize resources, reduce
operational costs, and achieve the coveted quality of service (QoS). To enable
this capability, the necessity of considering an integrated vertical
heterogeneous network (VHetNet) architecture appears to be inevitable due to
its high inherent agility. Moreover, employing an intelligent framework is
another crucial requirement for self-evolving networks to deal with real-time
network optimization problems. Hence, in this work, to provide better insight
into network architecture design in support of self-evolving networks, we
highlight the merits of integrated VHetNet architecture while proposing an
intelligent framework for self-evolving integrated vertical heterogeneous
networks (SEI-VHetNets). The impact of the challenges associated with
SEI-VHetNet architecture on network management is also studied considering a
generalized network model. Furthermore, the current literature on network
management of integrated VHetNets is discussed, along with recent advancements
in artificial intelligence (AI)/machine learning (ML) solutions.
Accordingly, the core challenges of integrating AI/ML in SEI-VHetNets are
identified. Finally, the potential future research directions for advancing the
autonomous and self-evolving capabilities of SEI-VHetNets are discussed.
Comment: 25 pages, 5 figures, 2 tables
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks have substantial potential in terms of supporting
a broad range of complex compelling applications both in military and civilian
fields, where the users are able to enjoy high-rate, low-latency, low-cost and
reliable information services. Achieving this ambitious goal requires new radio
techniques for adaptive learning and intelligent decision making because of the
complex heterogeneous nature of the network structures and wireless services.
Machine learning (ML) algorithms have achieved great success in supporting big data
analytics, efficient parameter estimation and interactive decision making.
Hence, in this article, we review the thirty-year history of ML by elaborating
on supervised learning, unsupervised learning, reinforcement learning and deep
learning. Furthermore, we investigate their employment in the compelling
applications of wireless networks, including heterogeneous networks (HetNets),
cognitive radios (CR), Internet of things (IoT), machine to machine networks
(M2M), and so on. This article aims to assist readers in clarifying the
motivation and methodology of the various ML algorithms, so as to invoke them
for hitherto unexplored services as well as scenarios of future wireless
networks.
Comment: 46 pages, 22 figures
New paradigms of distributed AI for improving 5G-based network systems performance
With the advent of 5G technology, there is an increasing need for efficient and effective
machine learning techniques to support a wide range of applications, from smart cities to
autonomous vehicles. The research question is whether distributed machine learning can
provide a solution to the challenges of large-scale data processing, resource allocation, and
privacy concerns in 5G networks. The thesis examines two main approaches to distributed
machine learning: split learning and federated learning. Split learning enables the separation of model training and data storage between multiple devices, while federated learning
allows for the training of a global model using decentralized data sources. The thesis investigates the performance of these approaches in terms of accuracy, communication overhead,
and privacy preservation. The findings suggest that distributed machine learning can provide a viable solution to the challenges of 5G networks, with split learning and federated
learning techniques showing promising results for spectral efficiency, resource allocation, and
privacy preservation. The thesis concludes with a discussion of future research directions
and potential applications of distributed machine learning in 5G networks.
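Of the two approaches the thesis compares, federated learning hinges on one aggregation step: the server combines locally trained client models into a global model, weighting each client by its amount of local data (the FedAvg rule). A toy sketch with plain NumPy weight vectors, not the thesis's actual implementation, might look like:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg aggregation).

    client_weights: list of 1-D parameter arrays, one per client
    client_sizes:   number of local training samples per client
    """
    total = sum(client_sizes)
    stacked = np.stack(client_weights)            # shape: (clients, params)
    coeffs = np.array(client_sizes, dtype=float) / total
    return coeffs @ stacked                       # weighted sum over clients

# Three clients with different amounts of local data
w = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
n = [10, 10, 20]
global_w = federated_average(w, n)  # -> array([0.75, 0.75])
```

Split learning, by contrast, needs no such averaging: the model itself is cut at a layer boundary, with the client computing the layers up to the cut and the server the rest, so only activations and gradients at the cut cross the network.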
In this thesis, we investigate four case studies covering both 5G network systems
and legacy LTE and Wi-Fi networks. In Chapter 3, we implement an asynchronous
federated learning model to predict the RSSI for robot localization in indoor and
outdoor environments. The proposed framework performs well in terms of convergence,
accuracy, and overhead reduction. In Chapter 4, we transfer the deployment of the
asynchronous federated learning framework from the Wi-Fi use case to a part of 5G
networks (network slicing), where we use the framework to predict the slice type
for rapid and automated intelligent resource allocation. [...
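In asynchronous federated learning, the server does not wait for all clients before aggregating: each client update is mixed into the global model as it arrives, typically down-weighted by its staleness (how many global versions elapsed while the client trained). A common staleness-weighted mixing rule, shown here as an illustration rather than the thesis's exact scheme, is:

```python
import numpy as np

def async_update(global_w, client_w, staleness, base_mix=0.5):
    """Mix one client's model into the global model on arrival.

    Staler updates (trained against an older global model) receive a
    smaller mixing weight, a common heuristic in asynchronous FL.
    """
    mix = base_mix / (1.0 + staleness)
    return (1.0 - mix) * global_w + mix * client_w

g = np.zeros(2)
g = async_update(g, np.array([1.0, 1.0]), staleness=0)  # fresh update, mix 0.5
g = async_update(g, np.array([1.0, 1.0]), staleness=1)  # stale update, mix 0.25
```

This arrival-time mixing is what gives asynchronous schemes their overhead advantage on unreliable links such as Wi-Fi: no global round blocks on the slowest client.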
Modelling, Dimensioning and Optimization of 5G Communication Networks, Resources and Services
This reprint collects state-of-the-art research contributions that address challenges in the design, dimensioning and optimization of emerging 5G networks. The design, dimensioning and optimization of communication network resources and services have been an inseparable part of telecom network development. Such networks must convey a large volume of traffic, providing service to traffic streams with highly differentiated requirements in terms of bit rate and service time, as well as required quality of service and quality of experience parameters. Such a communication infrastructure presents many important challenges, such as the study of necessary multi-layer cooperation, new protocols, performance evaluation of different network parts, low-layer network design, network management and security issues, and new technologies in general, all of which are discussed in this book.