Self-Evolving Integrated Vertical Heterogeneous Networks
6G and beyond networks are trending towards fully intelligent and adaptive
designs that provide better operational agility, maintain universal wireless
access, and support a wide range of services and use cases while dealing with
network complexity efficiently. Such enhanced network agility will require
developing a self-evolving capability in the design of both the network
architecture and resource management to intelligently utilize resources, reduce
operational costs, and achieve the desired quality of service (QoS). To enable
this capability, considering an integrated vertical heterogeneous network
(VHetNet) architecture appears inevitable due to its high inherent agility.
Moreover, employing an intelligent framework is
another crucial requirement for self-evolving networks to deal with real-time
network optimization problems. Hence, in this work, to provide better insight
into network architecture design in support of self-evolving networks, we
highlight the merits of integrated VHetNet architecture while proposing an
intelligent framework for self-evolving integrated vertical heterogeneous
networks (SEI-VHetNets). The impact of the challenges associated with the
SEI-VHetNet architecture on network management is also studied, considering a
generalized network model. Furthermore, the current literature on network
management of integrated VHetNets, along with recent advancements in
artificial intelligence (AI)/machine learning (ML) solutions, is discussed.
Accordingly, the core challenges of integrating AI/ML in SEI-VHetNets are
identified. Finally, the potential future research directions for advancing the
autonomous and self-evolving capabilities of SEI-VHetNets are discussed.
Comment: 25 pages, 5 figures, 2 tables
Distributed drone base station positioning for emergency cellular networks using reinforcement learning
Due to the unpredictability of natural disasters, whenever a catastrophe happens it is vital not only that emergency rescue teams are prepared, but also that a functional communication network infrastructure is available. Hence, to prevent additional losses of human life, it is crucial that network operators are able to deploy an emergency infrastructure as fast as possible. To this end, the deployment of an intelligent, mobile, and adaptable network of drones (unmanned aerial vehicles) is being considered as one possible alternative for emergency situations. In this paper, an intelligent solution based on reinforcement learning is proposed in order to find the best positions of multiple drone small cells (DSCs) in an emergency scenario. The proposed solution's main goal is to maximize the number of users covered by the system, while the drones are limited by both backhaul and radio access network constraints. Results show that the proposed Q-learning solution largely outperforms all other approaches with respect to all metrics considered. Hence, intelligent DSCs are considered a good alternative for enabling the rapid and efficient deployment of an emergency communication network.
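The coverage-maximizing Q-learning formulation described in this abstract can be illustrated with a deliberately simplified sketch: a single drone small cell moves on a small grid and receives a reward equal to the number of users it covers each step. The grid size, user positions, coverage radius, launch position, and all hyperparameters below are illustrative assumptions, not the paper's actual setup, which also involves multiple drones plus backhaul and radio access constraints.

```python
import random

GRID = 5  # the drone occupies one of 5x5 grid positions (assumption)
USERS = [(1, 1), (1, 2), (3, 3), (4, 3), (4, 4)]  # hypothetical user locations
RADIUS = 1  # coverage radius in Chebyshev distance (assumption)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]  # up, down, right, left, hover

def covered(pos):
    """Number of users within the drone's coverage radius."""
    return sum(max(abs(ux - pos[0]), abs(uy - pos[1])) <= RADIUS
               for ux, uy in USERS)

def step(pos, action):
    """Move the drone one cell, clipping at the grid boundary."""
    return (min(GRID - 1, max(0, pos[0] + action[0])),
            min(GRID - 1, max(0, pos[1] + action[1])))

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, horizon=15, seed=0):
    """Tabular Q-learning: per-step reward is the number of covered users."""
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        pos = (0, 0)  # drone launches from a fixed corner (assumption)
        for _ in range(horizon):
            qs = Q.setdefault(pos, [0.0] * len(ACTIONS))
            if rng.random() < eps:  # epsilon-greedy exploration
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: qs[i])
            nxt = step(pos, ACTIONS[a])
            reward = covered(nxt)
            nq = Q.setdefault(nxt, [0.0] * len(ACTIONS))
            qs[a] += alpha * (reward + gamma * max(nq) - qs[a])  # Q update
            pos = nxt
    return Q

def greedy_position(Q, horizon=15):
    """Follow the learned greedy policy and return the final hover position."""
    pos = (0, 0)
    for _ in range(horizon):
        qs = Q.get(pos)
        if qs is None:
            break
        pos = step(pos, ACTIONS[max(range(len(ACTIONS)), key=lambda i: qs[i])])
    return pos
```

A multi-drone version would run one such learner per DSC and subtract already-covered users from each drone's reward; that coordination, and the backhaul constraint, are omitted here to keep the sketch tabular.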
UAV Path Planning for Wireless Data Harvesting: A Deep Reinforcement Learning Approach
Autonomous deployment of unmanned aerial vehicles (UAVs) supporting
next-generation communication networks requires efficient trajectory planning
methods. We propose a new end-to-end reinforcement learning (RL) approach to
UAV-enabled data collection from Internet of Things (IoT) devices in an urban
environment. An autonomous drone is tasked with gathering data from distributed
sensor nodes subject to limited flying time and obstacle avoidance. While
previous approaches, learning and non-learning based, must perform expensive
recomputations or relearn a behavior when important scenario parameters such as
the number of sensors, sensor positions, or maximum flying time, change, we
train a double deep Q-network (DDQN) with combined experience replay to learn a
UAV control policy that generalizes over changing scenario parameters. By
exploiting a multi-layer map of the environment fed through convolutional
network layers to the agent, we show that our proposed network architecture
enables the agent to make movement decisions for a variety of scenario
parameters that balance the data collection goal with flight time efficiency
and safety constraints. Considerable advantages in learning efficiency from
using a map centered on the UAV's position over a non-centered map are also
illustrated.
Comment: Code available under
https://github.com/hbayerlein/uav_data_harvesting, IEEE Global Communications
Conference (GLOBECOM) 202
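Two of the algorithmic ingredients this abstract names, double deep Q-network (DDQN) targets and combined experience replay, can be sketched independently of the convolutional map encoder. The class and function names below are illustrative and are not taken from the authors' linked repository; the buffer follows the combined-experience-replay idea of always including the newest transition in every sampled minibatch.

```python
import random
from collections import deque

class CombinedReplayBuffer:
    """Combined experience replay: every sampled minibatch also contains
    the most recent transition, so fresh experience is never starved."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)
        self.last = None
    def push(self, transition):
        self.buffer.append(transition)
        self.last = transition
    def sample(self, batch_size, rng=random):
        # Draw batch_size - 1 random transitions, then append the newest one.
        draws = rng.sample(list(self.buffer),
                           min(batch_size - 1, len(self.buffer)))
        return draws + [self.last]

def ddqn_target(reward, done, gamma, q_online_next, q_target_next):
    """Double DQN target: the online network selects the next action and the
    target network evaluates it, decoupling selection from evaluation."""
    if done:
        return reward
    best = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    return reward + gamma * q_target_next[best]
```

For example, with gamma = 0.9, a non-terminal transition with reward 1.0 whose online-network argmax over next-state actions is action 1 yields the target 1.0 + 0.9 * q_target_next[1].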
Communication and Control in Collaborative UAVs: Recent Advances and Future Trends
The recent progress in unmanned aerial vehicles (UAV) technology has
significantly advanced UAV-based applications for military, civil, and
commercial domains. Nevertheless, the challenges of establishing high-speed
communication links, flexible control strategies, and developing efficient
collaborative decision-making algorithms for a swarm of UAVs limit their
autonomy, robustness, and reliability. Thus, growing attention has been paid
to collaborative communication, allowing a swarm of UAVs to coordinate and
communicate autonomously for the cooperative completion of tasks in a short
time with improved efficiency and reliability. This work presents a
comprehensive review of collaborative communication in a multi-UAV system. We
thoroughly discuss the characteristics of intelligent UAVs and their
communication and control requirements for autonomous collaboration and
coordination. Moreover, we review various UAV collaboration tasks, summarize
the applications of UAV swarm networks for dense urban environments and present
the use case scenarios to highlight the current developments of UAV-based
applications in various domains. Finally, we identify several exciting future
research directions that need attention for advancing research in
collaborative UAVs.