Machine Learning-Aided Operations and Communications of Unmanned Aerial Vehicles: A Contemporary Survey
The ongoing amalgamation of UAV and ML techniques is creating a significant
synergy and empowering UAVs with unprecedented intelligence and autonomy. This
survey aims to provide a timely and comprehensive overview of ML techniques
used in UAV operations and communications and identify the potential growth
areas and research gaps. We emphasise the four key components of UAV operations
and communications to which ML can significantly contribute, namely, perception
and feature extraction, feature interpretation and regeneration, trajectory and
mission planning, and aerodynamic control and operation. We classify the latest
popular ML tools based on their applications to the four components and conduct
gap analyses. This survey also takes a step forward by pointing out significant
challenges in the upcoming realm of ML-aided automated UAV operations and
communications. It is revealed that different ML techniques dominate the
applications to the four key modules of UAV operations and communications.
While there is an increasing trend of cross-module designs, little effort has
been devoted to an end-to-end ML framework, from perception and feature
extraction to aerodynamic control and operation. It is also unveiled that the
reliability and trust of ML in UAV operations and applications require
significant attention before full automation of UAVs and potential cooperation
between UAVs and humans come to fruition.
Comment: 36 pages, 304 references, 19 figures
UAV-assisted data collection in wireless sensor networks: A comprehensive survey
Wireless sensor networks (WSNs) are usually deployed to different areas of interest to sense phenomena, process the sensed data, and take actions accordingly. These networks are integrated with many advanced technologies to fulfill tasks that are becoming more and more complicated, and they tend to connect to multimedia networks and to process huge volumes of data over long distances. Due to the limited resources of static sensor nodes, WSNs need to cooperate with mobile robots such as unmanned ground vehicles (UGVs) or unmanned aerial vehicles (UAVs) in their deployments. These mobile devices contribute their maneuverability, computational power, and energy-storage abilities to support WSNs in multimedia networks. This paper presents a comprehensive survey of most scenarios utilizing UAVs and UGVs, with a strong emphasis on UAVs, for data collection in WSNs. Either UGVs or UAVs can collect data from static sensor nodes in the monitored fields. UAVs can either work alone to collect data or cooperate with other UAVs to increase the coverage of their working fields. Different techniques to support the UAVs are addressed in this survey: communication links, control algorithms, network structures, and different mechanisms are presented and compared. Energy consumption and transportation cost for such scenarios are considered. Open issues and challenges are identified and suggested for future development.
Bayesian Optimization Enhanced Deep Reinforcement Learning for Trajectory Planning and Network Formation in Multi-UAV Networks
In this paper, we employ multiple UAVs coordinated by a base station (BS) to
help the ground users (GUs) to offload their sensing data. Different UAVs can
adapt their trajectories and network formation to expedite data transmissions
via multi-hop relaying. The trajectory planning aims to collect all GUs' data,
while the UAVs' network formation optimizes the multi-hop UAV network topology
to minimize the energy consumption and transmission delay. The joint network
formation and trajectory optimization is solved by a two-step iterative
approach. Firstly, we devise the adaptive network formation scheme by using a
heuristic algorithm to balance the UAVs' energy consumption and data queue
size. Then, with the fixed network formation, the UAVs' trajectories are
further optimized by using multi-agent deep reinforcement learning without
knowing the GUs' traffic demands and spatial distribution. To improve the
learning efficiency, we further employ Bayesian optimization to estimate the
UAVs' flying decisions based on historical trajectory points. This helps avoid
inefficient action explorations and improves the convergence rate in the model
training. The simulation results reveal close spatial-temporal couplings
between the UAVs' trajectory planning and network formation. Compared with
several baselines, our solution can better exploit the UAVs' cooperation in
data offloading, thus improving energy efficiency and delay performance.
Comment: 15 pages, 10 figures, 2 algorithms
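The Bayesian-optimization step described above, estimating promising flying decisions from historical trajectory points before the DRL agent explores them, can be illustrated with a toy Gaussian-process surrogate and an upper-confidence-bound (UCB) acquisition rule. This is a minimal sketch under assumed details (RBF kernel, a one-dimensional action space, the `beta` exploration weight), not the paper's implementation:

```python
import numpy as np

def rbf_kernel(a, b, length=1.0):
    """Squared-exponential kernel between two sets of 1-D points."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_ucb_pick(x_hist, y_hist, candidates, beta=2.0, noise=1e-6):
    """Fit a GP posterior to past (action, reward) pairs and return the
    candidate action with the highest upper confidence bound, so that
    exploration is steered toward actions that look good or uncertain."""
    K = rbf_kernel(x_hist, x_hist) + noise * np.eye(len(x_hist))
    Ks = rbf_kernel(candidates, x_hist)
    alpha = np.linalg.solve(K, y_hist)
    mu = Ks @ alpha                                    # posterior mean
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    ucb = mu + beta * np.sqrt(np.maximum(var, 0.0))    # optimism bonus
    return candidates[int(np.argmax(ucb))]
```

With two observed points whose reward increases along the action axis, the rule favors the uncertain, high-mean region beyond them, which is exactly the behavior used here to avoid wasting exploration on clearly poor flying decisions.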
A Learning-Based Trajectory Planning of Multiple UAVs for AoI Minimization in IoT Networks
Many emerging Internet of Things (IoT) applications rely on information
collected by sensor nodes where the freshness of information is an important
criterion. Age of Information (AoI) is a metric that quantifies
information timeliness, i.e., the freshness of the received information or
status update. This work considers a setup of deployed sensors in an IoT
network, where multiple unmanned aerial vehicles (UAVs) serve as mobile relay
nodes between the sensors and the base station. We formulate an optimization
problem to jointly plan the UAVs' trajectory, while minimizing the AoI of the
received messages. This ensures that the received information at the base
station is as fresh as possible. The complex optimization problem is
efficiently solved using a deep reinforcement learning (DRL) algorithm. In
particular, we propose a deep Q-network, which works as a function
approximation to estimate the state-action value function. The proposed scheme
is quick to converge and results in a lower AoI than the random walk scheme.
Our proposed algorithm reduces the average age and requires less energy than
the baseline scheme.
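For concreteness, the AoI objective that the trajectory planning minimizes can be computed in a few lines. Below is a simplified discrete-time model in which the age grows by one per step and resets to zero whenever a fresh update is delivered; the unit step granularity is an assumption for illustration:

```python
def average_aoi(delivery_steps, horizon):
    """Discrete-time average Age of Information over `horizon` steps:
    the age increases by one each step and resets to zero at every
    step where a fresh status update reaches the base station."""
    deliveries = set(delivery_steps)
    age, total = 0, 0
    for t in range(horizon):
        age = 0 if t in deliveries else age + 1
        total += age
    return total / horizon

# e.g. average_aoi([2], 4) → 1.0 (ages 1, 2, 0, 1 across the four steps)
```

The UAVs' trajectories determine the delivery times, so planning routes that shorten the gaps between deliveries directly lowers this average.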
Meta-Reinforcement Learning for Timely and Energy-efficient Data Collection in Solar-powered UAV-assisted IoT Networks
Unmanned aerial vehicles (UAVs) have the potential to greatly aid Internet of
Things (IoT) networks in mission-critical data collection, thanks to their
flexibility and cost-effectiveness. However, challenges arise due to the UAV's
limited onboard energy and the unpredictable status updates from sensor nodes
(SNs), which impact the freshness of collected data. In this paper, we
investigate the energy-efficient and timely data collection in IoT networks
through the use of a solar-powered UAV. Each SN generates status updates at
stochastic intervals, while the UAV collects and subsequently transmits these
status updates to a central data center. Furthermore, the UAV harnesses solar
energy from the environment to maintain its energy level above a predetermined
threshold. To minimize both the average age of information (AoI) for SNs and
the energy consumption of the UAV, we jointly optimize the UAV trajectory, SN
scheduling, and offloading strategy. Then, we formulate this problem as a
Markov decision process (MDP) and propose a meta-reinforcement learning
algorithm to enhance the generalization capability. Specifically, the
compound-action deep reinforcement learning (CADRL) algorithm is proposed to
handle the discrete decisions related to SN scheduling and the UAV's offloading
policy, as well as the continuous control of UAV flight. Moreover, we
incorporate meta-learning into CADRL to improve the adaptability of the learned
policy to new tasks. To validate the effectiveness of our proposed algorithms,
we conduct extensive simulations and demonstrate their superiority over other
baseline algorithms.
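The compound action in CADRL pairs discrete decisions (SN scheduling, offloading) with continuous flight control. A minimal sketch of how such a mixed action might be represented and sampled follows; the field names, the fixed Bernoulli offloading probability, and the Gaussian flight-control noise are illustrative assumptions, not the paper's design:

```python
import math
import random
from dataclasses import dataclass

@dataclass
class CompoundAction:
    schedule_sn: int   # discrete: which sensor node to serve next
    offload: bool      # discrete: offload collected data now?
    heading: float     # continuous: UAV heading in radians
    speed: float       # continuous: UAV speed in m/s

def sample_action(n_sns, logits, mean_heading, mean_speed, sigma, rng):
    """Sample one compound action: a categorical draw over scheduling
    logits, a Bernoulli draw for offloading, and Gaussian draws for
    the continuous flight controls."""
    m = max(logits)                                  # softmax, stabilized
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    sn = rng.choices(range(n_sns), weights=probs)[0]
    offload = rng.random() < 0.5                     # illustrative fixed prob.
    heading = rng.gauss(mean_heading, sigma) % (2 * math.pi)
    speed = max(0.0, rng.gauss(mean_speed, sigma))   # clip to non-negative
    return CompoundAction(sn, offload, heading, speed)
```

In an actual CADRL-style agent the logits and Gaussian parameters would come from separate network heads conditioned on the state; sampling both parts jointly is what lets a single policy handle scheduling and flight control at once.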