Autonomous path selection of unmanned aerial vehicle in dynamic environment using reinforcement learning

Abstract

The Unmanned Aerial Vehicle (UAV) is an emerging area within the aviation industry. Fully autonomous UAV operations in real-world scenarios remain rare due to low technology readiness and a lack of trust. However, Artificial Intelligence (AI) offers powerful tools for adapting to changing conditions and handling complex perception tasks. In the automotive domain, self-driving technologies have made significant advances. To raise the level of autonomy in aviation, it is beneficial to analyze these frameworks and extend autonomous driving principles to autonomous flying. This research introduces a novel solution for ensuring safe UAV navigation by adopting the lane- and path-selection strategies used in autonomous cars. The approach employs deep reinforcement learning (DRL) for high-level decision-making: selecting the appropriate path among candidates generated by established algorithms under different scenarios. Specifically, the Interfered Fluid Dynamical System (IFDS) \cite{IFDS_OG} is used for guidance and a PID controller for the flight control system. The UAV can choose between global and local paths and determine the appropriate speed for following them. The proposed framework lays the foundation for future research into practical and safe navigation strategies for UAVs.

Published in: AIAA SCITECH 2025 Forum
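The high-level decision described above (choosing between a global and a local path, plus a tracking speed) can be framed as a small discrete action space for a DRL policy. The following is a minimal illustrative sketch, not the authors' implementation; the path labels and speed values are assumptions chosen for the example.

```python
from dataclasses import dataclass

# Assumed candidates: a global path and a local (e.g. IFDS-generated
# avoidance) path, each followable at one of several speeds.
PATH_CHOICES = ("global", "local")   # high-level path selection
SPEED_LEVELS = (5.0, 10.0, 15.0)     # candidate tracking speeds in m/s (assumed)

@dataclass(frozen=True)
class Action:
    path: str     # which pre-generated path to follow
    speed: float  # commanded tracking speed [m/s]

# Enumerate the discrete action space the policy chooses from.
ACTION_SPACE = [Action(p, v) for p in PATH_CHOICES for v in SPEED_LEVELS]

def decode(index: int) -> Action:
    """Map a discrete policy output (e.g. argmax over Q-values) to an action."""
    return ACTION_SPACE[index]
```

A value-based DRL agent (e.g. DQN) would output one index over this 6-element space per decision step; the guidance and PID layers then execute the selected path at the selected speed.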

This paper was published in CERES Research Repository (Cranfield Univ.).

Licence: http://creativecommons.org/licenses/by/4.0/