2 research outputs found

    A review of artificial intelligence applied to path planning in UAV swarms

    This version of the article has been accepted for publication, after peer review, and is subject to Springer Nature's AM terms of use, but is not the Version of Record and does not reflect post-acceptance improvements or any corrections. The Version of Record is available online at: https://doi.org/10.1007/s00521-021-06569-4

    This is the accepted version of: A. Puente-Castro, D. Rivero, A. Pazos, and E. Fernández-Blanco, "A review of artificial intelligence applied to path planning in UAV swarms", Neural Computing and Applications, vol. 34, pp. 153–170, 2022. https://doi.org/10.1007/s00521-021-06569-4

    [Abstract]: Path Planning problems with Unmanned Aerial Vehicles (UAVs) are among the most studied topics in the related literature. However, few of these studies address groups of UAVs. The use of swarms makes it possible to shorten flight time and thus reduce operational costs. When combined with Artificial Intelligence (AI) algorithms, a single system or operator can control all the aircraft while optimal paths are computed for each one. To present the current state of these AI-based systems, a review of the most novel and relevant articles was carried out. The review was performed in two steps: first, a summary of the articles found; second, a quantitative analysis of the publications based on factors such as their temporal evolution and the number of articles matching different criteria. The review therefore provides not only a summary of the most recent work but also an overview of the trend in the use of AI algorithms for Path Planning in UAV swarms. The AI techniques in the articles found can be separated into four main groups: Reinforcement Learning techniques, Evolutionary Computing techniques, Swarm Intelligence techniques, and Graph Neural Networks. The final results show an increase in publications in recent years and a shift in which techniques predominate.

    This work is supported by Instituto de Salud Carlos III, grant number PI17/01826 (Collaborative Project in Genomic Data Integration, CICLOGEN), funded by the Instituto de Salud Carlos III from the Spanish National Plan for Scientific and Technical Research and Innovation 2013–2016 and the European Regional Development Funds (FEDER), "A way to build Europe". This project was also supported by the General Directorate of Culture, Education and University Management of Xunta de Galicia (ED431D 2017/16), the "Drug Discovery Galician Network" (Ref. ED431G/01), and the "Galician Network for Colorectal Cancer Research" (Ref. ED431D 2017/23). This work was also funded by the grant for the consolidation and structuring of competitive research units (ED431C 2018/49) from the General Directorate of Culture, Education and University Management of Xunta de Galicia, and the CYTED network (PCI2018_093284) funded by the Spanish Ministry of Science and Innovation. This project was also supported by the General Directorate of Culture, Education and University Management of Xunta de Galicia, "PRACTICUM DIRECT" (Ref. IN845D-2020/03).

    Funders: Xunta de Galicia ED431D 2017/16; Xunta de Galicia ED431G/01; Xunta de Galicia ED431D 2017/23; Xunta de Galicia ED431C 2018/49; Xunta de Galicia IN845D-2020/03
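
    As a toy illustration of the quantitative step described in the abstract above, the sketch below tallies article records by publication year and by technique group. The records are hypothetical placeholders (only the four group names come from the review), not the review's actual data.

        from collections import Counter

        # Hypothetical placeholder records: (publication year, technique group),
        # using the four groups named in the review. These are NOT the review's data.
        articles = [
            (2018, "Reinforcement Learning"),
            (2019, "Swarm Intelligence"),
            (2020, "Evolutionary Computing"),
            (2020, "Reinforcement Learning"),
            (2021, "Graph Neural Networks"),
            (2021, "Reinforcement Learning"),
        ]

        by_year = Counter(year for year, _ in articles)      # temporal evolution
        by_group = Counter(group for _, group in articles)   # predominance of techniques

        print("Publications per year:     ", dict(sorted(by_year.items())))
        print("Publications per technique:", dict(by_group.most_common()))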

    Q-learning Based System for Path Planning with UAV Swarms in Obstacle Environments

    Path Planning methods for autonomous control of Unmanned Aerial Vehicle (UAV) swarms are on the rise because of the advantages they bring. There are more and more scenarios where autonomous control of multiple UAVs is required, and most of them present a large number of obstacles, such as power lines or trees. If all UAVs can be operated autonomously, personnel expenses can be decreased. In addition, if their flight paths are optimal, energy consumption is reduced, leaving more battery time for other operations. In this paper, a Reinforcement Learning-based system using Q-Learning is proposed to solve this problem in environments with obstacles. This method allows a model, in this case an Artificial Neural Network, to adjust itself by learning from its mistakes and successes. Regardless of the size of the map or the number of UAVs in the swarm, the goal of these paths is to ensure complete coverage of an area with fixed obstacles for tasks such as field prospecting. No goals need to be set, and no prior information beyond the provided map is required. For experimentation, five maps of different sizes with different obstacles were used, and the experiments were run with different numbers of UAVs. The results are measured by the number of actions taken by all UAVs to complete the task in each experiment: the fewer the actions, the shorter the paths and the lower the energy consumption. The results are satisfactory, showing that the system finds solutions in fewer movements as the number of UAVs increases. For better context, these results are compared with another state-of-the-art approach.
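
    To illustrate the kind of approach described above, the following is a minimal sketch of Q-Learning for multi-UAV grid coverage. It is not the authors' implementation: it assumes a shared tabular Q-function instead of the paper's Artificial Neural Network, a grid map where 0 marks a free cell and 1 an obstacle, and a hypothetical reward of +1 for entering an unvisited free cell and -1 for a blocked move.

        import numpy as np

        # Minimal tabular Q-Learning sketch for multi-UAV grid coverage.
        # Simplified stand-in for the paper's approach: one shared Q-table
        # indexed by each UAV's cell position, not an Artificial Neural Network.
        ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

        def train(grid, n_uavs, episodes=500, alpha=0.1, gamma=0.9, eps=0.2):
            rows, cols = grid.shape            # grid: 0 = free cell, 1 = obstacle
            q = np.zeros((rows, cols, len(ACTIONS)))
            rng = np.random.default_rng(0)

            for _ in range(episodes):
                visited = np.zeros_like(grid, dtype=bool)
                free = np.argwhere(grid == 0)
                # start the UAVs in the first few free cells (arbitrary choice)
                positions = [tuple(free[i]) for i in range(n_uavs)]
                for p in positions:
                    visited[p] = True

                for _step in range(rows * cols * 4):   # crude per-episode step budget
                    for i, (r, c) in enumerate(positions):
                        # epsilon-greedy action selection
                        if rng.random() < eps:
                            a = int(rng.integers(len(ACTIONS)))
                        else:
                            a = int(np.argmax(q[r, c]))
                        dr, dc = ACTIONS[a]
                        nr, nc = r + dr, c + dc

                        if not (0 <= nr < rows and 0 <= nc < cols) or grid[nr, nc] == 1:
                            reward, (nr, nc) = -1.0, (r, c)   # blocked move: stay put
                        elif not visited[nr, nc]:
                            reward = 1.0                       # new cell covered
                        else:
                            reward = 0.0                       # revisited cell

                        # standard Q-Learning update
                        q[r, c, a] += alpha * (reward + gamma * q[nr, nc].max() - q[r, c, a])
                        visited[nr, nc] = True
                        positions[i] = (nr, nc)

                    if visited[grid == 0].all():               # full coverage reached
                        break
            return q

        # Example usage on a hypothetical 5x5 map with one obstacle:
        # grid = np.zeros((5, 5), dtype=int); grid[2, 2] = 1
        # q = train(grid, n_uavs=2)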