Rendezvous Planning for Multiple Autonomous Underwater Vehicles using a Markov Decision Process

Abstract

Multiple Autonomous Underwater Vehicles (AUVs) are a potential alternative to conventional large manned vessels for mine countermeasure (MCM) operations. Online mission planning for a cooperative multi-AUV network often relies on predefined contingencies or reactive methods and does not deliver optimal end-goal performance. A Markov Decision Process (MDP) is a decision-making framework that allows an optimal solution to be computed by taking estimates of future decisions into account rather than adopting a myopic view. However, most real-world problems are too complex to be represented directly in this framework. We deal with the complexity problem by abstracting the MCM scenario into a reduced state and action space while retaining the information that defines the goal and the constraints imposed by the application. Another critical part of the model is the vehicles' ability to communicate and thereby carry out a cooperative mission. We use the Rendezvous Point (RP) method, which schedules meeting points for the vehicles throughout the mission. Our model provides an optimal action-selection solution for the multi-AUV MCM problem. The mission plan is computed in the order of minutes, demonstrating that the model is feasible for real-time applications.
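The paper itself does not include code. As an illustration only, the sketch below shows how an abstracted MDP of this kind can be solved by value iteration: the toy state space (surveyed cells plus a rendezvous flag), the actions, and all transition probabilities and rewards are hypothetical placeholders, not values from the paper.

```python
# Minimal value-iteration sketch for an abstracted multi-AUV MDP.
# All states, actions, transition probabilities and rewards below are
# illustrative placeholders, not taken from the paper.

import itertools

# Abstracted state: (cells_surveyed, rendezvous_done) - a deliberately tiny space.
CELLS = range(4)            # 0..3 surveyed cells (3 = survey complete)
STATES = list(itertools.product(CELLS, [False, True]))
ACTIONS = ["survey", "rendezvous"]
GAMMA = 0.95                # discount factor (assumed)


def transitions(state, action):
    """Return a list of (probability, next_state, reward) triples.

    The dynamics are illustrative: 'survey' advances coverage with some
    failure probability, 'rendezvous' sets the communication flag at a
    small time cost, and completing coverage after a rendezvous pays off.
    """
    surveyed, met = state
    if action == "survey":
        nxt = min(surveyed + 1, max(CELLS))
        reward = 10.0 if (nxt == max(CELLS) and met) else -1.0
        return [(0.8, (nxt, met), reward),        # survey step succeeds
                (0.2, (surveyed, met), -1.0)]     # survey step fails
    else:  # rendezvous: meet the other AUVs to exchange data
        return [(1.0, (surveyed, True), -2.0)]


def value_iteration(tol=1e-6):
    """Compute an optimal policy over the abstracted state space."""
    V = {s: 0.0 for s in STATES}
    while True:
        delta = 0.0
        for s in STATES:
            best = max(
                sum(p * (r + GAMMA * V[s2]) for p, s2, r in transitions(s, a))
                for a in ACTIONS
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    policy = {
        s: max(
            ACTIONS,
            key=lambda a: sum(p * (r + GAMMA * V[s2]) for p, s2, r in transitions(s, a)),
        )
        for s in STATES
    }
    return V, policy


if __name__ == "__main__":
    V, policy = value_iteration()
    for s in STATES:
        print(s, policy[s], round(V[s], 2))
```

Because the abstracted state and action spaces are small, exact dynamic programming of this form runs quickly, which is consistent with the paper's claim that mission plans are computed in the order of minutes.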

