AID-RL: Active information-directed reinforcement learning for autonomous source seeking and estimation

Abstract

This paper proposes an active information-directed reinforcement learning (AID-RL) framework for the autonomous source seeking and estimation problem. Source seeking requires the search agent to move towards the true source, while source estimation requires the agent to maintain and update its knowledge of source properties such as the release rate and source position. These two objectives give rise to the newly developed framework, namely, dual control for exploration and exploitation. In this framework, greedy RL forms an exploitation strategy that navigates the agent towards the source position, while the information-directed search directs the agent to explore the most informative positions to reduce belief uncertainty. Extensive results are presented using a high-fidelity dataset for autonomous search, which validate the effectiveness of the proposed AID-RL and highlight the importance of active exploration in improving sampling efficiency and search performance.
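To make the dual-control idea concrete, the sketch below illustrates one plausible way such an action rule could look: a greedy RL value term (exploitation) combined with an expected information-gain term over the source belief (exploration). This is only an illustrative sketch; the function names, the observation model `likelihoods`, and the trade-off weight are assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the paper's implementation): dual control that
# trades off a greedy value term against belief information gain.
import numpy as np

def entropy(belief):
    """Shannon entropy of a discrete belief over candidate source cells."""
    p = np.clip(belief, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def expected_entropy_reduction(belief, likelihoods):
    """Expected drop in belief entropy from sampling at a candidate position.

    likelihoods[z, i] is an assumed observation model: probability of
    measurement outcome z given the source is at grid cell i.
    """
    h_prior = entropy(belief)
    expected_h_post = 0.0
    for lik_z in likelihoods:                  # loop over measurement outcomes
        p_z = np.dot(lik_z, belief)            # marginal probability of outcome
        if p_z < 1e-12:
            continue
        posterior = lik_z * belief / p_z       # Bayes update of source belief
        expected_h_post += p_z * entropy(posterior)
    return h_prior - expected_h_post

def select_action(q_values, info_gains, trade_off=1.0):
    """Greedy value (exploitation) plus scaled information gain (exploration)."""
    scores = np.asarray(q_values) + trade_off * np.asarray(info_gains)
    return int(np.argmax(scores))

# Toy usage: 3 candidate moves, uniform belief over 4 candidate source cells.
belief = np.full(4, 0.25)
likelihoods = np.array([[0.7, 0.1, 0.1, 0.1],   # outcome z = "detection"
                        [0.3, 0.9, 0.9, 0.9]])  # outcome z = "no detection"
gain = expected_entropy_reduction(belief, likelihoods)
action = select_action(q_values=[0.2, 0.5, 0.4],
                       info_gains=[gain, 0.0, 0.1],
                       trade_off=0.5)
```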
