Current reinforcement learning algorithms struggle in sparse and complex
environments, most notably in long-horizon manipulation tasks entailing many
different action sequences. In this work, we propose the Intrinsically
Guided Exploration from Large Language Models (IGE-LLMs) framework. By
leveraging LLMs as an assistive intrinsic reward, IGE-LLMs guides the
exploratory process in reinforcement learning to address intricate,
long-horizon robotic manipulation tasks with sparse rewards. We evaluate our
framework and
related intrinsic learning methods in an environment with challenging
exploration, and in a complex robotic manipulation task challenged by both
exploration and long horizons. Results show that IGE-LLMs (i) exhibits notably
higher performance than related intrinsic methods and than the direct use of
LLMs in decision-making, (ii) can be combined with and complements existing
learning methods, highlighting its modularity, (iii) is fairly insensitive to
different intrinsic scaling parameters, and (iv) maintains robustness against
increased levels of uncertainty and longer horizons.
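
For concreteness, a minimal sketch of the reward-shaping idea described above
follows, assuming a hypothetical query_llm_score helper and a generic
environment step interface; the paper's exact prompting and scoring scheme may
differ.

    # Minimal sketch: add an LLM-derived intrinsic bonus to the sparse
    # extrinsic reward. `query_llm_score` is a hypothetical stand-in for
    # prompting an LLM to rate how promising an action is for the task.

    def query_llm_score(task: str, state, action) -> float:
        """Hypothetical LLM rating of an action's usefulness, in [0, 1].
        A real implementation would prompt a language model here."""
        return 0.5  # placeholder score

    def shaped_reward(extrinsic: float, task: str, state, action,
                      lam: float = 0.1) -> float:
        """Sparse extrinsic reward plus a scaled intrinsic LLM bonus;
        `lam` plays the role of the intrinsic scaling parameter."""
        return extrinsic + lam * query_llm_score(task, state, action)

    # Usage inside a generic RL step (environment API assumed):
    # next_state, r_ext, terminated, truncated, info = env.step(action)
    # r = shaped_reward(r_ext, "stack the red block", state, action)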