Conveying abstract navigational cues to autonomous agents in dynamic
environments is challenging, particularly when the navigation information is
multimodal. To address this issue, this paper introduces a novel technique
termed "Virtual Guidance," which
is designed to visually represent non-visual instructional signals. These
visual cues, rendered as colored paths or spheres, are overlaid onto the
agent's camera view, serving as easily comprehensible navigational
instructions. We evaluate our proposed method through experiments in both
simulated and real-world settings. In the simulated environments, our virtual
guidance outperforms baseline hybrid approaches on several metrics, including
adherence to planned routes and obstacle avoidance. Furthermore, we extend the
concept of virtual guidance to transform text-prompt-based instructions into a
visually intuitive format for real-world experiments. Our results validate the
adaptability of virtual guidance and its efficacy in enabling policy transfer
from simulated scenarios to real-world ones.
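To make the overlay idea concrete, the sketch below shows one way a planned
guidance path could be alpha-blended into an agent's camera frame. It assumes
the route has already been projected into image coordinates; the function name
overlay_virtual_guidance, the OpenCV-based blending, and all parameter values
are illustrative assumptions, not the paper's actual implementation.

    import numpy as np
    import cv2

    def overlay_virtual_guidance(frame, path_pixels,
                                 color=(0, 255, 0),
                                 thickness=8, alpha=0.6):
        """Blend a colored guidance path into a camera frame (a sketch).

        frame:       H x W x 3 uint8 camera image.
        path_pixels: N x 2 int32 array of (x, y) pixel coordinates for the
                     planned route, assumed already projected into the view.
        """
        overlay = frame.copy()
        # Draw the planned route as a thick colored polyline.
        cv2.polylines(overlay, [path_pixels.reshape(-1, 1, 2)],
                      isClosed=False, color=color, thickness=thickness)
        # Alpha-blend so the underlying scene stays visible beneath the cue.
        return cv2.addWeighted(overlay, alpha, frame, 1.0 - alpha, 0.0)

    if __name__ == "__main__":
        # Stand-in camera frame and a hypothetical projected route.
        frame = np.zeros((240, 320, 3), dtype=np.uint8)
        path = np.array([[160, 239], [150, 180], [130, 120], [100, 60]],
                        dtype=np.int32)
        guided = overlay_virtual_guidance(frame, path)
        print(guided.shape)  # (240, 320, 3)

The agent's policy would then consume the composited frame directly, so the
guidance arrives through the same visual channel as the rest of the
observation.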