Visual-audio navigation (VAN) is attracting increasing attention from the
robotics community due to its broad applications, \emph{e.g.}, household robots
and rescue robots. In this task, an embodied agent must search for and navigate
to a sound source using egocentric visual and audio observations. However,
existing methods are limited in two aspects: 1) poor generalization to unheard
sound categories; 2) sample inefficiency in training. Focusing on these two
problems, we propose a brain-inspired plug-and-play method to learn a
semantic-agnostic and spatial-aware representation for generalizable
visual-audio navigation. We design two auxiliary tasks that respectively
accelerate the learning of representations with these desired characteristics.
With these two auxiliary tasks, the agent learns a spatially-correlated
representation of visual and audio inputs that generalizes to environments with
novel sounds and maps. Experimental results
on realistic 3D scenes (Replica and Matterport3D) demonstrate that our method
achieves better generalization performance when zero-shot transferred to scenes
with unseen maps and unheard sound categories.