Humans have a natural ability to form semantic associations with the objects
in their surroundings. These associations allow them to build a mental map of
the environment, which helps them navigate on demand when given a linguistic
instruction. A natural goal of Vision-and-Language Navigation (VLN) research
is to impart similar capabilities to autonomous agents. The recently
introduced VL Maps \cite{huang23vlmaps} take a step towards this goal by
creating a semantic spatial map representation of the environment without any
labelled data. However, the practical applicability of their representation is
limited because it does not distinguish between different instances of the
same object category. In this work, we address this limitation by integrating
instance-level information into the spatial map representation using a
community detection algorithm, and by utilizing the word ontology learned by
large language models (LLMs) to perform open-set semantic associations in the
map representation. The resulting representation improves navigation
performance two-fold (233\%) over VL Maps on realistic language commands with
instance-specific descriptions. We validate the
practicality and effectiveness of our approach through extensive qualitative
and quantitative experiments.