Abstract
View-based and Cartesian representations provide rival accounts of
visual navigation in humans, and here we explore possible models
for the view-based case. A visual “homing” experiment was undertaken
by human participants in immersive virtual reality. The distributions
of end-point errors on the ground plane differed significantly
in shape and extent depending on visual landmark configuration and
relative goal location. A model based on simple visual cues captures
important characteristics of these distributions. Augmenting the visual
features to include 3D elements such as stereo and motion parallax
results in a set of models that describe the data accurately, demonstrating
the effectiveness of a view-based approach.
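The abstract does not spell out the model's mechanics, but a classic view-based baseline of the kind it evokes is homing by descent on an image-difference function: move in whatever direction makes the current panoramic view most similar to a stored view of the goal. The sketch below is a minimal illustration of that idea, not the paper's actual model; `render_view`, the step size, and the eight candidate directions are assumptions introduced purely for illustration.

```python
import numpy as np

def image_difference(current_view, goal_view):
    """Root-mean-square pixel difference between two panoramic views."""
    return np.sqrt(np.mean((current_view - goal_view) ** 2))

def homing_step(position, goal_view, render_view, step=0.05):
    """Take one descent step on the image-difference surface.

    `render_view(position)` is a hypothetical stand-in for whatever
    produces a panoramic view at a given ground-plane position.
    """
    # Sample eight candidate positions on a circle around the agent.
    candidates = [position + step * np.array([np.cos(a), np.sin(a)])
                  for a in np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)]
    # Move to the candidate whose view best matches the stored goal view.
    return min(candidates,
               key=lambda p: image_difference(render_view(p), goal_view))
```

Iterating `homing_step` from a release point traces an approach trajectory; under this reading, the scatter of the positions at which the descent stalls could be compared against the human end-point error distributions the abstract describes.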