Abstract

Many animals rely on robust visual navigation, which can be explained by snapshot models: an agent is assumed to store egocentric panoramic images and subsequently recover a heading by comparing its current view to these stored snapshots. Long-range route navigation can also be explained by such models, by storing multiple snapshots along a training route and comparing the current image to them. For such models, however, memory requirements and comparison time increase dramatically with route length, rendering them infeasible for small-brained insects and low-power robots, where computation and storage are limited. One way to reduce these requirements is to use a compressed image representation. Inspired by the filter-bank-like arrangement of the visual system, we investigate here how a frequency-based image representation influences the performance of a typical snapshot model. By decomposing views into wavelet coefficients at different levels and orientations, we achieve a compressed visual representation that remains robust when used for navigation. Our results indicate that route following based on wavelet coefficients is not only possible but also outperforms a range of other models.
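To make the approach concrete, the following is a minimal sketch of wavelet-based snapshot matching in Python. It assumes the PyWavelets (pywt) and NumPy libraries; the wavelet choice, decomposition depth, truncation scheme, and function names are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: compress panoramic views with a multilevel 2-D
# wavelet transform, keep only the coarsest sub-bands, and recover a heading
# by rotationally matching the current view against stored route snapshots.
import numpy as np
import pywt


def wavelet_code(view, wavelet="haar", level=3, keep=2):
    """Compress a panoramic view into a vector of coarse wavelet coefficients.

    Keeps the level-`level` approximation plus the coarsest `keep - 1`
    detail bands (horizontal, vertical, diagonal orientations).
    """
    coeffs = pywt.wavedec2(view, wavelet, level=level)
    parts = [coeffs[0].ravel()]                    # low-frequency approximation
    for band in coeffs[1:keep]:                    # coarsest detail bands
        parts.extend(sub.ravel() for sub in band)  # (horiz, vert, diag)
    return np.concatenate(parts)


def recover_heading(view, snapshots, **kw):
    """Return the azimuthal column shift minimising the difference between
    the rotated current view and the best-matching stored snapshot."""
    best_shift, best_err = 0, np.inf
    for shift in range(view.shape[1]):             # rotate view in azimuth
        code = wavelet_code(np.roll(view, shift, axis=1), **kw)
        err = min(np.sum((code - s) ** 2) for s in snapshots)
        if err < best_err:
            best_shift, best_err = shift, err
    return best_shift


# Toy usage: store compressed snapshots along a route, then query a rotated view.
rng = np.random.default_rng(0)
route_views = [rng.random((32, 128)) for _ in range(10)]   # toy panoramas
snapshots = [wavelet_code(v) for v in route_views]
rotated = np.roll(route_views[4], 17, axis=1)
print(recover_heading(rotated, snapshots))                 # 111, i.e. 128 - 17
```

With these toy settings, each 32 x 128 panorama (4,096 pixels) is reduced to 256 coefficients, a 16-fold compression; it is the truncation to coarse sub-bands, rather than the transform itself, that yields the saving in storage and comparison time described above.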
