Implicit Obstacle Map-driven Indoor Navigation Model for Robust Obstacle Avoidance
Robust obstacle avoidance is one of the critical steps for successful goal-driven indoor navigation tasks. Because obstacles can be missing from the visual image and detection itself can fail, visual image-based obstacle avoidance techniques still suffer from limited robustness. To mitigate this, we propose a novel implicit obstacle map-driven indoor navigation framework for robust obstacle avoidance, in which an implicit obstacle map is learned from historical trial-and-error experience rather than from the visual image. To further improve navigation efficiency, a non-local target memory aggregation module leverages a non-local network to model the intrinsic relationship between the target semantics and the target orientation clues during navigation, mining the most target-correlated object clues for the navigation decision. Extensive experimental results on the AI2-THOR and RoboTHOR benchmarks verify the excellent obstacle avoidance and navigation efficiency of the proposed method. The core source code is available at https://github.com/xwaiyy123/object-navigation.

Comment: 9 pages, 7 figures, 43 references. This paper has been accepted for ACM MM 202
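As a rough illustration of the non-local target memory aggregation described in this abstract, the following PyTorch sketch has a target semantic embedding attend over per-object clue features (semantics plus orientation) to pull out the most target-correlated clues. The module name, feature shapes, and the single-query attention form are assumptions for illustration, not the paper's actual architecture.

```python
# Illustrative sketch only: single-query non-local (attention) aggregation
# of per-object clues by a target semantic embedding. Names and shapes are
# hypothetical; the paper's actual module may differ.
import torch
import torch.nn as nn

class NonLocalTargetAggregation(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        self.query = nn.Linear(dim, dim)   # target semantics -> query
        self.key = nn.Linear(dim, dim)     # object clues -> keys
        self.value = nn.Linear(dim, dim)   # object clues -> values
        self.scale = dim ** -0.5

    def forward(self, target_sem, object_clues):
        # target_sem: (B, dim) goal-category embedding
        # object_clues: (B, N, dim) per-object semantic/orientation features
        q = self.query(target_sem).unsqueeze(1)             # (B, 1, dim)
        k = self.key(object_clues)                          # (B, N, dim)
        v = self.value(object_clues)                        # (B, N, dim)
        attn = torch.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)
        return (attn @ v).squeeze(1)  # (B, dim) target-correlated summary

# Usage: fuse 10 detected-object clues against one target embedding.
agg = NonLocalTargetAggregation(dim=128)
fused = agg(torch.randn(2, 128), torch.randn(2, 10, 128))  # (2, 128)
```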
Kanerva++: Extending the Kanerva Machine with differentiable, locally block allocated latent memory
Episodic and semantic memory are critical components of the human memory
model. The theory of complementary learning systems (McClelland et al., 1995)
suggests that the compressed representation produced by a serial event
(episodic memory) is later restructured to build a more generalized form of
reusable knowledge (semantic memory). In this work, we develop a new principled Bayesian memory allocation scheme that bridges the gap between episodic and semantic memory via a hierarchical latent variable model. We take inspiration from traditional heap allocation and extend the idea of locally contiguous memory to the Kanerva Machine, enabling a novel differentiable, block-allocated latent memory. In contrast to the Kanerva Machine, we simplify memory writing by treating it as a fully feed-forward, deterministic process, relying on the stochasticity of the read-key distribution to disperse information within the memory. We demonstrate that this allocation scheme improves performance in memory-conditional image generation, resulting in new state-of-the-art conditional likelihood values on binarized MNIST (<=41.58 nats/image) and binarized Omniglot (<=66.24 nats/image), as well as competitive performance on CIFAR10, DMLab Mazes, Celeb-A, and ImageNet32x32.
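To make the write/read asymmetry in this abstract concrete, below is a minimal PyTorch sketch of a block-allocated latent memory with a deterministic feed-forward write into one contiguous block and a stochastic Gaussian read key that disperses reads across blocks. The block layout, shapes, and Gaussian address distribution are illustrative assumptions, not the paper's exact model.

```python
# Illustrative sketch only: block-allocated latent memory with a
# deterministic feed-forward write and a stochastic read key. Block
# layout, shapes, and the Gaussian address distribution are assumptions,
# not the paper's exact formulation.
import torch
import torch.nn as nn

class BlockLatentMemory(nn.Module):
    def __init__(self, num_blocks=8, block_slots=4, slot_dim=32, code_dim=64):
        super().__init__()
        self.block_slots, self.slot_dim = block_slots, slot_dim
        # Deterministic write: map an episode code to one contiguous block.
        self.writer = nn.Linear(code_dim, block_slots * slot_dim)
        # Stochastic read key: a Gaussian over block addresses.
        self.key_mu = nn.Linear(code_dim, num_blocks)
        self.key_logvar = nn.Linear(code_dim, num_blocks)

    def write_episode(self, memory, code, block_idx):
        # memory: (B, num_blocks, block_slots, slot_dim); the write is fully
        # feed-forward, with no write-time posterior inference.
        block = self.writer(code).view(-1, self.block_slots, self.slot_dim)
        memory = memory.clone()
        memory[:, block_idx] = block  # locally contiguous, heap-style slot
        return memory

    def read(self, memory, code):
        # Reparameterized sample of the read key, then a soft attention over
        # blocks; read-time stochasticity disperses information in memory.
        mu, logvar = self.key_mu(code), self.key_logvar(code)
        key = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        weights = torch.softmax(key, dim=-1)          # (B, num_blocks)
        flat = memory.flatten(2)                      # (B, num_blocks, slots*dim)
        return torch.einsum('bn,bnd->bd', weights, flat)

# Usage: write an episode code into block 2, then read it back.
mem = torch.zeros(1, 8, 4, 32)
m = BlockLatentMemory()
mem = m.write_episode(mem, torch.randn(1, 64), block_idx=2)
readout = m.read(mem, torch.randn(1, 64))            # (1, 128)
```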