Visual imagination and cognitive mapping of a virtual building
We investigated the contribution of visual imagination to the cognitive mapping of a building when initial exploration was simulated either visually, using a passive video walk-through, or mentally, using verbal guidance. The building layout had repeating elements with either rotational or mirror symmetry. Cognitive mapping of the virtual building, determined using questionnaires and map drawings, was present following verbal guidance but inferior to that following video guidance. Mapping was not affected by the building's structural symmetry. Notably, however, it correlated with small-scale mental rotation scores in both the video and verbal guidance conditions. There was no difference between males and females. A common factor that may have influenced cognitive mapping was the availability of visual information about the relationships of the building elements, either directly perceived (during the video walk-through) or imagined (during the verbal walk-through and/or during recall). Differences in visual imagination, particularly mental rotation, may thus account for some of the individual variance in cognitive mapping of complex built environments, which is relevant to how designers provide navigation-relevant information.
Covert Perceptual Capability Development
In this paper, we propose a model to develop robots' covert perceptual capability using reinforcement learning. Covert perceptual behavior is treated as action selected by a motivational system. We apply this model to vision-based navigation. The goal is to enable a robot to learn road boundary type. Instead of dealing with problems in controlled environments with a low-dimensional state space, we test the model on images captured in non-stationary environments. Incremental Hierarchical Discriminant Regression is used to generate states on the fly. Its coarse-to-fine tree structure guarantees real-time retrieval in high-dimensional state space. A K-nearest-neighbor strategy is adopted to further reduce training time complexity.
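The retrieval step described above can be caricatured in a few lines. This is a minimal flat stand-in for the paper's coarse-to-fine IHDR tree: it simply returns the majority label among the K nearest stored samples. The feature vectors and boundary-type labels are hypothetical toy data, not from the paper.

```python
import numpy as np

def knn_retrieve(samples, labels, query, k=3):
    """Return the majority label among the k nearest stored samples.
    A flat stand-in for IHDR's coarse-to-fine tree retrieval."""
    dists = np.linalg.norm(samples - query, axis=1)  # distances to all samples
    nearest = np.argsort(dists)[:k]                  # indices of the k closest
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)          # majority vote

# Toy usage: 2-D "image features" labelled with hypothetical boundary types.
states = np.array([[0.10, 0.20], [0.15, 0.25], [0.90, 0.80], [0.85, 0.90]])
types = ["curb", "curb", "grass", "grass"]
print(knn_retrieve(states, types, np.array([0.12, 0.22])))  # -> curb
```

The real IHDR tree makes this retrieval logarithmic rather than linear in the number of stored states, which is what makes it viable in a high-dimensional state space.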
Robot Navigation in Unseen Spaces using an Abstract Map
Human navigation in built environments depends on symbolic spatial information which has unrealised potential to enhance robot navigation capabilities. Information sources such as labels, signs, maps, planners, spoken directions, and navigational gestures communicate a wealth of spatial information to the navigators of built environments; a wealth of information that robots typically ignore. We present a robot navigation system that uses the same symbolic spatial information employed by humans to purposefully navigate in unseen built environments with a level of performance comparable to humans. The navigation system uses a novel data structure called the abstract map to imagine malleable spatial models for unseen spaces from spatial symbols. Sensorimotor perceptions from a robot are then employed to provide purposeful navigation to symbolic goal locations in the unseen environment. We show how a dynamic system can be used to create malleable spatial models for the abstract map, and provide an open source implementation to encourage future work in the area of symbolic navigation. Symbolic navigation performance of humans and a robot is evaluated in a real-world built environment. The paper concludes with a qualitative analysis of human navigation strategies, providing further insights into how the symbolic navigation capabilities of robots in unseen built environments can be improved in the future.
Comment: 15 pages, published in IEEE Transactions on Cognitive and Developmental Systems (http://doi.org/10.1109/TCDS.2020.2993855), see https://btalb.github.io/abstract_map/ for access to software
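One way to read "a dynamic system that creates malleable spatial models" is as a spring relaxation over symbolically connected places: connected locations are pulled toward a rest distance, and the layout deforms gracefully as new symbolic constraints arrive. The sketch below assumes this spring reading; the place names, coordinates, and parameters are hypothetical, and the actual abstract map implementation is in the linked repository.

```python
import numpy as np

def relax(positions, edges, rest_length=1.0, k=0.1, steps=200):
    """Iteratively pull connected places toward the spring rest length,
    yielding a malleable layout that deforms as constraints change."""
    pos = {p: np.array(xy, dtype=float) for p, xy in positions.items()}
    for _ in range(steps):
        for a, b in edges:
            d = pos[b] - pos[a]
            dist = np.linalg.norm(d) or 1e-9        # avoid divide-by-zero
            force = k * (dist - rest_length) * d / dist
            pos[a] += force                          # pull endpoints together
            pos[b] -= force                          # (or apart) symmetrically
    return pos

# Hypothetical symbolic cues: "Lobby" adjoins "Cafe", "Cafe" adjoins "Lab".
layout = relax({"Lobby": (0, 0), "Cafe": (3, 0), "Lab": (3, 2)},
               [("Lobby", "Cafe"), ("Cafe", "Lab")])
```

Because nothing anchors absolute positions, the model stays malleable: adding a new edge (e.g. a sign implying "Lab adjoins Lobby") and re-relaxing reshapes the whole layout rather than invalidating it.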
An interface to virtual environments for people who are blind using Wii technology - mental models and navigation
Accessible games, both for serious and for entertainment purposes, would allow inclusion and participation for those with disabilities. Research into the development of accessible games, and accessible virtual environments, is discussed. Research into accessible virtual environments has demonstrated great potential for allowing people who are blind to explore new spaces, reducing their reliance on guides, and aiding development of more efficient spatial maps and strategies. Importantly, Lahav and Mioduser (2005, 2008) have demonstrated that, when exploring virtual spaces, people who are blind use more and different strategies than when exploring real physical spaces, and develop relatively accurate spatial representations of them. The present paper describes the design, development and evaluation of a system in which a virtual environment may be explored by people who are blind using Nintendo Wii devices, with auditory and haptic feedback. The nature of the various types of feedback is considered, with the aim of creating an intuitive and usable system. Using Wii technology has many advantages, not least of which are that it is mainstream, readily available and cheap. The potential of the system for exploration and navigation is demonstrated. Results strongly support the possibilities of the system for facilitating and supporting the construction of cognitive maps and spatial strategies. Intelligent support is discussed. Systems such as the present one will facilitate the development of accessible games, and thus enable Universal Design and accessible interactive technology to become more accepted and widespread.
A biologically inspired meta-control navigation system for the Psikharpax rat robot
A biologically inspired navigation system for the mobile rat-like robot named Psikharpax is presented, allowing for self-localization and autonomous navigation in an initially unknown environment. The ability of parts of the model (e.g. the strategy selection mechanism) to reproduce rat behavioral data in various maze tasks has been validated before in simulations. But the capacity of the model to work on a real robot platform had not been tested. This paper presents our work on the implementation on the Psikharpax robot of two independent navigation strategies (a place-based planning strategy and a cue-guided taxon strategy) and a strategy selection meta-controller. We show how our robot can memorize which was the optimal strategy in each situation, by means of a reinforcement learning algorithm. Moreover, a context detector enables the controller to quickly adapt to changes in the environment, recognized as new contexts, and to restore previously acquired strategy preferences when a previously experienced context is recognized. This produces adaptivity closer to rat behavioral performance and constitutes a computational proposition of the role of the rat prefrontal cortex in strategy shifting. Moreover, such a brain-inspired meta-controller may provide an advancement for learning architectures in robotics.
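The meta-controller idea (learn, per context, which strategy pays off, and restore those preferences when a known context reappears) can be sketched as tabular value learning over (context, strategy) pairs. This is a minimal illustration, not the paper's architecture: the context identifiers stand in for the context detector's output, and the strategy names stand in for the place-based planning and cue-guided taxon experts.

```python
import random

class MetaController:
    """Sketch: per-context value estimates over navigation strategies.
    Keying the table by context id means preferences learned in one
    context survive intact when the environment changes and returns."""
    def __init__(self, strategies, alpha=0.3, epsilon=0.1):
        self.q = {}                      # (context, strategy) -> value
        self.strategies = strategies
        self.alpha, self.epsilon = alpha, epsilon

    def select(self, context):
        if random.random() < self.epsilon:          # occasional exploration
            return random.choice(self.strategies)
        return max(self.strategies,                 # greedy choice otherwise
                   key=lambda s: self.q.get((context, s), 0.0))

    def update(self, context, strategy, reward):
        key = (context, strategy)
        old = self.q.get(key, 0.0)
        self.q[key] = old + self.alpha * (reward - old)  # running estimate

# Toy run: in hypothetical context "maze_A" the planning strategy pays off.
random.seed(0)
mc = MetaController(["planning", "taxon"])
for _ in range(100):
    s = mc.select("maze_A")
    mc.update("maze_A", s, 1.0 if s == "planning" else 0.0)
```

After training, the table holds a strong preference for "planning" in "maze_A"; a later "maze_B" entry would be learned independently, so returning to "maze_A" restores the old preference without relearning.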