    How does the design of landmarks on a mobile map influence wayfinding experts’ spatial learning during a real-world navigation task?

    Humans increasingly rely on GPS-enabled mobile maps to navigate novel environments. However, this reliance can negatively affect spatial learning, which can be detrimental even for expert navigators such as search and rescue personnel. Landmark visualization has been shown to improve spatial learning in general populations by facilitating object identification between the map and the environment. How landmark visualization supports expert users’ spatial learning during map-assisted navigation is still an open research question. We thus conducted a real-world study with wayfinding experts in an unknown residential neighborhood. We aimed to assess how two different landmark visualization styles (abstract 2D vs. realistic 3D buildings) would affect experts’ spatial learning in a map-assisted navigation task during an emergency scenario. Using a between-subjects design, we asked Swiss military personnel to follow a given route using a mobile map and to identify five task-relevant landmarks along the route. We recorded experts’ gaze behavior while navigating and examined their spatial learning after the navigation task. We found that experts’ spatial learning improved when they focused their visual attention on the environment, but the direction of attention between the map and the environment was not affected by the landmark visualization style. Further, there was no difference in spatial learning between the 2D and 3D groups. Contrary to previous research with general populations, this study suggests that the landmark visualization style does not enhance expert navigators’ navigation or spatial learning abilities, thus highlighting the need for population-specific mobile map design solutions.
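
    To make the gaze analysis concrete, here is a minimal Python sketch of how attention allocation between map and environment could be quantified. It assumes fixations have already been labeled with a hypothetical "map" or "environment" area-of-interest tag; it is an illustration, not the study's actual pipeline.

```python
# Minimal sketch (illustrative, not the study's pipeline): quantify the
# share of fixation time spent on the environment vs. the mobile map.
from dataclasses import dataclass

@dataclass
class Fixation:
    aoi: str            # "map" or "environment" (hypothetical AOI labels)
    duration_ms: float  # fixation duration in milliseconds

def environment_attention_share(fixations):
    """Fraction of total fixation time directed at the environment."""
    env = sum(f.duration_ms for f in fixations if f.aoi == "environment")
    total = sum(f.duration_ms for f in fixations)
    return env / total if total else 0.0

# Example: an expert who looks mostly at the environment.
gaze = [Fixation("environment", 420.0), Fixation("map", 180.0),
        Fixation("environment", 610.0)]
print(f"environment attention share: {environment_attention_share(gaze):.2f}")
```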

    The Parallel Map Theory: Ontogeny of Flexible Spatial Strategies in Young Children

    The parallel map theory proposes that the hippocampus encodes space with two mapping systems: the bearing map, created from "directional cues and stimulus gradients", and the sketch map, constructed from "positional cues". The integrated map combines the two mapping systems. Such parallel functioning may explain paradoxes of spatial learning in people with intellectual disabilities. These individuals may be able to memorize their surroundings in a highly detailed way, ordering their sensory perceptions into a representation that includes the precise localization of static objects, yet they are unable to "map" their own spatial relationship to those objects. The detection of moving objects by these same subjects contributes to a primary bearing map; the primary map is thus generated both from this kind of static representation and from the detection of moving objects. This process can be described as a spatial mode of processing separate objects within the structure of an absolute reference system.
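
    As a loose illustration of the two-system architecture (an interpretive Python toy, not the theory's formal model), the sketch below pairs a bearing map built from hypothetical directional cues with a sketch map of positional landmarks, and an integrated map that consults both.

```python
# Toy illustration of the parallel map idea (interpretive, not formal):
# a bearing map from directional cues, a sketch map from positional cues,
# and an integrated map that combines them. All names are hypothetical.

class BearingMap:
    """Coarse heading estimate from directional cues / stimulus gradients."""
    def __init__(self):
        self.cues = {}                      # cue name -> bearing (degrees)
    def add_cue(self, name, bearing_deg):
        self.cues[name] = bearing_deg
    def heading(self):
        vals = list(self.cues.values())
        return sum(vals) / len(vals) if vals else None  # crude average

class SketchMap:
    """Fine-grained layout of static, positional landmarks."""
    def __init__(self):
        self.landmarks = {}                 # landmark name -> (x, y)
    def place(self, name, xy):
        self.landmarks[name] = xy

class IntegratedMap:
    """Combines both systems: where things are AND which way to go."""
    def __init__(self, bearing, sketch):
        self.bearing, self.sketch = bearing, sketch
    def locate(self, name):
        return self.sketch.landmarks.get(name), self.bearing.heading()

bearing, sketch = BearingMap(), SketchMap()
bearing.add_cue("slope", 90.0)
sketch.place("fountain", (3.0, 4.0))
print(IntegratedMap(bearing, sketch).locate("fountain"))
```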

    Schematic Maps and Indoor Wayfinding

    Schematic maps are often discussed as an adequate alternative to detailed map designs for displaying wayfinding information. However, the two kinds of depiction have not yet been compared and analyzed in depth. In this paper, we present a user study that evaluates the wayfinding behaviour of participants using either a detailed floor plan or a schematic map that shows only the route to follow and landmarks. The study was conducted in an indoor real-world scenario, with the depictions presented on a mobile navigation system. We analyzed the time it took to understand the wayfinding instructions and the users' workload. Moreover, we examined how the depictions were visually perceived with a mobile eye tracker. Results show that wayfinders who use the detailed map spend more visual attention on the instructions. Nevertheless, this does not help them solve the task: they also needed more time to orient themselves. Regarding workload and wayfinding errors, no differences were found.
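
    For readers curious how such a between-group timing comparison is typically run, a hedged Python sketch follows. It applies Welch's t-test to made-up orientation times and is not the paper's analysis code.

```python
# Hedged sketch (illustrative data): compare orientation times between a
# detailed-floor-plan group and a schematic-map group with Welch's t-test,
# which does not assume equal variances across groups.
from scipy import stats

floor_plan_times = [14.2, 18.5, 16.1, 21.0, 17.3]   # seconds, hypothetical
schematic_times  = [11.8, 13.4, 12.9, 15.2, 12.1]   # seconds, hypothetical

t, p = stats.ttest_ind(floor_plan_times, schematic_times, equal_var=False)
print(f"Welch's t = {t:.2f}, p = {p:.3f}")
```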

    Collaborative Deep Reinforcement Learning for Joint Object Search

    We examine the problem of joint top-down active search of multiple objects under interaction, e.g., a person riding a bicycle or cups on a table. Such interacting objects can often provide contextual cues to each other that facilitate more efficient search. By treating each detector as an agent, we present the first collaborative multi-agent deep reinforcement learning algorithm to learn the optimal policy for joint active object localization, which effectively exploits such beneficial contextual information. We learn inter-agent communication through cross connections with gates between the Q-networks, facilitated by a novel multi-agent deep Q-learning algorithm with joint exploitation sampling. We verify our proposed method on multiple object detection benchmarks. Not only does our model help to improve the performance of state-of-the-art active localization models, but it also reveals interesting co-detection patterns that are intuitively interpretable.
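
    The following PyTorch sketch illustrates the gated cross-connection idea described above with two toy per-agent Q-networks. The layer sizes, gating form, and all names are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of gated cross connections between two per-agent Q-networks:
# each agent's features are mixed with a learned, sigmoid-gated copy of the
# other agent's features before the Q-value heads. Sizes are arbitrary.
import torch
import torch.nn as nn

class GatedCrossQNet(nn.Module):
    def __init__(self, obs_dim, hidden, n_actions):
        super().__init__()
        self.enc1 = nn.Linear(obs_dim, hidden)   # agent 1 encoder
        self.enc2 = nn.Linear(obs_dim, hidden)   # agent 2 encoder
        # Gates decide how much of the other agent's features flow across.
        self.gate12 = nn.Linear(hidden, hidden)
        self.gate21 = nn.Linear(hidden, hidden)
        self.q1 = nn.Linear(hidden, n_actions)   # Q-values for agent 1
        self.q2 = nn.Linear(hidden, n_actions)   # Q-values for agent 2

    def forward(self, obs1, obs2):
        h1 = torch.relu(self.enc1(obs1))
        h2 = torch.relu(self.enc2(obs2))
        # Gated cross connections: each agent mixes in the other's features.
        h1c = h1 + torch.sigmoid(self.gate21(h2)) * h2
        h2c = h2 + torch.sigmoid(self.gate12(h1)) * h1
        return self.q1(h1c), self.q2(h2c)

net = GatedCrossQNet(obs_dim=32, hidden=64, n_actions=9)
q_a, q_b = net(torch.randn(1, 32), torch.randn(1, 32))
print(q_a.shape, q_b.shape)  # torch.Size([1, 9]) torch.Size([1, 9])
```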

    Low-Resolution Vision for Autonomous Mobile Robots

    The goal of this research is to develop algorithms that use low-resolution images to perceive and understand a typical indoor environment, thereby enabling a mobile robot to navigate such an environment autonomously. We present techniques for three problems: autonomous exploration, corridor classification, and a minimalistic geometric representation of an indoor environment for navigation.

    First, we present a technique for mobile robot exploration in unknown indoor environments using only a single forward-facing camera. Rather than processing all the data, the method intermittently examines only small 32×24 downsampled grayscale images. We show that for the task of indoor exploration the visual information is highly redundant, allowing successful navigation even when using only a small fraction (0.02%) of the available data. The method keeps the robot centered in the corridor by estimating two state parameters: the orientation within the corridor and the distance to the end of the corridor. The orientation estimate combines the results of five complementary measures, while the distance estimate combines the results of three. These measures, which are predominantly information-theoretic, are analyzed independently, and the combined system is tested in corridors of several unknown buildings exhibiting a wide variety of appearances, showing the sufficiency of low-resolution visual information for mobile robot exploration. Because the algorithm discards such a large percentage (99.98%) of the information both spatially and temporally, processing occurs at an average of 1000 frames per second, or equivalently uses only a small fraction of the CPU.

    Second, we present an algorithm that uses image entropy to detect and classify corridor junctions from low-resolution images. Because entropy can be used to perceive depth, it can detect an open corridor in a set of images recorded by turning a robot 360 degrees at a junction. Our algorithm detects peaks in the continuously measured entropy values and uses the angular distance between the detected peaks to determine the type of junction recorded (middle of corridor, L-junction, T-junction, dead-end, or cross junction). We show that the same algorithm can detect open corridors from both monocular and omnidirectional images.

    Third, we propose a minimalistic corridor representation consisting of the orientation line (center) and the wall-floor boundaries (lateral limits). The representation is extracted from low-resolution images using a novel combination of information-theoretic measures and gradient cues. Our study investigates the impact of image resolution on the accuracy of extracting this geometry, showing that the centerline and wall-floor boundaries can be estimated with reasonable accuracy even in texture-poor environments with low-resolution images. In a database of 7 unique corridor sequences, orientation measurements showed less than 2% additional error as the image resolution decreased by 99.9%.
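
    A hedged Python sketch of the entropy-based junction classification described above: compute the entropy of low-resolution grayscale frames across a 360-degree turn, detect entropy peaks (openings appear as high-entropy depth), and map the number and angular spacing of peaks to a junction type. The peak detector, thresholds, and synthetic frames are illustrative assumptions, not the thesis implementation.

```python
# Sketch of entropy-based junction classification (assumptions noted above).
import numpy as np

def image_entropy(gray):
    """Shannon entropy (bits) of an 8-bit grayscale image."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def classify_junction(peak_angles_deg):
    """Map the number/spacing of entropy peaks to a junction type."""
    n = len(peak_angles_deg)
    if n == 1:
        return "dead-end (single opening)"
    if n == 2:
        gap = abs(peak_angles_deg[1] - peak_angles_deg[0]) % 360
        gap = min(gap, 360 - gap)
        # Two openings roughly opposite -> corridor middle; ~90 deg -> L.
        return "middle of corridor" if gap > 135 else "L-junction"
    if n == 3:
        return "T-junction"
    return "cross junction"

# Demo with synthetic 24x32 frames sampled every 10 degrees of a full turn:
# mostly low-texture (low entropy) frames, with high-texture (high entropy)
# frames standing in for open corridors at 0 and 180 degrees.
rng = np.random.default_rng(0)
angles = np.arange(0, 360, 10)
frames = [np.full((24, 32), 128, dtype=np.uint8)
          + rng.integers(0, 8, (24, 32), dtype=np.uint8) for _ in angles]
for i in (0, 18):                       # openings at 0 and 180 degrees
    frames[i] = rng.integers(0, 256, (24, 32)).astype(np.uint8)

entropies = np.array([image_entropy(f) for f in frames])
peaks = angles[entropies > entropies.mean() + entropies.std()]
print(classify_junction(list(peaks)))   # -> "middle of corridor"
```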