
    Spatial Learning and Localization in Animals: A Computational Model and Its Implications for Mobile Robots

    The ability to acquire a representation of the spatial environment and the ability to localize within it are essential for successful navigation in a priori unknown environments. The hippocampal formation is believed to play a key role in spatial learning and navigation in animals. This paper briefly reviews the relevant neurobiological and cognitive data and their relation to computational models of spatial learning and localization used in mobile robots. It also describes a hippocampal model of spatial learning and navigation and analyzes it using Kalman filter-based tools for information fusion from multiple uncertain sources. The resulting model allows a robot to learn a place-based, metric representation of space in a priori unknown environments and to localize itself in a stochastically optimal manner. The paper also describes an algorithmic implementation of the model and the results of several experiments that demonstrate its capabilities.
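    Not from the paper itself, but a minimal sketch of the kind of Kalman-filter fusion the abstract refers to, assuming a linear-Gaussian position model in Python/NumPy; the state layout, noise covariances, and direct landmark fix are illustrative assumptions, not the hippocampal model.

        # Minimal sketch of Kalman-filter-style fusion for robot localization.
        # Illustrative only: state, noise values, and observation model are assumptions.
        import numpy as np

        F = np.eye(2)                 # state transition (position carried over per step)
        H = np.eye(2)                 # position observed directly via a landmark fix
        Q = np.diag([0.05, 0.05])     # process (motion) noise covariance
        R = np.diag([0.20, 0.20])     # measurement noise covariance

        x = np.array([0.0, 0.0])      # position estimate
        P = np.eye(2)                 # estimate covariance

        def kalman_step(x, P, u, z):
            """One predict/update cycle: u = odometry displacement, z = observed position."""
            # Predict: apply odometry, inflate uncertainty.
            x_pred = F @ x + u
            P_pred = F @ P @ F.T + Q
            # Update: fuse the noisy place observation; for linear-Gaussian models
            # this fusion is stochastically optimal, as the abstract describes.
            K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
            x_new = x_pred + K @ (z - H @ x_pred)
            P_new = (np.eye(2) - K @ H) @ P_pred
            return x_new, P_new

        x, P = kalman_step(x, P, u=np.array([1.0, 0.0]), z=np.array([1.1, -0.05]))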

    Space, Time and Learning in the Hippocampus: How Fine Spatial and Temporal Scales Are Expanded into Population Codes for Behavioral Control

    The hippocampus participates in multiple functions, including spatial navigation, adaptive timing, and declarative (notably, episodic) memory. How does it carry out these particular functions? The present article proposes that hippocampal spatial and temporal processing are carried out by parallel circuits within the entorhinal cortex, dentate gyrus, and CA3 that are variations of the same circuit design. In particular, interactions between these brain regions transform fine spatial and temporal scales into population codes capable of representing the much larger spatial and temporal scales needed to control adaptive behaviors. Previous models of adaptively timed learning propose how a spectrum of cells tuned to brief but different delays are combined and modulated by learning to create a population code for controlling goal-oriented behaviors that span hundreds of milliseconds or even seconds. Here it is proposed how projections from entorhinal grid cells can undergo a similar learning process to create hippocampal place cells that can cover the spaces of many meters needed to control navigational behaviors. The suggested homology between spatial and temporal processing may clarify how spatial and temporal information may be integrated into an episodic memory. National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
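    As a toy illustration of the proposed scale expansion (not the article's circuit model), the sketch below sums periodic "grid cell" responses at a few fine spatial scales so that a downstream "place cell" responds only where the components align, yielding a much larger-scale field; the scales, alignment point, and threshold are assumptions.

        # Toy sketch: fine-scale periodic grid responses combine into one
        # large-scale place field. Parameters are illustrative assumptions.
        import numpy as np

        positions = np.linspace(0.0, 10.0, 1001)     # 1-D track, in meters
        scales = [0.3, 0.4, 0.5]                     # fine grid spacings (meters)

        # Each "grid cell" fires periodically along the track, aligned at 2.0 m.
        grid = np.array([np.cos(2 * np.pi * (positions - 2.0) / s) for s in scales])

        # The components only align near 2.0 m (and again 6 m later, the combined
        # period), so the thresholded sum forms a field far larger than any one scale.
        place_field = np.maximum(grid.sum(axis=0) - 2.0, 0.0)
        print("place field peak at ~%.2f m" % positions[place_field.argmax()])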

    Robustness of 3D Deep Learning in an Adversarial Setting

    Understanding the spatial arrangement and nature of real-world objects is of paramount importance to many complex engineering tasks, including autonomous navigation. Deep learning has revolutionized state-of-the-art performance for tasks in 3D environments; however, relatively little is known about the robustness of these approaches in an adversarial setting. The lack of comprehensive analysis makes it difficult to justify deploying 3D deep learning models in real-world, safety-critical applications. In this work, we develop an algorithm for analyzing the pointwise robustness of neural networks that operate on 3D data. We show that the approaches currently presented for understanding the resilience of state-of-the-art models vastly overestimate their robustness. We then use our algorithm to evaluate an array of state-of-the-art models in order to demonstrate their vulnerability to occlusion attacks. We show that, in the worst case, these networks can be reduced to 0% classification accuracy after the occlusion of at most 6.5% of the occupied input space. Comment: 10 pages, 8 figures, 1 table
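    The paper's pointwise-robustness algorithm is not specified in the abstract; the sketch below shows a generic greedy occlusion attack of the kind described, assuming a stand-in scoring model and reusing the 6.5% occlusion budget mentioned above.

        # Greedy occlusion attack sketch: repeatedly drop the point whose removal
        # most lowers the true-class score. `model` is a stand-in callable, not
        # the paper's algorithm or any specific network.
        import numpy as np

        def occlusion_attack(points, true_class, model, budget=0.065):
            """Remove up to `budget` (fraction) of points to minimize the true-class score."""
            pts = points.copy()
            for _ in range(int(budget * len(points))):
                scores = [model(np.delete(pts, i, axis=0))[true_class]
                          for i in range(len(pts))]
                pts = np.delete(pts, int(np.argmin(scores)), axis=0)
                if np.argmax(model(pts)) != true_class:   # stop once misclassified
                    break
            return pts

        # Dummy "model": scores each class by centroid distance to a class anchor.
        anchors = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
        model = lambda p: -np.linalg.norm(anchors - p.mean(axis=0), axis=1)
        cloud = np.random.default_rng(0).normal(size=(200, 3)) * 0.1
        adversarial = occlusion_attack(cloud, true_class=0, model=model)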

    DRLViz: Understanding Decisions and Memory in Deep Reinforcement Learning

    We present DRLViz, a visual analytics interface for interpreting the internal memory of an agent (e.g., a robot) trained using deep reinforcement learning. This memory is composed of large temporal vectors updated as the agent moves through an environment, and it is not trivial to understand due to the number of dimensions, dependencies on past vectors, spatial/temporal correlations, and co-correlations between dimensions. It is often referred to as a black box, as only the inputs (images) and outputs (actions) are intelligible to humans. DRLViz assists experts in interpreting decisions through memory reduction interactions and in investigating the role of parts of the memory when errors have been made (e.g., a wrong direction). We report on DRLViz applied in the context of a video game simulator (ViZDoom) for a navigation scenario with item-gathering tasks. We also report on an evaluation of DRLViz by experts, on its applicability to other scenarios and navigation problems beyond simulation games, and on its contribution to the interpretability and explainability of black-box models in the field of visual analytics.
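    DRLViz itself is an interactive interface; as a rough stand-in for its memory reduction idea, the following sketch projects an agent's hidden-state vectors to 2-D and ranks dimensions by action selectivity, with synthetic data in place of a trained agent's recorded memory.

        # Sketch of the memory reduction idea: project per-timestep hidden states
        # to 2-D and look for dimensions that track a given action.
        # Data here is random; in practice it comes from a trained agent.
        import numpy as np
        from sklearn.decomposition import PCA

        T, D = 500, 512                                        # timesteps, memory size
        memory = np.random.default_rng(1).normal(size=(T, D))  # stand-in hidden states
        actions = np.random.default_rng(2).integers(0, 4, size=T)

        coords = PCA(n_components=2).fit_transform(memory)     # 2-D view of the trace

        # Rank memory dimensions by how strongly they separate action 0 from the
        # rest: a simple proxy for "which parts of memory drive this decision".
        mask = actions == 0
        separation = np.abs(memory[mask].mean(axis=0) - memory[~mask].mean(axis=0))
        print("most action-0-selective dimensions:", np.argsort(separation)[-5:])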

    III: Small: Information Integration and Human Interaction for Indoor and Outdoor Spaces

    The goal of this research project is to provide a framework model that integrates existing models of indoor and outdoor space, and to use this model to develop an interactive platform for navigation in mixed indoor and outdoor spaces. The user should experience the transition between inside and outside as seamless, in terms of the navigational support provided. The approach consists of integrating indoors and outdoors on several levels: conceptual models (ontologies), formal system designs, data models, and human interaction. At the conceptual level, the project draws on existing ontologies as well as examining the affordances that the space provides; for example, an outside pedestrian walkway affords the same function as an inside corridor. Formal models of place and connection are also used to precisely specify the design of the navigational support system. Behavioral experiments with human participants assess the validity of the framework for supporting human spatial learning and navigation in integrated indoor and outdoor environments. These experiments also enable the identification and extraction of the salient features of indoor and outdoor spaces for incorporation into the framework. Findings from the human studies will help validate the efficacy of the formal framework for supporting human spatial learning and navigation in such integrated environments. Results will be distributed via the project Web site (www.spatial.maine.edu/IOspace) and will be incorporated into graduate-level courses on human interaction with mobile devices, shared with public school teachers participating in the University of Maine's NSF-funded RET (Research Experiences for Teachers) program. The research teams are working with two companies and one research center on technology transfer for building indoor-outdoor navigation tools with a wide range of applications, including those for persons with disabilities.
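    As an illustration of a formal place-and-connection model (the project's actual ontologies are not given in the abstract), the toy graph below treats indoor and outdoor places uniformly, so routing crosses the boundary seamlessly; all place names are invented.

        # Toy "place and connection" model: places are nodes tagged with an
        # affordance label, and routing ignores the indoor/outdoor boundary.
        from collections import deque

        places = {
            "office":   "indoor",
            "corridor": "indoor",   # affords the same function as...
            "walkway":  "outdoor",  # ...an outside pedestrian walkway
            "plaza":    "outdoor",
        }
        connections = {
            "office":   ["corridor"],
            "corridor": ["office", "walkway"],   # seamless indoor->outdoor link
            "walkway":  ["corridor", "plaza"],
            "plaza":    ["walkway"],
        }

        def route(start, goal):
            """Breadth-first search over places; indoor/outdoor is just a label."""
            frontier, seen = deque([[start]]), {start}
            while frontier:
                path = frontier.popleft()
                if path[-1] == goal:
                    return path
                for nxt in connections[path[-1]]:
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append(path + [nxt])

        print(route("office", "plaza"))   # ['office', 'corridor', 'walkway', 'plaza']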

    Dynamics of Collaborative Navigation and Applying Data-Driven Methods to Improve Pedestrian Navigation Instructions at Decision Points for People of Varying Spatial Aptitudes

    Cognitive geography seeks to understand individual decision-making variations based on fundamental cognitive differences between people of varying spatial aptitudes. Understanding fundamental behavioral discrepancies among individuals is an important step toward improving navigation algorithms and the overall travel experience. Contemporary navigation aids, although helpful in providing turn-by-turn directions, lack the capability to distinguish decision points by their features and importance. Existing systems cannot generate landmark- or decision-point-based instructions using real-time or crowd-sourced data, nor can they customize instructions for individuals based on inherent spatial ability, travel history, or situation. This dissertation presents a novel experimental setup to examine the simultaneous wayfinding behavior of people with varying spatial abilities. The study reveals discrepancies in information processing, landmark preference, and spatial information communication among groups of differing abilities. Empirical data are used to validate computational salience techniques that endeavor to predict the difficulty of decision-point use from the structure of the routes. Outlink score and outflux score, two meta-algorithms that derive secondary scores from existing metrics of network analysis, are explored. These two algorithms approximate human cognitive variation in navigation by analyzing the neighboring-effect and directional-effect properties of decision-point nodes within a routing network. The results are validated by a human wayfinding experiment, which shows that these metrics generally improve the prediction of errors. In addition, a model of personalized weighting for users' characteristics is derived using the SVMrank machine learning method; such a system can effectively rank decision-point difficulty based on user behavior and derive weighted models for navigators that reflect their individual tendencies. The weights reflect certain characteristics of the groups, so such models can serve as personal travel profiles and could potentially complement sense-of-direction surveys in classifying wayfinders. A prototype with augmented instructions for pedestrian navigation is created and tested, with particular focus on investigating how augmented instructions at particular decision points affect spatial learning. The results demonstrate that survey knowledge acquisition improves for people with low spatial ability but decreases for people with high spatial ability. Finally, contributions are summarized, conclusions are provided, and future implications are discussed.
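    The abstract does not define the outlink and outflux scores, so the following is a hypothetical sketch of a meta-algorithm in that spirit: a secondary score for each decision point derived from existing degree metrics of its neighbors. The exact formula is an assumption, not the dissertation's.

        # Hypothetical "outlink score"-style meta-metric: blend a decision point's
        # own out-degree with its neighbors' degrees (a neighboring-effect proxy).
        def degree(graph, node):
            return len(graph[node])

        def outlink_score(graph, node, alpha=0.5):
            neigh = [degree(graph, m) for m in graph[node]]
            if not neigh:
                return float(degree(graph, node))
            return degree(graph, node) + alpha * sum(neigh) / len(neigh)

        # Tiny routing network: intersections and their outgoing street segments.
        network = {"A": ["B", "C"], "B": ["A", "C", "D"], "C": ["A", "B"], "D": ["B"]}
        print({n: outlink_score(network, n) for n in network})
        # Higher score ~ harder decision point, under this toy proxy.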

    Towards Inferring Users' Impressions of Robot Performance in Navigation Scenarios

    Human impressions of robot performance are often measured through surveys. As a more scalable and cost-effective alternative, we study the possibility of predicting people's impressions of robot behavior using non-verbal behavioral cues and machine learning techniques. To this end, we first contribute the SEAN TOGETHER Dataset, consisting of observations of an interaction between a person and a mobile robot in a Virtual Reality simulation, together with impressions of robot performance provided by users on a 5-point scale. Second, we contribute analyses of how well humans and supervised learning techniques can predict perceived robot performance based on different combinations of observation types (e.g., facial, spatial, and map features). Our results show that facial expressions alone provide useful information about human impressions of robot performance, but in the navigation scenarios we tested, spatial features are the most critical piece of information for this inference task. Also, when results are evaluated as binary classification (rather than multiclass classification), the F1-score of human predictions and machine learning models more than doubles, showing that both are better at telling the directionality of robot performance than at predicting exact performance ratings. Based on our findings, we provide guidelines for implementing these prediction models in real-world navigation scenarios.
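    As a hedged sketch of the binary-versus-multiclass comparison described (with synthetic data standing in for the SEAN TOGETHER Dataset and a random forest standing in for the paper's models), the code below collapses 5-point ratings to a midpoint split and compares F1 scores.

        # Compare multiclass F1 (exact 1-5 rating) with binary F1 (above/below
        # the scale midpoint). Data and model are stand-ins, not the paper's.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import f1_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(3)
        X = rng.normal(size=(600, 8))               # stand-in spatial features
        y = np.clip((X[:, 0] * 1.5 + rng.normal(size=600)).round() + 3, 1, 5).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        pred = RandomForestClassifier(random_state=0).fit(X_tr, y_tr).predict(X_te)

        print("multiclass F1:", f1_score(y_te, pred, average="macro"))
        # Directionality only: was performance rated above the scale midpoint?
        print("binary F1:    ", f1_score(y_te > 3, pred > 3))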