
    Challenges for identifying the neural mechanisms that support spatial navigation: the impact of spatial scale.

    Spatial navigation is a fascinating behavior that is essential for our everyday lives. It involves nearly all sensory systems, it requires numerous parallel computations, and it engages multiple memory systems. One of the key problems in this field pertains to the question of reference frames: spatial information such as direction or distance can be coded egocentrically (relative to an observer) or allocentrically (in a reference frame independent of the observer). While many studies have associated striatal and parietal circuits with egocentric coding and entorhinal/hippocampal circuits with allocentric coding, this strict dissociation is not in line with a growing body of experimental data. In this review, we discuss some of the problems that can arise when studying the neural mechanisms that are presumed to support different spatial reference frames. We argue that the scale of space in which a navigation task takes place plays a crucial role in determining the processes that are being recruited. This has important implications, particularly for the inferences that can be made from animal studies in small-scale space about the neural mechanisms supporting human spatial navigation in large (environmental) spaces. Furthermore, we argue that many of the tasks commonly used to study spatial navigation and the underlying neuronal mechanisms involve different types of reference frames, which can complicate the interpretation of neurophysiological data.

    Spatial Learning and Localization in Animals: A Computational Model and Its Implications for Mobile Robots

    The ability to acquire a representation of the spatial environment and the ability to localize within it are essential for successful navigation in a priori unknown environments. The hippocampal formation is believed to play a key role in spatial learning and navigation in animals. This paper briefly reviews the relevant neurobiological and cognitive data and their relation to computational models of spatial learning and localization used in mobile robots. It also describes a hippocampal model of spatial learning and navigation and analyzes it using Kalman filter based tools for information fusion from multiple uncertain sources. The resulting model allows a robot to learn a place-based, metric representation of space in a priori unknown environments and to localize itself in a stochastically optimal manner. The paper also describes an algorithmic implementation of the model and results of several experiments that demonstrate its capabilities.
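    The Kalman-filter-based fusion the abstract refers to can be sketched in one dimension: each uncertain cue (a Gaussian with a mean and a variance) is fused into the current position estimate, and motion propagates the estimate forward while growing its uncertainty. The cue values and noise variances below are illustrative placeholders, not figures from the paper.

    ```python
    def kalman_update(mean, var, z, z_var):
        """Fuse a noisy measurement z (variance z_var) into the current
        Gaussian position estimate (mean, var); returns the posterior."""
        k = var / (var + z_var)            # Kalman gain: trust in the measurement
        new_mean = mean + k * (z - mean)   # shift estimate toward the measurement
        new_var = (1.0 - k) * var          # posterior variance always shrinks
        return new_mean, new_var

    def kalman_predict(mean, var, motion, motion_var):
        """Propagate the estimate through a noisy motion step."""
        return mean + motion, var + motion_var

    # Fuse two uncertain cues about the robot's 1-D position, then move.
    mean, var = 0.0, 1000.0                           # weak (near-flat) prior
    mean, var = kalman_update(mean, var, 5.0, 4.0)    # e.g. a landmark-based cue
    mean, var = kalman_update(mean, var, 6.0, 4.0)    # e.g. an odometry-derived cue
    mean, var = kalman_predict(mean, var, 1.0, 0.5)   # move one unit, with noise
    print(mean, var)
    ```

    Fusing both cues pulls the estimate between the two measurements with a posterior variance smaller than either cue's alone, which is the "stochastically optimal" behavior (for Gaussian noise) that the model exploits; the predict step then adds the motion noise back in.
    
    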