16 research outputs found

    A connectionist model of spatial learning in the rat

    Get PDF
    When animals explore an environment, they store useful spatial information in their brains. On subsequent visits, they can recall this information and thus avoid dangerous places or relocate a food source. This ability, which may be crucial for the animal's survival, is termed "spatial learning". In the late 1940s, theoretical considerations led researchers to conclude that rats establish a "cognitive map" of their environment. The animal can then use this spatial representation to navigate towards a rewarding location. In 1971, researchers found the first direct evidence that the hippocampus, a brain area in the limbic system, may contain such a cognitive map: the activity of neurons in the hippocampus of rats tends to be highly correlated with the animal's position within the environment. These "place cells" have since been the target of a large body of research. Apart from spatial learning, the hippocampus appears to be involved in a more general type of learning, namely the formation of so-called "episodic memories". Models of hippocampal function could thus provide valuable insights into memory processes in general. Insights from animal navigation could also benefit the design of autonomous mobile robots: constructing a consistent map of the environment from experience, and using it to solve navigation problems, are difficult tasks. Incorporating principles borrowed from animal navigation may help build more robust and autonomous robots. The main objective of this thesis is to develop a neural network model of spatial learning in the rat. The system should be capable of learning to navigate to a hidden reward location based on realistic sensory input. The system is validated on a mobile robot. Our model consists of several interconnected brain regions, each represented by a population of neurons.
The model closely follows experimental results on functional, anatomical and neurophysiological properties of these regions. One population, for instance, models the hippocampal place cells. A head-direction system closely interacts with the place cells and endows the robot with a sense of direction. A population of motor-related cells codes for the direction of the next movement. Associations are learnt between place cells and motor cells in order to navigate towards a goal location. This study allows us to make experimental predictions on functional and neurophysiological properties of the modelled brain regions. In our validation experiments, the robot successfully establishes a spatial representation. The robot can localise itself in the environment and quickly learns to navigate to the hidden goal location.
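The place-to-motor association scheme described in this abstract can be sketched as a reward-gated Hebbian rule between a place-cell population and a ring of direction-coding motor cells. This is a minimal illustration, not the thesis's actual model: the Gaussian place fields, the learning rule, and all parameter values are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
N_PLACE, N_ACTION = 100, 8                       # illustrative population sizes
centres = rng.uniform(0, 1, size=(N_PLACE, 2))   # place-field centres in a unit arena
angles = np.linspace(0, 2 * np.pi, N_ACTION, endpoint=False)  # preferred directions

def place_activity(pos, sigma=0.1):
    """Gaussian place fields: activity falls off with distance to the centre."""
    d2 = np.sum((centres - pos) ** 2, axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))

W = np.zeros((N_ACTION, N_PLACE))  # place-cell -> motor-cell weights

def reward_hebbian(W, state, action_idx, reward, lr=0.5):
    """Strengthen weights from active place cells onto the rewarded motor cell
    (a reward-gated Hebbian rule; lr is an illustrative learning rate)."""
    W[action_idx] += lr * reward * state
    return W

def heading(W, state):
    """Population vector over motor cells yields a continuous movement direction."""
    a = W @ state
    return np.arctan2(np.sum(a * np.sin(angles)), np.sum(a * np.cos(angles)))
```

After rewarding a movement in a given place, the decoded heading at that place points towards the rewarded direction, which is the core of the place-to-motor association idea.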

    A computational model of parallel navigation systems in rodents

    Get PDF
    Several studies in rats support the idea of multiple neural systems competing to select the best action for reaching a goal or food location. Locale navigation strategies, necessary for reaching invisible goals, seem to be mediated by the hippocampus and the ventral and dorsomedial striatum, whereas taxon strategies, applied for approaching goals in the visual field, are believed to involve the dorsolateral striatum. A computational model of action selection is presented, in which different experts, implementing locale and taxon strategies, compete to select the appropriate behaviour for the current task. The model was tested in a simulated robot using an experimental paradigm that dissociates the use of cue and spatial information.
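The expert-competition idea in this abstract can be illustrated with a simple selection rule: each strategy maintains a running estimate of the reward it earns, and the selector picks the currently best-valued expert. This is a generic sketch of competing-expert action selection, assuming epsilon-greedy arbitration; the class names and parameters are illustrative, not taken from the paper.

```python
import random

class Expert:
    """One navigation strategy (e.g. locale or taxon); tracks its own
    running estimate of the reward obtained when it is in control."""
    def __init__(self, name, policy):
        self.name, self.policy = name, policy
        self.value = 0.0  # estimated reward of following this expert

    def propose(self, observation):
        return self.policy(observation)

    def update(self, reward, lr=0.2):
        # Exponential moving average of received reward.
        self.value += lr * (reward - self.value)

def select_expert(experts, epsilon=0.1):
    """Pick the expert with the highest reward estimate; with probability
    epsilon, explore by picking a random expert instead."""
    if random.random() < epsilon:
        return random.choice(experts)
    return max(experts, key=lambda e: e.value)
```

A strategy that reliably reaches the goal accumulates a higher value and comes to dominate selection, mirroring the competition between locale and taxon systems.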

    Reinforcement learning in continuous state and action space

    No full text
    To solve complex navigation tasks, autonomous agents such as rats or mobile robots often employ spatial representations. These “maps” can be used for localisation and navigation. We propose a model for spatial learning and navigation based on reinforcement learning. The state space is represented by a population of hippocampal place cells, whereas a large number of locomotor neurons in the nucleus accumbens forms the action space. Using overlapping receptive fields for both populations, state/action mappings generalise rapidly during learning. The population vector allows a continuous interpretation of both state and action spaces. An eligibility trace is used to propagate reward information back in time, enabling the modification of behaviours for recently visited states. We propose a biologically plausible mechanism for this trace of events, in which spike-timing-dependent plasticity triggers the storing of recent state/action pairs. These pairs, however, are forgotten in the absence of a reward-related signal such as dopamine. The model is validated on a simulated robot platform.
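The combination described here — place-cell features as the state code, action cells with overlapping tuning, and an eligibility trace driven by a reward signal — can be sketched with SARSA(λ) over Gaussian place-field features. This is a textbook stand-in for the paper's model, assuming normalised Gaussian features and illustrative parameter values; the STDP-based trace mechanism itself is not modelled here.

```python
import numpy as np

rng = np.random.default_rng(1)
N_STATE, N_ACTION = 50, 8          # illustrative population sizes
centres = rng.uniform(0, 1, size=(N_STATE, 2))  # place-field centres

def features(pos, sigma=0.15):
    """Overlapping Gaussian place fields give a smooth population state code,
    so learning generalises to nearby positions."""
    d2 = np.sum((centres - pos) ** 2, axis=1)
    phi = np.exp(-d2 / (2 * sigma ** 2))
    return phi / phi.sum()

W = np.zeros((N_ACTION, N_STATE))  # place-cell -> action-cell weights
trace = np.zeros_like(W)           # eligibility trace over state/action pairs

def sarsa_step(pos, action, reward, next_pos, next_action,
               alpha=0.1, gamma=0.95, lam=0.9):
    """One SARSA(lambda) update: delta plays the role of a dopamine-like
    reward signal, and the trace tags recently visited pairs."""
    global W, trace
    phi, phi_next = features(pos), features(next_pos)
    delta = reward + gamma * (W[next_action] @ phi_next) - W[action] @ phi
    trace *= gamma * lam           # older state/action pairs decay away
    trace[action] += phi           # tag the current state/action pair
    W += alpha * delta * trace     # reward propagates back along the trace
```

Because the trace holds recently tagged pairs, a single reward at the goal strengthens the whole recent trajectory, not just the final step.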

    Adaptive sensory processing for efficient place coding

    No full text
    This work presents a neural model of self-localisation implemented on a simulated mobile robot with realistic visual input. A population of modelled place cells with overlapping receptive fields is constructed online during exploration. In contrast to similar models of place cells, parameters of neurons in the sensory pathway adapt online to the environment's statistics in order to maximise information transmission. The robot's position can be decoded from the population activity with high accuracy, and the information transmission rate of the modelled cells is comparable to that of biological place cells.
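Decoding position from place-cell population activity, as described above, is often done with a population-vector (activity-weighted mean) readout. The sketch below assumes Gaussian place fields scattered over a unit arena; the cell count, field width, and decoder are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)
N_CELLS = 200                                   # illustrative population size
centres = rng.uniform(0, 1, size=(N_CELLS, 2))  # place-field centres

def place_activity(pos, sigma=0.08):
    """Gaussian place fields over a unit arena."""
    d2 = np.sum((centres - pos) ** 2, axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))

def decode_position(activity):
    """Population vector: activity-weighted mean of the field centres."""
    return (activity[:, None] * centres).sum(axis=0) / activity.sum()

true_pos = np.array([0.4, 0.6])
estimate = decode_position(place_activity(true_pos))
```

With enough overlapping fields, the decoded estimate lands close to the true position, which is the sense in which "the robot's position can be decoded from the population activity with high accuracy".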
