
    Modelling Path Integrator Recalibration Using Hippocampal Place Cells

    The firing activities of place cells in the rat hippocampus exhibit strong correlations with the animal's location. External (e.g. visual) as well as internal (proprioceptive and vestibular) sensory information take part in controlling hippocampal place fields. It has previously been observed that when rats shuttle between a movable origin and a fixed target, the hippocampus encodes position in two different frames of reference. This paper presents a new model of hippocampal place cells that explains place coding in multiple reference frames by continuous interaction between visual and self-motion information. The model is tested using a simulated mobile robot in a real-world experimental paradigm.
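The core idea of the abstract, a position estimate maintained by self-motion integration and continuously recalibrated by vision, can be sketched as follows. This is our own minimal illustration, not the paper's model: the function name, the blending weight `alpha`, and the example values are all assumptions.

```python
import numpy as np

def path_integrator_step(estimate, self_motion, visual_fix, alpha=0.2):
    """One update of a simple recalibrated path integrator.

    First dead-reckon by adding the self-motion displacement, then pull
    the result toward the visual position estimate. alpha controls the
    strength of visual recalibration (0: pure dead reckoning, 1: vision only).
    """
    predicted = estimate + self_motion                     # path integration
    return (1 - alpha) * predicted + alpha * visual_fix    # visual recalibration

# Example: the integrator says the animal is at x = 2.0 and has just moved
# 0.5 forward, but vision places it at x = 2.1; the estimate is drawn back.
est = np.array([2.0, 0.0])
est = path_integrator_step(est, np.array([0.5, 0.0]), np.array([2.1, 0.0]))
```

With `alpha=0.2` the updated estimate lands between the dead-reckoned position (2.5) and the visual fix (2.1), at 2.42; repeated steps keep the two reference frames in continuous interaction rather than switching discretely between them.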

    An artificial life environment for autonomous virtual agents with multi-sensorial and multi-perceptive features

    Our approach is based on multi-sensory integration as described in standard neuroscience theory, where signals referring to a single object but coming from distinct sensory systems are combined. The signal acquisition steps of filtering, selection and simplification that precede proprioception, active perception and predictive perception are integrated into virtual sensors and a virtual environment. We focus on two aspects: (1) the assignment problem, i.e. determining which sensory stimuli belong to the same virtual object, and (2) the sensory recoding problem, i.e. recoding signals in a common format before combining them. We have developed three novel methodologies to map the information coming from the virtual sensors of vision, audition and touch, as well as that of the virtual environment, in the form of a 'cognitive map'. Copyright © 2004 John Wiley and Sons, Ltd.
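The two problems named in the abstract can be illustrated with a toy sketch: each stimulus is first recoded into a common format (here, egocentric polar coordinates) and then assigned to an object when its recoded position agrees with an existing one. The data layout, the greedy clustering, and the tolerance are our own illustrative assumptions, not the paper's method.

```python
import math

def recode(stimulus):
    """Sensory recoding: map a sensor reading to a common (distance, bearing) format."""
    x, y = stimulus["position"]
    return (math.hypot(x, y), math.atan2(y, x))

def assign(stimuli, tol=0.5):
    """Assignment: group stimuli whose recoded positions agree within tol."""
    objects = []
    for s in stimuli:
        d, b = recode(s)
        for obj in objects:
            od, ob = obj["pos"]
            if abs(od - d) < tol and abs(ob - b) < tol:
                obj["modalities"].append(s["modality"])   # same virtual object
                break
        else:
            objects.append({"pos": (d, b), "modalities": [s["modality"]]})
    return objects

stimuli = [
    {"modality": "vision",   "position": (2.0, 0.0)},
    {"modality": "audition", "position": (2.1, 0.1)},   # close to the visual stimulus
    {"modality": "touch",    "position": (0.3, 0.0)},   # a different object
]
objects = assign(stimuli)
```

Here the visual and auditory stimuli recode to nearly the same (distance, bearing) and fuse into one object, while the touch stimulus remains separate; a real system would of course use a richer common format and a less greedy assignment.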

    A connectionist model of spatial learning in the rat

    When animals explore an environment, they store useful spatial information in their brains. On subsequent visits, they can recall this information and thus avoid dangerous places or relocate a food source. This ability, which may be crucial for the animal's survival, is termed "spatial learning". In the late 1940s, theoretical considerations led researchers to the conclusion that rats establish a "cognitive map" of their environment. This spatial representation can then be used by the animal to navigate towards a rewarding location. In 1971, researchers first found direct evidence that the hippocampus, a brain area in the limbic system, may contain such a cognitive map. The activity of neurons in the hippocampus of rats tends to be highly correlated with the animal's position within the environment. These "place cells" have since been the target of a large body of research. Apart from spatial learning, the hippocampus seems to be involved in a more general type of learning, namely the formation of so-called "episodic memories". Models of hippocampal function could thus provide valuable insights into memory processes in general.

    Insights from animal navigation could also prove beneficial for the design of autonomous mobile robots. Constructing a consistent map of the environment from experience, and using it to solve navigation problems, are difficult tasks. Incorporating principles borrowed from animal navigation may help build more robust and autonomous robots.

    The main objective of this thesis is to develop a neural network model of spatial learning in the rat. The system should be capable of learning to navigate to a hidden reward location based on realistic sensory input; it is validated on a mobile robot. Our model consists of several interconnected brain regions, each represented by a population of neurons. The model closely follows experimental results on the functional, anatomical and neurophysiological properties of these regions. One population, for instance, models the hippocampal place cells. A head-direction system closely interacts with the place cells and endows the robot with a sense of direction. A population of motor-related cells codes for the direction of the next movement. Associations are learnt between place cells and motor cells in order to navigate towards a goal location. This study allows us to make experimental predictions on functional and neurophysiological properties of the modelled brain regions. In our validation experiments, the robot successfully establishes a spatial representation. The robot can localise itself in the environment and quickly learns to navigate to the hidden goal location.
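The association between place cells and motor cells described in the abstract can be sketched with a reward-modulated Hebbian rule: the weight from each active place cell to the chosen action cell is strengthened when reward follows. Population sizes, the Gaussian place fields, the learning rate, and the update rule are illustrative assumptions, not the thesis's actual model.

```python
import numpy as np

n_place, n_actions = 100, 8            # place cells; action cells for 8 directions
rng = np.random.default_rng(42)
centres = rng.uniform(0, 1, size=(n_place, 2))   # place-field centres in a unit arena
W = np.zeros((n_actions, n_place))               # place-to-motor association weights

def place_activity(pos, sigma=0.1):
    """Gaussian place-field responses to the current position."""
    d2 = ((centres - pos) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))

def update(pos, action, reward, lr=0.5):
    """Reward-modulated Hebbian update: strengthen the rewarded action's weights."""
    W[action] += lr * reward * place_activity(pos)

def choose_action(pos):
    """Motor population drive from the place code; pick the strongest direction."""
    return int(np.argmax(W @ place_activity(pos)))

# After rewarding action 3 at one location, the model prefers it there.
update(np.array([0.5, 0.5]), action=3, reward=1.0)
```

A single rewarded trial already biases `choose_action` toward direction 3 near (0.5, 0.5); because place fields overlap, the preference generalises smoothly to nearby positions, which is the point of coding location with a population rather than a lookup table.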

    Méthodes infographiques pour l'apprentissage des agents autonomes virtuels (Computer graphics methods for the learning of autonomous virtual agents)

    There are two primary approaches to the behavioural animation of an Autonomous Virtual Agent (AVA). The first, the behavioural model, defines how the AVA reacts to the current state of its environment. In the second, the cognitive model, the AVA uses a thought process allowing it to deliberate over its possible actions. Despite the success of these approaches in several domains, there are two notable limitations which we address in this thesis. First, cognitive models are traditionally very slow to execute, because a tree search realising the mapping states → actions must be performed. On the one hand, the AVA can only make sub-optimal decisions and, on the other, the number of AVAs that can be used simultaneously in real time is limited. These constraints restrict such models to a small set of candidate actions. Second, cognitive and behavioural models can act unexpectedly, producing undesirable behaviour in certain regions of the state space. This is because it may be impossible to test them exhaustively over the entire state space, especially if the state space is continuous. This can be worrisome for end-user applications involving AVAs, such as training simulators for cars and aeronautics. Our contributions include the design of novel learning methods for approximating behavioural and cognitive models. They address the problem of input selection with the help of a novel architecture, ALifeE, which includes virtual sensors and perception, regardless of the machine learning technique utilized. The input dimensionality must be kept as small as possible because of the "curse of dimensionality", well known in machine learning. Thus, ALifeE simplifies and speeds up the process for the designer.