
    Evolving a Neural Model of Insect Path Integration

    Path integration is an important navigation strategy in many animal species. We use a genetic algorithm to evolve a novel neural model of path integration, based on input from cells that encode the heading of the agent in a manner comparable to the polarization-sensitive interneurons found in insects. The home vector is encoded as a population code across a circular array of cells that integrate this input. This code can be used to control return to the home position. We demonstrate the capabilities of the network under noisy conditions, both in simulation and on a robot.
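    The encoding described in the abstract can be sketched as follows. The cell count, names, and decoding scheme here are illustrative assumptions, not the evolved network from the paper:

    ```python
    import numpy as np

    N = 8  # assumed size of the circular array of integrator cells
    prefs = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)  # preferred directions

    def integrate_path(steps):
        """Accumulate a population-coded outbound displacement from
        (heading, distance) steps reported by heading-sensitive cells."""
        activity = np.zeros(N)
        for heading, dist in steps:
            # each cell integrates the movement component along its preferred direction
            activity += dist * np.cos(heading - prefs)
        return activity

    def decode_home(activity):
        """Read the home vector out of the population code: the vector sum of
        the array gives the net outbound displacement; home lies opposite it."""
        x = activity @ np.cos(prefs)
        y = activity @ np.sin(prefs)
        return np.arctan2(-y, -x)  # heading back to the start, in radians
    ```

    For example, after moving one unit east and then one unit north, the decoded homing direction is southwest (-3π/4 rad).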

    Rapid response of head direction cells to reorienting visual cues: A computational model

    We model head direction (HD) cells in the rat's limbic system. The intrinsic dynamics of the HD model are determined by a continuous attractor network based on spiking formal neurons. Synaptic excitation is mediated by NMDA and AMPA formal receptors, whereas inhibition is mediated by GABA receptors. We focus on the temporal aspects of state transitions of the HD system following reorienting visual stimuli and reproduce the short transient latencies (about ) observed in the anterodorsal thalamic nucleus (ADN). A contribution of the model is an experimentally testable prediction concerning the state-update dynamics as a function of the magnitude of reorientation. The results predict a progressive shift of the preferred directions of ADN cells for angles smaller than , whereas an abrupt jump is predicted for larger offsets.
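    A minimal rate-based sketch of the underlying idea: a ring attractor whose activity bump is realigned by a visual cue. This toy linear-threshold network is a stand-in for the paper's spiking NMDA/AMPA model, and the parameters (64 cells, gains, time step) are illustrative assumptions:

    ```python
    import numpy as np

    N = 64
    theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    W = np.cos(theta[:, None] - theta[None, :])  # cosine recurrent kernel

    def decode(r):
        """Population-vector estimate of the bump position."""
        return np.arctan2(r @ np.sin(theta), r @ np.cos(theta))

    def simulate(cue, steps=600, dt=0.1, cue_gain=0.5):
        """Euler-integrate the rate dynamics; the bump starts at 0 rad and is
        pulled toward a reorienting visual cue at angle `cue`."""
        r = np.maximum(np.cos(theta), 0.0)           # initial bump at 0 rad
        drive = cue_gain * np.maximum(np.cos(theta - cue), 0.0)
        track = []
        for _ in range(steps):
            inp = (4.0 / N) * (W @ r) + drive        # recurrence + cue input
            r = r + dt * (-r + np.clip(inp, 0.0, 1.0))  # saturating rate update
            track.append(decode(r))
        return np.array(track)
    ```

    For a small cue offset the decoded direction drifts smoothly onto the cue; the paper's prediction is that in the spiking model this progressive shift gives way to an abrupt jump once the offset exceeds a critical angle.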

    Recognizing Internal States of Other Agents to Anticipate and Coordinate Interactions

    In multi-agent systems, anticipating the behavior of other agents is a difficult problem. In this paper we consider the case of a cognitive agent inserted into an unknown environment composed of different kinds of objects and agents. Our cognitive agent needs to incrementally learn a model of the environment dynamics from its interaction experience alone; the learned model can then be used to define a policy of actions. This is relatively easy when the agent interacts with static objects, simple mobile objects, or trivial reactive agents. However, when the agent deals with complex agents that may change their behavior according to non-directly-observable internal properties (such as emotional or intentional states), constructing a model becomes significantly harder. The complete system can be described as a Factored and Partially Observable Markov Decision Process (FPOMDP). Our agent implements the Constructivist Anticipatory Learning Mechanism (CALM) algorithm, and the experiment (called mept) shows that inducing non-observable variables enables the agent to learn a deterministic model of most of the system's events (if it represents a well-structured universe), allowing it to anticipate other agents' actions and to adapt to them, even when some interactions appear non-deterministic at first sight.
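    The core idea, that inducing a non-observable variable can turn an apparently non-deterministic transition model into a deterministic one, can be illustrated with a toy example. This sketch is not the CALM algorithm itself; the environment, the hidden "mood", and the parity-based induction are invented for illustration:

    ```python
    from collections import defaultdict

    def run_world(n_steps):
        """Hypothetical environment: the other agent's hidden mood flips every
        time it is approached; only its reaction is observable."""
        mood, trace = 0, []
        for _ in range(n_steps):
            reaction = "greet" if mood == 0 else "flee"
            trace.append(("approach", reaction))
            mood = 1 - mood
        return trace

    def transition_model(trace, induce_hidden=False):
        """Group observed outcomes by context. With `induce_hidden`, the context
        is augmented with a two-valued induced variable (here simply the step
        parity, standing in for an inferred internal state of the other agent)."""
        outcomes = defaultdict(set)
        for t, (action, reaction) in enumerate(trace):
            key = (action, t % 2) if induce_hidden else (action,)
            outcomes[key].add(reaction)
        return outcomes
    ```

    Keyed on the observable action alone, "approach" maps to both "greet" and "flee", so the model looks non-deterministic; adding the induced variable makes every context map to exactly one reaction.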