600 research outputs found

    Intelligent systems: towards a new synthetic agenda


    Modeling flocks with perceptual agents from a dynamicist perspective

    Computational simulations of flocks and crowds have typically been driven by sets of logical or syntactic rules. In recent decades, a new generation of systems has emerged from dynamicist approaches in which the agents and the environment are treated as a pair of dynamical systems coupled informationally and mechanically; their spontaneous interactions allow the desired behavior to emerge. The main proposition is that the agent does not need a full model of the world or to make inferences before acting; rather, the information needed for any action can be derived from the environment with simple computations and very little internal state. In this paper, we present a simulation framework in which each agent is endowed with a sensing device, an oscillator network as controller, and actuators to interact with the environment. The sensing device is designed as an optic array emulating the principles of the animal retina, capturing stimuli from the environment that resemble optic flow. The controller maps informational variables to action variables in a sensory-motor flow. Our approach is based on the Kuramoto model, which mathematically describes a network of coupled phase oscillators, together with evolutionary algorithms, which prove capable of synthesizing minimal synchronization strategies based on the dynamical coupling between agents and environment. We carry out a comparative analysis against classical implementations using several criteria, and conclude that the metaphor of symbolic information processing should be replaced by that of sensory-motor coordination in problems of multi-agent organization.
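
    The Kuramoto model mentioned above has a compact form, dθ_i/dt = ω_i + (K/N) Σ_j sin(θ_j − θ_i). As a rough illustration of the kind of coupled-oscillator dynamics such a controller builds on, the following minimal Python sketch integrates that equation for a small population; the coupling constant, natural frequencies, and step size are illustrative assumptions, not parameters from the paper.

        import numpy as np

        def kuramoto_step(theta, omega, K, dt):
            """One Euler step of dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
            n = len(theta)
            diff = theta[np.newaxis, :] - theta[:, np.newaxis]   # diff[i, j] = theta_j - theta_i
            coupling = (K / n) * np.sin(diff).sum(axis=1)
            return theta + dt * (omega + coupling)

        # Illustrative parameters: 10 oscillators, moderate coupling strength
        rng = np.random.default_rng(0)
        theta = rng.uniform(0.0, 2.0 * np.pi, 10)   # initial phases
        omega = rng.normal(1.0, 0.1, 10)            # natural frequencies
        for _ in range(1000):
            theta = kuramoto_step(theta, omega, K=2.0, dt=0.01)

        # Order parameter r in [0, 1] measures how synchronized the population is
        r = abs(np.exp(1j * theta).mean())
        print(f"synchronization r = {r:.3f}")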

    Spatial Learning and Localization in Animals: A Computational Model and Its Implications for Mobile Robots

    The ability to acquire a representation of the spatial environment and the ability to localize within it are essential for successful navigation in a priori unknown environments. The hippocampal formation is believed to play a key role in spatial learning and navigation in animals. This paper briefly reviews the relevant neurobiological and cognitive data and their relation to computational models of spatial learning and localization used in mobile robots. It also describes a hippocampal model of spatial learning and navigation and analyzes it using Kalman filter-based tools for information fusion from multiple uncertain sources. The resulting model allows a robot to learn a place-based, metric representation of space in a priori unknown environments and to localize itself in a stochastically optimal manner. The paper also describes an algorithmic implementation of the model and the results of several experiments that demonstrate its capabilities.
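
    For readers unfamiliar with Kalman filter-based fusion, the sketch below shows the basic predict/update cycle for a one-dimensional localization example. It is only an illustration of the general technique; the state, motion model, and noise values are assumptions and do not correspond to the hippocampal model described in the paper.

        import numpy as np

        def kf_predict(x, P, u, Q):
            """Predict: dead-reckon the position estimate with odometry command u."""
            return x + u, P + Q          # estimate moves with the robot; uncertainty grows by Q

        def kf_update(x_pred, P_pred, z, R):
            """Update: fuse a noisy position measurement z (e.g. a sensed landmark)."""
            K = P_pred / (P_pred + R)            # Kalman gain weighs prediction vs. measurement
            x = x_pred + K * (z - x_pred)        # correct the estimate toward the measurement
            P = (1.0 - K) * P_pred               # uncertainty shrinks after fusion
            return x, P

        # Illustrative run: the robot advances 1.0 per step; Q and R are assumed noise variances
        x, P = 0.0, 1.0
        Q, R = 0.05, 0.5
        true_pos = 0.0
        rng = np.random.default_rng(1)
        for _ in range(20):
            true_pos += 1.0
            x, P = kf_predict(x, P, u=1.0, Q=Q)
            z = true_pos + rng.normal(0.0, np.sqrt(R))
            x, P = kf_update(x, P, z, R)
        print(f"estimate {x:.2f} vs true {true_pos:.2f}, variance {P:.3f}")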

    The internal maps of insects


    Real-time synthetic primate vision


    Perspective Taking in Deep Reinforcement Learning Agents

    Perspective taking is the ability to take the point of view of another agent. This skill is not unique to humans, as it is also displayed by other animals such as chimpanzees. It is an essential ability for social interactions, including efficient cooperation, competition, and communication. Here we present our progress toward building artificial agents with such abilities. We implemented a perspective-taking task inspired by experiments done with chimpanzees, and we show that agents controlled by artificial neural networks can learn via reinforcement learning to pass simple tests that require perspective-taking capabilities. We studied whether this ability is more readily learned by agents whose visual perception and motor actions are encoded in allocentric or egocentric form. We believe that, in the long run, building better artificial agents with perspective-taking ability can help us develop artificial intelligence that is more human-like and easier to communicate with.
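
    The allocentric/egocentric contrast the authors study amounts to a choice of reference frame. A minimal sketch of the underlying coordinate transform (purely illustrative; it is not the encoding used in the authors' task) is:

        import numpy as np

        def to_egocentric(target_xy, agent_xy, agent_heading):
            """Express an allocentric (world-frame) target in the agent's egocentric frame:
            translate by the agent's position, then rotate by minus its heading."""
            dx, dy = np.asarray(target_xy, float) - np.asarray(agent_xy, float)
            c, s = np.cos(-agent_heading), np.sin(-agent_heading)
            return np.array([c * dx - s * dy, s * dx + c * dy])

        # Agent at (2, 1) facing along +y (heading pi/2); the target at (2, 3) lies
        # 2 units straight ahead, i.e. roughly [2, 0] with +x as the forward axis.
        print(to_egocentric((2.0, 3.0), (2.0, 1.0), np.pi / 2))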

    Neural models of inter-cortical networks in the primate visual system for navigation, attention, path perception, and static and kinetic figure-ground perception

    Vision provides the primary means by which many animals distinguish foreground objects from their background and coordinate locomotion through complex environments. The present thesis focuses on mechanisms within the visual system that afford figure-ground segregation and self-motion perception. These processes are modeled as emergent outcomes of dynamical interactions among neural populations in several brain areas. This dissertation specifies and simulates how border-ownership signals emerge in cortex, and how the medial superior temporal area (MSTd) represents path of travel and heading in the presence of independently moving objects (IMOs). Neurons in visual cortex that signal border-ownership, the perception that a border belongs to a figure and not its background, have been identified, but the underlying mechanisms have remained unclear. A model is presented that demonstrates that inter-areal interactions across model visual areas V1-V2-V4 afford border-ownership signals similar to those reported in electrophysiology for visual displays containing figures defined by luminance contrast. Competition between model neurons with different receptive field sizes is crucial for reconciling the occlusion of one object by another. The model is extended to determine border-ownership when object borders are kinetically defined, and to detect the location and size of shapes despite the curvature of their boundary contours. Navigation in the real world requires humans to travel along curved paths. Many perceptual models have been proposed that focus on heading, which specifies the direction of travel along straight paths, but not on path curvature. In primates, MSTd has been implicated in heading perception. A model of V1, the medial temporal area (MT), and MSTd is developed herein that demonstrates how MSTd neurons can simultaneously encode path curvature and heading. Human judgments of heading are accurate in rigid environments but are biased in the presence of IMOs. The model presented here explains the bias through recurrent connectivity in MSTd and avoids differential motion detectors, which, although used in existing models to discount the motion of an IMO relative to its background, are not biologically plausible. Reported modulation of the MSTd population due to attention is explained through competitive dynamics between subpopulations responding to bottom-up and top-down signals.
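
    The heading computation discussed above is often illustrated with MSTd-style template matching: each candidate heading corresponds to a radial expansion pattern centered on a focus of expansion, and the best-matching template is read out from the observed optic flow. The sketch below assumes pure forward translation and a planar flow field; it is a generic illustration of that idea, not the recurrent model developed in the thesis.

        import numpy as np

        def radial_template(foe, xs, ys):
            """Unit-length radial expansion field with focus of expansion (FOE) at `foe`."""
            dx, dy = xs - foe[0], ys - foe[1]
            norm = np.hypot(dx, dy) + 1e-9
            return dx / norm, dy / norm

        def estimate_heading(flow_u, flow_v, xs, ys, candidates):
            """Pick the candidate FOE whose radial template best matches the observed flow."""
            scores = [np.sum(flow_u * tu + flow_v * tv)           # template dot product
                      for tu, tv in (radial_template(c, xs, ys) for c in candidates)]
            return candidates[int(np.argmax(scores))]

        # Illustrative scene: flow produced by forward translation with the FOE at (0.2, -0.1)
        xs, ys = np.meshgrid(np.linspace(-1, 1, 32), np.linspace(-1, 1, 32))
        flow_u, flow_v = radial_template((0.2, -0.1), xs, ys)
        candidates = [(x, y) for x in np.linspace(-0.5, 0.5, 11)
                             for y in np.linspace(-0.5, 0.5, 11)]
        best = estimate_heading(flow_u, flow_v, xs, ys, candidates)
        print(f"estimated heading: ({best[0]:.2f}, {best[1]:.2f})")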