
    Attractor Metadynamics in Adapting Neural Networks

    Slow adaptation processes, like synaptic and intrinsic plasticity, abound in the brain and shape the landscape for the neural dynamics occurring on substantially faster timescales. At any given time the network is characterized by a set of internal parameters, which are adapting continuously, albeit slowly. This set of parameters defines the number and the locations of the respective adiabatic attractors. The slow evolution of the network parameters hence induces an evolving attractor landscape, a process which we term attractor metadynamics. We study the nature of the metadynamics of the attractor landscape for several continuous-time autonomous model networks. We find both first- and second-order changes in the location of adiabatic attractors and argue that the study of the continuously evolving attractor landscape constitutes a powerful tool for understanding the overall development of the neural dynamics.
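    The mechanism this abstract describes (fast dynamics relaxing to attractors whose locations shift as slow parameters adapt) can be illustrated with a minimal toy system. The model below is our own illustrative assumption, not one of the paper's networks: a single self-coupled rate neuron x' = -x + tanh(w*x), where the gain w plays the role of the slowly adapting parameter. While w stays below 1 the only adiabatic attractor sits at the origin; as w crosses 1 a pitchfork bifurcation splits it into two attractors whose positions then drift continuously with w, a second-order change of the kind the abstract mentions.

    ```python
    import math

    def fixed_points(w, span=3.0, grid=2001):
        """Stable fixed points of x' = -x + tanh(w*x), found by scanning
        a grid for sign changes of the right-hand side and bisecting each."""
        f = lambda x: -x + math.tanh(w * x)
        xs = [-span + 2 * span * i / (grid - 1) for i in range(grid)]
        stable = []
        for a, b in zip(xs, xs[1:]):
            if f(a) == 0.0 or f(a) * f(b) < 0:
                lo, hi = a, b
                for _ in range(60):          # bisection refinement
                    mid = 0.5 * (lo + hi)
                    if f(lo) * f(mid) <= 0:
                        hi = mid
                    else:
                        lo = mid
                x_star = 0.5 * (lo + hi)
                # Linearization: d/dx(-x + tanh(w*x)) = -1 + w / cosh(w*x)^2;
                # the fixed point is an attractor when this slope is negative.
                if -1 + w / math.cosh(w * x_star) ** 2 < 0:
                    stable.append(round(x_star, 4))
        return sorted(set(stable))

    # Slowly increasing w (the "adapting parameter") deforms the landscape:
    # one attractor at the origin for w < 1, two symmetric attractors beyond.
    for w in (0.5, 1.5, 2.0):
        print(f"w = {w}: attractors at {fixed_points(w)}")
    ```

    Sweeping w in small steps and recording `fixed_points(w)` at each step traces out exactly the kind of evolving attractor landscape the abstract calls metadynamics, here for the simplest possible network.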

    Evolutionary Robotics: a new scientific tool for studying cognition

    We survey developments in Artificial Neural Networks, Behaviour-based Robotics and Evolutionary Algorithms that set the stage for Evolutionary Robotics in the 1990s. We examine the motivations for using ER as a scientific tool for studying minimal models of cognition, with the advantage of being capable of generating integrated sensorimotor systems with minimal (or controllable) prejudices. These systems must act as a whole in close coupling with their environments, which is an essential aspect of real cognition that is often either bypassed or modelled poorly in other disciplines. We demonstrate this with three example studies: homeostasis under visual inversion; the origins of learning; and the ontogenetic acquisition of entrainment.

    Evolutionary robotics and neuroscience

    No description supplied.

    Knowledge-based vision and simple visual machines

    The vast majority of work in machine vision emphasizes the representation of perceived objects and events: it is these internal representations that incorporate the 'knowledge' in knowledge-based vision or form the 'models' in model-based vision. In this paper, we discuss simple machine vision systems developed by artificial evolution rather than traditional engineering design techniques, and note that the task of identifying internal representations within such systems is made difficult by the lack of an operational definition of representation at the causal mechanistic level. Consequently, we question the nature and indeed the existence of representations posited to be used within natural vision systems (i.e. animals). We conclude that representations argued for on a priori grounds by external observers of a particular vision system may well be illusory, and are at best place-holders for yet-to-be-identified causal mechanistic interactions. That is, applying the knowledge-based vision approach in the understanding of evolved systems (machines or animals) may well lead to theories and models that are internally consistent, computationally plausible, and entirely wrong.

    Enaction-Based Artificial Intelligence: Toward Coevolution with Humans in the Loop

    This article deals with the links between the enaction paradigm and artificial intelligence. Enaction is considered a metaphor for artificial intelligence, as a number of the notions it deals with are deemed incompatible with the phenomenal field of the virtual. After explaining this stance, we review previous work on this issue in artificial life and robotics. We focus on the lack of recognition of co-evolution at the heart of these approaches. We propose to explicitly integrate the evolution of the environment into our approach in order to refine the ontogenesis of the artificial system, and to compare it with the enaction paradigm. The growing complexity of the ontogenetic mechanisms to be activated can therefore be compensated by an interactive guidance system emanating from the environment. This proposition does not, however, resolve the question of the relevance of the meaning created by the machine (sense-making). Such reflections lead us to integrate human interaction into this environment in order to construct relevant meaning in terms of participative artificial intelligence. This raises a number of questions with regard to setting up an enactive interaction. The article concludes by exploring a number of issues, thereby enabling us to associate current approaches with the principles of morphogenesis, guidance, the phenomenology of interactions and the use of minimal enactive interfaces in setting up experiments which will deal with the problem of artificial intelligence in a variety of enaction-based ways.