
    Interaction Histories and Short-Term Memory: Enactive Development of Turn-Taking Behaviours in a Childlike Humanoid Robot

    In this article, an enactive architecture is described that allows a humanoid robot to learn to compose simple actions into turn-taking behaviours while playing interaction games with a human partner. The robot’s action choices are reinforced by social feedback from the human in the form of visual attention and measures of behavioural synchronisation. We demonstrate that the system can acquire and switch between behaviours learned through interaction based on social feedback from the human partner. The role of reinforcement based on a short-term memory of the interaction was experimentally investigated. Results indicate that feedback based only on the immediate experience was insufficient to learn longer, more complex turn-taking behaviours. Therefore, some history of the interaction must be considered in the acquisition of turn-taking, which can be efficiently handled through the use of short-term memory.
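
    The architecture itself is not reproduced in this record; the sketch below is a minimal, hypothetical illustration of the core mechanism described: action choices reinforced by a scalar social-feedback signal, with credit spread over a short-term memory of recent actions rather than the last action alone. All class, action, and parameter names are illustrative assumptions.

from collections import deque
import random

# Hypothetical sketch: action preferences reinforced by social feedback,
# with credit assigned over a short-term memory of the recent interaction.
ACTIONS = ["wave", "nod", "wait", "vocalise"]

class TurnTakingLearner:
    def __init__(self, memory_len=4, lr=0.1, epsilon=0.2):
        self.values = {a: 0.0 for a in ACTIONS}  # preference per action
        self.memory = deque(maxlen=memory_len)   # short-term interaction history
        self.lr, self.epsilon = lr, epsilon

    def choose(self):
        # Epsilon-greedy choice over current action preferences.
        if random.random() < self.epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(self.values, key=self.values.get)
        self.memory.append(action)
        return action

    def reinforce(self, social_feedback):
        # social_feedback: scalar derived from the partner's visual attention
        # and behavioural synchrony. Crediting every action still held in
        # short-term memory (not just the last one) is what allows longer
        # turn-taking sequences to be acquired.
        for action in self.memory:
            self.values[action] += self.lr * (social_feedback - self.values[action])

    With memory_len=1 the same sketch reduces to reinforcement from immediate experience only, which is the contrast the experiments above investigate.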

    Enaction-Based Artificial Intelligence: Toward Coevolution with Humans in the Loop

    This article deals with the links between the enaction paradigm and artificial intelligence. Enaction is considered a metaphor for artificial intelligence, as a number of the notions it deals with are deemed incompatible with the phenomenal field of the virtual. After explaining this stance, we review previous work on this issue in artificial life and robotics, focusing on the lack of recognition of co-evolution at the heart of these approaches. We propose to explicitly integrate the evolution of the environment into our approach in order to refine the ontogenesis of the artificial system, and to compare it with the enaction paradigm. The growing complexity of the ontogenetic mechanisms to be activated can therefore be compensated by an interactive guidance system emanating from the environment. This proposition does not, however, resolve the question of the relevance of the meaning created by the machine (sense-making). Such reflections lead us to integrate human interaction into this environment in order to construct relevant meaning in terms of participative artificial intelligence. This raises a number of questions with regard to setting up an enactive interaction. The article concludes by exploring a number of issues, thereby enabling us to associate current approaches with the principles of morphogenesis, guidance, the phenomenology of interactions, and the use of minimal enactive interfaces in setting up experiments that will address the problem of artificial intelligence in a variety of enaction-based ways.

    From cognitivism to autopoiesis: towards a computational framework for the embodied mind

    Predictive processing (PP) approaches to the mind are increasingly popular in the cognitive sciences. This surge of interest is accompanied by a proliferation of philosophical arguments, which seek to either extend or oppose various aspects of the emerging framework. In particular, the question of how to position predictive processing with respect to enactive and embodied cognition has become a topic of intense debate. While these arguments are certainly of valuable scientific and philosophical merit, they risk underestimating the variety of approaches gathered under the predictive label. Here, we first present a basic review of neuroscientific, cognitive, and philosophical approaches to PP, to illustrate how these range from solidly cognitivist applications—with a firm commitment to modular, internalistic mental representation—to more moderate views emphasizing the importance of ‘body-representations’, and finally to those which fit comfortably with radically enactive, embodied, and dynamic theories of mind. Any nascent predictive processing theory (e.g., of attention or consciousness) must take into account this continuum of views, and associated theoretical commitments. As a final point, we illustrate how the Free Energy Principle (FEP) attempts to dissolve tension between internalist and externalist accounts of cognition, by providing a formal synthetic account of how internal ‘representations’ arise from autopoietic self-organization. The FEP thus furnishes empirically productive process theories (e.g., predictive processing) by which to guide discovery through the formal modelling of the embodied mind
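
    For reference, the quantity minimized under the FEP is the variational free energy, which in standard notation (not taken from this record) can be written as

        F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
          = D_{\mathrm{KL}}\big[q(s) \,\|\, p(s \mid o)\big] - \ln p(o),

    where q(s) is an approximate posterior over hidden states s and p(o, s) is the agent's generative model of observations o and states. Decreasing F simultaneously raises a bound on model evidence \ln p(o) and pulls q(s) toward the true posterior, which is the sense in which internal 'representations' can be said to arise from self-organizing free-energy minimization.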

    Self-efficacy: Toward a unifying theory of behavioral change.


    Building machines that learn and think about morality

    Lake et al. propose three criteria which, they argue, will bring artificial intelligence (AI) systems closer to human cognitive abilities. In this paper, we explore the application of these criteria to a particular domain of human cognition: our capacity for moral reasoning. In doing so, we explore a set of considerations relevant to the development of AI moral decision-making. Our main focus is on the relation between dual-process accounts of moral reasoning and model-free/model-based forms of machine learning. We also discuss how work in embodied and situated cognition could provide a valuable perspective on future research.
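
    The model-free/model-based distinction referred to above can be made concrete with a minimal, hypothetical sketch (names and signatures are illustrative, not taken from the paper): a cached, habit-like value update versus an on-demand evaluation using an explicit model of consequences.

# Hypothetical sketch of the model-free / model-based distinction.
# Model-free: cached action values nudged toward experienced reward (fast, habit-like),
# loosely analogous to the intuitive branch of dual-process accounts.
def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    best_next = max(Q[s_next].values()) if Q[s_next] else 0.0
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

# Model-based: values computed on demand from an explicit model of outcomes
# (slower, deliberative), loosely analogous to controlled moral reasoning.
def model_based_value(transition_model, reward_model, s, a):
    # transition_model(s, a) -> {next_state: probability}; reward_model(next_state) -> float
    return sum(p * reward_model(s_next)
               for s_next, p in transition_model(s, a).items())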

    Learning action-oriented models through active inference

    Converging theories suggest that organisms learn and exploit probabilistic models of their environment. However, it remains unclear how such models can be learned in practice. The open-ended complexity of natural environments means that it is generally infeasible for organisms to model their environment comprehensively. Alternatively, action-oriented models attempt to encode a parsimonious representation of adaptive agent-environment interactions. One approach to learning action-oriented models is to learn online in the presence of goal-directed behaviours. This constrains an agent to behaviourally relevant trajectories, reducing the diversity of the data a model needs to account for. Unfortunately, this approach can cause models to converge prematurely on sub-optimal solutions, through a process we refer to as a bad bootstrap. Here, we exploit the normative framework of active inference to show that efficient action-oriented models can be learned by balancing goal-directed and epistemic (information-seeking) behaviours in a principled manner. We illustrate our approach using a simple agent-based model of bacterial chemotaxis. We first demonstrate that learning via goal-directed behaviour indeed constrains models to behaviourally relevant aspects of the environment, but that this approach is prone to sub-optimal convergence. We then demonstrate that epistemic behaviours facilitate the construction of accurate and comprehensive models, but that these models are not tailored to any specific behavioural niche and are therefore less efficient in their use of data. Finally, we show that active inference agents learn models that are parsimonious, tailored to action, and which avoid bad bootstraps and sub-optimal convergence. Critically, our results indicate that models learned through active inference can support adaptive behaviour in spite of, and indeed because of, their departure from veridical representations of the environment. Our approach provides a principled method for learning adaptive models from limited interactions with an environment, highlighting a route to sample-efficient learning algorithms.
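
    The balance described above between goal-directed (pragmatic) and information-seeking (epistemic) behaviour is often operationalized by scoring candidate actions with an expected free energy. The sketch below is a minimal, hypothetical rendering of that idea; the function names, distributions, and scoring details are assumptions, not the paper's code.

import numpy as np

# Hypothetical sketch: score candidate actions by expected free energy,
# trading off divergence from preferred outcomes (pragmatic value) against
# expected information gain about hidden states (epistemic value).
def expected_free_energy(predicted_obs, preferred_obs, expected_info_gain):
    pragmatic = np.sum(predicted_obs *
                       (np.log(predicted_obs + 1e-12) - np.log(preferred_obs + 1e-12)))
    return pragmatic - expected_info_gain

def select_action(candidates):
    # candidates: iterable of (action, predicted_obs, preferred_obs, info_gain)
    scored = [(expected_free_energy(p, c, g), a) for a, p, c, g in candidates]
    return min(scored, key=lambda t: t[0])[1]

    Setting expected_info_gain to zero recovers a purely goal-directed learner of the kind the abstract reports as prone to bad bootstraps; weighting only the epistemic term recovers the exploratory learner that is accurate but less efficient in its use of data.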

    Using enactive robotics to think outside of the problem-solving box: How sensorimotor contingencies constrain the forms of emergent autonomous habits

    We suggest that the influence of biology in 'biologically inspired robotics' can be embraced at a deeper level than is typical, if we adopt an enactive approach that moves the focus of interest from how problems are solved to how problems emerge in the first place. In addition to being inspired by mechanisms found in natural systems or by evolutionary design principles directed at solving problems posited by the environment, we can take inspiration from the precarious, self-maintaining organization of living systems to investigate forms of cognition that are also precarious and self-maintaining and that thus also, like life, have their own problems that must be addressed if they are to persist. In this vein, we use a simulation to explore precarious, self-reinforcing sensorimotor habits as a building block for a robot's behavior. Our simulations of simple robots controlled by an Iterative Deformable Sensorimotor Medium demonstrate the spontaneous emergence of different habits, their re-enactment, and the organization of an ecology of habits within each agent. The form of the emergent habits is constrained by the sensory modality of the robot, such that habits formed under one modality (vision) are more similar to each other than they are to habits formed under another (audition). We discuss these results in the wider context of: (a) enactive approaches to life and mind, (b) sensorimotor contingency theory, (c) adaptationist vs. structuralist explanations in biology, and (d) the limits of functionalist problem-solving approaches to (artificial) intelligence. This work was supported in part via funding from the Digital Life Institute, University of Auckland. XB acknowledges funding from the Spanish Ministry of Science and Innovation for the research project Outonomy PID2019-104576GB-I00 and IAS-Research group funding IT1668-22 from the Basque Government.
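
    The Iterative Deformable Sensorimotor Medium is not reproduced here; the following is a strongly simplified, hypothetical sketch of the habit-forming principle it illustrates: enacted trajectories through sensorimotor space deposit weighted nodes, revisiting a region reinforces the nodes there, and the accumulated nodes bias future motor activity back toward past trajectories, so frequently enacted patterns become self-reinforcing habits. All names and constants below are assumptions.

import numpy as np

# Strongly simplified, hypothetical habit medium (inspired by, not reproducing, the IDSM):
# each node stores a visited point in sensorimotor space, the velocity enacted there,
# and a weight that grows when the region is revisited and decays otherwise.
class HabitMedium:
    def __init__(self, radius=0.3, reinforce=0.05, decay=0.001):
        self.nodes = []  # list of (position, velocity, weight)
        self.radius, self.reinforce, self.decay = radius, reinforce, decay

    def step(self, sm_state, enacted_velocity):
        influence = np.zeros_like(enacted_velocity, dtype=float)
        total = 0.0
        surviving = []
        for pos, vel, w in self.nodes:
            if np.linalg.norm(sm_state - pos) < self.radius:
                w += self.reinforce        # revisiting a region reinforces its habit
                influence += w * vel
                total += w
            w -= self.decay                # unused habits slowly decay away
            if w > 0:
                surviving.append((pos, vel, w))
        self.nodes = surviving
        self.nodes.append((np.array(sm_state, dtype=float),
                           np.array(enacted_velocity, dtype=float), 1.0))
        # Suggested motor velocity: weighted blend of velocities enacted near this state.
        return influence / total if total > 0 else enacted_velocity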

    Robotic sensorimotor interaction strategies

    Abstract. In this thesis we investigate the mathematical modeling of cognition using sensorimotor transition systems. The focus of the thesis is enactivism, where an agent learns to think through actions. As a theoretical basis for our implementation, we discuss a mathematical model of enactivist cognition and sensorimotor interaction, and how they can be used as algorithmic aids for studying theoretical problems in robotic systems. In Chapter 3 of this thesis, we introduce a platform developed at the University of Oulu as a software project, and explain how enactivism and sensorimotor interaction were exploited in developing this 2D platform. The platform makes it possible to concretely implement and explore different interaction strategies that allow an agent to construct internal models of its surroundings. The agent in the platform is a multi-jointed robotic arm, which maneuvers through an obstacle-filled environment. The robotic arm tries to explore its environment with minimal sensory feedback, using algorithms created by the user of the platform. Our main goal in this thesis is to implement new features for this platform. We implement a memory functionality that allows the robotic arm to store all of its performed actions. The memory helps the agent infer more about its surroundings from a limited sequence of action-observation pairs and gain a better grasp of the environment. In addition, we implement other methods and functionalities, such as an obstacle sensor and a graph visualization of the internal models, to enhance the perceptual ability of the robotic arm. In Section 5, we develop an algorithm for a simple 2D environment with no obstacles. Here the robotic arm makes a 360-degree move in four steps to perceive its surroundings and generates a state-machine graph to visualize its internal model of the environment. The goal of the algorithm is to build an accurate representation of the environment with the help of memory. Through this algorithm we are able to evaluate the performance of the newly implemented features. We also test the platform with unit tests to find and resolve bugs.
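
    The memory functionality and graph visualization described above are not given in code in this record; the sketch below is a hypothetical illustration of the idea: log every action-observation pair, accumulate state transitions, and export the resulting internal model as a graph. The names and the Graphviz export format are assumptions.

# Hypothetical sketch of the memory feature: the agent records action-observation
# pairs and compiles them into a state-transition graph serving as its internal model.
class InteractionMemory:
    def __init__(self):
        self.history = []      # chronological (action, observation) pairs
        self.transitions = {}  # (state, action) -> resulting state

    def record(self, state, action, observation, next_state):
        self.history.append((action, observation))
        self.transitions[(state, action)] = next_state

    def to_dot(self):
        # Export the internal model as a Graphviz digraph for visualization.
        lines = ["digraph internal_model {"]
        for (state, action), nxt in self.transitions.items():
            lines.append(f'  "{state}" -> "{nxt}" [label="{action}"];')
        lines.append("}")
        return "\n".join(lines)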