
    The Ecology of Open-Ended Skill Acquisition: Computational framework and experiments on the interactions between environmental, adaptive, multi-agent and cultural dynamics

    An intriguing feature of the human species is our ability to continuously invent new problems and to proactively acquire new skills in order to solve them: what is called open-ended skill acquisition (OESA). Understanding the mechanisms underlying OESA is an important scientific challenge both in cognitive science (e.g. by studying infant cognitive development) and in artificial intelligence (aiming at computational architectures capable of open-ended learning). Both fields, however, mostly focus on cognitive and social mechanisms at the scale of an individual's life. It is rarely acknowledged that OESA, an ability that is fundamentally related to the characteristics of human intelligence, has necessarily been shaped by ecological, evolutionary and cultural mechanisms interacting at multiple spatiotemporal scales. In this thesis, I present a research program aiming at understanding, modeling and simulating the dynamics of OESA in artificial systems, grounded in theories studying its eco-evolutionary bases in the human species. It relies on a conceptual framework expressing the complex interactions between environmental, adaptive, multi-agent and cultural dynamics. Three main research questions are developed, and I present a selection of my contributions for each of them:
    - What are the ecological conditions favoring the evolution of skill acquisition?
    - How can the formation of a cultural repertoire be bootstrapped in populations of adaptive agents?
    - What is the role of cultural evolution in the open-ended dynamics of human skill acquisition?
    By developing these topics, we will reveal interesting relationships between theories in human evolution and recent approaches in artificial intelligence. This will lead to the proposition of a humanist perspective on AI: using it as a family of computational tools that can help us explore and study the mechanisms driving open-ended skill acquisition in both artificial and biological systems, as a way to better understand the dynamics of our own species within its whole ecological context. This document presents an overview of my scientific trajectory since the start of my PhD thesis in 2007, the details of my current research program, a selection of my contributions, as well as perspectives for future work.

    my Human Brain Project (mHBP)

    How can we make an agent that thinks like us humans? An agent that can have proprioception, intrinsic motivation, identify deception, use small amounts of energy, transfer knowledge between tasks and evolve? This is the problem that this thesis focuses on. Being able to create a piece of software that can perform tasks like a human being is a goal that, if achieved, will allow us to extend our own capabilities to a very high level and have more tasks performed in a predictable fashion. This is one of the motivations for this thesis. To address this problem, we propose a modular architecture for Reinforcement Learning computation and develop an implementation to exercise this architecture. This software, which we call mHBP, is created in Python using Webots as an environment for the agent, and Neo4J, a graph database, as memory. mHBP takes sensory data or other inputs and produces, based on the body parts / tools that the agent has available, an output consisting of actions to perform. This thesis involves experimental design with several iterations, exploring a theoretical approach to RL based on graph databases. We conclude, with our work in this thesis, that it is possible to represent episodic data in a graph, and that it is also possible to interconnect Webots, Python and Neo4J to support a stable architecture for Reinforcement Learning. In this work we also find a way to search for policies using the Neo4J query language, Cypher. Another key conclusion of this work is that state representation needs further research to find a state definition that enables policy search to produce more useful policies. The article "REINFORCEMENT LEARNING: A LITERATURE REVIEW (2020)" on ResearchGate (doi: 10.13140/RG.2.2.30323.76327) is an outcome of this thesis.
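    As an illustration of the policy search described above, here is a minimal, hypothetical sketch of how episodic transitions stored in Neo4J might be queried from Python with Cypher. The connection details, the node label (:State), the relationship type [:ACTION], and the properties name and reward are all assumptions made for illustration; the abstract does not specify the actual schema used by mHBP.

```python
# Hypothetical sketch: recover a greedy policy fragment from an episodic
# graph stored in Neo4J. The schema (labels, relationship types, properties)
# is assumed for illustration, not taken from the thesis.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Find the action path from a start state to a goal state that maximizes
# the summed reward along its relationships.
QUERY = """
MATCH path = (s:State {name: $start})-[:ACTION*1..10]->(g:State {name: $goal})
WITH path, reduce(r = 0.0, a IN relationships(path) | r + a.reward) AS total
RETURN [a IN relationships(path) | a.name] AS actions, total
ORDER BY total DESC
LIMIT 1
"""

with driver.session() as session:
    record = session.run(QUERY, start="s0", goal="s_goal").single()
    if record is not None:
        print(record["actions"], record["total"])
driver.close()
```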

    The active inference approach to ecological perception: general information dynamics for natural and artificial embodied cognition

    The emerging neurocomputational vision of humans as embodied, ecologically embedded, social agents—who shape and are shaped by their environment—offers a golden opportunity to revisit and revise ideas about the physical and information-theoretic underpinnings of life, mind, and consciousness itself. In particular, the active inference framework (AIF) makes it possible to bridge connections from computational neuroscience and robotics/AI to ecological psychology and phenomenology, revealing common underpinnings and overcoming key limitations. AIF opposes the mechanistic to the reductive, while staying fully grounded in a naturalistic and information-theoretic foundation, using the principle of free energy minimization. The latter provides a theoretical basis for a unified treatment of particles, organisms, and interactive machines, spanning from the inorganic to the organic, from non-life to life, and from natural to artificial agents. We provide a brief introduction to AIF, then explore its implications for evolutionary theory, ecological psychology, embodied phenomenology, and robotics/AI research. We conclude the paper by considering implications for machine consciousness.
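    For reference, the quantity that AIF agents minimize is the variational free energy; the following is the standard formulation from the active inference literature, not an equation given in this paper.

```latex
% Standard variational free energy: an upper bound on surprise, -ln p(o),
% since the KL divergence term is non-negative.
F \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  \;=\; D_{\mathrm{KL}}\!\left[\, q(s) \,\|\, p(s \mid o) \,\right] \;-\; \ln p(o)
```

    Minimizing F with respect to the approximate posterior q(s) corresponds to perception; minimizing it with respect to actions that change future observations o is the "active" part of active inference.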

    Boosting children's creativity through creative interactions with social robots

    Creativity is an ability with psychological and developmental benefits. Creativity levels are dynamic and oscillate throughout life, with a first major decline occurring at the age of 7. However, creativity is an ability that can be nurtured if trained, with evidence suggesting an increase in this ability with the use of validated creativity training. Yet, creativity training for young children (aged 6-9) is scarce. Additionally, existing training interventions resemble test-like formats and lack the playful dynamics that could engage children in creative practices over time. This PhD project aimed to contribute to the stimulation of creativity in children by using social robots as intervention tools, thus adding playful and interactive dynamics to the training. Towards this goal, we conducted three studies in schools, summer camps, and museums for children, which contributed to the design, fabrication, and experimental testing of a robot whose purpose was to re-balance creativity levels. Study 1 (n = 140) tested the effect of existing activities with robots on creativity and provided initial evidence of the positive potential of robots for creativity training. Study 2 (n = 134) included children as co-designers of the robot, ensuring that the robot's design meets children's needs and requirements. Study 3 (n = 130) investigated the effectiveness of this robot as a tool for creativity training, showing the potential of robots as creativity intervention tools. In sum, this PhD showed that robots can have a positive effect on boosting children's creativity. This positions social robots as promising tools for psychological interventions.

    Intrinsic Motivation in Computational Creativity Applied to Videogames

    Computational creativity (CC) seeks to endow artificial systems with creativity. Although human creativity is known to be substantially driven by intrinsic motivation (IM), most CC systems are extrinsically motivated. This restricts their actual and perceived creativity and autonomy, and consequently their benefit to people. In this thesis, we demonstrate, via theoretical arguments and through applications in videogame AI, that computational intrinsic reward and models of IM can advance core CC goals. We introduce a definition of IM to contextualise related work. Via two systematic reviews, we develop typologies of the benefits and applications of intrinsic reward and IM models in CC and game AI. Our reviews highlight that related work is limited to few reward types and motivations, and we thus investigate the usage of empowerment, a little-studied information-theoretic intrinsic reward, in two novel models applied to game AI. We define coupled empowerment maximisation (CEM), a social IM model, to enable general co-creative agents that support or challenge their partner through emergent behaviours. Via two qualitative, observational vignette studies on a custom-made videogame, we explore CEM's ability to drive general and believable companion and adversary non-player characters which respond creatively to changes in their abilities and the game world. We moreover propose to leverage intrinsic reward to estimate people's experience of interactive artefacts in an autonomous fashion. We instantiate this proposal in empowerment-based player experience prediction (EBPXP) and apply it to videogame procedural content generation. By analysing think-aloud data from an experiential vignette study on a dedicated game, we identify several experiences that EBPXP could predict. Our typologies serve as inspiration and reference for CC and game AI researchers to harness the benefits of IM in their work. Our new models can increase the generality, autonomy and creativity of next-generation videogame AI, and of CC systems in other domains.
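    For context, the empowerment reward investigated in this thesis is standardly defined in the literature (Klyubin et al.) as a channel capacity between an agent's actions and its future sensor states; the exact objectives used by CEM and EBPXP are not given in the abstract, so the following is the generic n-step formulation.

```latex
% Standard n-step empowerment: the channel capacity from a sequence of n
% actions a_t^n to the resulting sensor state S_{t+n}, given state s_t.
\mathfrak{E}(s_t) \;=\; \max_{p(a_t^n)} \; I\!\left( A_t^n ;\, S_{t+n} \,\middle|\, s_t \right)
```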

    Design and training of deep reinforcement learning agents

    Deep reinforcement learning is a field of research at the intersection of reinforcement learning and deep learning. On one side, the problem that researchers address is the one of reinforcement learning: to act efficiently. A large number of algorithms were developed decades ago in this field to update value functions and policies, explore, and plan. On the other side, deep learning methods provide powerful function approximators to address the problem of representing functions such as policies, value functions, and models. The combination of ideas from these two fields offers exciting new perspectives. However, building successful deep reinforcement learning experiments is particularly difficult due to the large number of elements that must be combined and adjusted appropriately. This thesis proposes a broad overview of the organization of these elements around three main axes: agent design, environment design, and infrastructure design. Arguably, the success of deep reinforcement learning research is due to the tremendous amount of effort that went into each of them, both from a scientific and engineering perspective, and to their diffusion via open source repositories. For each of these three axes, a dedicated part of the thesis describes a number of related works that were carried out during the doctoral research.

    The first part, devoted to the design of agents, presents two works. The first one addresses the problem of applying discrete-action methods to large multidimensional action spaces. A general method called action branching is proposed, and its effectiveness is demonstrated with a novel agent, named BDQ, applied to discretized continuous action spaces. The second work deals with the problem of maximizing the utility of a single transition when learning to achieve a large number of goals. In particular, it focuses on learning to reach spatial locations in games and proposes a new method called Q-map to do so efficiently. An exploration mechanism based on this method is then used to demonstrate the effectiveness of goal-directed exploration. Elements of these works cover some of the main building blocks of agents: update methods, neural architectures, exploration strategies, replays, and hierarchy.

    The second part, devoted to the design of environments, also presents two works. The first one shows how various tasks and demonstrations can be combined to learn complex skill spaces that can then be reused to solve even more challenging tasks. The proposed method, called CoMic, extends previous work on motor primitives by using a single multi-clip motion capture tracking task in conjunction with complementary tasks targeting out-of-distribution movements. The second work addresses a particular type of control method vastly neglected in traditional environments but essential for animals: muscle control. An open source codebase called OstrichRL is proposed, containing a musculoskeletal model of an ostrich, an ensemble of tasks, and motion capture data. The results obtained by training a state-of-the-art agent on the proposed tasks show that controlling such a complex system is very difficult and illustrate the importance of using motion capture data. Elements of these works demonstrate the meticulous work that must go into designing environment parts such as models, observations, rewards, terminations, resets, steps, and demonstrations.

    The third part, on the design of infrastructures, presents three works.
    The first one explains the difference between the types of time limits commonly used in reinforcement learning and why they are often treated inappropriately. In one case, tasks are time-limited by nature, and a notion of time should be available to agents to maintain the Markov property of the underlying decision process. In the other case, tasks are not time-limited by nature, but time limits are used for convenience to diversify experiences. This is the most common case. It requires a distinction between time limits and environmental terminations, and bootstrapping should be performed at the end of partial episodes. The second work proposes to unify the most popular deep learning frameworks using a single library called Ivy, and provides new differentiable and framework-agnostic libraries built with it. Four such code bases are provided, for gradient-based robot motion planning, mechanics, 3D vision, and differentiable continuous control environments. Finally, the third work proposes a novel deep reinforcement learning library, called Tonic, built with simplicity and modularity in mind to accelerate prototyping and evaluation. In particular, it contains implementations of several continuous control agents and a large-scale benchmark. Elements of these works illustrate the different components to consider when building the infrastructure for an experiment: deep learning framework, schedules, and distributed training. Added to these are the various ways to perform evaluations and analyze results for meaningful, interpretable, and reproducible deep reinforcement learning research.
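    To make the time-limit distinction concrete, here is a minimal sketch of the bootstrapping rule, assuming a Gymnasium-style step API that separates environmental termination from time-limit truncation; the function and variable names are illustrative, not taken from the thesis.

```python
def td_target(reward, terminated, truncated, next_value, gamma=0.99):
    """One-step TD target that distinguishes the two kinds of episode end.

    terminated: the underlying MDP actually ended; no future value exists.
    truncated:  an artificial time limit cut the episode short; the task
                itself continues, so the estimated next value still counts.
    """
    if terminated:
        return reward                   # true terminal state: no bootstrapping
    # Either the episode continues, or it was merely truncated by a time
    # limit: bootstrap from the estimated value of the next state.
    return reward + gamma * next_value

# A transition cut off by a time limit still bootstraps:
print(td_target(1.0, terminated=False, truncated=True, next_value=5.0))  # 5.95
```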

    The Machine as Art / The Machine as Artist

    The articles collected in this volume from the two companion Arts Special Issues, "The Machine as Art (in the 20th Century)" and "The Machine as Artist (in the 21st Century)", represent a unique scholarly resource: analyses by artists, scientists, and engineers, as well as art historians, covering not only the current (and astounding) rapprochement between art and technology but also the vital post-World War II period that has led up to it. This collection is also distinguished by the fact that several of the contributors are prominent individuals within their own fields, or artists who have actually participated in the still-unfolding events with which it is concerned.
