31,713 research outputs found

    Getting Things Done: The Science behind Stress-Free Productivity

    Get PDF
    Allen (2001) proposed the “Getting Things Done” (GTD) method for enhancing personal productivity and reducing the stress caused by information overload. This paper argues that recent insights in psychology and cognitive science support and extend GTD’s recommendations. We first summarize GTD with the help of a flowchart. We then review the theories of situated, embodied and distributed cognition that purport to explain how the brain processes information and plans actions in the real world. The conclusion is that the brain relies heavily on the environment to function as an external memory, a trigger for actions, and a source of affordances, disturbances and feedback. We then show how these principles are practically implemented in GTD, with its focus on organizing tasks into “actionable” external memories, and on opportunistic, situation-dependent execution. Finally, we propose an extension of GTD to support collaborative work, inspired by the concept of stigmergy.
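
    The GTD workflow summarized by the paper's flowchart can be read as a small decision procedure for triaging captured items into external lists. Below is a minimal, hypothetical Python sketch of that triage logic; the list names, the Item fields, and the exact branching are illustrative simplifications, not definitions taken from the paper.

```python
# Hypothetical sketch of a GTD-style triage step: route a captured item to an
# external list ("external memory") based on a few yes/no questions.
from dataclasses import dataclass, field

@dataclass
class Item:
    description: str
    actionable: bool = False
    single_step: bool = True
    minutes: int = 0          # rough effort estimate
    delegable: bool = False

@dataclass
class GTDLists:
    reference_or_someday: list = field(default_factory=list)
    projects: list = field(default_factory=list)
    waiting_for: list = field(default_factory=list)
    next_actions: list = field(default_factory=list)
    done_now: list = field(default_factory=list)

def triage(item: Item, lists: GTDLists) -> None:
    """Route one captured item to the appropriate external list."""
    if not item.actionable:
        lists.reference_or_someday.append(item)   # file it, incubate it, or drop it
    elif not item.single_step:
        lists.projects.append(item)               # multi-step outcome: track as a project
    elif item.minutes <= 2:
        lists.done_now.append(item)               # "two-minute rule": do it immediately
    elif item.delegable:
        lists.waiting_for.append(item)            # delegate it and track the response
    else:
        lists.next_actions.append(item)           # defer it as a context-tagged next action
```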

    Computational and Robotic Models of Early Language Development: A Review

    Get PDF
    We review computational and robotics models of early language learning and development. We first explain why and how these models are used to better understand how children learn language. We argue that they provide concrete theories of language learning as a complex dynamic system, complementing traditional methods in psychology and linguistics. We review different modeling formalisms, grounded in techniques from machine learning and artificial intelligence such as Bayesian and neural network approaches. We then discuss their role in understanding several key mechanisms of language development: cross-situational statistical learning, embodiment, situated social interaction, intrinsically motivated learning, and cultural evolution. We conclude by discussing future challenges for research, including the modeling of large-scale empirical data about language acquisition in real-world environments. Keywords: early language learning, computational and robotic models, machine learning, development, embodiment, social interaction, intrinsic motivation, self-organization, dynamical systems, complexity. Comment: to appear in International Handbook on Language Development, ed. J. Horst and J. von Koss Torkildsen, Routledge.
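
    As a concrete illustration of one mechanism mentioned above, cross-situational statistical learning can be approximated by accumulating word-referent co-occurrence statistics across individually ambiguous scenes. The sketch below is a minimal, hypothetical count-based learner, not one of the specific models reviewed in the chapter.

```python
from collections import defaultdict

def cross_situational_learner(situations):
    """Guess word meanings from co-occurrence counts across ambiguous scenes.

    `situations` is an iterable of (words, referents) pairs: the words heard
    in a scene and the objects visible in it. Credit for each word is spread
    over all candidate referents; the referent with the highest accumulated
    score wins.
    """
    counts = defaultdict(lambda: defaultdict(float))
    for words, referents in situations:
        for w in words:
            for r in referents:
                counts[w][r] += 1.0 / len(referents)
    return {w: max(refs, key=refs.get) for w, refs in counts.items()}

# Toy usage: each word is ambiguous in any single scene, but unambiguous
# once evidence is pooled across scenes.
scenes = [({"ball", "dog"}, {"BALL", "DOG"}),
          ({"ball", "cup"}, {"BALL", "CUP"}),
          ({"dog"}, {"DOG"}),
          ({"cup"}, {"CUP"})]
print(cross_situational_learner(scenes))  # {'ball': 'BALL', 'dog': 'DOG', 'cup': 'CUP'}
```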

    Outline of a sensory-motor perspective on intrinsically moral agents

    Get PDF
    This is the accepted version of the following article: Christian Balkenius, Lola Cañamero, Philip Pärnamets, Birger Johansson, Martin V. Butz, and Andreas Olson, ‘Outline of a sensory-motor perspective on intrinsically moral agents’, Adaptive Behaviour, Vol. 24(5): 306-319, October 2016, which has been published in final form at DOI: https://doi.org/10.1177/1059712316667203, published by SAGE, © The Author(s) 2016. We propose that moral behaviour of artificial agents could (and should) be intrinsically grounded in their own sensory-motor experiences. Such an ability depends critically on seven types of competencies. First, intrinsic morality should be grounded in the internal values of the robot arising from its physiology and embodiment. Second, the moral principles of robots should develop through their interactions with the environment and with other agents. Third, we claim that the dynamics of moral (or social) emotions closely follows that of other non-social emotions used in valuation and decision making. Fourth, we explain how moral emotions can be learned from the observation of others. Fifth, we argue that to assess social interaction, a robot should be able to learn about and understand responsibility and causation. Sixth, we explain how mechanisms that can learn the consequences of actions are necessary for a robot to make moral decisions. Seventh, we describe how the moral evaluation mechanisms outlined can be extended to situations where a robot should understand the goals of others. Finally, we argue that these competencies lay the foundation for robots that can feel guilt, shame and pride, that have compassion, and that know how to assign responsibility and blame. Peer reviewed. Final Accepted Version.
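
    As a rough illustration of the sixth competency (learning the consequences of actions and using them in value-based choice), the sketch below pairs a simple empirical outcome model with a value function that penalizes harm to others. It is a hypothetical toy, not the architecture proposed by the authors.

```python
from collections import defaultdict

class ConsequenceLearner:
    """Learn P(outcome | state, action) from experience and choose actions by
    expected value, where the value function can encode a 'moral' cost."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, state, action, outcome):
        self.counts[(state, action)][outcome] += 1

    def expected_value(self, state, action, value):
        hist = self.counts[(state, action)]
        total = sum(hist.values())
        if total == 0:
            return 0.0                      # untried action: neutral prior
        return sum(n / total * value(o) for o, n in hist.items())

    def choose(self, state, actions, value):
        return max(actions, key=lambda a: self.expected_value(state, a, value))

# Illustrative value function: task reward minus a large penalty when the
# observed outcome harmed another agent.
def value(outcome):
    reward, harms_other = outcome
    return reward - (10.0 if harms_other else 0.0)

agent = ConsequenceLearner()
agent.observe("s0", "shove", (1.0, True))    # fast but harmful
agent.observe("s0", "wait", (0.5, False))    # slower but harmless
print(agent.choose("s0", ["shove", "wait"], value))  # 'wait'
```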

    Empowerment for Continuous Agent-Environment Systems

    Full text link
    This paper develops generalizations of empowerment to continuous states. Empowerment is a recently introduced information-theoretic quantity motivated by hypotheses about the efficiency of the sensorimotor loop in biological organisms, but also by considerations stemming from curiosity-driven learning. Empowerment measures, for agent-environment systems with stochastic transitions, how much influence an agent has on its environment, but only the influence that can be sensed by the agent's sensors. It is an information-theoretic generalization of joint controllability (influence on the environment) and observability (measurement by sensors) of the environment by the agent, both controllability and observability usually being defined in control theory as the dimensionality of the control/observation spaces. Earlier work has shown that empowerment has various interesting and relevant properties: for example, it allows us to identify salient states using only the dynamics, and it can act as an intrinsic reward without requiring an external reward. However, in this previous work empowerment was limited to small-scale and discrete domains, and state transition probabilities were assumed to be known. The goal of this paper is to extend empowerment to the significantly more important and relevant case of continuous, vector-valued state spaces and initially unknown state transition probabilities. The continuous state space is addressed by Monte-Carlo approximation; the unknown transitions are addressed by model learning and prediction, for which we apply Gaussian process regression with iterated forecasting. In a number of well-known continuous control tasks, we examine the dynamics induced by empowerment and include an application to exploration and online model learning.
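
    For readers unfamiliar with the quantity being generalized: in the discrete, known-model case, the one-step empowerment of a state is the channel capacity max_{p(a)} I(A; S') of the transition channel p(s'|s, a), which can be computed with the Blahut-Arimoto algorithm. The sketch below illustrates that discrete baseline; it is not the Monte-Carlo/Gaussian-process estimator developed in the paper.

```python
import numpy as np

def empowerment_bits(p_next, n_iter=200):
    """One-step empowerment of a fixed state s in the discrete, known-model
    case: the channel capacity max_{p(a)} I(A; S') of the transition channel,
    computed with the Blahut-Arimoto algorithm.

    p_next[a, s2] = P(S'=s2 | S=s, A=a); returns the capacity in bits.
    """
    n_actions = p_next.shape[0]
    p_a = np.full(n_actions, 1.0 / n_actions)         # start uniform over actions
    for _ in range(n_iter):
        p_s2 = p_a @ p_next                           # marginal over next states
        with np.errstate(divide="ignore", invalid="ignore"):
            log_ratio = np.where(p_next > 0, np.log2(p_next / p_s2), 0.0)
        d = (p_next * log_ratio).sum(axis=1)          # per-action KL in bits
        p_a *= np.exp2(d)                             # Blahut-Arimoto update
        p_a /= p_a.sum()
    p_s2 = p_a @ p_next
    with np.errstate(divide="ignore", invalid="ignore"):
        log_ratio = np.where(p_next > 0, np.log2(p_next / p_s2), 0.0)
    return float((p_a[:, None] * p_next * log_ratio).sum())

# Two actions with perfectly distinguishable outcomes -> 1 bit of empowerment;
# two actions with identical outcome distributions would yield 0 bits.
P = np.array([[1.0, 0.0],
              [0.0, 1.0]])
print(empowerment_bits(P))  # ~1.0
```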

    Linear combination of one-step predictive information with an external reward in an episodic policy gradient setting: a critical analysis

    Get PDF
    One of the main challenges in the field of embodied artificial intelligence is the open-ended autonomous learning of complex behaviours. Our approach is to use task-independent, information-driven intrinsic motivation(s) to support task-dependent learning. The work presented here is a preliminary step in which we investigate the predictive information (the mutual information between the past and the future of the sensor stream) as an intrinsic drive, ideally supporting any kind of task acquisition. Previous experiments have shown that the predictive information (PI) is a good candidate to support autonomous, open-ended learning of complex behaviours, because a maximisation of the PI corresponds to an exploration of morphology- and environment-dependent behavioural regularities. The idea is that these regularities can then be exploited in order to solve any given task. Three different experiments are presented, and their results lead to the conclusion that the linear combination of the one-step PI with an external reward function is not generally recommended in an episodic policy gradient setting. Only for hard tasks can a large speed-up be achieved, and then at the cost of a loss in asymptotic performance.
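
    For concreteness, the one-step predictive information of a sensor stream is the mutual information I(S_t; S_{t+1}) between consecutive sensor values, and the combination studied here has the form of an external return plus a weighted PI term. The sketch below is a crude plug-in estimator on a discretized scalar stream; the function names are hypothetical, and this is not the estimator or policy-gradient setup used in the paper.

```python
import numpy as np

def one_step_predictive_information(stream, n_bins=8):
    """Plug-in estimate of I(S_t; S_{t+1}) in bits for a scalar sensor stream,
    obtained by binning the values and counting consecutive pairs."""
    edges = np.histogram_bin_edges(stream, bins=n_bins)[1:-1]   # internal bin edges
    x = np.digitize(stream, edges)                              # symbols in 0..n_bins-1
    joint = np.zeros((n_bins, n_bins))
    for a, b in zip(x[:-1], x[1:]):
        joint[a, b] += 1.0
    joint /= joint.sum()
    p_past = joint.sum(axis=1, keepdims=True)
    p_future = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float((joint[mask] * np.log2(joint[mask] / (p_past @ p_future)[mask])).sum())

def combined_episode_return(external_return, stream, lam=0.1):
    """Linear combination of an episode's external return with a PI bonus."""
    return external_return + lam * one_step_predictive_information(stream)

# A smooth sine wave carries far more one-step PI than white noise.
t = np.linspace(0, 20 * np.pi, 2000)
print(one_step_predictive_information(np.sin(t)),
      one_step_predictive_information(np.random.randn(2000)))
```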

    The development of the five mini-theories of self-determination theory: an historical overview, emerging trends, and future directions

    Get PDF
    Self-determination theory is a macro-theory of human motivation, emotion, and personality that has been under development for 40 years following the seminal work of Edward Deci and Richard Ryan. Self-determination theory (SDT; Deci & Ryan, 1985b, 2000; Niemiec, Ryan, & Deci, in press; Ryan & Deci, 2000; Vansteenkiste, Ryan, & Deci, 2008) has been advanced in a cumulative, research-driven manner, as new ideas have been naturally and steadily integrated into the theory following sufficient empirical support, which has helped SDT maintain its internal consistency. To use a metaphor, the development of SDT is similar to the construction of a puzzle. Over the years, new pieces have been added to the theory once their fit was determined. At present, dozens of scholars throughout the world continue to add their piece to the ‘‘SDT puzzle,’’ and hundreds of practitioners working with all age groups, and in various domains and cultures, have used SDT to inform their practice. Herein, we provide an historical overview of the development of the five mini-theories (viz., cognitive evaluation theory, organismic integration theory, causality orientations theory, basic needs theory, and goal content theory) that constitute SDT, discuss emerging trends within those mini-theories, elucidate similarities with and differences from other theoretical frameworks, and suggest directions for future research.

    A transdisciplinary view on curiosity beyond linguistic humans: animals, infants, and artificial intelligence

    Get PDF
    Curiosity is a core driver for life-long learning, problem-solving and decision-making. In a broad sense, curiosity is defined as the intrinsically motivated acquisition of novel information. Despite a decades-long history of curiosity research and the earliest human theories arising from studies of laboratory rodents, curiosity has mainly been considered in two camps: ‘linguistic human’ and ‘other’. This is despite psychology being heritable, and there are many continuities in cognitive capacities across the animal kingdom. Boundary-pushing cross-disciplinary debates on curiosity are lacking, and the relative exclusion of pre-linguistic infants and non-human animals has led to a scientific impasse which more broadly impedes the development of artificially intelligent systems modelled on curiosity in natural agents. In this review, we synthesize literature across multiple disciplines that have studied curiosity in non-verbal systems. By highlighting how similar findings have been produced across the separate disciplines of animal behaviour, developmental psychology, neuroscience, and computational cognition, we discuss how this can be used to advance our understanding of curiosity. We propose, for the first time, how features of curiosity could be quantified and therefore studied more operationally across systems: across different species, developmental stages, and natural or artificial agents.
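
    One simple way to make such quantification concrete is to score each observation by how surprising it is under the agent's running empirical model, a measure that applies equally to animals, infants, and artificial agents. The sketch below uses a hypothetical surprisal-based novelty score; it illustrates the general idea, not the operational features proposed in the review.

```python
import math
from collections import Counter

def novelty_scores(observations):
    """Surprisal-based novelty: -log2 of each observation's smoothed
    probability under the empirical distribution of everything seen so far.
    Repeated stimuli become less novel; unseen stimuli spike the score."""
    counts, total, scores = Counter(), 0, []
    for obs in observations:
        p = (counts[obs] + 1) / (total + 2)   # crude Laplace-style smoothing
        scores.append(-math.log2(p))
        counts[obs] += 1
        total += 1
    return scores

# Habituation to 'A', then a novelty spike when 'B' appears.
print(novelty_scores(["A", "A", "A", "B"]))  # roughly [1.0, 0.58, 0.42, 2.32]
```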

