3,892 research outputs found

    Learning a world model and planning with a self-organizing, dynamic neural system

    We present a connectionist architecture that can learn a model of the relations between perceptions and actions and use this model for behavior planning. State representations are learned with a growing self-organizing layer which is directly coupled to a perception and a motor layer. Knowledge about possible state transitions is encoded in the lateral connectivity. Motor signals modulate this lateral connectivity and a dynamic field on the layer organizes a planning process. All mechanisms are local and adaptation is based on Hebbian ideas. The model is continuous in the action, perception, and time domains. Comment: 9 pages, see http://www.marc-toussaint.net
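    The core loop the abstract describes, learning transitions into action-modulated lateral weights with a local Hebbian rule, then spreading a planning "field" back from a goal, can be sketched in miniature. This is our own toy stand-in (a discrete ring of state units and two actions), not the authors' continuous architecture; all names and parameters here are assumptions.

    ```python
    import numpy as np

    n_states, actions = 8, (+1, -1)          # ring of 8 state units, step right/left
    W = {a: np.zeros((n_states, n_states)) for a in actions}

    # Exploration phase: a Hebbian-style update strengthens the lateral link
    # between the unit active before an action and the unit active after it.
    rng = np.random.default_rng(1)
    s = 0
    for _ in range(500):
        a = actions[rng.integers(2)]
        s_next = (s + a) % n_states
        W[a][s, s_next] += 1.0               # local increment, no global error signal
        s = s_next

    # Planning phase: relax a value "field" backwards from the goal through the
    # learned lateral connectivity, then greedily climb its gradient.
    def plan(start, goal, max_steps=20):
        value = np.zeros(n_states)
        value[goal] = 1.0
        for _ in range(n_states):            # relaxation of the field
            for a in actions:
                value = np.maximum(value, 0.9 * ((W[a] > 0) @ value))
        path, s = [start], start
        while s != goal and len(path) < max_steps:
            a = max(actions, key=lambda a: value[(s + a) % n_states])
            s = (s + a) % n_states
            path.append(s)
        return path

    print(plan(0, 3))                        # shortest route around the ring
    ```

    The point of the sketch is the division of labor in the abstract: transition knowledge lives entirely in the lateral weights, and planning is nothing but activation spreading over them.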

    Connectionist simulation of attitude learning: Asymmetries in the acquisition of positive and negative evaluations

    Connectionist computer simulation was employed to explore the notion that, if attitudes guide approach and avoidance behaviors, false negative beliefs are likely to remain uncorrected for longer than false positive beliefs. In Study 1, the authors trained a three-layer neural network to discriminate "good" and "bad" inputs distributed across a two-dimensional space. "Full feedback" training, whereby connection weights were modified to reduce error after every trial, resulted in perfect discrimination. "Contingent feedback," whereby connection weights were only updated following outputs representing approach behavior, led to several false negative errors (good inputs misclassified as bad). In Study 2, the network was redesigned to distinguish a system for learning evaluations from a mechanism for selecting actions. Biasing action selection toward approach eliminated the asymmetry between learning of good and bad inputs under contingent feedback. Implications for various attitudinal phenomena and biases in social cognition are discussed.
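    The full-versus-contingent feedback contrast has a crisp minimal form: if updates only happen after approach, an input judged "bad" generates no feedback and its misclassification can freeze. The sketch below is our own stand-in, not the authors' simulation; the 2-D good/bad space, the delta-rule logistic learner, and the pessimistic initial bias are all assumptions chosen to make the asymmetry visible.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 2-D inputs: "good" in the upper half-plane, "bad" in the lower.
    X = rng.uniform(-1, 1, size=(200, 2))
    y = (X[:, 1] > 0).astype(float)          # 1 = good, 0 = bad

    def train(contingent, epochs=50, lr=0.5):
        w, b = np.zeros(2), -3.0             # pessimistic start: everything looks "bad"
        for _ in range(epochs):
            for x, t in zip(X, y):
                p = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # evaluation of the input
                approach = p > 0.5                        # action selection
                if contingent and not approach:
                    continue                 # avoided the input: no outcome, no update
                err = t - p                  # delta-rule update from observed outcome
                w += lr * err * x
                b += lr * err
        return w, b

    def false_negatives(w, b):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        return int(np.sum((p <= 0.5) & (y == 1)))        # good inputs judged bad

    w_full, b_full = train(contingent=False)
    w_cont, b_cont = train(contingent=True)
    print(false_negatives(w_full, b_full))   # full feedback corrects the pessimism
    print(false_negatives(w_cont, b_cont))   # contingent feedback leaves it frozen
    ```

    Because the pessimistic learner under contingent feedback never approaches anything, it never receives a single corrective trial, which is the asymmetry Study 1 demonstrates at network scale.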

    The Fallacy of the Homuncular Fallacy

    A leading theoretical framework for naturalistic explanation of mind holds that we explain the mind by positing progressively "stupider" capacities ("homunculi") until the mind is "discharged" by means of capacities that are not intelligent at all. The so-called homuncular fallacy involves violating this procedure by positing the same capacities at subpersonal levels. I argue that the homuncular fallacy is not a fallacy, and that modern-day homunculi are idle posits. I propose an alternative view of what naturalism requires that reflects how the cognitive sciences are actually integrating mind and matter.

    Computational Modeling and Simulation of Attitude Change. Part 1, Connectionist Models and Simulations of Cognitive Dissonance: an Overview

    Cognitive Dissonance Theory is considered part of the cognitive consistency theories in Social Psychology, a class of conceptual models that describe attitude change as a cognitive consistency-seeking process. As these conceptual models required more complex operational expression, algebraic, mathematical and, lately, computational modeling approaches to cognitive consistency were developed. Part 1 of this work provides an overview of the first connectionist models developed for Cognitive Dissonance Theory; at the time, these modeling approaches suggested that a Computational Social Psychology project could gain recognition as a new scientific discipline. The models are connectionist, based either on the constraint satisfaction paradigm or on attributional theory. Three models are described: the Consonance Model (Shultz and Lepper, 1996), the Adaptive Connectionist Model of Cognitive Dissonance (Van Overwalle and Jordens, 2002), and the Recurrent Neural Network Model for long-term attitude change resulting from cognitive dissonance reduction (Read and Monroe, 2007). These models, and others, demonstrated from the very beginning the considerable potential of cognitive modeling for theories of cognitive dissonance. Revisiting Cognitive Dissonance Theory only confirms that this potential is even larger than expected.

    Session 5: Development, Neuroscience and Evolutionary Psychology

    Proceedings of the Pittsburgh Workshop in History and Philosophy of Biology, Center for Philosophy of Science, University of Pittsburgh, March 23-24, 2001. Session 5: Development, Neuroscience and Evolutionary Psychology.

    Computational and Robotic Models of Early Language Development: A Review

    We review computational and robotics models of early language learning and development. We first explain why and how these models are used to understand better how children learn language. We argue that they provide concrete theories of language learning as a complex dynamic system, complementing traditional methods in psychology and linguistics. We review different modeling formalisms, grounded in techniques from machine learning and artificial intelligence such as Bayesian and neural network approaches. We then discuss their role in understanding several key mechanisms of language development: cross-situational statistical learning, embodiment, situated social interaction, intrinsically motivated learning, and cultural evolution. We conclude by discussing future challenges for research, including modeling of large-scale empirical data about language acquisition in real-world environments. Keywords: early language learning, computational and robotic models, machine learning, development, embodiment, social interaction, intrinsic motivation, self-organization, dynamical systems, complexity. Comment: to appear in International Handbook on Language Development, ed. J. Horst and J. von Koss Torkildsen, Routledge.

    What working memory is for


    A Connectionist Theory of Phenomenal Experience

    When cognitive scientists apply computational theory to the problem of phenomenal consciousness, as many of them have been doing recently, there are two fundamentally distinct approaches available. Either consciousness is to be explained in terms of the nature of the representational vehicles the brain deploys; or it is to be explained in terms of the computational processes defined over these vehicles. We call versions of these two approaches vehicle and process theories of consciousness, respectively. However, while there may be space for vehicle theories of consciousness in cognitive science, they are relatively rare. This is because of the influence exerted, on the one hand, by a large body of research which purports to show that the explicit representation of information in the brain and conscious experience are dissociable, and on the other, by the classical computational theory of mind – the theory that takes human cognition to be a species of symbol manipulation. But two recent developments in cognitive science combine to suggest that a reappraisal of this situation is in order. First, a number of theorists have recently been highly critical of the experimental methodologies employed in the dissociation studies – so critical, in fact, it’s no longer reasonable to assume that the dissociability of conscious experience and explicit representation has been adequately demonstrated. Second, classicism, as a theory of human cognition, is no longer as dominant in cognitive science as it once was. It now has a lively competitor in the form of connectionism; and connectionism, unlike classicism, does have the computational resources to support a robust vehicle theory of consciousness. In this paper we develop and defend this connectionist vehicle theory of consciousness. It takes the form of the following simple empirical hypothesis: phenomenal experience consists in the explicit representation of information in neurally realized PDP networks. 
This hypothesis leads us to reassess some common wisdom about consciousness, but, we will argue, in fruitful and ultimately plausible ways.

    Classical Computational Models

