6 research outputs found

    A match does not make a sense: On the sufficiency of the comparator model for explaining the sense of agency

    Get PDF
    The development of a sense of agency is indispensable for a cognitive entity (biological or artificial) to become a cognitive agent. In developmental psychology, researchers have taken inspiration from the adult cognitive psychology and neuroscience literature and used the comparator model to assess the presence of a sense of agency in early infancy. Similarly, robotics researchers have taken components of the proposed mechanism in attempts to build a sense of agency into artificial systems. In this article, we identify an invalidating theoretical flaw in the reasoning underlying this conversion from adult studies to developmental science and cognitive systems research, rooted in an oversight in the conceptualization of the comparator model as currently used in experimental practice. In these experiments, the emphasis has been put solely on testing for a match between predicted and observed sensory consequences. We argue that the match by itself can exclusively generate a simple categorization or a representation of equality between predicted and observed sensory consequences, both of which are insufficient to generate the causal representations required for a sense of agency. Consequently, the comparator model, as it has been described in the context of the sense of agency and as it is commonly used in experimental designs, is insufficient to generate the sense of agency: infants and robots require more than developing the ability to match predicted and observed sensory consequences for a sense of agency. We conclude by outlining possible solutions and future directions for researchers in developmental science and artificial intelligence.
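The article's core objection can be made concrete in a few lines. Below is a deliberately minimal caricature of the comparator model as used in the criticised experiments (the `forward_model` mapping and action names are hypothetical, purely for illustration): the comparison step yields only a boolean match, with no representation of the action having caused the observation.

```python
def forward_model(action):
    """Hypothetical predictor: maps an action to its expected sensory consequence."""
    predictions = {"press": "click", "wave": "motion"}
    return predictions.get(action)

def comparator(predicted, observed):
    # The match is a bare equality test. Its output is a single boolean:
    # a categorization ("same"/"different"), nothing more. On the article's
    # argument, this carries none of the causal structure ("my action
    # produced this sensation") that a sense of agency would require.
    return predicted == observed

match = comparator(forward_model("press"), "click")
```

Everything downstream of `comparator` sees only `True` or `False`, which is exactly the point: a match does not make a sense.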

    Toward a Formalization of QA Problem Classes

    No full text

    Pre-Wiring and Pre-Training: What does a neural network need to learn truly general identity rules?

    No full text
    In an influential paper, Marcus et al. [1999] claimed that connectionist models cannot account for human success at learning tasks that involve generalization of abstract knowledge such as grammatical rules. This claim triggered a heated debate, centered mostly around variants of the Simple Recurrent Network model [Elman, 1990]. In our work, we revisit this unresolved debate and analyze the underlying issues from a different perspective. We argue that, in order to simulate human-like learning of grammatical rules, a neural network model should not be used as a tabula rasa; rather, the initial wiring of the neural connections and the experience acquired prior to the actual task should be incorporated into the model. We present two methods that aim to provide such an initial state: a manipulation of the initial connections of the network in a cognitively plausible manner (concretely, by implementing a “delay-line” memory), and a pre-training algorithm that incrementally challenges the network with novel stimuli. We implement these techniques in an Echo State Network [Jaeger, 2001], and we show that only when both techniques are combined is the ESN able to learn truly general identity rules.
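The "delay-line" memory mentioned in the abstract can be illustrated with a minimal sketch (the sizes, the purely linear update, and the one-hot input coding are illustrative assumptions, not the paper's actual setup): the reservoir is wired as a shift register, so after a few steps its state holds the last few inputs verbatim, making identity relations such as ABA vs. ABB linearly accessible to a readout.

```python
import numpy as np

n_in, n_res = 4, 12  # 4 input symbols, reservoir holds the last 3 inputs

# "Delay-line" wiring: each block of the reservoir copies the previous
# block's state, so the reservoir acts as a 3-slot shift register.
W_res = np.zeros((n_res, n_res))
for i in range(n_in, n_res):
    W_res[i, i - n_in] = 1.0  # shift-register connections

# Inputs are written into the first block only.
W_in = np.vstack([np.eye(n_in), np.zeros((n_res - n_in, n_in))])

def run_esn(seq):
    """Drive the reservoir with a sequence of symbol indices; return final state."""
    x = np.zeros(n_res)
    for s in seq:
        u = np.eye(n_in)[s]
        x = W_res @ x + W_in @ u  # linear update, no nonlinearity, for clarity
    return x
```

After driving the network with the sequence A-B-A, the three blocks of the state contain one-hot codes for A, B, A (most recent first), which is what lets a trained readout compare positions for identity.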

    Diagnostic Classifiers: Revealing how Neural Networks Process Hierarchical Structure

    No full text
    We investigate how neural networks can be used for hierarchical, compositional semantics. To this end, we define the simple but nontrivial artificial task of processing nested arithmetic expressions and study whether different types of neural networks can learn to add and subtract. We find that recursive neural networks can implement a generalising solution, and we visualise the intermediate steps: projection, summation and squashing. We also show that gated recurrent neural networks, which process the expressions incrementally, perform surprisingly well on this task: they learn to predict the outcome of the arithmetic expressions with reasonable accuracy, although performance deteriorates with increasing length. To analyse what strategy the recurrent network applies, visualisation techniques are less insightful. Therefore, we develop an approach where we formulate and test hypotheses on what strategies these networks might be following. For each hypothesis, we derive predictions about features of the hidden state representations at each time step, and train 'diagnostic classifiers' to test those predictions. Our results indicate that the networks follow a strategy similar to our hypothesised 'incremental strategy'.
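The diagnostic-classifier idea itself is simple to sketch: fit a linear readout on a network's hidden states to predict a hypothesised feature (here, the running result of an expression), and take a good fit as evidence that the states encode that feature. In the sketch below the "hidden states" are synthetic stand-ins (a random projection of the hypothesised variable plus noise); in the paper's setting they would come from a trained RNN.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for T recurrent hidden states of dimension d: each state
# is a linear image of the hypothesised variable (the running result of the
# arithmetic expression) plus a little noise.
T, d = 200, 10
running_result = rng.integers(-5, 6, size=T).astype(float)
proj = rng.normal(size=d)
states = np.outer(running_result, proj) + 0.01 * rng.normal(size=(T, d))

# Diagnostic classifier: a linear readout fitted on the hidden states to
# predict the hypothesised feature at each time step.
w, *_ = np.linalg.lstsq(states, running_result, rcond=None)
pred = states @ w

# Goodness of fit (R^2): high values support the hypothesis that the states
# encode the running result; low values count against it.
ss_res = np.sum((pred - running_result) ** 2)
ss_tot = np.sum((running_result - running_result.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

The same recipe works for any hypothesised strategy: derive the feature it predicts at each time step, fit a probe, and compare fits across competing hypotheses.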

    Concept Invention: Foundations, Implementation, Social Aspects and Applications

    No full text
    This book introduces a computationally feasible, cognitively inspired formal model of concept invention, drawing on Fauconnier and Turner's theory of conceptual blending, a fundamental cognitive operation. The chapters present the mathematical and computational foundations of concept invention, discuss cognitive and social aspects, and further describe concrete implementations and applications in the fields of musical and mathematical creativity. Featuring contributions from leading researchers in formal systems, cognitive science, artificial intelligence, computational creativity, mathematical reasoning and cognitive musicology, the book will appeal to readers interested in how conceptual blending can be precisely characterized and implemented for the development of creative computational systems. The research presented in this book was supported by the COINVENT project, which was funded by the Future and Emerging Technologies (FET) programme within the Seventh Framework Programme for Research of the European Commission, under FET-Open grant number 611553. Peer reviewed.