    From symbol grounding to socially shared embodied language knowledge

    Much language-related research in cognitive robotics appeals to usage-based models of language, as proposed in cognitive linguistics and developmental psychology [1, 2], that emphasise the significance of learning, embodiment and general cognitive development for human language acquisition. Over and above these issues, however, what takes centre stage in these theories are the social-cognitive skills of “intention-reading”, which are seen as “primary in the language acquisition process” [1] and which are also seen as difficult to incorporate into computational models of language acquisition. The present paper addresses these concerns: we describe work in progress on a series of experiments that take steps towards closing the gap between ‘solipsistic’ symbol grounding in individual robotic agents and socially framed embodied language acquisition in learners that attend to common ground [3] with changing interlocutors.
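
    To make the contrast concrete, symbol grounding is often modelled as cross-situational word-referent association; the social framing described above can then be pictured as filtering candidate referents through the common ground shared with the current interlocutor. The following Python sketch is purely illustrative and not taken from the paper; the class names and the co-occurrence counting rule are assumptions.

    ```python
    from collections import defaultdict

    class Grounder:
        """Toy cross-situational learner: words are grounded by
        accumulating word-referent co-occurrence counts."""
        def __init__(self):
            self.counts = defaultdict(lambda: defaultdict(float))

        def observe(self, words, referents):
            # 'Solipsistic' grounding: every visible referent is a
            # candidate meaning for every heard word.
            for w in words:
                for r in referents:
                    self.counts[w][r] += 1.0

        def meaning(self, word):
            cands = self.counts[word]
            return max(cands, key=cands.get) if cands else None

    class SocialGrounder(Grounder):
        """Socially framed variant: candidate referents are filtered
        through the common ground shared with the current speaker."""
        def observe(self, words, referents, common_ground):
            shared = [r for r in referents if r in common_ground]
            # Fall back to all referents if nothing is in common ground.
            super().observe(words, shared or referents)

    # Hypothetical usage: the same scene, constrained by what the
    # current interlocutor jointly attends to.
    agent = SocialGrounder()
    agent.observe(["ball"], ["ball", "cup"], common_ground={"ball"})
    agent.observe(["ball"], ["ball", "cup"], common_ground={"ball"})
    print(agent.meaning("ball"))  # -> 'ball'
    ```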

    Semantic Flexibility and Grounded Language Learning

    We explore the way that the flexibility inherent in the lexicon might be incorporated into the process by which an environmentally grounded artificial agent (a robot) acquires language. We take flexibility to indicate not only many-to-many mappings between words and extensions, but also the way that word meaning is specified in the context of a particular situation in the world. Our hypothesis is that embodiment and embeddedness are necessary conditions for the development of semantic representations that exhibit this flexibility. We examine this hypothesis by first very briefly reviewing work to date in the domain of grounded language learning, and then proposing two research objectives: 1) the incorporation of high-dimensional semantic representations that permit context-specific projections, and 2) an exploration of ways in which non-humanoid robots might exhibit language-learning capacities. We suggest that the experimental programme implicated by this theoretical investigation could be situated broadly within the enactivist paradigm, which approaches cognition from the perspective of agents emerging in the course of dynamic entanglements within an environment.
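
    One reading of the first objective is that a word's high-dimensional vector is projected onto a subspace spanned by vectors describing the current situation, so the same word yields different context-specific meanings in different contexts. The sketch below illustrates that reading only; the function name and the orthogonal-projection formulation are assumptions, not the authors' model.

    ```python
    import numpy as np

    def context_projection(word_vec, context_vecs):
        """Project a word vector onto the subspace spanned by the
        context vectors, yielding a context-specific meaning vector."""
        C = np.stack(context_vecs, axis=1)  # (d, k) matrix of context vectors
        Q, _ = np.linalg.qr(C)              # orthonormal basis of the context subspace
        return Q @ (Q.T @ word_vec)         # orthogonal projection of the word vector

    rng = np.random.default_rng(0)
    d = 50
    bank = rng.normal(size=d)                          # one word vector, e.g. "bank"
    money_ctx = [rng.normal(size=d) for _ in range(3)] # situation 1
    river_ctx = [rng.normal(size=d) for _ in range(3)] # situation 2

    # The same word vector projects to different context-specific
    # meanings in the two situations (a many-to-many word-extension map).
    v_money = context_projection(bank, money_ctx)
    v_river = context_projection(bank, river_ctx)
    print(np.allclose(v_money, v_river))  # False
    ```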

    Spatial relation learning in complementary scenarios with deep neural networks

    A cognitive agent performing in the real world needs to learn relevant concepts about its environment (e.g., objects, colors, and shapes) and react accordingly. In addition to learning the concepts themselves, it needs to learn relations between the concepts, in particular spatial relations between objects. In this paper, we propose three approaches that allow a cognitive agent to learn spatial relations. First, using an embodied model, the agent learns to reach toward an object based on simple instructions involving left-right relations. Since the level of realism of this approach, and its resulting complexity, do not permit large-scale and diverse experiences, we devise as a second approach a simple visual dataset for geometric feature learning and show that recent reasoning models can learn directional relations in different frames of reference. Yet even together, the embodied and simple simulation approaches do not provide sufficient experiences. To close this gap, we propose, as a third approach, utilizing knowledge bases for disembodied spatial relation reasoning. Since the three approaches (i.e., embodied learning, learning from simple visual data, and use of knowledge bases) are complementary, we conceptualize a cognitive architecture that combines them in the context of spatial relation learning.
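
    The frame-of-reference point in the second approach can be illustrated with a few lines of geometry: whether one object counts as "left of" another depends on the heading of the frame in which the relation is evaluated. The following sketch is an assumed toy formulation, not the paper's dataset or models.

    ```python
    import numpy as np

    def left_of(target, reference, frame_heading):
        """Return True if `target` lies to the left of `reference`
        when viewed along `frame_heading` (2-D world coordinates;
        the relation is directional, so no frame origin is needed)."""
        heading = np.asarray(frame_heading, dtype=float)
        heading /= np.linalg.norm(heading)
        # The 'left' axis is the heading rotated 90 degrees counter-clockwise.
        left_axis = np.array([-heading[1], heading[0]])
        offset = np.asarray(target, dtype=float) - np.asarray(reference, dtype=float)
        return float(offset @ left_axis) > 0.0

    ball, cup = (1.0, 2.0), (3.0, 2.0)
    # Viewer frame facing +y: the ball (smaller x) is left of the cup.
    print(left_of(ball, cup, frame_heading=(0, 1)))   # True
    # The same scene evaluated in a frame facing -y flips the relation.
    print(left_of(ball, cup, frame_heading=(0, -1)))  # False
    ```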