
    Semiotic Dynamics Solves the Symbol Grounding Problem

    Language requires the capacity to link symbols (words, sentences) through the intermediary of internal representations to the physical world, a process known as symbol grounding. One of the biggest debates in the cognitive sciences concerns the question of how human brains are able to do this. Do we need a material explanation or a system explanation? John Searle's well-known Chinese Room thought experiment, which continues to generate a vast polemical literature of arguments and counter-arguments, argues that autonomously establishing internal representations of the world (called 'intentionality' in philosophical parlance) depends on special properties of human neural tissue, and that consequently an artificial system, such as an autonomous physical robot, can never achieve this. Here we study the Grounded Naming Game as a particular example of symbolic interaction and investigate a dynamical system that autonomously builds up and uses the semiotic networks necessary for performance in the game. We demonstrate in real experiments with physical robots that such a dynamical system indeed leads to a successful emergent communication system, and hence that symbol grounding and intentionality can be explained in terms of a particular kind of system dynamics. The human brain obviously has the right mechanisms to participate in this kind of dynamics, but the same dynamics can also be embodied in other types of physical systems.
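
    The dynamics the abstract describes can be illustrated with a toy simulation of a naming game: agents invent, adopt, and laterally inhibit competing names until a shared convention emerges. The following is a minimal sketch of the game dynamics only, not the paper's grounded robot implementation; all names and parameters are illustrative.

```python
import random

OBJECTS = ["obj1", "obj2", "obj3"]

class Agent:
    def __init__(self):
        # object -> {name: score}
        self.lexicon = {obj: {} for obj in OBJECTS}

    def name_for(self, obj):
        names = self.lexicon[obj]
        if not names:
            # Invent a new name when none is known for this object.
            name = "w%d" % random.randrange(10**6)
            names[name] = 0.5
            return name
        return max(names, key=names.get)

    def adopt(self, obj, name):
        self.lexicon[obj].setdefault(name, 0.5)

    def reinforce(self, obj, name, delta=0.1):
        # Raise the winning name's score and laterally inhibit competitors.
        names = self.lexicon[obj]
        names[name] = min(1.0, names[name] + delta)
        for other in list(names):
            if other != name:
                names[other] -= delta
                if names[other] <= 0:
                    del names[other]

def play_round(agents):
    speaker, hearer = random.sample(agents, 2)
    obj = random.choice(OBJECTS)
    name = speaker.name_for(obj)
    if name in hearer.lexicon[obj]:
        # Success: both agents reinforce the shared convention.
        speaker.reinforce(obj, name)
        hearer.reinforce(obj, name)
        return True
    hearer.adopt(obj, name)  # Failure: the hearer learns the name.
    return False

agents = [Agent() for _ in range(10)]
successes = [play_round(agents) for _ in range(5000)]
print("late success rate:", sum(successes[-500:]) / 500)
```

    Run repeatedly, the late success rate approaches 1.0 as a single name per object wins out, which is the emergent-convention effect the paper reports in grounded form.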

    Solutions and Open Challenges for the Symbol Grounding Problem

    This article discusses current progress on, and solutions to, the symbol grounding problem, identifying which aspects of the problem have been addressed and which issues and scientific challenges still require investigation. In particular, the paper suggests that, of the various aspects of the symbol grounding problem, the transition from indexical representations to symbol-symbol relationships requires the most research. This analysis initiated a debate and solicited commentaries from experts in the field to gather consensus on progress and achievements and to identify the challenges still open in the symbol grounding problem.

    Turing Test, Chinese Room Argument, Symbol Grounding Problem. Meanings in Artificial Agents

    The Turing Test (TT), the Chinese Room Argument (CRA), and the Symbol Grounding Problem (SGP) are about the question “can machines think?”. We propose to look at that question through the capability of Artificial Agents (AAs) to generate meaningful information as humans do. We present the TT, the CRA, and the SGP as being about the generation of human-like meanings and analyse the possibility for AAs to generate such meanings. For this we use the existing Meaning Generator System (MGS), in which a system submitted to a constraint generates a meaning in order to satisfy that constraint. Such a system approach allows comparing meaning generation in animals, humans, and AAs. The comparison shows that in order to design AAs capable of generating human-like meanings, we need the possibility of transferring human constraints to AAs. That requirement raises concerns stemming from the unknown natures of life and human consciousness, which are at the root of human constraints. Corresponding implications for the TT, the CRA, and the SGP are highlighted. The usage of the MGS shows that designing AAs capable of thinking and feeling like humans requires an understanding of the natures of life and the human mind that we do not have today. Following an evolutionary approach, we propose as a first entry point an investigation into extending life to AAs in order to design AAs carrying a “stay alive” constraint. Ethical concerns are raised from the relations between human constraints and human values. Continuations are proposed.
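
    The MGS idea sketched above (information acquires a meaning only relative to a constraint it bears on) can be caricatured in a few lines. Everything here, including the names and the relevance table, is an assumption for illustration, not the authors' system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Meaning:
    information: str   # the received information
    constraint: str    # the constraint it matters for
    action: str        # the action the meaning triggers

def generate_meaning(information: str, constraint: str) -> Optional[Meaning]:
    """Return a meaning only if the information is relevant to the constraint."""
    relevance = {
        ("predator detected", "stay alive"): "flee",
        ("food detected", "stay alive"): "approach",
    }
    action = relevance.get((information, constraint))
    if action is None:
        return None  # Irrelevant information generates no meaning.
    return Meaning(information, constraint, action)

print(generate_meaning("predator detected", "stay alive"))
print(generate_meaning("leaf falls", "stay alive"))  # irrelevant -> None
```

    The point the toy makes explicit is the paper's: without a constraint of its own (such as “stay alive”), the agent has nothing against which incoming information can become meaningful.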

    The physical symbol grounding problem

    This paper presents an approach to solving the symbol grounding problem within the framework of embodied cognitive science. It will be argued that symbolic structures can be used within the paradigm of embodied cognitive science by adopting an alternative definition of a symbol. In this alternative definition, the symbol may be viewed as a structural coupling between an agent's sensorimotor activations and its environment. A robotic experiment is presented in which mobile robots develop a symbolic structure from scratch by engaging in a series of language games. The experiment shows that robots can develop a symbolic structure with which they can communicate the names of a few objects with a remarkable degree of success. It is further shown that, although the referents may be interpreted differently on different occasions, the objects are usually named with only one form.
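
    The categorization side of such language games is often implemented as a discrimination game: the agent searches for a sensory category that singles the topic out from the other objects in view, coupling a symbol's meaning to sensorimotor data. Below is a minimal sketch under assumed binned sensor channels, not the experiment's actual code.

```python
def discriminate(topic, context, n_bins=4):
    """Find a (channel, bin) category true of the topic and of no context object.

    topic: tuple of sensor readings in [0, 1]; context: list of such tuples.
    """
    for channel in range(len(topic)):
        topic_bin = min(int(topic[channel] * n_bins), n_bins - 1)
        others = {min(int(obj[channel] * n_bins), n_bins - 1) for obj in context}
        if topic_bin not in others:
            return (channel, topic_bin)  # a distinctive, sensor-grounded category
    return None  # discrimination failure: the agent would refine its categories

# The topic is much brighter on channel 0 than either context object.
print(discriminate((0.9, 0.2), [(0.1, 0.25), (0.4, 0.3)]))
```

    In the paper's terms, the returned (channel, bin) pair is the sensorimotor side of the coupling; the naming game then attaches a word form to it.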

    Grounding oriented design

    University of Technology, Sydney. Faculty of Information Technology.
    The symbol grounding problem[67] is a longstanding, poorly understood issue that has interested philosophers, computer scientists, and cognitive scientists alike. The grounding problem, in its various guises, refers to the task of creating meaningful representations for artificial agents. After more than 15 years of widespread debate and circular introspection of the so-called symbol grounding problem, we seem none the wiser as to what constitutes being meaningful, and indeed grounded, for an agent[128]. We argue, in the context of practical robotics, that a grounded agent possesses a representation which faithfully reflects pertinent aspects of the world. In contrast, an ungrounded agent could be, for example, delusional or suffering from hallucinations ("false positives"), overly concerned with irrelevant things (e.g. the frame problem[93]), or incapable of reliably perceiving, recognising or anticipating relevant things in a timely manner ("false negatives"). While most grounding research concerns how to develop agents which can autonomously develop their own representations (i.e. autonomous grounding), the fact that all robotic systems are grounded through human design on a case-by-case, ad hoc basis has been overlooked. This thesis presents Grounding Oriented Design, a methodology for designing and grounding the "minds" of robotic agents. Grounding Oriented Design (or, alternatively, Go-Design) is a vital first step towards the development of autonomous grounding capabilities through improved understanding of the processes by which human designers ground robot minds. Grounding Oriented Design offers guidelines and processes for iteratively decomposing a robot control problem into a set of collaborating skills, together with a notation for representing and documenting skill designs. It consists of two main phases: basic design, which involves constructing a skill architecture, and detailed design, in which the skill architecture is used to design the agent's representation and decision-making processes. A groundedness framework[143] is used for describing and assessing the groundedness of either the complete system or of individual skills. Examples of the methodology's use and benefits are provided, and suggestions for future work are discussed.
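
    As a rough illustration of what a designed set of collaborating skills might look like in code: the class names, percept keys, and the priority arbitration below are assumptions for illustration, not Go-Design's own notation, which the thesis defines separately.

```python
from abc import ABC, abstractmethod

class Skill(ABC):
    """A unit of competence mapping percepts to a contribution to action."""

    @abstractmethod
    def applicable(self, percepts: dict) -> bool: ...

    @abstractmethod
    def act(self, percepts: dict) -> str: ...

class AvoidObstacle(Skill):
    def applicable(self, percepts):
        return percepts.get("range_cm", 1e9) < 30  # obstacle nearby

    def act(self, percepts):
        return "turn_away"

class GoToGoal(Skill):
    def applicable(self, percepts):
        return "goal_bearing" in percepts

    def act(self, percepts):
        return "steer %.1f deg" % percepts["goal_bearing"]

def control_step(skills, percepts):
    # Priority arbitration: the first applicable skill wins this cycle.
    for skill in skills:
        if skill.applicable(percepts):
            return skill.act(percepts)
    return "stop"

skills = [AvoidObstacle(), GoToGoal()]
print(control_step(skills, {"range_cm": 20, "goal_bearing": 45.0}))
```

    In the thesis's terms, each skill is a place where a human designer has grounded part of the robot's "mind" by hand; the groundedness framework would then be used to assess each skill individually as well as the assembled system.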

    Approaching the Symbol Grounding Problem with Probabilistic Graphical Models

    In order for robots to engage in dialog with human teammates, they must have the ability to map between words in the language and aspects of the external world. A solution to this symbol grounding problem (Harnad, 1990) would enable a robot to interpret commands such as “Drive over to receiving and pick up the tire pallet.” In this article we describe several of our results that use probabilistic inference to address the symbol grounding problem. Our specific approach is to develop models that factor according to the linguistic structure of a command. We first describe an early result, a generative model that factors according to the sequential structure of language, and then discuss our new framework, generalized grounding graphs (G3). The G3 framework dynamically instantiates a probabilistic graphical model for a natural language input, enabling a mapping between words in language and concrete objects, places, paths, and events in the external world. We report on corpus-based experiments in which the robot is able to learn and use word meanings in three real-world tasks: indoor navigation, spatial language video retrieval, and mobile manipulation.
    U.S. Army Research Laboratory, Collaborative Technology Alliance Program (Cooperative Agreement W911NF-10-2-0016); United States Office of Naval Research (MURI N00014-07-1-0749).
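
    The factored structure behind G3 can be sketched as follows: each linguistic constituent contributes a factor scoring its candidate groundings, and inference maximizes the product of factors over joint assignments. This toy uses hand-set factor values and hypothetical names throughout; the actual framework learns factor weights from an annotated corpus and handles far richer linguistic structure.

```python
import math
from itertools import product

# Hypothetical world model: candidate groundings per phrase.
CANDIDATES = {
    "the tire pallet": ["pallet_1", "pallet_2"],
    "receiving": ["zone_A", "zone_B"],
}

# Hypothetical factor values phi(phrase, grounding); learned in the real system.
PHI = {
    ("the tire pallet", "pallet_1"): 0.9,
    ("the tire pallet", "pallet_2"): 0.2,
    ("receiving", "zone_A"): 0.8,
    ("receiving", "zone_B"): 0.3,
}

def best_grounding(phrases):
    """Maximize the product of per-phrase factors over joint groundings."""
    best, best_score = None, -math.inf
    for assignment in product(*(CANDIDATES[p] for p in phrases)):
        score = math.prod(PHI[(p, g)] for p, g in zip(phrases, assignment))
        if score > best_score:
            best, best_score = dict(zip(phrases, assignment)), score
    return best, best_score

print(best_grounding(["the tire pallet", "receiving"]))
```

    The brute-force search over joint assignments stands in for proper graphical-model inference; the factorization itself, one factor per constituent of the parsed command, is the idea the abstract describes.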

    Which symbol grounding problem should we try to solve?

    Floridi and Taddeo propose a condition of “zero semantic commitment” for solutions to the grounding problem, and a solution to it. I argue briefly that their condition cannot be fulfilled, not even by their own solution. After a look at Luc Steels' very different competing suggestion, I suggest that we need to re-think what the problem is and what role the ‘goals’ in a system play in formulating the problem. On the basis of a proper understanding of computing, I come to the conclusion that the only sensible grounding problem is how we can explain and reproduce the behavioral ability and function of meaning in artificial computational agents.