2 research outputs found

    The Value of Meaning for Autonomous Robots

    Abstract. This paper examines the related problems of meaning and symbol grounding with respect to epigenetic robotics. While symbol grounding aims to give artificial systems "intrinsic meaning", most existing approaches to symbol grounding treat meaning as a problem of categorical perception, i.e., as a theory of reference that maintains correspondence between internal representations and external entities at a linguistic level. We argue that reference is only one aspect of meaning, and that nonlinguistic and pre-verbal creatures are also meaning users. As such, we argue that meaning plays an important role in an agent's value system, providing intrinsic motivation and reinforcement for life-long development and learning. Lastly, we explore how models of meaning can help shift the intellectual burden of grounding from the programmer to the program by designing robots capable of grounding themselves.
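
    The abstract stays at a conceptual level, but its central claim (meaning as a value signal that provides intrinsic motivation and reinforcement) is often operationalized as a learning-progress reward. The sketch below illustrates that reading under explicit assumptions: a curiosity-style reward equal to the drop in prediction error of a simple forward model. The names IntrinsicValueSystem and intrinsic_reward are hypothetical, not taken from the paper.

```python
import numpy as np

class IntrinsicValueSystem:
    """Toy value system: a stimulus is 'meaningful' to the agent in
    proportion to how much it improves the agent's predictive model
    (a curiosity-style intrinsic reward). Illustrative only."""

    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)  # weights of a linear forward model
        self.lr = lr

    def intrinsic_reward(self, x, outcome):
        # Prediction error before the model update.
        error_before = (outcome - self.w @ x) ** 2
        # One gradient step on the forward model.
        self.w += self.lr * (outcome - self.w @ x) * x
        # Learning progress: positive while the stimulus still teaches
        # the model something, decaying toward zero once mastered.
        error_after = (outcome - self.w @ x) ** 2
        return error_before - error_after

# A stimulus stops being rewarding once it is fully predicted, so the
# signal can drive life-long learning toward what is still learnable.
vs = IntrinsicValueSystem(n_features=3)
x = np.array([1.0, 0.0, 1.0])
rewards = [vs.intrinsic_reward(x, outcome=2.0) for _ in range(5)]  # declining
```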

    Symbol Grounding Through Cumulative Learning

    Abstract. We suggest that the primary motivation for an agent to construct a symbol-meaning mapping is to solve a task. The meaning space of an agent should be derived from the tasks that it faces during the course of its lifetime. We outline a process in which agents learn to solve multiple tasks and extract a store of “cumulative knowledge” that helps them to solve each new task more quickly and accurately. This cumulative knowledge then forms the ontology or meaning space of the agent. We suggest that by grounding symbols to this extracted cumulative knowledge, agents can gain a further performance benefit because they can guide each other's learning process. In this version of the symbol grounding problem, meanings cannot be directly communicated because they are internal to the agents, and they will be different for each agent. Also, the meanings may not correspond directly to objects in the environment. The communication process can also allow a symbol-meaning mapping that is dynamic. We posit that these properties make this version of the symbol grounding problem realistic and natural. Finally, we discuss how symbols could be grounded to cumulative knowledge via a situation where a teacher selects tasks for a student to perform.
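
    The abstract outlines a pipeline (solve tasks, distill cumulative knowledge, ground symbols to that internal store, then let a teacher steer a student's learning by naming tasks) without committing to an implementation. The following is a minimal sketch of that structure, assuming a toy dictionary store for cumulative knowledge; CumulativeLearner, ground, and the sample task names are hypothetical, not the authors' system.

```python
from collections import defaultdict

class CumulativeLearner:
    """Toy agent that accumulates per-task knowledge and grounds symbols
    in its own internal store. Illustrative only."""

    def __init__(self):
        self.knowledge = {}              # task -> learned solution
        self.lexicon = defaultdict(set)  # symbol -> grounded knowledge keys

    def solve(self, task):
        # Reuse cumulative knowledge when the task was seen before;
        # otherwise "learn" it (a stand-in for a real learning process).
        if task not in self.knowledge:
            self.knowledge[task] = f"solution-for-{task}"
        return self.knowledge[task]

    def ground(self, symbol, task):
        # Symbols map onto internal knowledge, which differs per agent,
        # so only the symbol itself can be communicated directly.
        self.solve(task)
        self.lexicon[symbol].add(task)

# Teacher-student episode: the teacher selects a task, names it with a
# symbol, and the student grounds that symbol in its own knowledge.
teacher, student = CumulativeLearner(), CumulativeLearner()
for symbol, task in [("s1", "stack-blocks"), ("s2", "sort-shapes")]:
    teacher.ground(symbol, task)
    student.ground(symbol, task)  # groundings remain private to each agent
```

    Nothing forces the two agents' knowledge entries to coincide, which is the sense in which meanings stay internal and agent-specific while the symbols themselves are shared.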
