
    Natural task learning through simultaneous language grounding and action learning

    Artificial agents, and in particular robots, i.e. agents with some form of embodiment, offer nearly unlimited possibilities to support humans in their daily lives: they can reliably perform hazardous, repetitive, and physically demanding tasks, remove the risk of human error, and provide social, mental, and physical care as needed, around the clock. For this, however, artificial agents need to be able to communicate with other agents, in particular humans, in a natural and efficient manner, and to learn new tasks autonomously. The most natural way for humans to tell another agent to perform a task, or to explain how to perform it, is through natural language. Artificial agents therefore need to be able to understand natural language, i.e. extract the meanings of words and phrases, which requires words and phrases to be linked to their corresponding percepts through grounding. In theory, groundings, i.e. connections between words and percepts, could be specified manually; in practice this is infeasible because of the complexity and dynamics of human-centered environments, such as private homes or supermarkets, and the ambiguity inherent in natural language, e.g. synonymy and homonymy. Agents therefore need to be able to autonomously acquire new groundings and continuously update existing ones, both to account for changes in the environment and to incorporate new information obtained through their sensors. Furthermore, the obtained groundings should be usable for learning new tasks from natural language instructions. This thesis therefore proposes a novel framework for simultaneous language grounding and action learning that achieves three main objectives. First, it enables agents to continuously ground synonymous words and phrases without requiring external support from another agent. Second, it enables agents to utilize external support, if available, without depending on it. Finally, it enables agents to utilize previously learned groundings to learn new tasks from language instructions.
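    Grounding of this kind is often modeled as cross-situational learning. As a purely illustrative sketch (not the thesis' actual framework; the class and situations below are invented), an agent can accumulate word–percept co-occurrence counts across situations and read off each word's strongest association, updating continuously as new observations arrive:

```python
from collections import defaultdict

class CrossSituationalGrounder:
    """Incrementally associates words with percepts via co-occurrence counts."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))  # word -> percept -> count

    def observe(self, words, percepts):
        # One situation: an utterance paired with the percepts currently sensed.
        for w in words:
            for p in percepts:
                self.counts[w][p] += 1

    def grounding(self, word):
        # The percept most strongly associated with the word so far, if any.
        if not self.counts[word]:
            return None
        return max(self.counts[word], key=lambda p: self.counts[word][p])

g = CrossSituationalGrounder()
g.observe(["grab", "the", "red", "cup"], ["red", "cup", "table"])
g.observe(["put", "the", "red", "ball", "down"], ["red", "ball", "floor"])
print(g.grounding("red"))  # → red ("red" co-occurs with the red percept in both situations)
```

    Because groundings are just counts, they are never final: each new situation shifts the associations, which is how ambiguity from synonymy and homonymy is gradually resolved.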

    A Review of Verbal and Non-Verbal Human-Robot Interactive Communication

    In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects of human-robot interaction. Following a historical introduction and a motivation for fluid human-robot communication, ten desiderata are proposed, which provide an organizational axis for both recent and future research on human-robot communication. The ten desiderata are then examined in detail, culminating in a unifying discussion and a forward-looking conclusion.

    Do (and say) as I say: Linguistic adaptation in human-computer dialogs

    © Theodora Koulouri, Stanislao Lauria, and Robert D. Macredie. This article has been made available through the Brunel Open Access Publishing Fund. There is strong research evidence that people naturally align to each other’s vocabulary, sentence structure, and acoustic features in dialog, yet little is known about how the alignment mechanism operates in the interaction between users and computer systems, let alone how it may be exploited to improve the efficiency of the interaction. This article provides an account of lexical alignment in human–computer dialogs, based on empirical data collected in a simulated human–computer interaction scenario. The results indicate that alignment is present, resulting in the gradual reduction and stabilization of the vocabulary-in-use, and that it is reciprocal. Further, the results suggest that when system and user errors occur, the development of alignment is temporarily disrupted and users tend to introduce novel words into the dialog. The results also indicate that alignment in human–computer interaction may have a strong strategic component and is used as a resource to compensate for less optimal (visually impoverished) interaction conditions. Moreover, lower alignment is associated with less successful interaction, as measured by user perceptions. The article distills the results of the study into design recommendations for human–computer dialog systems and uses them to outline a model of dialog management that supports and exploits alignment through mechanisms for in-use adaptation of the system’s grammar and lexicon.
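    As an illustrative, hypothetical proxy (not the measure used in the article; the function and example turns below are invented), lexical alignment between two interlocutors can be approximated by the overlap between their vocabularies-in-use:

```python
def lexical_overlap(turns_a, turns_b):
    """Fraction of speaker A's vocabulary also used by speaker B (a rough alignment proxy)."""
    vocab_a = {w for turn in turns_a for w in turn.lower().split()}
    vocab_b = {w for turn in turns_b for w in turn.lower().split()}
    if not vocab_a:
        return 0.0
    return len(vocab_a & vocab_b) / len(vocab_a)

user = ["move the box left", "push the box forward"]
system = ["moving the box left now", "pushing the box forward"]
print(round(lexical_overlap(user, system), 2))  # → 0.67 (4 of 6 user words reused verbatim)
```

    Tracking such a score over a dialog would show the gradual stabilization of the vocabulary-in-use that the article reports, and dips after system or user errors.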

    Learning linguistic constructions grounded in qualitative action models

    Panzner M, Gaspers J, Cimiano P. Learning linguistic constructions grounded in qualitative action models. In: IEEE International Symposium on Robot and Human Interactive Communication. 2015.

    Interactive Concept Acquisition for Embodied Artificial Agents

    An important capacity that is still lacking in intelligent systems such as robots is the ability to use concepts in a human-like manner. Indeed, the use of concepts has been recognised as fundamental to a wide range of cognitive skills, including classification, reasoning, and memory. Intricately intertwined with language, concepts are at the core of human cognition; but despite a large body of research, their functioning is as yet not well understood. Nevertheless, it remains clear that if intelligent systems are to achieve a level of cognition comparable to humans, they will have to possess the ability to deal with the fundamental role that concepts play in cognition. A promising manner in which conceptual knowledge can be acquired by an intelligent system is through ongoing, incremental development. In this view, a system is situated in the world and gradually acquires skills and knowledge through interaction with its social and physical environment. Important in this regard is the notion that cognition is embodied: both the physical body and the environment shape the manner in which cognition, including the learning and use of concepts, operates. Through active partaking in the interaction, an intelligent system might influence its learning experience so as to be more effective. This work presents experiments which illustrate how these notions of interaction and embodiment can influence the learning process of artificial systems. It shows how an artificial agent can benefit from interactive learning: rather than passively absorbing knowledge, the system actively partakes in its learning experience, yielding improved learning. Next, the influence of embodiment on perception is further explored in a case study concerning colour perception, which results in an alternative explanation for the question of why human colour experience is very similar amongst individuals despite physiological differences. 
    Finally, experiments in which an artificial agent is embodied in a novel robot tailored for human-robot interaction illustrate how active strategies are also beneficial in an HRI setting in which the robot learns from a human teacher.
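    Active strategies of this kind can be illustrated with uncertainty sampling, a standard active-learning heuristic (a sketch only, not the thesis' actual method; the toy `proba` classifier below is invented for illustration): rather than receiving examples passively, the agent asks the teacher about the example its current model is least sure of.

```python
import math

def most_uncertain(examples, predict_proba):
    """Pick the example whose top-class probability is lowest (closest to chance)."""
    return min(examples, key=lambda x: max(predict_proba(x)))

# Toy binary classifier: probability that a number belongs to the "large" concept (> 5).
def proba(x):
    p_large = 1 / (1 + math.exp(5 - x))  # logistic curve centered on the boundary at 5
    return [p_large, 1 - p_large]

examples = [0, 4, 5, 9]
print(most_uncertain(examples, proba))  # → 5, the example sitting on the concept boundary
```

    Querying the teacher about boundary cases like this is one way an agent can shape its own learning experience, which is the kind of benefit the interactive-learning experiments measure.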

    The Talking Heads experiment: Origins of words and meanings

    The Talking Heads experiment, conducted between 1999 and 2001, was the first large-scale experiment in which open populations of situated, embodied agents created a new shared vocabulary by playing language games about real-world scenes in front of them. The agents could teleport to different physical sites in the world through the Internet. Sites in Antwerp, Brussels, Paris, Tokyo, London, Cambridge, and several other locations were linked into the network. Humans could interact with the robotic agents either on site or remotely through the Internet, and thus influence the evolving ontologies and languages of the artificial agents. The present book describes in detail the motivation, the cognitive mechanisms used by the agents, the various installations of the Talking Heads, the experimental results obtained, and the interaction with humans. It also provides a perspective on what happened in the field after these initial groundbreaking experiments. The book is invaluable reading for anyone interested in the history of agent-based models of language evolution and the future of Artificial Intelligence.
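    The core dynamic of such language games can be illustrated with the minimal naming game, a drastic simplification of the guessing games the Talking Heads agents actually played over camera views of real scenes (the function below is an invented sketch, not the experiment's code): agents repeatedly pair up, the speaker names an object (inventing a word if it has none), and success or failure updates both vocabularies until the population converges on a shared name.

```python
import random

def naming_game(n_agents=10, n_rounds=2000, seed=0):
    """Minimal naming game: a population negotiates a shared name for one object."""
    rng = random.Random(seed)
    vocab = [set() for _ in range(n_agents)]        # each agent's candidate names
    for _ in range(n_rounds):
        s, h = rng.sample(range(n_agents), 2)       # pick a speaker and a hearer
        if not vocab[s]:
            vocab[s].add("w%d" % rng.randrange(10**6))  # speaker invents a word
        word = rng.choice(sorted(vocab[s]))
        if word in vocab[h]:
            vocab[s] = {word}                       # success: both keep only that word
            vocab[h] = {word}
        else:
            vocab[h].add(word)                      # failure: hearer adopts the word
    return vocab

population = naming_game()
```

    With two agents, convergence is guaranteed after the first successful interaction; with larger populations, competing invented words coexist for a while before one wins out, which is the self-organizing effect the experiment demonstrated at scale.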
