59,806 research outputs found

    Towards an Indexical Model of Situated Language Comprehension for Cognitive Agents in Physical Worlds

    We propose a computational model of situated language comprehension based on the Indexical Hypothesis that generates meaning representations by translating amodal linguistic symbols to modal representations of beliefs, knowledge, and experience external to the linguistic system. This Indexical Model incorporates multiple information sources, including perceptions, domain knowledge, and short-term and long-term experiences during comprehension. We show that exploiting diverse information sources can alleviate ambiguities that arise from contextual use of underspecific referring expressions and unexpressed argument alternations of verbs. The model is being used to support linguistic interactions in Rosie, an agent implemented in Soar that learns from instruction. Comment: Advances in Cognitive Systems 3 (2014).
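    To make the idea concrete, the sketch below shows one way such indexical grounding could work: an underspecified referring expression (e.g. "the large block") is scored against perceived objects by combining evidence from perception, domain knowledge, and recent interaction history. All names and the scoring scheme are illustrative assumptions, not the authors' actual Soar/Rosie implementation.

```python
# Hypothetical sketch of indexical reference resolution: ground a
# referring expression's constraints against modal (perceptual) object
# representations, supplemented by domain knowledge and salience from
# short-term experience. Illustrative only.
from dataclasses import dataclass

@dataclass
class PerceivedObject:
    obj_id: str
    properties: dict        # modal features from perception, e.g. {"size": "large"}
    salience: float = 0.0   # boosted by recent mention (short-term experience)

def resolve_reference(constraints, scene, domain_knowledge):
    """Return the best-matching object, or None if the expression stays ambiguous."""
    def score(obj):
        # Constraints satisfied directly by perception.
        matched = sum(obj.properties.get(k) == v for k, v in constraints.items())
        # Domain knowledge can supply properties perception did not observe.
        inferred = domain_knowledge.get(obj.obj_id, {})
        matched += sum(inferred.get(k) == v for k, v in constraints.items()
                       if k not in obj.properties)
        return matched + obj.salience
    if not scene:
        return None
    ranked = sorted(scene, key=score, reverse=True)
    if len(ranked) > 1 and score(ranked[0]) == score(ranked[1]):
        return None  # still ambiguous: the agent should ask for clarification
    return ranked[0]

scene = [PerceivedObject("b1", {"size": "large", "color": "red"}, salience=0.5),
         PerceivedObject("b2", {"size": "small", "color": "red"})]
print(resolve_reference({"size": "large"}, scene, domain_knowledge={}))  # -> b1
```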

    Reference and the facilitation of search in spatial domains

    This is a pre-final version of the article, whose official publication is expected in the winter of 2013-14. Peer reviewed. Preprint.

    A Review of Verbal and Non-Verbal Human-Robot Interactive Communication

    In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects of human-robot interaction. Following a historical introduction and a motivation for fluid human-robot communication, ten desiderata are proposed, which provide an organizing axis for both recent and future research on human-robot communication. The ten desiderata are then examined in detail, culminating in a unifying discussion and a forward-looking conclusion.

    Resolving Perception Based Problems in Human-Computer Dialogue

    We investigate the effect of sensor errors on situated human-computer dialogues. If a human user instructs a robot to perform a task in a spatial environment, errors in the robot's sensor-based perception of the environment may result in divergences between the user's and the robot's understanding of the environment. If the user and the robot communicate through a language-based interface, these problems may result in complex misunderstandings. In this work we investigate such situations. We set up a simulation-based scenario in which a human user instructs a robot to perform a series of manipulation tasks, such as lifting, moving and re-arranging simple objects. We induce errors into the robot's perception, such as misclassification of shapes and colours, and record and analyse the user's attempts to resolve the problems. We evaluate a set of methods to alleviate the problems by allowing the operator to access the robot's understanding of the scene. We investigate a uni-directional, language-based option based on automatically generated scene descriptions; a visually based option, in which the system highlights objects and provides known properties; and a dialogue-based assistance option, in which the participant can ask simple questions about the robot's perception of the scene. As a baseline condition we perform the experiment without introducing any errors. We evaluate and compare the successes and problems in all four conditions, and identify and compare the strategies the participants used in each condition. We find that the participants appreciate and use the information request options successfully, and that all options provide an improvement over the condition without information. We conclude that allowing the participants to access information about the robot's perception state is an effective way to resolve problems in the dialogue.
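    A minimal sketch of what the dialogue-based assistance option could look like follows: the operator asks simple questions about the robot's (possibly erroneous) perceived scene, exposing misclassifications so they can be resolved in dialogue. The scene contents, names, and toy question grammar are assumptions for illustration, not the paper's actual interface.

```python
# Hypothetical dialogue-based assistance: answer simple questions about
# the robot's *believed* scene, which may diverge from ground truth
# (e.g. a misclassified colour). Illustrative only.
robot_scene = {
    "object-1": {"shape": "cube", "color": "green"},
    "object-2": {"shape": "ball", "color": "red"},
}

def answer(question: str) -> str:
    """Answer questions of the form 'what <property> is <object>?'."""
    words = question.lower().rstrip("?").split()
    if len(words) == 4 and words[0] == "what" and words[2] == "is":
        prop, obj = words[1], words[3]
        if obj not in robot_scene:
            return f"I do not perceive any {obj}."
        value = robot_scene[obj].get(prop)
        if value is None:
            return f"I do not know the {prop} of {obj}."
        return f"I perceive {obj} as {value}."
    return "I can only answer questions like 'what color is object-1?'."

print(answer("what color is object-2?"))  # -> "I perceive object-2 as red."
```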