4,670 research outputs found

    Resolving Perception Based Problems in Human-Computer Dialogue

    We investigate the effect of sensor errors on situated human-computer dialogues. If a human user instructs a robot to perform a task in a spatial environment, errors in the robot's sensor-based perception of the environment may result in divergences between the user's and the robot's understanding of the environment. If the user and the robot communicate through a language-based interface, these problems may result in complex misunderstandings. In this work we investigate such situations. We set up a simulation-based scenario in which a human user instructs a robot to perform a series of manipulation tasks, such as lifting, moving and re-arranging simple objects. We induce errors into the robot's perception, such as misclassification of shapes and colours, and record and analyse the user's attempts to resolve the problems. We evaluate a set of methods to alleviate the problems by allowing the operator to access the robot's understanding of the scene. We investigate a uni-directional language-based option, which is based on automatically generated scene descriptions; a visually based option, in which the system highlights objects and provides known properties; and a dialogue-based assistance option, in which the participant can ask simple questions about the robot's perception of the scene. As a baseline condition we perform the experiment without introducing any errors. We evaluate and compare the successes and problems in all four conditions, and identify and compare the strategies the participants used in each condition. We find that the participants appreciate and use the information request options successfully, and that all options provide an improvement over the condition without information. We conclude that allowing the participants to access information about the robot's perception state is an effective way to resolve problems in the dialogue.
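
    A minimal sketch of the kind of perception-error injection the abstract describes: shapes and colours in a simulated scene are misclassified at a chosen rate before the robot reasons over them. The object representation, attribute sets and error model are illustrative assumptions, not the authors' actual setup.

    import random

    # Hypothetical attribute vocabularies for the simulated scene.
    SHAPES = ["cube", "sphere", "cylinder"]
    COLOURS = ["red", "green", "blue"]

    def perturb_scene(scene, error_rate=0.2, rng=random):
        """Return a copy of the scene in which shapes and colours are
        misclassified at the given rate, simulating sensor errors."""
        noisy = []
        for obj in scene:
            obj = dict(obj)
            if rng.random() < error_rate:
                obj["shape"] = rng.choice([s for s in SHAPES if s != obj["shape"]])
            if rng.random() < error_rate:
                obj["colour"] = rng.choice([c for c in COLOURS if c != obj["colour"]])
            noisy.append(obj)
        return noisy

    ground_truth = [{"id": 1, "shape": "cube", "colour": "red"},
                    {"id": 2, "shape": "sphere", "colour": "blue"}]
    robot_view = perturb_scene(ground_truth, error_rate=0.3)
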

    The Role of Perception in Situated Spatial Reference

    This position paper sets out the argument that an interesting avenue for the exploration and study of universals and variation in spatial reference is to address this topic in terms of the universals in human perception and attention, and to explore how these universals impact spatial reference across cultures and languages.

    Learning how to learn: an adaptive dialogue agent for incrementally learning visually grounded word meanings

    We present an optimised multi-modal dialogue agent for interactive learning of visually grounded word meanings from a human tutor, trained on real human-human tutoring data. Within a life-long interactive learning period, the agent, trained using Reinforcement Learning (RL), must be able to handle natural conversations with human users and achieve good learning performance (accuracy) while minimising human effort in the learning process. We train and evaluate this system in interaction with a simulated human tutor, which is built on the BURCHAK corpus -- a human-human dialogue dataset for the visual learning task. The results show that: 1) the learned policy can coherently interact with the simulated user to achieve the goal of the task (i.e. learning visual attributes of objects, e.g. colour and shape); and 2) it finds a better trade-off between classifier accuracy and tutoring costs than hand-crafted rule-based policies, including ones with dynamic policies. Comment: 10 pages, RoboNLP Workshop at the ACL conference
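
    A hedged sketch of the accuracy-versus-tutoring-cost trade-off such an RL policy optimises: each dialogue action is rewarded for improving the visual classifier and penalised for the effort it imposes on the tutor. The action names, costs and weighting below are assumptions for illustration, not the paper's actual reward specification.

    # Hypothetical per-turn costs of tutoring actions (human effort).
    ACTION_COSTS = {
        "ask_colour": 1.0,
        "ask_shape": 1.0,
        "request_confirmation": 0.5,
        "no_question": 0.0,
    }

    def turn_reward(accuracy_gain, action, cost_weight=0.3):
        """Reward an improvement in classifier accuracy, penalised by
        the tutoring effort the chosen dialogue action requires."""
        return accuracy_gain - cost_weight * ACTION_COSTS[action]

    # Example: asking about colour improved held-out accuracy by 2 points.
    r = turn_reward(accuracy_gain=0.02, action="ask_colour")
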

    Markerless Vision-Based Skeleton Tracking in Therapy of Gross Motor Skill Disorders in Children

    This chapter presents research towards the implementation of a computer vision system for markerless skeleton tracking in the therapy of gross motor skill disorders in children suffering from mild cognitive impairment. The proposed system is based on a low-cost 3D sensor and skeleton tracking software. The envisioned architecture is scalable in the sense that the system may be used as a stand-alone assistive tool for tracking the effects of therapy, or it may be integrated with an advanced autonomous conversational agent to maintain the spatial attention of the child and to increase her motivation to undergo long-term therapy.
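
    Once a markerless tracker provides 3D joint positions, therapy progress can be quantified from simple kinematic measures such as joint angles. The sketch below assumes generic joint coordinates and no particular sensor SDK; names and values are illustrative only.

    import numpy as np

    def joint_angle(a, b, c):
        """Angle (degrees) at joint b formed by the segments b->a and b->c."""
        a, b, c = map(np.asarray, (a, b, c))
        u, v = a - b, c - b
        cos_angle = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

    # Illustrative tracked positions (metres) for one frame.
    shoulder, elbow, wrist = [0.0, 1.4, 0.2], [0.25, 1.15, 0.2], [0.45, 1.3, 0.2]
    print(f"Elbow flexion: {joint_angle(shoulder, elbow, wrist):.1f} deg")
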