Learning how to learn: an adaptive dialogue agent for incrementally learning visually grounded word meanings
We present an optimised multi-modal dialogue agent for interactive learning
of visually grounded word meanings from a human tutor, trained on real
human-human tutoring data. Within a life-long interactive learning period, the
agent, trained using Reinforcement Learning (RL), must be able to handle
natural conversations with human users and achieve good learning performance
(accuracy) while minimising human effort in the learning process. We train and
evaluate this system in interaction with a simulated human tutor, which is
built on the BURCHAK corpus -- a Human-Human Dialogue dataset for the visual
learning task. The results show that: 1) The learned policy can coherently
interact with the simulated user to achieve the goal of the task (i.e. learning
visual attributes of objects, e.g. colour and shape); and 2) it finds a better
trade-off between classifier accuracy and tutoring costs than hand-crafted
rule-based policies, including dynamic ones.

Comment: 10 pages, RoboNLP Workshop at the ACL Conference
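The accuracy-vs-cost trade-off that the learned policy optimises can be sketched as a toy two-action bandit. The cost and gain values, action names, and update rule below are illustrative assumptions, not the paper's actual RL formulation:

```python
import random

random.seed(0)  # deterministic for the sketch

# Toy two-action bandit: is querying the tutor worth its cost?
# COST and GAIN are made-up numbers, not values from the paper.
ASK, GUESS = 0, 1
COST = 0.2   # assumed per-query tutoring cost
GAIN = 0.5   # assumed accuracy gain from a tutor answer

q = [0.0, 0.0]   # running action-value estimates
counts = [0, 0]

def step(action: int) -> float:
    """Reward trades learning gain against human tutoring effort."""
    return (GAIN - COST) if action == ASK else 0.0

for _ in range(500):
    # epsilon-greedy: mostly exploit, occasionally explore
    if random.random() < 0.1:
        a = random.choice([ASK, GUESS])
    else:
        a = ASK if q[ASK] >= q[GUESS] else GUESS
    r = step(a)
    counts[a] += 1
    q[a] += (r - q[a]) / counts[a]   # incremental mean update

# Because GAIN exceeds COST here, the agent learns to prefer asking;
# flip the two constants and it learns to stop bothering the tutor.
```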
Multimodal and ubiquitous computing systems: supporting independent-living older users
We document the rationale and design of a multimodal interface to a pervasive/ubiquitous computing system that supports independent living by older people in their own homes. The Millennium Home system involves fitting a resident's home with sensors; these sensors can be used to trigger sequences of interaction with the resident to warn them about dangerous events, or to check if they need external help. We draw lessons from the design process and conclude the paper with implications for the design of multimodal interfaces to ubiquitous systems developed for the elderly and in healthcare, as well as for more general ubiquitous computing applications.
Towards Multi-Modal Interactions in Virtual Environments: A Case Study
We present research on visualization and interaction in a realistic model of an existing theatre. This existing 'Muziekcentrum' offers its visitors information about performances by means of a yearly brochure. In addition, it is possible to get information at an information desk in the theatre (during office hours), or to get information by phone (by talking to a human or by using IVR). The database of the theatre holds the information that is available at the beginning of the 'theatre season'. Our aim is to make this information more accessible by using multi-modal accessible multi-media web pages. A more general aim is to do research in the area of web-based services, in particular interactions in virtual environments.
Exploring miscommunication and collaborative behaviour in human-robot interaction
This paper presents the first step in designing a speech-enabled robot that is capable of natural management of miscommunication. It describes the methods and results of two WOz studies, in which dyads of naïve participants interacted in a collaborative task. The first WOz study explored human miscommunication management. The second study investigated how shared visual space and monitoring shape the processes of feedback and communication in task-oriented interactions. The results provide insights for the development of human-inspired and robust natural language interfaces in robots.
Training an adaptive dialogue policy for interactive learning of visually grounded word meanings
We present a multi-modal dialogue system for interactive learning of
perceptually grounded word meanings from a human tutor. The system integrates
an incremental, semantic parsing/generation framework - Dynamic Syntax and Type
Theory with Records (DS-TTR) - with a set of visual classifiers that are
learned throughout the interaction and which ground the meaning representations
that it produces. We use this system in interaction with a simulated human
tutor to study the effects of different dialogue policies and capabilities on
the accuracy of learned meanings, learning rates, and efforts/costs to the
tutor. We show that the overall performance of the learning agent is affected
by (1) who takes initiative in the dialogues; (2) the ability to express/use
their confidence level about visual attributes; and (3) the ability to process
elliptical and incrementally constructed dialogue turns. Ultimately, we train
an adaptive dialogue policy which optimises the trade-off between classifier
accuracy and tutoring costs.

Comment: 11 pages, SIGDIAL 2016 Conference
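Capability (2) above, acting on a confidence level about visual attributes, can be sketched as a simple thresholded dialogue move. The threshold value and move names are hypothetical, not taken from the paper:

```python
# Hypothetical confidence-thresholded dialogue move; the threshold
# value and move names are illustrative, not the system's actual ones.
def choose_move(confidence: float, threshold: float = 0.8) -> str:
    """Assert an attribute when confident; otherwise query the tutor."""
    return "assert_attribute" if confidence >= threshold else "ask_tutor"
```

Lowering the threshold reduces tutoring cost but risks more incorrect assertions; raising it does the opposite, which is exactly the trade-off an adaptive policy can tune.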
Combining economic and social goals in the design of production systems by using ergonomics standards
In the design of production systems, economic and social goals can be combined if ergonomics is integrated into the design process. More than 50 years of ergonomics research and practice have resulted in a large number of ergonomics standards for designing physical and organizational work environments. This paper gives an overview of the 174 international ISO and European CEN standards in this field, and discusses their applicability in design processes. The available standards include general recommendations for integrating ergonomics into the design process, as well as specific requirements for manual handling, mental load, task design, human-computer interaction, noise, heat, body measurements, and other topics. The standards can be used in different phases of the design process: allocation of system functions between humans and machines; design of the work organization, work tasks, and jobs; design of the work environment; design of work equipment, hardware, and software; and design of workspace and workstation. The paper is meant to inform engineers and managers involved in the design of production systems about the existence of a large number of ISO and CEN standards on ergonomics, which can be used to optimize human well-being and overall system performance.

Keywords: review; standard; standardization; ergonomics; CEN; ISO; human factors; production engineering; production planning