Learning how to learn: an adaptive dialogue agent for incrementally learning visually grounded word meanings
We present an optimised multi-modal dialogue agent for interactive learning
of visually grounded word meanings from a human tutor, trained on real
human-human tutoring data. Within a life-long interactive learning period, the
agent, trained using Reinforcement Learning (RL), must be able to handle
natural conversations with human users and achieve good learning performance
(accuracy) while minimising human effort in the learning process. We train and
evaluate this system in interaction with a simulated human tutor, which is
built on the BURCHAK corpus -- a Human-Human Dialogue dataset for the visual
learning task. The results show that: 1) The learned policy can coherently
interact with the simulated user to achieve the goal of the task (i.e. learning
visual attributes of objects, e.g. colour and shape); and 2) it finds a better
trade-off between classifier accuracy and tutoring costs than hand-crafted
rule-based policies, including ones with dynamic policies.
Comment: 10 pages, RoboNLP Workshop at the ACL Conference
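The trade-off the abstract above describes, classifier accuracy versus tutoring cost, can be illustrated with a toy single-state Q-learning loop against a simulated tutor. Everything here (the two actions, the cost weight, the knowledge-update rule) is an invented illustration of the general idea, not the paper's actual environment or reward design:

```python
import random

# Toy sketch of the accuracy-vs-tutoring-cost trade-off.
# Actions, reward weights, and the tutor model are illustrative
# assumptions, not taken from the paper.

random.seed(0)

ACTIONS = ["ask_tutor", "guess"]  # asking costs tutor effort; guessing risks errors
COST_ASK = 0.3                    # tutoring-effort penalty (assumed)

def step(knowledge, action):
    """Simulated tutor interaction: asking teaches reliably but incurs
    a fixed effort cost; guessing succeeds with probability equal to
    the agent's current knowledge."""
    if action == "ask_tutor":
        return min(1.0, knowledge + 0.1), 1.0 - COST_ASK
    correct = random.random() < knowledge
    return knowledge, 1.0 if correct else 0.0

q = {a: 0.0 for a in ACTIONS}  # single-state Q-table for simplicity
alpha, epsilon = 0.1, 0.2
knowledge = 0.2

for _ in range(2000):
    # epsilon-greedy action selection
    a = random.choice(ACTIONS) if random.random() < epsilon else max(q, key=q.get)
    knowledge, reward = step(knowledge, a)
    q[a] += alpha * (reward - q[a])

print({a: round(v, 2) for a, v in q.items()})
```

Early on, asking is the better action because guesses mostly fail; once enough has been learned, guessing avoids the tutoring cost and the policy shifts, which is the kind of dynamic trade-off the paper's learned policy captures.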
Towards the Use of Dialog Systems to Facilitate Inclusive Education
Continuous advances in the development of information technologies have currently led to the possibility
of accessing learning contents from anywhere, at anytime, and almost instantaneously. However,
accessibility is not always a main objective in the design of educational applications, particularly with
regard to facilitating their adoption by people with disabilities. Different technologies have recently emerged to foster the
accessibility of computers and new mobile devices, favoring more natural communication between
the student and the educational systems developed. This chapter describes innovative uses of multimodal
dialog systems in education, with special emphasis on the advantages that they provide for creating
inclusive applications and learning activities.
Collaborative trails in e-learning environments
This deliverable focuses on collaboration within groups of learners, and hence collaborative trails. We begin by reviewing the theoretical background to collaborative learning and looking at the kinds of support that computers can give to groups of learners working collaboratively, and then look more deeply at some of the issues in designing environments to support collaborative learning trails and at tools and techniques, including collaborative filtering, that can be used for analysing collaborative trails. We then review the state of the art in supporting collaborative learning in three different areas: experimental academic systems, systems using mobile technology (which are also generally academic), and commercially available systems. The final part of the deliverable presents three scenarios that show where technology that supports groups working collaboratively and producing collaborative trails may be heading in the near future.
On the Development of Adaptive and User-Centred Interactive Multimodal Interfaces
Multimodal systems have attracted increased attention in recent years, which has made possible important
improvements in the technologies for recognition, processing, and generation of multimodal information.
However, many issues related to multimodality remain unclear, for example the
principles that make it possible to emulate human-human multimodal communication. This chapter
focuses on some of the most important challenges that researchers have recently envisioned for future
multimodal interfaces. It also describes current efforts to develop intelligent, adaptive, proactive, portable
and affective multimodal interfaces.
Genisa: A web-based interactive learning environment for teaching simulation modelling
Intelligent Tutoring Systems (ITS) provide students with adaptive instruction and can facilitate the acquisition of problem-solving skills in an interactive environment. This paper discusses the role of pedagogical strategies that have been implemented to facilitate the development of simulation modelling knowledge. The learning environment integrates case-based reasoning with interactive tools to guide tutorial remediation. The evaluation of the system shows that the model for pedagogical activities is a useful method for providing efficient simulation modelling instruction.
From mirroring to guiding: A review of state of the art technology for supporting collaborative learning
We review systems that support the management of collaborative interaction, and propose a classification framework built on a simple model of coaching. Our framework distinguishes between mirroring systems, which display basic actions to collaborators; metacognitive tools, which represent the state of interaction via a set of key indicators; and coaching systems, which offer advice based on an interpretation of those indicators. The reviewed systems are further characterized by the type of interaction data they assimilate, the processes they use for deriving higher-level data representations, and the type of feedback they provide to users.
The BURCHAK corpus: a Challenge Data Set for Interactive Learning of Visually Grounded Word Meanings
We motivate and describe a new freely available human-human dialogue dataset
for interactive learning of visually grounded word meanings through ostensive
definition by a tutor to a learner. The data has been collected using a novel,
character-by-character variant of the DiET chat tool (Healey et al., 2003;
Mills and Healey, submitted) with a novel task, where a Learner needs to learn
invented visual attribute words (such as "burchak" for square) from a tutor.
As such, the text-based interactions closely resemble face-to-face conversation
and thus contain many of the linguistic phenomena encountered in natural,
spontaneous dialogue. These include self- and other-correction, mid-sentence
continuations, interruptions, overlaps, fillers, and hedges. We also present a
generic n-gram framework for building user (i.e. tutor) simulations from this
type of incremental data, which is freely available to researchers. We show
that the simulations produce outputs that are similar to the original data
(e.g. 78% turn match similarity). Finally, we train and evaluate a
Reinforcement Learning dialogue control agent for learning visually grounded
word meanings, trained from the BURCHAK corpus. The learned policy shows
comparable performance to a rule-based system built previously.
Comment: 10 pages, The 6th Workshop on Vision and Language (VL'17)
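The generic n-gram framework for building tutor simulations that the BURCHAK abstract mentions can be sketched, in its simplest bigram form, as counting transitions between successive tutor dialogue acts and sampling the next act from the resulting conditional distribution. The dialogue-act labels and training sequences below are invented for illustration; they are not drawn from the BURCHAK data:

```python
import random
from collections import defaultdict

# Minimal bigram user-simulation sketch in the spirit of the n-gram
# framework described above. Acts and dialogues are invented examples.

random.seed(1)

training_dialogues = [
    ["greet", "inform_colour", "ask_shape", "confirm", "bye"],
    ["greet", "ask_shape", "inform_shape", "confirm", "bye"],
    ["greet", "inform_colour", "confirm", "bye"],
]

# Count bigram transitions between successive tutor dialogue acts.
counts = defaultdict(lambda: defaultdict(int))
for dialogue in training_dialogues:
    for prev, nxt in zip(dialogue, dialogue[1:]):
        counts[prev][nxt] += 1

def sample_next(prev_act):
    """Sample the simulated tutor's next act from P(next | prev),
    proportional to observed bigram counts."""
    nexts = counts[prev_act]
    total = sum(nexts.values())
    r = random.uniform(0, total)
    for act, c in nexts.items():
        r -= c
        if r <= 0:
            return act
    return act  # float-rounding fallback: last act

print(sample_next("greet"))
```

A real simulation over incremental, character-by-character data would condition on finer-grained units and longer histories, but the sample-from-counts mechanism is the same.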