Survey on Evaluation Methods for Dialogue Systems
In this paper we survey the methods and concepts developed for the evaluation
of dialogue systems. Evaluation is a crucial part of the development process.
Dialogue systems are often evaluated by means of human judgements and
questionnaires, but this tends to be very cost- and time-intensive. Much work
has therefore gone into finding methods that reduce the need for human labour.
In this survey, we present the main concepts and methods, differentiating
between the various classes of dialogue systems (task-oriented, conversational,
and question-answering dialogue systems). For each class, we introduce the main
technologies developed for its dialogue systems and then present the evaluation
methods relevant to that class.
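One family of methods that reduces the need for human judges scores a system response against a human reference reply. As a minimal sketch, a generic word-overlap F1 metric is shown below; the scoring scheme is an illustrative example, not one prescribed by the survey.

```python
# Word-overlap F1 between a system response and a reference reply:
# a simple automatic, reference-based evaluation metric.
from collections import Counter

def overlap_f1(response: str, reference: str) -> float:
    """F1 of token overlap between a system response and a reference."""
    resp = Counter(response.lower().split())
    ref = Counter(reference.lower().split())
    common = sum((resp & ref).values())  # multiset intersection
    if common == 0:
        return 0.0
    precision = common / sum(resp.values())
    recall = common / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = overlap_f1("the train leaves at 5 pm", "the train departs at 5 pm")
```

Such surface metrics are cheap to compute but correlate only loosely with human judgements, which is why the survey's distinction between system classes matters for choosing an evaluation method.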
A Computer-Based Method to Improve the Spelling of Children with Dyslexia
In this paper we present a method that aims to improve the spelling of
children with dyslexia through playful, targeted exercises. In contrast to
previous approaches, our method does not use correct words or positive examples
to follow, but instead presents the child with a misspelled word as an exercise
to solve. We created these training exercises on the basis of linguistic
knowledge extracted from the errors found in texts written by children with
dyslexia. To test the effectiveness of this method in Spanish, we integrated
the exercises into an iPad game, DysEggxia (Piruletras in Spanish), and carried
out a within-subject experiment. Over eight weeks, 48 children played either
DysEggxia or Word Search, another word game. We conducted tests and
questionnaires at the beginning of the study, after four weeks when the games
were switched, and at the end of the study. The children who played DysEggxia
for four weeks in a row made significantly fewer writing errors in the tests
than after playing Word Search for the same time. This provides evidence that
error-based exercises presented on a tablet help children with dyslexia improve
their spelling skills.
Comment: 8 pages, ASSETS'14, October 20-22, 2014, Rochester, NY, US
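The core idea of generating an exercise from known error patterns can be sketched as follows. The substitution rules and the example word here are illustrative assumptions, not the actual rules or vocabulary used in DysEggxia.

```python
# Hypothetical sketch of error-based exercise generation: apply one
# error rule (of the kind extracted from dyslexic children's texts)
# to a correct word, producing a misspelled word for the child to fix.
import random

# Illustrative (correct fragment, common misspelling) rules for Spanish.
ERROR_RULES = [("b", "v"), ("ll", "y"), ("h", "")]

def make_exercise(word: str, rng: random.Random) -> str:
    """Return a misspelled version of `word` for the child to correct."""
    applicable = [(c, w) for c, w in ERROR_RULES if c in word]
    if not applicable:
        return word  # no rule applies; leave the word unchanged
    correct, wrong = rng.choice(applicable)
    return word.replace(correct, wrong, 1)

exercise = make_exercise("caballo", random.Random(0))
```

Presenting the child with `exercise` and asking for the correction inverts the usual drill: the training target is the error itself rather than a positive example.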
Robot Composite Learning and the Nunchaku Flipping Challenge
Advanced motor skills are essential for robots to physically coexist with
humans. Much research on robot dynamics and control has achieved success on
advanced robot motor capabilities, but mostly through heavily case-specific
engineering. Meanwhile, in terms of robots acquiring skills in a ubiquitous
manner, robot learning from human demonstration (LfD) has made great progress,
but still has limitations in handling dynamic skills and compound actions. In
this paper, we present a composite learning scheme that goes beyond LfD and
integrates robot learning from human definition, demonstration, and
evaluation. The method tackles advanced motor skills that require dynamic,
time-critical maneuvers, complex contact control, and the handling of partly
soft, partly rigid objects. We also introduce the "nunchaku flipping
challenge", an extreme test that puts hard requirements on all three of these
aspects. Continuing from our previous presentations, this paper introduces the
latest update of the composite learning scheme and the physical success of the
nunchaku flipping challenge.
Towards QoE-Driven Optimization of Multi-Dimensional Content Streaming
Whereas adaptive video streaming for 2D video is well established and frequently used in streaming services, adaptation for emerging higher-dimensional content, such as point clouds, is still a research issue. Moreover, how to optimize resource usage in streaming services that support multiple content types of different dimensions and levels of interactivity has so far not been sufficiently studied. Learning-based approaches aim to optimize the streaming experience according to user needs. They predict quality metrics and try to find system parameters maximizing them given the current network conditions. With this paper, we show how to approach content and network adaptation driven by Quality of Experience (QoE) for multi-dimensional content. We describe the components required to create a system that adapts multiple streams of different content types simultaneously, identify research gaps, and propose potential next steps.
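The predict-then-maximize loop the abstract describes can be sketched with a stand-in QoE predictor. The linear model, parameter grids, and names below are illustrative assumptions; a real system would use a trained predictor and measured network state.

```python
# QoE-driven adaptation sketch: predict a QoE score for each candidate
# configuration under the current network conditions, then pick the best.

def predict_qoe(bitrate_mbps: float, point_density: float,
                bandwidth_mbps: float) -> float:
    """Toy QoE model: quality grows with bitrate and point-cloud density,
    but collapses when the bitrate exceeds the available bandwidth."""
    if bitrate_mbps > bandwidth_mbps:
        return 0.0  # stalling dominates any quality gain
    return 0.6 * bitrate_mbps + 0.4 * point_density

def choose_parameters(bandwidth_mbps: float,
                      bitrates=(2.0, 5.0, 10.0),
                      densities=(0.3, 0.6, 1.0)):
    """Pick the (bitrate, density) pair maximizing predicted QoE."""
    candidates = [(b, d) for b in bitrates for d in densities]
    return max(candidates, key=lambda p: predict_qoe(*p, bandwidth_mbps))

best = choose_parameters(bandwidth_mbps=6.0)
```

Supporting multiple content types then amounts to running this selection jointly across streams under a shared bandwidth budget, which is where the open research questions lie.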
Artificial intelligence approaches for the generation and assessment of believable human-like behaviour in virtual characters
Having artificial agents autonomously produce human-like behaviour is one of the most ambitious original goals of Artificial Intelligence (AI) and remains an open problem today. The imitation game originally proposed by Turing constitutes a very effective method to prove the indistinguishability of an artificial agent. The behaviour of an agent is said to be indistinguishable from that of a human when observers (the so-called judges in the Turing test) cannot tell humans and non-human agents apart. Different environments, testing protocols, scopes and problem domains can be established to develop limited versions or variants of the original Turing test. In this paper we use a specific version of the Turing test, based on the international BotPrize competition, built in a First-Person Shooter video game, where both human players and non-player characters interact in complex virtual environments. Based on our past experience both in the BotPrize competition and in other robotics and computer game AI applications, we have developed three new, more advanced controllers for believable agents: two based on a combination of the CERA-CRANIUM and SOAR cognitive architectures, and a third based on ADANN, a system for the automatic evolution and adaptation of artificial neural networks. These new agents have been put to the test jointly with CCBot3, the winner of the BotPrize 2010 competition [1], and have shown a significant improvement in the humanness ratio. Additionally, we have subjected all these bots to both first-person believability assessment (the original BotPrize judging protocol) and third-person believability assessment, demonstrating that the active involvement of the judge has a great impact on the recognition of human-like behaviour.
MICINN - Ministerio de Ciencia e Innovación (FCT-13-7848
A Virtual Laboratory for the Study of History and Cultural Dynamics
This article presents a Virtual Laboratory that enables the researcher to test hypotheses and confirm data analyses about different historical processes and cultural dynamics. The Virtual Cultural Laboratory (VCL) is developed using agent-based modeling technology. Individuals' tendencies and preferences, as well as the behavior of cultural objects in the transformation of cultural information, are taken into consideration. In addition, the effect of local interactions at different scales over time and space is visualized through the VCL interface. Information repositories, cultural items, borders, population size, individuals' tendencies and other features are determined by the user. Finally, the researcher can also isolate specific factors whose effect on the global system might be of interest. All the code can be found at http://projects.cultureplex.ca/
Keywords: Cultural Dynamics, Cultural Complexity, Multi-Agent Based Simulation, Netlogo, Virtual Laboratory
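The agent-based mechanism at the heart of such a laboratory can be sketched in a few lines: agents repeatedly copy cultural traits from neighbours, so local interactions gradually reshape the global distribution of cultures. This toy voter-style model is an illustrative assumption, not the VCL's actual NetLogo code.

```python
# Minimal agent-based cultural-dynamics sketch: agents on a ring copy
# one trait from their right-hand neighbour per interaction.
import random

def simulate(n_agents=10, n_traits=3, n_values=5, steps=2000, seed=1):
    rng = random.Random(seed)
    # Each agent's culture is a vector of trait values.
    agents = [[rng.randrange(n_values) for _ in range(n_traits)]
              for _ in range(n_agents)]
    for _ in range(steps):
        i = rng.randrange(n_agents)
        j = (i + 1) % n_agents           # interact with a neighbour
        t = rng.randrange(n_traits)
        agents[i][t] = agents[j][t]      # copy one cultural trait
    return agents

final = simulate()
distinct_cultures = len({tuple(a) for a in final})
```

Varying population size, interaction range, or borders, as the VCL lets the user do, changes how quickly (and whether) such a system homogenizes.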
Progressive co-adaptation in human-machine interaction
In this paper we discuss the concept of co-adaptation between a human operator and a machine interface, and we summarize its application with emphasis on two different domains, teleoperation and assistive technology. The analysis of the literature reveals that only in a few cases has the possibility of a temporal evolution of the co-adaptation parameters been considered. In particular, the role of time-related indexes that capture changes in the motor and cognitive abilities of the human operator has been overlooked. We argue that, for a more effective long-term co-adaptation process, the interface should be able to predict and adjust its parameters according to the evolution of human skills and performance. We thus propose a novel approach termed progressive co-adaptation, whereby human performance is continuously monitored and the system makes inferences about changes in the user's cognitive and motor skills. We illustrate the features of progressive co-adaptation in two possible applications, robotic telemanipulation and active vision for the visually impaired.
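The monitor-infer-adjust loop can be sketched as follows. The exponential-moving-average skill estimate and the assistance-gain update rule are illustrative assumptions, not the paper's actual controller.

```python
# Progressive co-adaptation sketch: track a running estimate of the
# operator's performance and scale machine assistance down as skill grows.

class CoAdaptiveInterface:
    def __init__(self, assist_gain=1.0, alpha=0.2):
        self.assist_gain = assist_gain   # how strongly the machine assists
        self.skill_estimate = 0.0        # EMA of observed performance in [0, 1]
        self.alpha = alpha               # EMA smoothing factor

    def observe(self, performance: float) -> None:
        """Update the skill estimate from a new performance measurement."""
        self.skill_estimate += self.alpha * (performance - self.skill_estimate)
        # As estimated skill grows, progressively reduce assistance,
        # keeping a small floor so the interface never fully disengages.
        self.assist_gain = max(0.1, 1.0 - self.skill_estimate)

ui = CoAdaptiveInterface()
for p in [0.2, 0.4, 0.6, 0.8, 0.9]:   # operator improving across sessions
    ui.observe(p)
```

The temporal element the abstract emphasizes lives in the repeated `observe` calls: the interface parameters follow the evolution of the operator's skill rather than being set once.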