9,631 research outputs found
Survey on Evaluation Methods for Dialogue Systems
In this paper we survey the methods and concepts developed for the
evaluation of dialogue systems. Evaluation is a crucial part of the
development process. Often, dialogue systems are evaluated by means of
human judgements and questionnaires; however, this tends to be very cost-
and time-intensive. Thus, much work has gone into finding methods that
reduce the involvement of human labour. In this survey, we present the
main concepts and methods. To this end, we differentiate between the
various classes of dialogue systems (task-oriented, conversational, and
question-answering dialogue systems). For each class, we introduce the
main technologies developed for its dialogue systems and then present the
evaluation methods applied to that class.
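As an illustration of the kind of automatic method such a survey covers, a simple word-overlap metric can score a system response against a human reference without a human judge. The sketch below (the metric choice and the example sentences are my own illustration, not taken from the survey) computes unigram F1:

```python
# Minimal sketch of an automatic word-overlap metric (unigram F1) for a
# dialogue response against a human reference. Illustrative only: real
# evaluations use more robust metrics and many references.
from collections import Counter

def unigram_f1(response: str, reference: str) -> float:
    """Harmonic mean of unigram precision and recall."""
    resp = Counter(response.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((resp & ref).values())  # shared tokens, counted once per match
    if overlap == 0:
        return 0.0
    precision = overlap / sum(resp.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(unigram_f1("the hotel is booked for friday",
                       "your hotel is booked for friday"), 3))  # prints 0.833
```

Metrics of this shape are cheap to run at scale, which is exactly the labour reduction the abstract refers to, though they correlate only loosely with human judgements.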
CASP-DM: Context Aware Standard Process for Data Mining
We propose an extension of the Cross-Industry Standard Process for Data
Mining (CRISP-DM) which addresses specific challenges of machine learning
and data mining, namely the handling of context and model reuse. This new,
general context-aware process model is mapped onto the CRISP-DM reference
model, proposing some new or enhanced outputs.
Using Technology to Encourage Self-Directed Learning: The Collaborative Lecture Annotation System
The rapidly developing 21st-century world of work and knowledge calls for lifelong, self-directed learning (SDL). While higher education must embrace the types of pedagogies that foster SDL skills in graduates, the pace of change in education can be glacial. This paper describes a social annotation technology, the Collaborative Lecture Annotation System (CLAS), that can be used to leverage existing teaching and learning practices for the acquisition of 21st-century SDL skills. CLAS was designed to build upon the artifacts of traditional didactic modes of teaching, create enriched opportunities for student engagement with peers and learning materials, and offer learners greater control and ownership of their individual learning strategies. Adoption of CLAS creates educational experiences that promote and foster SDL skills: motivation, self-management and self-monitoring. In addition, CLAS incorporates a suite of learning analytics that lets learners evaluate their progress and allows instructors to monitor the development of SDL skills and identify the need for learning support and guidance. CLAS stands as an example of a simple tool that can bridge the gap between traditional transmissive pedagogy and the creation of authentic and collaborative learning spaces.
A Survey on Compiler Autotuning using Machine Learning
Since the mid-1990s, researchers have been trying to use machine-learning
based approaches to solve a number of different compiler optimization problems.
These techniques primarily enhance the quality of the obtained results and,
more importantly, make it feasible to tackle two main compiler optimization
problems: optimization selection (choosing which optimizations to apply) and
phase-ordering (choosing the order of applying optimizations). The compiler
optimization space continues to grow due to the advancement of applications,
increasing number of compiler optimizations, and new target architectures.
Generic optimization passes in compilers cannot fully leverage newly introduced
optimizations and, therefore, cannot keep up with the pace of increasing
options. This survey summarizes and classifies the recent advances in using
machine learning for the compiler optimization field, particularly on the two
major problems of (1) selecting the best optimizations and (2) the
phase-ordering of optimizations. The survey highlights the approaches taken
so far, the obtained results, the fine-grained classification among
different approaches and, finally, the influential papers of the field.
Comment: version 5.0 (updated September 2018). Preprint version of our
accepted article at ACM CSUR 2018 (42 pages). This survey will be updated
quarterly (send me your newly published papers to be added in a subsequent
version). History: Received November 2016; Revised August 2017; Revised
February 2018; Accepted March 2018.
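To make the phase-ordering problem concrete, the sketch below runs a random search over candidate pass orderings. Everything here is invented for illustration (the pass names and the toy cost model are my own); a real autotuner would compile the program with each ordering and measure its runtime or code size, often guiding the search with a learned model rather than pure random sampling:

```python
# Hypothetical sketch of phase-ordering search: random search over
# permutations of compiler passes, scored by a stand-in cost function.
import random

PASSES = ["inline", "unroll", "vectorize", "dce", "licm"]

def cost(ordering):
    # Toy cost model standing in for a real compile-and-measure step:
    # pretend "inline" early and "dce" late are beneficial.
    return ordering.index("inline") + (len(ordering) - 1 - ordering.index("dce"))

def random_search(trials=200, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        cand = PASSES[:]
        rng.shuffle(cand)  # sample a random phase ordering
        if best is None or cost(cand) < cost(best):
            best = cand
    return best

best = random_search()
print(best, cost(best))
```

Even this naive search illustrates why the problem is hard: the space of orderings grows factorially with the number of passes, which is what motivates the machine-learning approaches the survey classifies.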
Conducting A/B Experiments with a Scalable Architecture
A/B experiments are commonly used in research to compare the effects of
changing one or more variables in two different experimental groups - a control
group and a treatment group. While the benefits of using A/B experiments are
widely known and accepted, there is less agreement on a principled approach to
creating software infrastructure systems to assist in rapidly conducting such
experiments. We propose a four-principle approach for developing a software
architecture to support A/B experiments that is domain agnostic and can help
alleviate some of the resource constraints currently needed to successfully
implement these experiments: the software architecture must (i) retain the
typical properties of A/B experiments, (ii) capture problem-solving
activities and outcomes, (iii) allow researchers to understand the behavior
and outcomes of participants in the experiment, and (iv) enable automated
analysis. We successfully developed a software system that encapsulates
these principles and implemented it in a real-world A/B experiment.
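As a loose illustration of two of those principles, stable group assignment (i) and machine-readable capture for automated analysis (iv), the sketch below hashes a participant ID into an arm and appends outcomes to an event log. All names, the 50/50 split, and the log schema are my own assumptions, not the paper's design:

```python
# Hypothetical sketch: deterministic assignment of participants to
# control/treatment arms plus an append-only event log that downstream
# automated analysis could consume.
import hashlib

def assign(user_id: str, experiment: str) -> str:
    """Stable assignment: the same user+experiment always maps to the same arm."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 2 else "control"

log = []  # append-only event log for later automated analysis

def record(user_id, experiment, event, outcome=None):
    log.append({"user": user_id, "arm": assign(user_id, experiment),
                "event": event, "outcome": outcome})

record("u42", "exp-layout", "task_completed", outcome=1)
print(log[0]["arm"], assign("u42", "exp-layout") == assign("u42", "exp-layout"))
```

Hashing rather than storing assignments keeps the component stateless, one common way to make such infrastructure domain-agnostic, though a production system would also handle consent, exposure logging, and unequal splits.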