Human-Machine Collaborative Optimization via Apprenticeship Scheduling
Coordinating agents to complete a set of tasks with intercoupled temporal and
resource constraints is computationally challenging, yet human domain experts
can solve these difficult scheduling problems using paradigms learned through
years of apprenticeship. A process for manually codifying this domain knowledge
within a computational framework is necessary to scale beyond the
"single-expert, single-trainee" apprenticeship model. However, human domain
experts often have difficulty describing their decision-making processes,
causing the codification of this knowledge to become laborious. We propose a
new approach for capturing domain-expert heuristics through a pairwise ranking
formulation. Our approach is model-free and does not require enumerating or
iterating through a large state space. We empirically demonstrate that this
approach accurately learns multifaceted heuristics on a synthetic data set
incorporating job-shop scheduling and vehicle routing problems, as well as on
two real-world data sets consisting of demonstrations of experts solving a
weapon-to-target assignment problem and a hospital resource allocation problem.
We also demonstrate that policies learned from human scheduling demonstrations
via apprenticeship learning can substantially improve the efficiency of a
branch-and-bound search for an optimal schedule. We employ this human-machine
collaborative optimization technique on a variant of the weapon-to-target
assignment problem. We demonstrate that this technique generates solutions
substantially superior to those produced by human domain experts at a rate up
to 9.5 times faster than an optimization approach and can be applied to
optimally solve problems twice as complex as those solved by a human
demonstrator.
Comment: Portions of this paper were published in the Proceedings of the
International Joint Conference on Artificial Intelligence (IJCAI) in 2016 and
in the Proceedings of Robotics: Science and Systems (RSS) in 2016. The paper
consists of 50 pages with 11 figures and 4 tables.
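The pairwise-ranking idea above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the features, data, and the perceptron-style trainer are our assumptions. Each expert demonstration step yields pairs (features of the chosen task, features of a passed-over task), and we learn a linear scoring function that ranks the chosen task above the alternatives, without enumerating any state space.

```python
# Hedged sketch of learning a scheduling heuristic via pairwise ranking.
# Feature names and data are illustrative, not from the paper.
# A pair (chosen, other) records that the expert scheduled `chosen`
# ahead of `other`; we fit weights w so that w . (chosen - other) > 0.

def train_pairwise_ranker(pairs, epochs=50, lr=0.1):
    """Perceptron-style trainer on feature differences."""
    dim = len(pairs[0][0])
    w = [0.0] * dim
    for _ in range(epochs):
        for chosen, other in pairs:
            diff = [c - o for c, o in zip(chosen, other)]
            score = sum(wi * di for wi, di in zip(w, diff))
            if score <= 0:  # misranked pair: nudge w toward the difference
                w = [wi + lr * di for wi, di in zip(w, diff)]
    return w

def rank(w, candidates):
    """Order candidate tasks by learned score, best first."""
    return sorted(candidates, key=lambda f: -sum(wi * fi for wi, fi in zip(w, f)))

# Toy demonstrations with features (deadline, duration), lower is better:
# the expert consistently picks the more urgent, shorter task.
pairs = [([1.0, 2.0], [3.0, 3.0]),
         ([2.0, 1.0], [3.0, 4.0]),
         ([0.0, 2.0], [4.0, 4.0])]
w = train_pairwise_ranker(pairs)
best = rank(w, [[5.0, 2.0], [1.0, 1.0], [3.0, 4.0]])[0]  # -> [1.0, 1.0]
```

At schedule time, the learned scorer simply ranks the currently feasible tasks; the paper's branch-and-bound integration would use such a ranking to order the search.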
QDEE: Question Difficulty and Expertise Estimation in Community Question Answering Sites
In this paper, we present a framework for Question Difficulty and Expertise
Estimation (QDEE) in Community Question Answering sites (CQAs) such as Yahoo!
Answers and Stack Overflow, which tackles a fundamental challenge in
crowdsourcing: how to appropriately route and assign questions to users with
the suitable expertise. This problem domain has been the subject of much
research and includes both language-agnostic as well as language-conscious
solutions. We bring to bear a key language-agnostic insight: that users gain
expertise and therefore tend to ask as well as answer more difficult questions
over time. We use this insight within the popular competition (directed) graph
model to estimate question difficulty and user expertise by identifying key
hierarchical structure within said model. An important and novel contribution
here is the application of "social agony" to this problem domain. Difficulty
levels of newly posted questions (the cold-start problem) are estimated by
using our QDEE framework and additional textual features. We also propose a
model to route newly posted questions to appropriate users based on the
difficulty level of the question and the expertise of the user. Extensive
experiments on real world CQAs such as Yahoo! Answers and Stack Overflow data
demonstrate the improved efficacy of our approach over contemporary
state-of-the-art models. The QDEE framework also allows us to characterize user
expertise in novel ways by identifying interesting patterns and roles played by
different users in such CQAs.
Comment: Accepted in the Proceedings of the 12th International AAAI Conference
on Web and Social Media (ICWSM 2018). June 2018. Stanford, CA, US
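The competition-graph intuition can be illustrated with a small sketch. This is a simplified proxy, not the paper's social-agony algorithm: on an acyclic asker-to-answerer graph, a longest-path level assignment ranks every answerer above the corresponding asker, which is the hierarchy that agony minimisation recovers more robustly on graphs with cycles.

```python
# Hedged sketch of the competition (directed) graph idea, not the
# paper's exact social-agony computation. An edge (asker -> answerer)
# asserts the answerer outranks the asker; on a DAG, longest-path
# levels give every such edge an upward direction (zero "agony").

from collections import defaultdict

def expertise_levels(edges):
    """Rank users so each asker -> answerer edge goes up a level."""
    graph = defaultdict(list)
    indeg = defaultdict(int)
    nodes = set()
    for u, v in edges:
        graph[u].append(v)
        indeg[v] += 1
        nodes.update((u, v))
    level = {n: 0 for n in nodes}
    queue = [n for n in nodes if indeg[n] == 0]  # Kahn's topological order
    while queue:
        u = queue.pop()
        for v in graph[u]:
            level[v] = max(level[v], level[u] + 1)
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return level

# Toy CQA: a novice's question is answered by an intermediate user,
# whose own question is in turn answered by an expert.
edges = [("novice", "intermediate"),
         ("novice", "expert"),
         ("intermediate", "expert")]
levels = expertise_levels(edges)  # novice=0, intermediate=1, expert=2
```

Question difficulty and user expertise then both fall out of the same level assignment: a question inherits the level of its asker, and routing sends it to users one or two levels above.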
Evaluating Singleplayer and Multiplayer in Human Computation Games
Human computation games (HCGs) can provide novel solutions to intractable
computational problems, help enable scientific breakthroughs, and provide
datasets for artificial intelligence. However, our knowledge about how to
design and deploy HCGs that appeal to players and solve problems effectively is
incomplete. We present an investigatory HCG based on Super Mario Bros. We used
this game in a human subjects study to investigate how different social
conditions---singleplayer and multiplayer---and scoring
mechanics---collaborative and competitive---affect players' subjective
experiences, accuracy at the task, and the completion rate. In doing so, we
demonstrate a novel design approach for HCGs, and discuss the benefits and
tradeoffs of these mechanics in HCG design.
Comment: 10 pages, 4 figures, 2 tables
MOOCs Meet Measurement Theory: A Topic-Modelling Approach
This paper adapts topic models to the psychometric testing of MOOC students
based on their online forum postings. Measurement theory from education and
psychology provides statistical models for quantifying a person's attainment of
intangible attributes such as attitudes, abilities or intelligence. Such models
infer latent skill levels by relating them to individuals' observed responses
on a series of items such as quiz questions. The set of items can be used to
measure a latent skill if individuals' responses on them conform to a Guttman
scale. Such well-scaled items differentiate between individuals, and the
inferred levels span the entire range from the most basic to the most
advanced. In practice,
education researchers manually devise items (quiz questions) while optimising
well-scaled conformance. Due to the costly nature and expert requirements of
this process, psychometric testing has found limited use in everyday teaching.
We aim to develop usable measurement models for highly-instrumented MOOC
delivery platforms, by using participation in automatically-extracted online
forum topics as items. The challenge is to formalise the Guttman scale
educational constraint and incorporate it into topic models. To favour topics
that automatically conform to a Guttman scale, we introduce a novel
regularisation into non-negative matrix factorisation-based topic modelling. We
demonstrate the suitability of our approach with both quantitative experiments
on three Coursera MOOCs, and with a qualitative survey of topic
interpretability on two MOOCs by domain expert interviews.
Comment: 12 pages, 9 figures; accepted into AAAI'201
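The Guttman-scale constraint at the heart of the regularisation can be made concrete with a small check. The function and data below are our illustration, not the paper's regulariser: in a perfect Guttman scale, once items are ordered by difficulty, anyone who passes a harder item also passes every easier one, so each response row is a staircase with no holes.

```python
# Hedged illustration of the Guttman-scale constraint the paper builds
# into its NMF topic model (function name and data are ours, not the
# paper's). Rows are persons, columns are items; 1 = pass.

def is_guttman(responses):
    """True iff responses conform to a perfect Guttman scale:
    with items re-ordered easiest-first (by pass counts), every
    row must be a run of 1s followed only by 0s."""
    n_items = len(responses[0])
    ease = sorted(range(n_items), key=lambda j: -sum(r[j] for r in responses))
    for row in responses:
        seen_zero = False
        for j in ease:
            if row[j] == 1 and seen_zero:
                return False  # passed a harder item after failing an easier one
            if row[j] == 0:
                seen_zero = True
    return True

perfect = [[1, 0, 0], [1, 1, 0], [1, 1, 1]]  # staircase: well-scaled
broken  = [[0, 1, 0], [1, 0, 1], [1, 1, 1]]  # holes: not scalable
```

In the paper's setting the "items" are automatically extracted forum topics rather than quiz questions, and the regulariser pushes the factorisation toward topic-participation patterns for which a check like this would succeed.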
The Team Balancing Act - Enhancing Knowledge-Building Activity in On-Line Learning Communities
Online learning in the university sector is a given. Constructivist views of learning (often team based) and the notion of knowledge-building, mediated through the use of ICTs, seemingly address many of the imperatives to equip individuals for emergent knowledge-age work practice. While teamwork has many perceived advantages, teams also inexplicably fail despite the apparent quality of the participants. Teams are successful when members address what is a relatively narrow range of actions. However, even within this limited range of actions, individuals demonstrate definite preferences towards certain activities and roles. This paper reports on the findings from a study that investigated whether knowledge-building activity can be enhanced in tertiary education CSCL environments through the use of groups balanced by Team Role Preference (Margerison & McCann, 1995, 1998). The study found that higher-quality knowledge-building activity was more likely to occur in balanced groups than in random groups. The analysis of the data revealed that a diversity of ideas was more likely to emerge from within balanced groups than from within random groups, particularly when the random groups were heavily skewed towards one team role preference. This provides a compelling explanation of why balanced groups may lead to better knowledge-building activity.