14,775 research outputs found
A Framework for Exploring and Evaluating Mechanics in Human Computation Games
Human computation games (HCGs) are a crowdsourcing approach to solving
computationally intractable tasks using games. In this paper, we describe the
need for generalizable HCG design knowledge that accommodates the needs of both
players and tasks. We propose a formal representation of the mechanics in HCGs,
providing a structural breakdown to visualize, compare, and explore the space
of HCG mechanics. We present a methodology based on small-scale design
experiments using fixed tasks while varying game elements to observe effects on
both the player experience and the human computation task completion. Finally,
we discuss applications of our framework using comparisons of prior HCGs and
recent design experiments. Ultimately, we wish to enable easier exploration and
development of HCGs, helping these games provide meaningful player experiences
while solving difficult problems.
Comment: 11 pages, 5 figures
Steering Capital: Optimizing Financial Support for Innovation in Public Education
Examines efforts to align capital with education innovation and calls for clarity and agreement on problems, goals, and metrics; an effective R&D system; an evidence-based culture of continuous improvement; and transparent, comparable, and useful data.
Crowdsourcing Swarm Manipulation Experiments: A Massive Online User Study with Large Swarms of Simple Robots
Micro- and nanorobotics have the potential to revolutionize many applications
including targeted material delivery, assembly, and surgery. The same
properties that promise breakthrough solutions---small size and large
populations---present unique challenges to generating controlled motion. We
want to use large swarms of robots to perform manipulation tasks;
unfortunately, human-swarm interaction studies as conducted today are limited
in sample size, are difficult to reproduce, and are prone to hardware failures.
We present an alternative.
This paper examines the perils, pitfalls, and possibilities we discovered by
launching SwarmControl.net, an online game where players steer swarms of up to
500 robots to complete manipulation challenges. We record statistics from
thousands of players, and use the game to explore aspects of large-population
robot control. We present the game framework as a new, open-source tool for
large-scale user experiments. Our results have potential applications in human
control of micro- and nanorobots, supply insight for automatic controllers, and
provide a template for large online robotic research experiments.
Comment: 8 pages, 13 figures, to appear at the 2014 IEEE International
Conference on Robotics and Automation (ICRA 2014)
Empowering Active Learning to Jointly Optimize System and User Demands
Existing approaches to active learning maximize system performance by
sampling unlabeled instances for annotation that yield the most efficient
training. However, when active learning is integrated with an end-user
application, this can lead to frustration for participating users, as they
spend time labeling instances that they would not otherwise be interested in
reading. In this paper, we propose a new active learning approach that jointly
optimizes the seemingly conflicting objectives of the active learning system
(training efficiently) and the user (receiving useful instances). We study our
approach in an educational application, which particularly benefits from this
technique as the system needs to rapidly learn to predict the appropriateness
of an exercise to a particular user, while the users should receive only
exercises that match their skills. We evaluate multiple learning strategies and
user types with data from real users and find that our joint approach better
satisfies both objectives when alternative methods lead to many unsuitable
exercises for end users.
Comment: To appear as a long paper in Proceedings of the 58th Annual Meeting
of the Association for Computational Linguistics (ACL 2020). Download our
code and simulated user models at github:
https://github.com/UKPLab/acl2020-empowering-active-learnin
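As an illustration of how such a joint objective might be scored, here is a minimal sketch that blends a model-uncertainty term (the system objective) with a predicted-suitability term (the user objective) into a single acquisition score. The function and parameter names (joint_selection_score, trade_off) are hypothetical; the paper's actual learning strategies may differ.

```python
import numpy as np

def joint_selection_score(uncertainty, suitability, trade_off=0.5):
    """Blend the system objective (informative training data) with the
    user objective (exercises that match the user's skill).

    uncertainty -- e.g. 1 - max predicted class probability, in [0, 1]
    suitability -- predicted match to the user's skill level, in [0, 1]
    trade_off   -- illustrative weight between the two objectives
    """
    return trade_off * uncertainty + (1.0 - trade_off) * suitability

# Toy pool of three candidate exercises: (uncertainty, suitability) pairs.
pool = [(0.9, 0.2), (0.5, 0.8), (0.3, 0.9)]
scores = [joint_selection_score(u, s) for u, s in pool]
print(int(np.argmax(scores)))  # index of the exercise offered to the user next
```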
Optimal Weighting for Exam Composition
A problem faced by many instructors is that of designing exams that
accurately assess the abilities of the students. Typically these exams are
prepared several days in advance, and generic question scores are used based on
a rough approximation of question difficulty and length. For example, for a
recent class taught by the author, there were 30 multiple choice questions
worth 3 points, 15 true/false with explanation questions worth 4 points, and 5
analytical exercises worth 10 points. We describe a novel framework where
algorithms from machine learning are used to modify the exam question weights
in order to optimize the exam scores, using the overall class grade as a proxy
for a student's true ability. We show that our approach achieves significant
error reduction over standard weighting schemes, and we make several new
observations regarding the properties of the "good" and "bad" exam questions
that can have an impact on the design of improved future evaluation methods.
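A minimal sketch of one way such re-weighting could be set up, assuming a least-squares fit of question weights against the overall class grade as the ability proxy; the data shapes, the non-negativity clipping, and the normalization are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

# Hypothetical data: rows are students, columns are exam questions, and each
# entry is the fraction of credit earned on that question.
rng = np.random.default_rng(0)
question_scores = rng.random((100, 50))   # 100 students, 50 questions
class_grade = rng.random(100)             # overall class grade as ability proxy

# Fit weights so the weighted exam score tracks the class grade, then clip to
# non-negative values and renormalize so the weights sum to one.
weights, *_ = np.linalg.lstsq(question_scores, class_grade, rcond=None)
weights = np.clip(weights, 0.0, None)
weights /= weights.sum()

weighted_exam = question_scores @ weights
uniform_exam = question_scores.mean(axis=1)
print(np.mean((weighted_exam - class_grade) ** 2),   # error with fitted weights
      np.mean((uniform_exam - class_grade) ** 2))    # error with uniform weights
```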
Large Scale Learning of Agent Rationality in Two-Player Zero-Sum Games
With the recent advances in solving large, zero-sum extensive-form games,
there is a growing interest in the inverse problem of inferring underlying game
parameters given only access to agent actions. Although recent work provides a
powerful differentiable end-to-end learning framework that embeds a game solver
within a deep-learning pipeline, allowing unknown game parameters to be learned
via backpropagation, this framework faces significant limitations when applied
to boundedly rational human agents and large-scale problems, limiting its
practicality. In this paper, we address these limitations and propose a
framework that is applicable for more practical settings. First, seeking to
learn the rationality of human agents in complex two-player zero-sum games, we
draw upon well-known ideas in decision theory to obtain a concise and
interpretable agent behavior model, and derive solvers and gradients for
end-to-end learning. Second, to scale up to large, real-world scenarios, we
propose an efficient first-order primal-dual method which exploits the
structure of extensive-form games, yielding significantly faster game solving
and gradient computation. When tested on randomly
generated games, we report speedups of orders of magnitude over previous
approaches. We also demonstrate the effectiveness of our model on both
real-world one-player settings and synthetic data.
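The abstract does not spell out the behavior model, but bounded rationality in zero-sum games is often captured with a quantal-response (logit) model drawn from decision theory. The sketch below is a much-simplified normal-form illustration of that idea, solved by damped fixed-point iteration; the function names and the rationality parameter are assumptions for illustration, whereas the paper itself works with extensive-form games and a first-order primal-dual solver with end-to-end gradients.

```python
import numpy as np

def softmax(v):
    z = np.exp(v - v.max())
    return z / z.sum()

def logit_response_equilibrium(payoff, rationality=1.0, iters=2000, step=0.05):
    """Approximate logit (quantal-response) play in a zero-sum matrix game.

    payoff      -- m x n payoff matrix for the row player (column player gets -payoff)
    rationality -- higher values approach fully rational (minimax) play
    """
    m, n = payoff.shape
    x = np.full(m, 1.0 / m)          # row player's mixed strategy
    y = np.full(n, 1.0 / n)          # column player's mixed strategy
    for _ in range(iters):
        x_target = softmax(rationality * (payoff @ y))
        y_target = softmax(-rationality * (payoff.T @ x))
        x = (1 - step) * x + step * x_target
        y = (1 - step) * y + step * y_target
    return x, y

# Matching pennies: low rationality yields near-uniform, noisy play.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
print(logit_response_equilibrium(A, rationality=0.5))
```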
Beyond A/B Testing: Sequential Randomization for Developing Interventions in Scaled Digital Learning Environments
Randomized experiments ensure the robust causal inference that is critical to
effective learning analytics research and practice. However, traditional
randomized experiments, like A/B tests, have limitations in large-scale digital
learning environments. While traditional experiments can accurately compare two
treatment options, they are less able to inform how to adapt interventions to
continually meet learners' diverse needs. In this work, we introduce a trial
design for developing adaptive interventions in scaled digital learning
environments -- the sequential randomized trial (SRT). With the goal of
improving learner experience and developing interventions that benefit all
learners at all times, SRTs inform how to sequence, time, and personalize
interventions. In this paper, we provide an overview of SRTs, and we illustrate
the advantages they hold compared to traditional experiments. We describe a
novel SRT run in a large-scale data science MOOC. The trial results
contextualize how learner engagement can be addressed through inclusive,
culturally targeted reminder emails. We also provide practical advice for
researchers who aim to run their own SRTs to develop adaptive interventions in
scaled digital learning environments.
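To make the contrast with a one-shot A/B test concrete, here is a minimal sketch of a two-stage sequential randomization in which learners who do not respond to their first intervention are re-randomized to a second one. The arm names and the response criterion are hypothetical, not the arms used in the MOOC trial described above.

```python
import random

# Hypothetical arms for a two-stage sequential randomized trial (SRT).
STAGE1_ARMS = ["no_email", "plain_reminder", "culturally_targeted_reminder"]
STAGE2_ARMS = ["plain_reminder", "culturally_targeted_reminder"]

def assign_stage1(learners):
    """Randomize every learner once, as in a standard A/B/n test."""
    return {learner: random.choice(STAGE1_ARMS) for learner in learners}

def assign_stage2(stage1, responded):
    """Re-randomize only the learners who did not respond at stage 1."""
    return {learner: random.choice(STAGE2_ARMS)
            for learner in stage1
            if not responded.get(learner, False)}

learners = [f"user_{i}" for i in range(6)]
stage1 = assign_stage1(learners)
responded = {learner: random.random() < 0.3 for learner in learners}  # toy outcomes
stage2 = assign_stage2(stage1, responded)
print(stage1)
print(stage2)
```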
- …