Calendar.help: Designing a Workflow-Based Scheduling Agent with Humans in the Loop
Although information workers may complain about meetings, they are an
essential part of their work life. Consequently, busy people spend a
significant amount of time scheduling meetings. We present Calendar.help, a
system that provides fast, efficient scheduling through structured workflows.
Users interact with the system via email, delegating their scheduling needs to
the system as if it were a human personal assistant. Common scheduling
scenarios are broken down using well-defined workflows and completed as a
series of microtasks that are automated when possible and executed by a human
otherwise. Unusual scenarios fall back to a trained human assistant who
executes them as unstructured macrotasks. We describe the iterative approach we
used to develop Calendar.help, and share the lessons learned from scheduling
thousands of meetings during a year of real-world deployments. Our findings
provide insight into how complex information tasks can be broken down into
repeatable components that can be executed efficiently to improve productivity.
Comment: 10 pages
Multi-object Classification via Crowdsourcing with a Reject Option
Consider designing an effective crowdsourcing system for an M-ary
classification task. Crowd workers complete simple binary microtasks whose
results are aggregated to give the final result. We consider the novel scenario
where workers have a reject option so they may skip microtasks when they are
unable or choose not to respond. For example, in mismatched speech
transcription, workers who do not know the language may not be able to respond
to microtasks focused on phonological dimensions outside their categorical
perception. We present an aggregation approach using a weighted majority voting
rule, where each worker's response is assigned an optimized weight to maximize
the crowd's classification performance. We evaluate system performance in both
exact and asymptotic forms. Further, we consider the setting where there may be
a set of greedy workers who complete microtasks even when they are unable to
perform them reliably. We consider an oblivious and an expurgation strategy to
deal with greedy workers, developing an algorithm to adaptively switch between
the two based on the estimated fraction of greedy workers in the anonymous
crowd. Simulation results show improved performance compared with conventional
majority voting.
Comment: two column, 15 pages, 8 figures, submitted to IEEE Trans. Signal Processing
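The aggregation rule described above can be sketched minimally in Python. This is an illustrative sketch, not the paper's algorithm: the per-worker weights are assumed to be given (the paper optimizes them to maximize classification performance), and the response encoding (+1, -1, None for a skip) is hypothetical.

```python
# Weighted majority voting over binary microtasks with a reject option.
# Each worker answers +1, -1, or None (a skip). A skipped microtask
# contributes nothing to the vote; otherwise the response is scaled by
# that worker's weight and the sign of the total decides the outcome.

def weighted_majority_vote(responses, weights):
    """responses: list of +1 / -1 / None; weights: list of floats."""
    score = sum(w * r for r, w in zip(responses, weights) if r is not None)
    if score > 0:
        return 1
    if score < 0:
        return -1
    return 0  # tie: no decision

# Example: three workers, the second exercises the reject option.
print(weighted_majority_vote([1, None, -1], [0.9, 0.5, 0.4]))  # 1
```

Because a skip carries zero weight, a worker who rejects a microtask cannot hurt the decision, which is the intuition behind giving unsure workers a reject option.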
A Glimpse Far into the Future: Understanding Long-term Crowd Worker Quality
Microtask crowdsourcing is increasingly critical to the creation of extremely
large datasets. As a result, crowd workers spend weeks or months repeating the
exact same tasks, making it necessary to understand their behavior over these
long periods of time. We utilize three large, longitudinal datasets of nine
million annotations collected from Amazon Mechanical Turk to examine claims
that workers fatigue or satisfice over these long periods, producing lower
quality work. We find that, contrary to these claims, workers are extremely
stable in their quality over the entire period. To understand whether workers
set their quality based on the task's requirements for acceptance, we then
perform an experiment where we vary the required quality for a large
crowdsourcing task. Workers did not adjust their quality based on the
acceptance threshold: workers who were above the threshold continued working at
their usual quality level, and workers below the threshold self-selected
themselves out of the task. Capitalizing on this consistency, we demonstrate
that it is possible to predict workers' long-term quality using just a glimpse
of their quality on the first five tasks.
Comment: 10 pages, 11 figures, accepted CSCW 201
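Under the paper's finding that worker quality is stable over time, the "glimpse" predictor amounts to using early accuracy as the long-run estimate. A minimal sketch, where the window of five tasks follows the abstract but the data layout (answers matched against gold-standard labels) is an assumption:

```python
# Predict a worker's long-term quality from their first five tasks,
# assuming quality is measured as agreement with gold-standard labels.
# The five-task window comes from the abstract; everything else here
# (inputs, scoring) is hypothetical.

def predicted_quality(worker_answers, gold_labels):
    """Mean accuracy on the worker's first five tasks as the long-run estimate."""
    pairs = list(zip(worker_answers, gold_labels))[:5]
    return sum(answer == gold for answer, gold in pairs) / len(pairs)

# Worker matches the gold label on 4 of their first 5 tasks.
print(predicted_quality(["A", "B", "A", "C", "A"],
                        ["A", "B", "B", "C", "A"]))  # 0.8
```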
Optimal Crowdsourced Classification with a Reject Option in the Presence of Spammers
We explore the design of an effective crowdsourcing system for an M-ary
classification task. Crowd workers complete simple binary microtasks whose
results are aggregated to give the final decision. We consider the scenario
where the workers have a reject option, so they may skip binary
microtasks when they are unable to or choose not to respond.
microtasks. We present an aggregation approach using a weighted majority voting
rule, where each worker's response is assigned an optimized weight to maximize
the crowd's classification performance.
Comment: submitted to ICASSP 201
Large-Scale Microtask Programming
To make microtask programming more efficient and reduce the potential for
conflicts between contributors, I developed a new behavior-driven approach to
microtask programming. In our approach, each microtask asks developers to
identify a behavior from a high-level description of a function,
implement a unit test for it, implement the behavior, and debug it. It enables
developers to work on functions in isolation through high-level function
descriptions and stubs.
In addition, I developed the first approach for building microservices
through microtasks. Building microservices through microtasks is a good match
because our approach requires a client to first specify the functionality the
crowd will create through an API. This API can then take the form of a
microservice description. A traditional project may ask a crowd to implement a
new microservice by simply describing the desired behavior in an API and
recruiting a crowd. We implemented our approach in a web-based IDE,
\textit{Crowd Microservices}. It includes an editor for clients to describe the
system requirements through endpoint descriptions as well as a web-based
programming environment where crowd workers can identify, test, implement, and
debug behaviors. The system automatically creates, manages, and assigns microtasks.
After the crowd finishes, the system automatically deploys the microservice to
a hosting site.
Comment: 2 pages, 1 figure, GC VL/HCC 2020, Graduate Consortium
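The four-step microtask lifecycle described above (identify a behavior, write a unit test, implement, debug) can be sketched as a small state machine. Names and structure are illustrative only, not the Crowd Microservices API:

```python
# Sketch of the behavior-driven microtask lifecycle: each microtask walks
# one behavior of a function through four fixed steps. All identifiers
# here are hypothetical.

from enum import Enum

class Step(Enum):
    IDENTIFY_BEHAVIOR = 1
    WRITE_UNIT_TEST = 2
    IMPLEMENT = 3
    DEBUG = 4

class Microtask:
    def __init__(self, function_name, behavior):
        self.function_name = function_name
        self.behavior = behavior
        self.step = Step.IDENTIFY_BEHAVIOR

    def advance(self):
        """Move to the next step; returns False once the task is complete."""
        if self.step is Step.DEBUG:
            return False
        self.step = Step(self.step.value + 1)
        return True

# One behavior of a hypothetical endpoint, driven through all four steps.
task = Microtask("createOrder", "rejects an empty item list")
while task.advance():
    pass
print(task.step.name)  # DEBUG
```

Keeping each microtask scoped to a single behavior is what lets crowd workers contribute in isolation, through the function descriptions and stubs the abstract mentions.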
