Tasker: Safely Serving Verifiable Micro-tasks for Researchers

Abstract

Paid crowdsourcing removes many traditional barriers to conducting participant-based research, but this new tool brings new instrumentation challenges for researchers. Three common challenges are: creating large numbers of high-quality, novel tasks; verifying task results without relying on manual cheat-mitigation techniques; and ensuring that tasks follow current visual and instructional design practices so that they yield high-quality results. These circumstances endanger current and future research on Amazon Mechanical Turk and can result in compromised data. We introduce Tasker, a secure system architecture that serves unique, usability-principled tasks to workers and provides researchers with verification information about task completion and accuracy. This poster discusses insights from our pilot study and explorations of methods that demonstrate a marked improvement in speed, security, and robustness when developing tasks for research that leverages Amazon Mechanical Turk.