Crowdsourcing the Perception of Machine Teaching
Teachable interfaces can empower end-users to attune machine learning systems
to their idiosyncratic characteristics and environment by explicitly providing
pertinent training examples. While such interfaces facilitate control, their effectiveness
can be hindered by users' lack of expertise or by their misconceptions. We investigate how
users may conceptualize, experience, and reflect on their engagement in machine
teaching by deploying a mobile teachable testbed in Amazon Mechanical Turk.
Using a performance-based payment scheme, Mechanical Turk workers (N = 100) are
asked to train, test, and re-train a robust recognition model in real time
with a few snapshots taken in their environment. We find that participants
incorporate diversity in their examples, drawing parallels to how humans
recognize objects independent of size, viewpoint, location, and illumination.
Many of their misconceptions relate to consistency and to the model's capacity for
reasoning. With limited variation and few edge cases in testing, the majority of
participants do not change strategies on a second training attempt.
Comment: 10 pages, 8 figures, 5 tables, CHI 2020 conference
The Care Work of Access
Current approaches to AI and Assistive Technology (AT) often foreground task completion over other encounters such as expressions of care. Our paper challenges and complements such task-completion approaches by attending to the care work of access: the continual affective and emotional adjustments that people make by noticing and attending to one another. We explore how this work impacts encounters among people with and without vision impairments who complete tasks together. We find that bound up in attempts to get things done are concerns for one another and for how well people are doing together. Reading this work through emerging disability studies and feminist STS scholarship, we account for two important forms of work that give rise to access: (1) mundane attunements and (2) noninnocent authorizations. Together these processes work as sensitizing concepts to help HCI scholars account for the ways that intelligent ATs produce access while sometimes subverting it for people with disabilities.
Image captioning and visual question answering with external knowledge
The fields of computer vision and natural language processing have made significant advances in visual question answering (VQA) and image captioning. However, a limitation of models in use today is that they typically perform poorly when the task requires common sense or external knowledge. Motivated by this observation, this work offers an exploration of the benefits of multi-source external knowledge for these two tasks. Three kinds of external knowledge are evaluated: knowledge base, reverse image search, and image search by text. This work demonstrates the advantage of these external knowledge sources via experiments on two image captioning datasets (COCO-Captions and VizWiz-Captions) and three visual question answering datasets (VQAv2, VizWiz-VQA, and OK-VQA).