Considering Human Aspects on Strategies for Designing and Managing Distributed Human Computation
A human computation system can be viewed as a distributed system in which the
processors are humans, called workers. Such systems harness the cognitive power
of a group of workers connected to the Internet to execute relatively simple
tasks, whose solutions, once grouped, solve a problem that systems equipped
with only machines could not solve satisfactorily. Examples of such systems are
Amazon Mechanical Turk and the Zooniverse platform. A human computation
application comprises a set of tasks, each of which can be performed by one
worker. Tasks may have dependencies on one another. In this study, we
propose a theoretical framework to analyze this type of application from a
distributed systems point of view. Our framework is established on three
dimensions that represent different perspectives in which human computation
applications can be approached: quality-of-service requirements, design and
management strategies, and human aspects. By using this framework, we review
human computation from the perspective of programmers seeking to improve the
design of human computation applications and managers seeking to increase the
effectiveness of human computation infrastructures in running such
applications. In doing so, besides integrating and organizing what has been
done in this direction, we also put into perspective the fact that the human
aspects of the workers in such systems introduce new challenges in terms of,
for example, task assignment, dependency management, and fault prevention and
tolerance. We discuss how these challenges relate to distributed systems and other
areas of knowledge.

Comment: 3 figures, 1 table
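The distributed-systems view described above, in which tasks with dependencies are assigned to redundant human workers and their answers aggregated, can be sketched as follows. The task names, the dependency structure, the three-worker redundancy level, and majority voting as the aggregation rule are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of a human computation application as a distributed system:
# tasks form a dependency graph, each task is answered by several workers,
# and fault tolerance is achieved by majority-vote aggregation.
from collections import Counter
from graphlib import TopologicalSorter

def aggregate(answers):
    """Fault tolerance through redundancy: keep the majority answer."""
    return Counter(answers).most_common(1)[0][0]

def run_application(deps, ask_workers):
    """Execute tasks in dependency order, aggregating redundant answers."""
    results = {}
    for task in TopologicalSorter(deps).static_order():
        inputs = {d: results[d] for d in deps.get(task, ())}
        results[task] = aggregate(ask_workers(task, inputs))
    return results

# Example: "verify" depends on "transcribe"; three redundant workers
# answer each task, and one worker on "transcribe" is faulty.
deps = {"transcribe": set(), "verify": {"transcribe"}}
answers = {
    "transcribe": ["cat", "cat", "bat"],
    "verify": ["yes", "yes", "no"],
}
print(run_application(deps, lambda task, inputs: answers[task]))
# {'transcribe': 'cat', 'verify': 'yes'}
```

Dependency management here reduces to topological ordering, and fault prevention to redundancy plus voting; the paper's point is that human aspects (fatigue, skill, motivation) complicate each of these steps beyond what this machine-style sketch captures.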
HCI Support Card: Creating and Using a Support Card for Education in Human-Computer Interaction
Support cards summarise a set of core information about a subject. The
periodic table of chemical elements and the mathematical tables are well-known
examples of support cards for didactic purposes. Technology professionals also
use support cards for recalling information such as syntactic details of
programming languages or harmonic colour palettes for designing user
interfaces. While support cards have proved useful in many contexts, little is
known about their didactic use in the Human-Computer Interaction (HCI) field. To
fill this gap, this study proposes and evaluates a process for creating and
using an HCI support card. The process considers the interdisciplinary nature
of the field, covering the syllabus, curriculum, textbooks, and students'
perceptions of HCI topics. The evaluation is based on case studies of
creating and using a card during a semester in two undergraduate courses:
Software Engineering and Information Systems. Results show that a support card
can help students follow the lessons and remember and integrate the
different topics studied in the classroom. The card guides students in
building their cognitive maps, mind maps, and concept maps to study
human-computer interaction. It fosters students' curiosity and sustained
engagement with HCI topics. The card's usefulness extends beyond the HCI
classroom: students also use it in their professional activities and in
other academic disciplines, fostering an interdisciplinary application of HCI
topics.

Comment: Workshop on HCI Education (WEIHC '19)
Ontology for Task and Quality Management in Crowdsourcing
This paper proposes an ontology for representing tasks and quality control mechanisms in crowdsourcing systems. The ontology supports reasoning about tasks and quality control mechanisms to improve task and quality management in crowdsourcing. It is formalized in OWL (Web Ontology Language) and implemented using Protégé. The developed ontology consists of 19 classes, 7 object properties, and 32 data properties. The development methodology involves three phases: Specification (identifying scope, purpose, and competency questions), Conceptualization (data dictionary, UML, and instance creation), and, finally, Implementation and Evaluation.
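The kind of reasoning such an ontology enables, answering a competency question over instance data, can be illustrated with plain subject-predicate-object triples. The class names (`Task`, `QualityMechanism`), the `hasQualityMechanism` property, and the instance data are illustrative assumptions; the paper's actual 19 classes and 39 properties are defined in OWL via Protégé.

```python
# Sketch: answering a competency question over ontology instance data,
# represented as a set of (subject, predicate, object) triples.
triples = {
    ("task1", "rdf:type", "Task"),
    ("task1", "hasQualityMechanism", "MajorityVote"),
    ("task2", "rdf:type", "Task"),
    ("task2", "hasQualityMechanism", "GoldStandard"),
    ("MajorityVote", "rdf:type", "QualityMechanism"),
    ("GoldStandard", "rdf:type", "QualityMechanism"),
}

def query(subject=None, predicate=None, obj=None):
    """Match triples against an (s, p, o) pattern; None is a wildcard."""
    return [(s, p, o) for (s, p, o) in triples
            if subject in (None, s) and predicate in (None, p) and obj in (None, o)]

# Competency question: which quality control mechanism governs task1?
print([o for (_, _, o) in query("task1", "hasQualityMechanism")])
# ['MajorityVote']
```

In practice such queries would be posed in SPARQL against the OWL ontology; the triple-pattern matching above is the same idea stripped to Python's standard library.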
Characterising Volunteers' Task Execution Patterns Across Projects on Multi-Project Citizen Science Platforms
Citizen science projects engage people in activities that are part of a
scientific research effort. On multi-project citizen science platforms,
scientists can create projects consisting of tasks. Volunteers, in turn,
participate in executing the projects' tasks. Such platforms seek to
connect volunteers and scientists' projects, adding value to both. However,
little is known about volunteers' cross-project engagement patterns and the
benefits of such patterns for scientists and volunteers. This work proposes a
Goal, Question, and Metric (GQM) approach to analyse volunteers' cross-project
task execution patterns and employs the Semiotic Inspection Method (SIM) to
analyse the communicability of the platform's cross-project features. In doing
so, it investigates which platform features foster volunteers'
cross-project engagement, to what extent multi-project platforms facilitate the
attraction of volunteers to perform tasks in new projects, and to what extent
multi-project participation increases engagement on the platforms. Results from
analyses on real platforms show that volunteers tend to explore multiple
projects, but they perform tasks regularly in just a few of them; few projects
attract much attention from volunteers; volunteers recruited from other
projects on the platform tend to get more engaged than those recruited outside
the platform. System inspection shows that platforms still lack personalised
and explainable recommendations of projects and tasks. The findings are
translated into useful claims about how to design and manage multi-project
platforms.

Comment: XVIII Brazilian Symposium on Human Factors in Computing Systems
(IHC'19), October 21-25, 2019, Vitória, ES, Brazil
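A GQM-style metric like the one behind the finding that volunteers explore many projects but work regularly in few can be sketched from a task log. The log rows, volunteer and project names, and the "regular participation" threshold are illustrative assumptions, not data from the study.

```python
# Sketch of a GQM-style cross-project engagement metric: from a task log
# of (volunteer, project) pairs, contrast the projects a volunteer merely
# explored with those where participation was regular.
from collections import Counter

log = [  # one row per executed task
    ("ana", "galaxies"), ("ana", "galaxies"), ("ana", "galaxies"),
    ("ana", "penguins"),
    ("bob", "galaxies"), ("bob", "penguins"), ("bob", "penguins"),
]

def engagement(log, volunteer, threshold=2):
    """Return (projects explored, projects with regular participation)."""
    counts = Counter(p for v, p in log if v == volunteer)
    explored = set(counts)
    regular = {p for p, n in counts.items() if n >= threshold}
    return explored, regular

explored, regular = engagement(log, "ana")
print(sorted(explored), sorted(regular))
# ['galaxies', 'penguins'] ['galaxies']
```

Ana has explored both projects but is a regular contributor only to one, mirroring the paper's observed pattern; aggregating such per-volunteer metrics over a platform's full log is what the GQM questions in the study require.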