Considering Human Aspects on Strategies for Designing and Managing Distributed Human Computation
A human computation system can be viewed as a distributed system in which the
processors are humans, called workers. Such systems harness the cognitive power
of a group of workers connected to the Internet to execute relatively simple
tasks, whose solutions, once grouped, solve a problem that systems equipped
with only machines could not solve satisfactorily. Examples of such systems are
Amazon Mechanical Turk and the Zooniverse platform. A human computation
application comprises a group of tasks, each of which can be performed by one
worker. Tasks may have dependencies on one another. In this study, we
propose a theoretical framework to analyze this type of application from a
distributed systems point of view. Our framework is established on three
dimensions that represent different perspectives in which human computation
applications can be approached: quality-of-service requirements, design and
management strategies, and human aspects. Using this framework, we review
human computation from the perspective of programmers seeking to improve the
design of human computation applications and managers seeking to increase the
effectiveness of human computation infrastructures in running such
applications. In doing so, besides integrating and organizing what has been
done in this direction, we also put into perspective the fact that the human
aspects of the workers in such systems introduce new challenges in terms of,
for example, task assignment, dependency management, and fault prevention and
tolerance. We discuss how they are related to distributed systems and other
areas of knowledge.
Comment: 3 figures, 1 table
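The application model the abstract describes — tasks performed by individual workers, with dependencies among tasks and answers grouped into a solution — can be illustrated with a minimal sketch. All names here are illustrative assumptions; the paper proposes an analytical framework, not this implementation:

```python
from collections import Counter

def ready_tasks(deps, done):
    """Tasks not yet completed whose dependencies are all completed.

    `deps` maps each task id to the list of task ids it depends on;
    `done` is the set of completed task ids (dependency management).
    """
    return [t for t, reqs in deps.items()
            if t not in done and all(r in done for r in reqs)]

def majority(answers):
    """Group redundant worker answers into one result by majority vote
    (a simple fault-tolerance strategy against unreliable workers)."""
    return Counter(answers).most_common(1)[0][0]

# Hypothetical application: t3 consumes the grouped outputs of t1 and t2.
deps = {"t1": [], "t2": [], "t3": ["t1", "t2"]}
done = set()
print(ready_tasks(deps, done))          # only t1 and t2 can be assigned now
done.update(["t1", "t2"])
print(ready_tasks(deps, done))          # t3 becomes assignable
print(majority(["cat", "cat", "dog"]))  # two of three workers agree: "cat"
```

The sketch separates the two concerns the abstract names: task assignment constrained by dependencies, and aggregation of redundant answers as a guard against worker error.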
Volunteer Computing on Distributed Untrusted Nodes
The growth in size and complexity of new software systems has highlighted the need for more efficient and faster build tools. Current research relies on automating and parallelizing build tasks by dividing software systems into dependent software packages. Some modern build systems, such as Open Build Service (OBS), centralize source commits and dependency resolution for Linux distributions. They then distribute the heavy build tasks among several build hosts and finally deliver the results to the community.
The problem with these build services is that, since they are usually supported by non-commercial communities, the resources to maintain the build hosts are scarce. This makes it tempting to distribute the build jobs among additional hosts owned by volunteers. However, carrying out this idea brings new challenges and problems to solve, concerning the new pool of untrusted, unreliable workers.
This thesis studies how the concept of volunteer computing can be applied to software package building, specifically to OBS. In the first part, existing volunteer computing platforms are examined, covering the current research and the pros and cons of using them for this purpose.
The research in this thesis led to a different solution, called the Volunteer Worker System (VWS). The main concept is a centralized system that provides OBS with reliable, trusted workers that compile the results sent by the volunteers. Each worker acts as a proxy between the untrusted volunteers and the OBS server itself, validating the results obtained through multiple cross-checks. Volunteers from the volunteer pool are grouped to serve each surrogate depending on OBS needs.
A simple proof of concept of the designed system was set up in a distributed network environment. A host acting as the Volunteer System groups and dispatches jobs coming from a host simulating the OBS server to several volunteer workers on separate hosts. These volunteers send their results back to the Volunteer System, which validates them and forwards them to the OBS server.
Ensuring the security of the designed solution is a prerequisite for deploying the system in a real environment. The OBS instance receiving the volunteers' work needs to be sure that the Volunteer System supplying it is fully trusted. A complete front-end system to attract and retain volunteers also needs to be implemented.
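The validation step the abstract describes — a trusted worker accepting a volunteer-built artifact only after cross-checking redundant results — can be sketched as follows. This is an illustrative assumption about the mechanism, not the thesis's actual VWS code; all function names and the quorum parameter are hypothetical:

```python
import hashlib
from collections import Counter

def digest(artifact: bytes) -> str:
    """Fingerprint a build artifact so outputs can be compared cheaply."""
    return hashlib.sha256(artifact).hexdigest()

def cross_check(results, quorum):
    """Accept a build result only if at least `quorum` untrusted volunteers
    produced byte-identical output; otherwise return None so the trusted
    worker can reschedule the job."""
    counts = Counter(digest(r) for r in results)
    best, n = counts.most_common(1)[0]
    if n >= quorum:
        return next(r for r in results if digest(r) == best)
    return None

# Two volunteers agree, one returned a corrupted artifact: job accepted.
ok = cross_check([b"pkg-1.0", b"pkg-1.0", b"corrupted"], quorum=2)
# All three disagree: no quorum, job must be redone.
bad = cross_check([b"a", b"b", b"c"], quorum=2)
print(ok)   # b'pkg-1.0'
print(bad)  # None
```

A real deployment would also need the builds to be reproducible (identical inputs yielding bit-identical packages), since hash comparison is only meaningful under that assumption.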
Pando: Personal Volunteer Computing in Browsers
The large penetration and continued growth in ownership of personal
electronic devices represent a freely available and largely untapped source of
computing power. To leverage this resource, we present Pando, a new volunteer computing
tool based on a declarative concurrent programming model and implemented using
JavaScript, WebRTC, and WebSockets. This tool enables a dynamically varying
number of failure-prone personal devices contributed by volunteers to
parallelize the application of a function on a stream of values, by using the
devices' browsers. We show that Pando can provide throughput improvements
compared to a single personal device, on a variety of compute-bound
applications including animation rendering and image processing. We also show
the flexibility of our approach by deploying Pando on personal devices
connected over a local network, on Grid'5000, a France-wide computing grid in a
virtual private network, and seven PlanetLab nodes distributed in a wide area
network over Europe.
Comment: 14 pages, 12 figures, 2 tables
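The programming model the abstract describes — applying a function to a stream of values on a dynamically varying set of failure-prone devices — can be sketched as a fault-tolerant map in which values assigned to a failed device are transparently re-queued. Device failure is simulated here, and the names are illustrative, not Pando's actual JavaScript API:

```python
import random

def fault_tolerant_map(func, values, failure_rate=0.3, seed=0):
    """Apply `func` to every value despite simulated device failures.

    A value handed to a "device" that disconnects is re-queued and
    eventually processed by another device, so the final output covers
    the whole input stream in its original order.
    """
    rng = random.Random(seed)        # seeded so the simulation is repeatable
    pending = list(values)
    results = {}
    while pending:
        v = pending.pop(0)
        if rng.random() < failure_rate:  # device disconnects mid-task
            pending.append(v)            # re-queue for another device
        else:
            results[v] = func(v)
    return [results[v] for v in values]  # restore stream order

print(fault_tolerant_map(lambda x: x * x, [1, 2, 3, 4]))  # [1, 4, 9, 16]
```

The key property, as in the declarative concurrent model the paper builds on, is that failures change only how long the computation takes, never which results it produces.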
How algorithmic moderators and message type influence perceptions of online content deletion
Hateful content online is a concern for social media platforms, policymakers, and the public. This has led high-profile content platforms, such as Facebook, to adopt algorithmic content-moderation systems; however, the impact of algorithmic moderation on user perceptions is unclear. We experimentally test the extent to which the type of content being removed (profanity vs hate speech) and the explanation given for its removal (no explanation vs a link to community guidelines vs a specific explanation) influence user perceptions of human and algorithmic moderators. Our preregistered study encompasses representative samples (N = 2870) from the United States, the Netherlands, and Portugal. Contrary to expectations, our findings suggest that algorithmic moderation is perceived as more transparent than human moderation, especially when no explanation is given for content removal. In addition, sending users to the community guidelines for further information on content deletion has negative effects on perceived outcome fairness and trust.