Achieving reliability and fairness in online task computing environments
International Mention in the doctoral degree.
We consider online task computing environments such as volunteer computing platforms running
on BOINC (e.g., SETI@home) and crowdsourcing platforms such as Amazon Mechanical
Turk. We model the computations as an Internet-based task computing system under the master-worker
paradigm. A master entity sends tasks across the Internet to worker entities willing to
perform a computational task. Workers execute the tasks, and report back the results, completing
the computational round. Unfortunately, workers are untrustworthy and might report an incorrect
result. Thus, the first research question we answer in this work is how to design a reliable master-worker
task computing system. We capture the workers’ behavior through two realistic models:
(1) the “error probability model” which assumes the presence of altruistic workers willing to
provide correct results and the presence of troll workers aiming to provide random incorrect
results. Both types of workers suffer from an error probability altering their intended response.
(2) The “rationality model” which assumes the presence of altruistic workers, always reporting
a correct result, the presence of malicious workers always reporting an incorrect result, and the
presence of rational workers following a strategy that will maximize their utility (benefit). The
rational workers can choose among two strategies: either be honest and report a correct result,
or cheat and report an incorrect result. Our two modeling assumptions on the workers’ behavior
are supported by an experimental evaluation we have performed on Amazon Mechanical Turk.
Given the error probability model, we evaluate two reliability techniques: (1) “voting” and (2)
“auditing”, in terms of the number of task assignments required and the time invested to correctly compute a set
of tasks with high probability. Considering the rationality model, we take an evolutionary game
theoretic approach and we design mechanisms that eventually achieve a reliable computational
platform where the master receives the correct task result with probability one and with minimal
auditing cost. The designed mechanisms provide incentives to the rational workers, reinforcing
their strategy toward correct behavior, and are complemented by four reputation schemes that
cope with malice. Finally, we also design a mechanism that deals with unresponsive workers by
keeping a reputation related to the workers’ response rate. The designed mechanism selects the
most reliable and active workers in each computational round. Simulations, among other results, depict
the trade-off between the master’s cost and the time the system needs to reach a state where
the master always receives the correct task result. The second research question we answer in
this work concerns the fair and efficient distribution of workers among the masters over multiple computational rounds. Masters with similar tasks are competing for the same set of workers at
each computational round. Workers must be assigned to the masters in a fair manner, that is, to the
master that values a worker’s contribution the most. We consider that a master might have a strategic
behavior, declaring a dishonest valuation on a worker in each round, in an attempt to increase its
benefit. This strategic behavior from the side of the masters might lead to unfair and inefficient assignments
of workers. Applying renown auction mechanisms to solve the problem at hand can be
infeasible since monetary payments are required on the side of the masters. Hence, we present an
alternative mechanism for fair and efficient distribution of the workers in the presence of strategic
masters, without the use of monetary incentives. We show analytically that our designed mechanism
guarantees fairness, is socially efficient, and is truthful. Simulations favourably compare
our designed mechanism with two benchmark auction mechanisms.
This work has been supported by IMDEA Networks Institute and the Spanish Ministry of Education grant FPU2013-03792.
Programa Oficial de Doctorado en Ingeniería Matemática.
Thesis committee: Chair: Alberto Tarable; Secretary: José Antonio Cuesta Ruiz; Member: Juan Julián Merelo Guervó.
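The “voting” technique evaluated in the thesis can be illustrated with a short probability computation. The sketch below is ours, not the thesis’s exact model: we assume a single worker type per draw, with altruistic workers intending a correct answer, trolls intending an incorrect one, and both flipped by an error probability.

```python
from math import comb

def p_worker_correct(frac_altruistic, err):
    """Probability that a randomly drawn worker reports the correct result
    under the (illustrative) error probability model: altruistic workers
    intend a correct answer, trolls an incorrect one, and an error with
    probability `err` flips the intended response either way."""
    return frac_altruistic * (1 - err) + (1 - frac_altruistic) * err

def p_majority_correct(n, p):
    """Probability that the majority vote of n independent workers,
    each correct with probability p, yields the correct task result."""
    assert n % 2 == 1, "use an odd group size to avoid ties"
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))
```

For instance, with 80% altruistic workers and a 10% error probability, each vote is correct with probability 0.74, and replicating the task to larger odd groups pushes the majority’s correctness higher, at the cost of more task assignments, which is exactly the trade-off the thesis evaluates against auditing.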
A Mechanism for Fair Distribution of Resources without Payments
We design a mechanism for Fair and Efficient Distribution of Resources
(FEDoR) in the presence of strategic agents. We consider a multi-instance
Bayesian setting, where in each round the preference of an agent over the set
of resources is private information. We assume that in each of r rounds n
agents are competing for k non-identical indivisible goods (n > k). In each
round the strategic agents declare how much they value receiving any of the
goods in the specific round. The agent declaring the highest valuation receives
the good with the highest value, the agent with the second highest valuation
receives the second highest valued good, etc. Hence we assume a decision
function that assigns goods to agents based on their valuations. The novelty of
the mechanism is that no payment scheme is required to achieve truthfulness in
a setting with rational/strategic agents. The FEDoR mechanism takes advantage
of the repeated nature of the framework, and through a statistical test is able
to punish the misreporting agents and be fair, truthful, and socially
efficient. FEDoR is fair in the sense that, in expectation over the course of
the rounds, all agents will receive the same good the same number of times.
FEDoR is an eligible candidate for applications that require fair distribution
of resources over time. For example, providing an equal share of bandwidth to nodes behind
the same point of access. Moreover, FEDoR can be applied in less trivial
settings like sponsored search, where payment is necessary and can be given in
the form of a flat participation fee. To this end we perform a comparison
with traditional mechanisms applied to sponsored search, presenting the
advantage of FEDoR.
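The rank-based decision function described in the abstract can be sketched in a few lines; the function name and data shapes below are ours, chosen only for illustration:

```python
def assign_goods(declared_valuations, good_values):
    """Sketch of the decision function: the agent declaring the highest
    valuation receives the highest-valued good, the second highest gets
    the second good, and so on. With n agents and k < n goods, the n - k
    lowest-declaring agents receive nothing in that round."""
    ranked_agents = sorted(declared_valuations,
                           key=declared_valuations.get, reverse=True)
    ranked_goods = sorted(good_values, reverse=True)
    # zip stops after the k goods, leaving the remaining agents unmatched
    return dict(zip(ranked_agents, ranked_goods))
```

For example, `assign_goods({'a': 0.9, 'b': 0.2, 'c': 0.5}, [10, 3])` returns `{'a': 10, 'c': 3}`. On top of a decision function of this kind, FEDoR relies on the statistical test mentioned above, applied over the repeated rounds, to detect and punish misreporting agents without any payment scheme.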
An experimental characterization of workers’ behavior and accuracy in crowdsourced tasks
Crowdsourcing systems are evolving into a powerful tool of choice to deal with repetitive or lengthy human-based tasks. Prominent among those is Amazon Mechanical Turk, in which Human Intelligence Tasks (HITs) are posted by requesters, and afterwards selected and executed by subscribed (human) workers in the platform. Many times these HITs serve research purposes. In this context, a very important question is how reliable the results obtained through these platforms are, in view of the limited control a requester has on the workers’ actions. Various control techniques are currently proposed, but they are not free from shortcomings, and their use must be accompanied by a deeper understanding of the workers’ behavior. In this work, we attempt to interpret the workers’ behavior and reliability level in the absence of control techniques. To do so, we perform a series of experiments with 600 distinct MTurk workers, specifically designed to elicit the workers’ level of dedication to a task, according to the task’s nature and difficulty. We show that the time required by a worker to carry out a task correlates with its difficulty, and also with the quality of the outcome. We find that there are different types of workers. While some are willing to invest a significant amount of time to arrive at the correct answer, we also observe a significant fraction of workers that reply with a wrong answer.
For the latter, the difficulty of the task and the very short time they took to reply suggest that they intentionally did not even attempt to solve the task. AS was supported in part by grants PGC2018-098186-B-I00 (BASIC, FEDER/MICINN-AEI, https://www.ciencia.gob.es/portal/site/MICINN/aei), PRACTICO-CM (Comunidad de Madrid, https://www.comunidad.madrid/servicios/educacion/convocatorias-ayudas-investigacion), and CAVTIONS-CM-UC3M (Comunidad de Madrid/Universidad Carlos III de Madrid, https://www.comunidad.madrid/servicios/educacion/convocatorias-ayudas-investigacion). AFA was supported by the Regional Government of Madrid (CM) grant EdgeData-CM (P2018/TCS4499) co-funded by FSE & FEDER (https://www.comunidad.madrid/servicios/educacion/convocatorias-ayudas-investigacion), NSF of China grant 61520106005 (http://www.nsfc.gov.cn/english/site_1/index.html) and the Ministry of Science and Innovation (https://www.ciencia.gob.es/portal/site/MICINN/aei) grant PID2019-109805RB-I00 (ECID) co-funded by FEDER.
Ranking a set of objects: a graph based least-square approach
We consider the problem of ranking objects starting from a set of noisy
pairwise comparisons provided by a crowd of equal workers. We assume that
objects are endowed with intrinsic qualities and that the probability with
which an object is preferred to another depends only on the difference between
the qualities of the two competitors. We propose a class of non-adaptive
ranking algorithms that rely on a least-squares optimization criterion for the
estimation of qualities. Such algorithms are shown to be asymptotically optimal,
i.e., the number of comparisons they require to achieve PAC guarantees is
order-optimal. Numerical results show that our schemes are
very efficient also in many non-asymptotic scenarios, exhibiting performance
similar to that of the maximum-likelihood algorithm. Moreover, we show how they can be
extended to adaptive schemes and test them on real-world datasets.
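The least-squares idea can be sketched compactly. The reduction below, treating each comparison as a noisy scalar measurement of a quality difference, is our simplification, not necessarily the paper's exact pipeline (which inverts the preference-probability link on empirical win rates over the comparison graph):

```python
import numpy as np

def ls_qualities(n, comparisons):
    """Least-squares quality estimation sketch: each triple (i, j, y) is a
    noisy measurement y of the quality difference q_i - q_j. We solve
    min sum (q_i - q_j - y)^2 subject to the gauge sum(q) = 0, since
    qualities are only identifiable up to an additive constant."""
    rows, b = [], []
    for i, j, y in comparisons:
        r = np.zeros(n)
        r[i], r[j] = 1.0, -1.0
        rows.append(r)
        b.append(y)
    rows.append(np.ones(n))  # gauge constraint: qualities sum to zero
    b.append(0.0)
    q, *_ = np.linalg.lstsq(np.array(rows), np.array(b), rcond=None)
    return q
```

The estimator is non-adaptive in the sense of the abstract: the set of comparisons is fixed up front, and a single linear solve produces the quality estimates from which the ranking is read off.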
Applying the dynamics of evolution to achieve reliability in master-worker computing
We consider Internet-based master-worker task computations, such as SETI@home, where a master process sends tasks, across the Internet, to worker processes; workers execute the tasks and report back some result. However, these workers are not trustworthy, and it might be in their best interest to report incorrect results. In such master-worker computations, the behavior and the best interest of the workers might change over time. We model such computations using evolutionary dynamics, and we study the conditions under which the master can reliably obtain task results. In particular, we develop and analyze an algorithmic mechanism based on reinforcement learning to provide workers with the necessary incentives to eventually become truthful. Our analysis identifies the conditions under which truthful behavior can be ensured and bounds the expected convergence time to that behavior. The analysis is complemented with illustrative simulations. This work is supported by the Cyprus Research Promotion Foundation grant TΠE/ΠΛHPO/0609(BE)/05, the National Science Foundation (CCF-0937829, CCF-1114930), Comunidad de Madrid grants S2009TIC-1692 and MODELICO-CM, Spanish PRODIEVO and RESINEE grants and MICINN grant TEC2011-29688-C02-01, and National Natural Science Foundation of China grant 61020106002.
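The flavor of these evolutionary dynamics can be conveyed with a toy replicator-style update. All payoff values and the update rule below are illustrative assumptions of ours, not the paper's mechanism, which additionally bounds the convergence time:

```python
def cheat_probability_dynamics(p_audit, rounds=300, alpha=0.05,
                               reward=1.0, punishment=2.0, work_cost=0.3):
    """Toy sketch of the incentive structure (illustrative parameters):
    a cheater collects the reward unless audited, in which case it is
    punished; an honest worker always collects the reward but pays the
    cost of actually computing the task. The fraction of cheating workers
    drifts toward whichever strategy earns the higher expected payoff."""
    p_cheat = 0.5
    history = [p_cheat]
    for _ in range(rounds):
        u_cheat = (1 - p_audit) * reward - p_audit * punishment
        u_honest = reward - work_cost
        # replicator-style step, scaled by the payoff gap; the p(1-p)
        # factor keeps the probability inside [0, 1]
        p_cheat += alpha * p_cheat * (1 - p_cheat) * (u_cheat - u_honest)
        history.append(p_cheat)
    return history
```

Under these toy payoffs, auditing often enough makes honesty the better-paying strategy and cheating dies out, while with no auditing the cheating fraction grows, mirroring the qualitative conclusion of the analysis.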
Achieving Reliability in Master-worker Computing via Evolutionary Dynamics
Proceedings of the 18th International Conference on Parallel and Distributed Computing (Euro-Par 2012), held August 27-31, 2012, in Rhodes Island, Greece. This work considers Internet-based task computations in which a master process assigns tasks, over the Internet, to rational workers and collects their responses. The objective is for the master to obtain the correct task outcomes. For this purpose we formulate and study the dynamics of evolution of Internet-based master-worker computations through reinforcement learning. This work is supported by the Cyprus Research Promotion Foundation grant TΠE/ΠΛHPO/0609(BE)/05, NSF grants CCF-0937829, CCF-1114930, Comunidad de Madrid grant S2009TIC-1692, Spanish MOSAICO and RESINEE grants and MICINN grant TEC2011-29688-C02-01, and National Natural Science Foundation of China grant 61020106002.
Crowd computing as a cooperation problem: an evolutionary approach
Cooperation is one of the socio-economic issues that has received the most attention from the physics community. The problem has mostly been considered by studying games such as the Prisoner's Dilemma or the Public Goods Game. Here, we take a step forward by studying cooperation in the context of crowd computing. We introduce a model loosely based on principal-agent theory in which people (workers) contribute to the solution of a distributed problem by computing answers and reporting to the problem proposer (master). To go beyond classical approaches involving the concept of Nash equilibrium, we work in an evolutionary framework in which both the master and the workers update their behavior through reinforcement learning. Using a Markov chain approach, we show theoretically that under certain (not very restrictive) conditions, the master can ensure the reliability of the answer resulting from the process. Then, we study the model by numerical simulations, finding that convergence, meaning that the system reaches a point in which it always produces reliable answers, may in general be much faster than the upper bounds given by the theoretical calculation. We also discuss the effects of the master's level of tolerance to defectors, about which the theory does not provide information. The discussion shows that the system works even with very large tolerances. We conclude with a discussion of our results and possible directions to carry this research further. This work is supported by the Cyprus Research Promotion Foundation grant TΠE/ΠΛHPO/0609(BE)/05, the National Science Foundation (CCF-0937829, CCF-1114930), Comunidad de Madrid grants S2009TIC-1692 and MODELICO-CM, Spanish MOSAICO, PRODIEVO and RESINEE grants and MICINN grant TEC2011-29688-C02-01, and National Natural Science Foundation of China grant 61020106002.
ChatGPT and Generative AI: A new era for crowdsourcing?
Generative AI systems, such as ChatGPT, have recently made their way into everyday life, raising questions as to who uses them and how. Human computation via crowdsourcing has traditionally focused on problems requiring a "human touch", problems that machines cannot (yet) solve. This work explores how Generative AI affects the present and future of crowdwork. We have conducted a large-scale, lightweight survey of crowdworkers' activities and beliefs on three popular platforms (Amazon Mechanical Turk, Prolific and Clickworker), asking 1,400 crowdworkers located across three different continents for their input. Our results not only explore the use of Generative AI tools by crowdworkers in the completion of tasks, but also document the emergence of a new type of crowdsourcing task. Additionally, we found strong evidence that the attitude of crowdworkers towards Generative AI is associated with the platform on which they operate. This project has received funding from the Cyprus Research and Innovation Foundation under grant EXCELLENCE/0421/0360
(KeepA(n)I), the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement No. 739578 (RISE), and the Government of the Republic of Cyprus through the Deputy Ministry of Research, Innovation and Digital Policy.