5 research outputs found

    Applying the dynamics of evolution to achieve reliability in master-worker computing

    We consider Internet-based master-worker task computations, such as SETI@home, in which a master process sends tasks across the Internet to worker processes; the workers execute the tasks and report back results. These workers are not trustworthy, however, and it might be in their best interest to report incorrect results. Moreover, in such master-worker computations, the behavior and the best interest of the workers might change over time. We model such computations using evolutionary dynamics and study the conditions under which the master can reliably obtain task results. In particular, we develop and analyze an algorithmic mechanism based on reinforcement learning that provides workers with the necessary incentives to eventually become truthful. Our analysis identifies the conditions under which truthful behavior can be ensured and bounds the expected convergence time to that behavior. The analysis is complemented with illustrative simulations.

    This work is supported by the Cyprus Research Promotion Foundation grant TΠE/ΠΛHPO/0609(BE)/05, the National Science Foundation (CCF-0937829, CCF-1114930), Comunidad de Madrid grants S2009TIC-1692 and MODELICO-CM, Spanish PRODIEVO and RESINEE grants and MICINN grant TEC2011-29688-C02-01, and National Natural Science Foundation of China grant 61020106002.
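    To make the setting concrete, here is a minimal Python sketch of the general kind of master-worker round described above: each worker keeps a propensity to report truthfully and updates it by reinforcement learning, while the master occasionally audits results and rewards or punishes accordingly. The payoff values, the Bush-Mosteller-style update, and the audit probability are illustrative assumptions, not the mechanism analyzed in the paper.

        import random

        class Worker:
            """A worker with a propensity to compute honestly, updated by reinforcement."""
            def __init__(self, p_truthful=0.5, learn_rate=0.1):
                self.p_truthful = p_truthful
                self.learn_rate = learn_rate

            def act(self):
                return random.random() < self.p_truthful  # True = compute the task honestly

            def reinforce(self, truthful, payoff, aspiration=0.0):
                # Bush-Mosteller-style update: payoffs above the aspiration level make
                # the chosen action more likely, payoffs below make it less likely.
                stimulus = max(-1.0, min(1.0, payoff - aspiration))
                p = self.p_truthful if truthful else 1.0 - self.p_truthful
                if stimulus >= 0:
                    p += self.learn_rate * stimulus * (1.0 - p)
                else:
                    p += self.learn_rate * stimulus * p
                self.p_truthful = p if truthful else 1.0 - p

        def run_round(workers, p_audit=0.3, reward=1.0, punishment=-2.0, cost=0.1):
            # With probability p_audit the master recomputes the task itself, so it can
            # reward honest workers and punish cheaters; otherwise answers go unchecked.
            audited = random.random() < p_audit
            for w in workers:
                truthful = w.act()
                if audited:
                    payoff = (reward - cost) if truthful else punishment
                else:
                    payoff = -cost if truthful else 0.0  # unaudited cheating is free
                w.reinforce(truthful, payoff)

        workers = [Worker() for _ in range(100)]
        for _ in range(2000):
            run_round(workers)
        print(sum(w.p_truthful for w in workers) / len(workers))  # average propensity to be truthful

    Under these assumed payoffs, honest work has positive expected drift only when audits are frequent enough and punishments large enough, which mirrors the kind of condition the paper's analysis makes precise.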

    Crowd computing as a cooperation problem: an evolutionary approach

    Cooperation is one of the socio-economic issues that has received the most attention from the physics community. The problem has mostly been studied through games such as the Prisoner's Dilemma or the Public Goods Game. Here, we take a step forward by studying cooperation in the context of crowd computing. We introduce a model loosely based on principal-agent theory in which people (workers) contribute to the solution of a distributed problem by computing answers and reporting to the problem proposer (master). To go beyond classical approaches involving the concept of Nash equilibrium, we work in an evolutionary framework in which both the master and the workers update their behavior through reinforcement learning. Using a Markov chain approach, we show theoretically that under certain, not very restrictive, conditions the master can ensure the reliability of the answer resulting from the process. We then study the model by numerical simulations, finding that convergence, meaning that the system reaches a point at which it always produces reliable answers, may in general be much faster than the upper bounds given by the theoretical calculation. We also discuss the effects of the master's level of tolerance to defectors, about which the theory provides no information; the discussion shows that the system works even with very large tolerances. We conclude with a discussion of our results and possible directions to carry this research further.

    This work is supported by the Cyprus Research Promotion Foundation grant TΠE/ΠΛHPO/0609(BE)/05, the National Science Foundation (CCF-0937829, CCF-1114930), Comunidad de Madrid grants S2009TIC-1692 and MODELICO-CM, Spanish MOSAICO, PRODIEVO and RESINEE grants and MICINN grant TEC2011-29688-C02-01, and National Natural Science Foundation of China grant 61020106002.
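    In the same spirit as the previous sketch, the following Python fragment adds the two ingredients that distinguish this model: the master itself learns (here, it adapts how often it audits) and it tolerates a bounded fraction of defectors, controlled by a threshold tau. All numerical values and update rules are illustrative assumptions rather than the paper's exact dynamics.

        import random

        def simulate(n_workers=50, rounds=3000, tau=0.2, lr=0.05):
            # Propensities: each worker's probability of answering honestly, and the
            # master's probability of auditing a round.
            p_truthful = [0.5] * n_workers
            p_audit = 0.5
            for _ in range(rounds):
                honest = [random.random() < p for p in p_truthful]
                if random.random() < p_audit:
                    wrong_fraction = 1.0 - sum(honest) / n_workers
                    # Master: audit more when defection exceeds its tolerance tau,
                    # relax when defection is within tolerance.
                    p_audit += lr if wrong_fraction > tau else -lr
                    p_audit = min(1.0, max(0.01, p_audit))
                    # Workers: rewarded honesty and punished cheating both push
                    # propensities toward truthful behavior.
                    p_truthful = [min(1.0, p + lr) for p in p_truthful]
                else:
                    # Unaudited rounds: honest work only incurs its cost, so honest
                    # workers drift slightly toward defection.
                    p_truthful = [max(0.0, p - lr * 0.1) if h else p
                                  for p, h in zip(p_truthful, honest)]
            return p_audit, sum(p_truthful) / n_workers

        print(simulate())

    Varying tau in a sketch like this gives a quick way to explore the tolerance effect discussed in the abstract.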

    The Cost of Moral Hazard and Limited Liability in the Principal-Agent Problem

    In the classical principal-agent problem, a principal hires an agent to perform a task. The principal cares about the task's output but has no control over it. The agent can perform the task at different effort intensities, and that choice affects the task's output. Since the agent's effort intensity cannot be observed, the principal ties the agent's compensation to the task's output in order to give the agent an incentive to work hard. If both the principal and the agent are risk-neutral and no further constraints are imposed, it is well known that the outcome of the game maximizes social welfare. In this paper we quantify the potential social-welfare loss due to the existence of limited liability, which takes the form of a minimum wage constraint. To do so we rely on the worst-case welfare loss, commonly referred to as the Price of Anarchy, which quantifies the (in)efficiency of a system when its players act selfishly (i.e., they play a Nash equilibrium) versus choosing a socially optimal solution. Our main result establishes that under the monotone likelihood-ratio property and limited liability constraints, the worst-case welfare loss in the principal-agent model is exactly equal to the number of efforts available.
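    As a concrete, hypothetical illustration of the welfare loss being measured, the Python sketch below builds a two-effort, two-outcome instance with risk-neutral players and non-negative wages. Because the cheapest limited-liability contract that induces high effort leaves the agent a rent, the principal may prefer to induce low effort even when high effort is socially optimal. The numbers are invented for illustration and are not taken from the paper.

        # Hypothetical two-effort principal-agent instance with limited liability
        # (wages must be non-negative); numbers are illustrative only.
        V = 3.0                  # value of a successful outcome
        costs = [0.0, 1.0]       # agent's cost of low / high effort
        probs = [0.5, 0.9]       # success probability under low / high effort

        def welfare(e):
            # Social welfare: expected output minus the agent's effort cost.
            return probs[e] * V - costs[e]

        def principal_payoff(e):
            # Cheapest non-negative contract inducing effort e pays a bonus only on
            # success: no bonus for low effort, bonus costs[1] / (probs[1] - probs[0])
            # for high effort (the binding incentive constraint).
            bonus = 0.0 if e == 0 else costs[1] / (probs[1] - probs[0])
            return probs[e] * (V - bonus)

        optimal = max(range(2), key=welfare)            # socially optimal effort
        chosen = max(range(2), key=principal_payoff)    # effort the principal induces

        print("optimal effort:", optimal, "welfare:", round(welfare(optimal), 2))
        print("induced effort:", chosen, "welfare:", round(welfare(chosen), 2))

    With these invented numbers the principal induces low effort (welfare 1.5) even though high effort is socially optimal (welfare 1.7); the abstract's result says that, over all such instances, this gap is at worst a factor equal to the number of available effort levels.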