
    Multi-round Master-Worker Computing: a Repeated Game Approach

    We consider a computing system where a master processor assigns tasks for execution to worker processors through the Internet. We model each worker's decision of whether to comply (compute the task) or not (return a bogus result to save the computation cost) as a mixed extension of a strategic game among workers. That is, we assume that workers are rational in a game-theoretic sense and that they randomize their strategic choice. Workers are assigned multiple tasks in subsequent rounds, and we model the system as an infinitely repeated game of the mixed extension of the strategic game. In each round, the master decides stochastically whether to accept the answer of the majority or to verify the answers received, at some cost; incentives and/or penalties are applied to workers accordingly. Under this framework, we study the conditions under which the master can reliably obtain task results, exploiting the fact that the repeated-game model captures the effect of long-term interaction: workers take into account that their behavior in one computation will affect the behavior of other workers in the future. Indeed, should a worker be found to deviate from some agreed strategic choice, the remaining workers would change their own strategy to penalize the deviator; hence, being rational, workers do not deviate. We identify analytically the parameter conditions that induce a desired worker behavior, and we evaluate experimentally the mechanisms derived from those conditions. We also compare the performance of our mechanisms with a previously known multi-round mechanism based on reinforcement learning. (Comment: 21 pages, 3 figures)
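
    As a rough illustration of the round structure described above, the following Python sketch simulates a single round: each worker cheats with some probability, and the master either audits all answers at a cost or accepts the majority answer, applying rewards and penalties accordingly. All names and parameter choices (p_cheat, p_verify, fine, and so on) are illustrative inventions of ours, not notation or values from the paper.

```python
import random

def run_round(num_workers, p_cheat, p_verify,
              reward, fine, verify_cost, compute_cost):
    """Simulate one round of a master-worker audit game (illustrative only).

    Each worker independently cheats with probability p_cheat (returning a
    bogus answer to save compute_cost); the master verifies all answers with
    probability p_verify, otherwise it accepts the majority answer.
    """
    cheats = [random.random() < p_cheat for _ in range(num_workers)]
    payoffs = [0.0] * num_workers
    master_cost = 0.0

    if random.random() < p_verify:           # master audits this round
        master_cost += verify_cost
        for i, cheated in enumerate(cheats):
            payoffs[i] = -fine if cheated else reward - compute_cost
        correct = True                        # verification reveals the true result
    else:                                     # master trusts the majority answer
        majority_cheats = sum(cheats) > num_workers // 2
        for i, cheated in enumerate(cheats):
            payoffs[i] = reward - (0 if cheated else compute_cost)
        correct = not majority_cheats

    return correct, payoffs, master_cost

if __name__ == "__main__":
    ok, pay, cost = run_round(num_workers=5, p_cheat=0.2, p_verify=0.3,
                              reward=1.0, fine=2.0, verify_cost=0.5,
                              compute_cost=0.1)
    print(ok, pay, cost)
```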

    Pipelined Algorithms to Detect Cheating in Long-Term Grid Computations

    This paper studies pipelined algorithms for protecting distributed grid computations from cheating participants, who wish to be rewarded for tasks they receive but do not perform. We present improved cheater-detection algorithms that exploit the natural delays that exist in long-term grid computations. In particular, we partition the sequence of grid tasks into two interleaved sequences of task rounds, and we show how to use those rounds to devise the first general-purpose scheme that can catch all cheaters, even when cheaters collude. The main idea of this algorithm might at first seem counter-intuitive: we have the participants check each other's work. A naive implementation of this approach would, of course, be susceptible to collusion attacks, but we show that, by adapting efficient solutions to the parallel processor diagnosis problem, we can tolerate collusions of lazy cheaters, even if the number of such cheaters is a fraction of the total number of participants. We also include a simple economic analysis of cheaters in grid computations and a parameterization of the main deterrent that can be used against them: the probability of being caught. (Comment: Expanded version with an additional figure; ISSN 0304-397)
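
    To make the interleaving idea concrete, here is a minimal Python sketch in which even-indexed rounds issue fresh tasks and each odd-indexed round re-runs a sample of the previous round's tasks on different workers for cross-checking. It only illustrates the pipelining of computation and checking rounds; the paper's actual scheme, which adapts solutions to the parallel processor diagnosis problem, is more involved. All identifiers (schedule_rounds, check_fraction, and so on) are our own.

```python
import random

def chunk(seq, size):
    """Yield consecutive slices of length `size` from seq."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

def schedule_rounds(tasks, workers, check_fraction=0.25):
    """Rough sketch of interleaved computation and checking rounds.

    Even-indexed rounds carry fresh tasks (one per worker); each
    odd-indexed round re-issues a sample of those tasks to *different*
    workers so the results can be cross-checked.
    """
    schedule = []                                   # list of (kind, assignments)
    for task_batch in chunk(tasks, len(workers)):
        # computation round: one fresh task per worker
        compute = list(zip(task_batch, workers))
        schedule.append(("compute", compute))

        # checking round: re-run a sample of those tasks on other workers
        checks = []
        for task, owner in compute:
            if len(workers) > 1 and random.random() < check_fraction:
                checker = random.choice([w for w in workers if w != owner])
                checks.append((task, checker, owner))
        schedule.append(("check", checks))
    return schedule

print(schedule_rounds(list(range(6)), ["w1", "w2", "w3"]))
```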

    New Methods of Uncheatable Grid Computing

    Grid computing is the collection of computer resources from multiple locations to reach a common goal. Based on the task publisher's computing power, we classify deceptive detection schemes into two categories, and we analyze the security of these schemes in terms of the characteristics of the computational task function. Building on the double-check approach, we propose an improved scheme, the secondary allocation scheme of double check, which trades additional time for security and greatly strengthens the security of double checking. Finally, we analyze the common problem of high-value rare events, improve the deceptive detection scheme of [1], and put forward a new deceptive detection scheme with better security and efficiency. This paper is a revised and expanded version of a paper entitled "Deceptive Detection and Security Reinforcement in Grid Computing" [2], presented at the 2013 5th International Conference on Intelligent Networking and Collaborative Systems, Xi'an, Shaanxi Province, China, September 9-11, 2013.
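
    A minimal sketch of the basic double-check idea follows, assuming each task can simply be re-executed by the publisher when two workers disagree (a stand-in for the paper's secondary allocation step). The function and variable names (double_check, execute, recompute) are illustrative and not taken from the paper.

```python
import random

def double_check(task, workers, execute, recompute):
    """Minimal sketch of the double-check idea.

    The task is assigned to two distinct workers; if their answers
    disagree, the publisher falls back to a trusted recomputation.
    `execute` and `recompute` are caller-supplied functions.
    """
    w1, w2 = random.sample(workers, 2)
    r1, r2 = execute(w1, task), execute(w2, task)
    if r1 == r2:
        return r1                 # answers agree: accept the result
    return recompute(task)        # disagreement: resolve it authoritatively

# Toy usage: worker "w3" always lies, so any task it touches is recomputed.
workers = ["w1", "w2", "w3"]
execute = lambda w, t: t * t if w != "w3" else -1
result = double_check(5, workers, execute, recompute=lambda t: t * t)
print(result)   # 25
```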

    On Trustworthiness of CPU Usage Metering and Accounting

    In the envisaged utility computing paradigm, a user taps a service provider's computing resources to accomplish her tasks, without deploying the needed hardware and software in her own IT infrastructure. To make the service profitable, the service provider charges the user based on the resources consumed; a commonly billed resource is CPU usage. A key factor in the success of such a business model is the trustworthiness of the resource-metering scheme. In this paper, we provide a systematic study of the trustworthiness of CPU usage metering. Our results show that the metering schemes in commodity operating systems should not be used in utility computing: a dishonest server can run various attacks to cheat the users, and many of these attacks are surprisingly simple, requiring neither high privileges nor sophisticated techniques. To demonstrate this, we experiment with several types of attacks on Linux and show their adversarial effects. We also argue that source integrity, execution integrity, and fine-grained metering are necessary properties for a trustworthy metering scheme in utility computing. Keywords: CPU time metering; attack; utility computing
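
    For context on what commodity-OS metering looks like, the sketch below reads the per-process utime/stime counters that Linux exposes in /proc/<pid>/stat, which is the kind of OS-level accounting data the paper argues a dishonest provider can manipulate. It is a simplified illustration of ours (it assumes the process name contains no spaces) and is not code from the paper.

```python
import os

def cpu_seconds(pid):
    """Return user + system CPU time of a process, read from /proc/<pid>/stat.

    Fields 14 and 15 of the stat file hold utime and stime in clock ticks.
    Simplification: split() mis-indexes if the process name contains spaces.
    """
    with open(f"/proc/{pid}/stat") as f:
        fields = f.read().split()
    ticks = int(fields[13]) + int(fields[14])     # utime + stime
    return ticks / os.sysconf("SC_CLK_TCK")       # clock ticks -> seconds

if __name__ == "__main__":
    print(f"self: {cpu_seconds(os.getpid()):.2f} CPU-seconds")
```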

    Pinocchio: Nearly practical verifiable computation

    To instill greater confidence in computations outsourced to the cloud, clients should be able to verify the correctness of the results returned. To this end, we introduce Pinocchio, a built system for efficiently verifying general computations while relying only on cryptographic assumptions. With Pinocchio, the client creates a public evaluation key to describe her computation; this setup is proportional to evaluating the computation once. The worker then evaluates the computation on a particular input and uses the evaluation key to produce a proof of correctness. The proof is only 288 bytes, regardless of the computation performed or the size of the inputs and outputs. Anyone can use a public verification key to check the proof. Crucially, our evaluation on seven applications demonstrates that Pinocchio is efficient in practice too. Pinocchio's verification time is typically 10 ms: 5-7 orders of magnitude less than previous work; indeed, Pinocchio is the first general-purpose system to demonstrate verification cheaper than native execution (for some applications). Pinocchio also reduces the worker's proof effort by an additional 19-60×. As an additional feature, Pinocchio generalizes to zero-knowledge proofs at a negligible cost over the base protocol. Finally, to aid development, Pinocchio provides an end-to-end toolchain that compiles a subset of C into programs that implement the verifiable computation protocol.
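
    The workflow the abstract describes (key generation, proving, verification) can be sketched as an interface. The Python below shows only the shape of that protocol: the proof is a 288-byte placeholder and verification simply re-executes the computation, which is emphatically not how Pinocchio's cryptography works. All names are illustrative assumptions of ours.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

# Interface-only sketch of the verifiable-computation workflow
# (KeyGen / Prove / Verify); the proof below is a trivial placeholder.

@dataclass
class EvaluationKey:
    computation: Callable[[int], int]

@dataclass
class VerificationKey:
    computation: Callable[[int], int]

def keygen(computation: Callable[[int], int]) -> Tuple[EvaluationKey, VerificationKey]:
    """One-time setup, proportional in cost to evaluating the computation once."""
    return EvaluationKey(computation), VerificationKey(computation)

def prove(ek: EvaluationKey, x: int) -> Tuple[int, bytes]:
    """Worker: evaluate the computation on x and attach a (placeholder) proof."""
    y = ek.computation(x)
    return y, b"\x00" * 288          # real Pinocchio proofs are 288 bytes

def verify(vk: VerificationKey, x: int, y: int, proof: bytes) -> bool:
    """Client: check the proof (here, by naive re-execution, for illustration)."""
    return len(proof) == 288 and vk.computation(x) == y

ek, vk = keygen(lambda n: n * n)
y, pi = prove(ek, 7)
print(verify(vk, 7, y, pi))          # True
```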