7 research outputs found

    New Methods of Uncheatable Grid Computing

    Grid computing is the collection of computer resources from multiple locations to reach a common goal. We classify deceptive detection schemes into two categories according to the task publisher's computing power, and then analyze their security based on the characteristics of the computational task function. Building on double checking, we propose an improved scheme, the secondary allocation scheme of double check, which trades additional time for substantially stronger security. Finally, we analyze the common problem of searching for high-value rare events, improve the deceptive detection scheme of [1], and put forward a new scheme with better security and efficiency. This paper is a revised and expanded version of a paper entitled "Deceptive Detection and Security Reinforcement in Grid Computing" [2], presented at the 2013 5th International Conference on Intelligent Networking and Collaborative Systems, Xi'an, Shaanxi province, China, September 9-11, 2013.
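    As a rough aid to intuition, the Python sketch below shows one way a double-check scheme with a secondary allocation could look: each task goes to two workers, and on disagreement it is re-allocated to a fresh pair rather than trusting either answer. The names (run_task, max_rounds) and the re-allocation policy are illustrative assumptions, not the authors' exact scheme.

        # Hypothetical sketch: double check with a secondary allocation on mismatch.
        # run_task(worker, task) is an assumed callback returning the worker's answer.
        import random

        def double_check(task, workers, run_task, max_rounds=2):
            """Assign task to two distinct workers; on disagreement, re-allocate it
            to a fresh pair (the secondary allocation), up to max_rounds times."""
            pool = list(workers)
            for _ in range(max_rounds):
                a, b = random.sample(pool, 2)
                ra, rb = run_task(a, task), run_task(b, task)
                if ra == rb:
                    return ra                                 # results agree: accept
                pool = [w for w in pool if w not in (a, b)]   # drop the suspect pair
                if len(pool) < 2:
                    break
            return None                                       # unresolved: flag for supervisor audit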

    Pipelined Algorithms to Detect Cheating in Long-Term Grid Computations

    This paper studies pipelined algorithms for protecting distributed grid computations from cheating participants, who wish to be rewarded for tasks they receive but do not perform. We present improved cheater detection algorithms that exploit the natural delays present in long-term grid computations. In particular, we partition the sequence of grid tasks into two interleaved sequences of task rounds, and we show how to use those rounds to devise the first general-purpose scheme that can catch all cheaters, even when cheaters collude. The main idea of this algorithm might at first seem counter-intuitive: we have the participants check each other's work. A naive implementation of this approach would, of course, be susceptible to collusion attacks, but we show that, by adapting efficient solutions to the parallel processor diagnosis problem, we can tolerate collusions of lazy cheaters even if the number of such cheaters is a fraction of the total number of participants. We also include a simple economic analysis of cheaters in grid computations and a parameterization of the main deterrent that can be used against them: the probability of being caught. Comment: Expanded version with an additional figure; ISSN 0304-397
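    As a generic illustration of the interleaved-rounds idea, the sketch below splits the task list into two interleaved sequences and replays a sample of already-answered tasks to a different participant as a hidden cross-check. The sampling rate, the choice of checkers, and the simple suspect set are placeholders; they stand in for, and are not, the paper's pipelined construction or its adaptation of parallel processor diagnosis.

        # Illustrative only: peer cross-checking across two interleaved task rounds.
        # execute(worker, task) is an assumed callback returning the worker's answer.
        import random

        def interleaved_rounds(tasks, participants, execute, recheck_fraction=0.1):
            results = {}                                  # task -> (worker, result)
            suspects = set()
            for batch in (tasks[0::2], tasks[1::2]):      # two interleaved sequences
                rechecks = random.sample(list(results),
                                         int(recheck_fraction * len(results)))
                for task in list(batch) + rechecks:
                    if task in results:                   # replayed task: cross-check
                        prev_worker, prev_result = results[task]
                        checker = random.choice([p for p in participants if p != prev_worker])
                        if execute(checker, task) != prev_result:
                            suspects.update({checker, prev_worker})   # feed a diagnosis phase
                    else:                                 # fresh task
                        worker = random.choice(participants)
                        results[task] = (worker, execute(worker, task))
            return results, suspects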

    Pinocchio: Nearly practical verifiable computation

    To instill greater confidence in computations outsourced to the cloud, clients should be able to verify the correctness of the results returned. To this end, we introduce Pinocchio, a built system for efficiently verifying general computations while relying only on cryptographic assumptions. With Pinocchio, the client creates a public evaluation key to describe her computation; this setup is proportional to evaluating the computation once. The worker then evaluates the computation on a particular input and uses the evaluation key to produce a proof of correctness. The proof is only 288 bytes, regardless of the computation performed or the size of the inputs and outputs. Anyone can use a public verification key to check the proof. Crucially, our evaluation on seven applications demonstrates that Pinocchio is efficient in practice too. Pinocchio's verification time is typically 10ms: 5-7 orders of magnitude less than previous work; indeed, Pinocchio is the first general-purpose system to demonstrate verification cheaper than native execution (for some apps). Pinocchio also reduces the worker's proof effort by an additional 19-60×. As an additional feature, Pinocchio generalizes to zero-knowledge proofs at a negligible cost over the base protocol. Finally, to aid development, Pinocchio provides an end-to-end toolchain that compiles a subset of C into programs that implement the verifiable computation protocol.
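    The three-phase workflow described in the abstract (one-time key generation, proof generation by the worker, verification by anyone holding the verification key) can be pictured with the toy skeleton below. The function names, dictionary-based keys, and the 288-byte placeholder "proof" are hypothetical stand-ins, not Pinocchio's actual toolchain or cryptography.

        # Toy shape of a verifiable-computation workflow; not Pinocchio's real API.
        from typing import Any, Callable, Tuple

        def keygen(computation: Callable[[Any], Any]) -> Tuple[dict, dict]:
            """One-time setup whose cost is proportional to evaluating the computation once."""
            ek = {"circuit": computation}       # public evaluation key (used by the worker)
            vk = {"circuit": computation}       # public verification key (usable by anyone)
            return ek, vk

        def prove(ek: dict, x: Any) -> Tuple[Any, bytes]:
            """Worker: evaluate on input x and return (output, proof); in Pinocchio the
            proof is 288 bytes regardless of the computation."""
            y = ek["circuit"](x)
            proof = bytes(288)                  # stand-in for the cryptographic proof
            return y, proof

        def verify(vk: dict, x: Any, y: Any, proof: bytes) -> bool:
            """Toy check that re-runs the computation, which is exactly the cost a real
            verifiable-computation scheme avoids."""
            return len(proof) == 288 and vk["circuit"](x) == y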

    On Power Splitting Games in Distributed Computation: The Case of Bitcoin Pooled Mining

    Several new services incentivize clients to compete in solving large computation tasks in exchange for financial rewards. This model of competitive distributed computation enables every user connected to the Internet to participate in a game in which he splits his computational power among a set of competing pools; we call this a computational power splitting game. We formally model this game and show its utility in analyzing the security of pool protocols that dictate how financial rewards are shared among the members of a pool. As a case study, we analyze the Bitcoin cryptocurrency, which attracts computing power roughly equivalent to billions of desktop machines, over 70% of which is organized into public pools. We show that existing pool reward sharing protocols are insecure in our game-theoretic analysis under an attack strategy called the “block withholding attack”. This attack has been a topic of debate: it was initially thought to be ill-incentivized in today’s pool protocols, i.e., causing a net loss to the attacker, and later argued to be always profitable. Our analysis shows that the attack is always well-incentivized in the long run, but may not be so for a short duration. This implies that existing pool protocols are insecure: if the attack is conducted systematically, Bitcoin pools could lose millions of dollars’ worth in months. The equilibrium state is a mixed strategy, that is, in equilibrium all clients are incentivized to attack probabilistically to maximize their payoffs rather than participate honestly. As a result, part of the Bitcoin network is incentivized to waste resources competing for higher selfish rewards.
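    A simplified payoff calculation shows why infiltrating a pool while withholding blocks can pay off. The model below assumes total network power is normalized to 1 and that pools pay members pro rata to submitted shares; it is an illustration of the incentive, not the paper's full game formulation or equilibrium analysis.

        # Simplified block-withholding payoff model (illustrative assumptions only).
        def attacker_payoff(alpha: float, beta: float, x: float) -> float:
            """alpha: attacker's total power; beta: victim pool's honest power;
            x: power the attacker infiltrates into the pool while withholding blocks.
            Total network power is normalized to 1."""
            assert 0.0 <= x <= alpha <= 1.0 and 0.0 < beta and alpha + beta <= 1.0
            effective = 1.0 - x                           # withheld power finds no blocks
            direct = (alpha - x) / effective              # attacker's own honest mining
            pool_income = beta / effective                # pool blocks come from honest members only
            stolen_share = pool_income * x / (beta + x)   # shares are paid pro rata
            return direct + stolen_share

        # Example: a 20% miner infiltrating 5% of power into a 30% pool earns about
        # 0.203 of the total reward, more than the honest baseline of 0.200.
        print(attacker_payoff(0.20, 0.30, 0.05))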

    Contributions to Desktop Grid Computing : From High Throughput Computing to Data-Intensive Sciences on Hybrid Distributed Computing Infrastructures

    Since the mid-1990s, Desktop Grid Computing - i.e., the idea of using a large number of remote PCs distributed over the Internet to execute large parallel applications - has proved to be an efficient paradigm for providing large computational power at a fraction of the cost of a dedicated computing infrastructure. This document presents my contributions over the last decade to broadening the scope of Desktop Grid Computing. My research has followed three directions. The first established new methods to observe and characterize Desktop Grid resources and developed experimental platforms to test and validate our approach in conditions close to reality. The second focused on integrating Desktop Grids into e-science Grid infrastructures (e.g., EGI), which requires addressing many challenges such as security, scheduling, quality of service, and more. The third investigated how to support large-scale data management and data-intensive applications on such infrastructures, including support for new and emerging data-oriented programming models. This manuscript reports not only on the scientific achievements and the technologies developed to support these objectives, but also on the international collaborations and projects I have been involved in, as well as the scientific mentoring that motivates my candidature for the Habilitation à Diriger des Recherches.

    Searching for High-Value Rare Events with Uncheatable Grid Computing

    High-value rare-event searching is arguably the most natural application of grid computing, where computational tasks are distributed to a large collection of clients (which comprise the computation grid) in such a way that clients are rewarded for performing tasks assigned to them. Although natural, rare-event searching presents significant challenges for a computation supervisor, who partitions and distributes the search space out to clients while contending with “lazy” clients, who don't do all their tasks, and “hoarding” clients, who don't report rare events back to the supervisor. We provide schemes, based on a technique we call chaff injection, for efficiently performing uncheatable grid computing in the context of searching for high-value rare events in the presence of coalitions of lazy and hoarding clients.
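    To make the chaff-injection idea concrete, the sketch below has the supervisor plant decoy "rare events" it already knows about in each client's assignment and flag any client whose report omits them. How chaff is made indistinguishable from genuine rare events, and how coalitions of cheaters are handled, is the paper's contribution and is abstracted away here; the data layout and parameters are assumptions.

        # Illustrative chaff injection: plant known decoys, then audit the reports.
        import random

        def assign_with_chaff(search_space, clients, chaff_per_client=2):
            """Split the search space among clients and plant known chaff items."""
            assignments, planted = {}, {}
            chunk = len(search_space) // len(clients)
            for i, client in enumerate(clients):
                block = list(search_space[i * chunk:(i + 1) * chunk])
                chaff = [("CHAFF", client, j) for j in range(chaff_per_client)]
                block.extend(chaff)
                random.shuffle(block)                # decoys are mixed into the workload
                assignments[client] = block
                planted[client] = set(chaff)
            return assignments, planted

        def audit(reports, planted):
            """A client is suspect if any planted chaff is missing from its report."""
            return {c for c, chaff in planted.items()
                    if not chaff <= set(reports.get(c, []))}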

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume