9 research outputs found

    Result Integrity Check for MapReduce Computation on Hybrid Clouds

    Full text link
    Abstract — Large-scale adoption of MapReduce computations on public clouds is hindered by the lack of trust in the participating virtual machines, because misbehaving worker nodes can compromise the integrity of the computation result. In this paper, we propose a novel MapReduce framework, Cross Cloud MapReduce (CCMR), which overlays the MapReduce computation on top of a hybrid cloud: the master, which is in control of the entire computation and guarantees result integrity, runs on a private and trusted cloud, while normal workers run on a public cloud. In order to achieve high accuracy, CCMR proposes a result integrity check scheme for both the map phase and the reduce phase, which combines random task replication, random task verification, and credit accumulation; CCMR also strives to reduce the overhead by reducing cross-cloud communication. We implement our approach based on Apache Hadoop MapReduce and evaluate our implementation on Amazon EC2. Both theoretical and experimental analyses show that our approach can guarantee high result integrity in a normal cloud environment while incurring non-negligible performance overhead (e.g., when 16.7% of workers are malicious, CCMR can guarantee at least 99.52% accuracy with 33.6% overhead when the replication probability is 0.3 and the credit threshold is 50).
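The combination of random verification and credit accumulation described above can be illustrated with a minimal sketch. This is not CCMR's actual implementation; the class name, the `recompute` callback, and the acceptance states are hypothetical, and the `p_verify`/`threshold` parameters merely mirror the replication probability and credit threshold mentioned in the abstract.

```python
import random

class CreditVerifier:
    """Sketch of a credit-accumulation check: a worker's results are
    spot-checked with some probability; each passed check earns credit,
    and results are trusted outright once credit reaches a threshold."""

    def __init__(self, p_verify=0.3, threshold=50, seed=0):
        self.p_verify = p_verify
        self.threshold = threshold
        self.credit = {}               # worker id -> accumulated credit
        self.rng = random.Random(seed)

    def submit(self, worker, result, recompute):
        """recompute() returns the trusted value, e.g. from a replica
        run on the private cloud (hypothetical interface)."""
        c = self.credit.get(worker, 0)
        if self.rng.random() < self.p_verify:
            if result != recompute():  # spot check failed: reset credit
                self.credit[worker] = 0
                return "rejected"
            c += 1                     # passed a spot check: earn credit
            self.credit[worker] = c
        return "accepted" if c >= self.threshold else "pending"
```

A worker that repeatedly passes spot checks crosses the threshold and its later results are accepted without further replication, which is where the overhead saving comes from.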

    A Scheduling Algorithm for Defeating Collusion

    Get PDF
    By exploiting idle time on volunteer machines, desktop grids provide a way to execute large sets of tasks with negligible maintenance and low cost. Although desktop grids are attractive for cost-conscious projects, relying on external resources may compromise the correctness of application execution due to the well-known unreliability of nodes. In this paper, we consider the most challenging threat model: organized groups of cheaters that may collude to produce incorrect results. By using a previously described on-line algorithm for detecting collusion and characterizing the participant behaviors, we propose a scheduling algorithm that tackles collusion. Using several real-life traces, we show that our approach minimizes redundancy while maximizing the number of correctly certified results.
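The trade-off between redundancy and certification confidence can be sketched generically. This is not the paper's scheduling algorithm; it is a toy policy, under the assumption that the detector supplies an estimated fraction of colluding workers, that replicates a task only as much as needed to push the risk of an all-colluder replica set below a target.

```python
def replication_factor(p_collusion, target_error=1e-3, max_reps=5):
    """Toy policy: replicate a task until the probability that every
    replica lands on colluding nodes (p_collusion ** reps, assuming
    independent placement) falls below target_error."""
    reps, risk = 1, p_collusion
    while risk > target_error and reps < max_reps:
        reps += 1
        risk *= p_collusion            # one more independent replica
    return reps
```

Trusted-looking populations (low estimated collusion) thus get near-zero redundancy, while suspicious ones are replicated more heavily, which is the spirit of minimizing redundancy subject to correctness.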

    A Blockchain-based Decentralized Electronic Marketplace for Computing Resources

    Get PDF
    Abstract — We propose a framework for building a decentralized electronic marketplace for computing resources. The idea is that anyone with spare capacities can offer them on this marketplace, opening up the cloud computing market to smaller players and thus creating a more competitive environment compared to today's market, which consists of a few large providers. Trust is a crucial component in making an anonymized decentralized marketplace a reality. We develop protocols that enable participants to interact with each other in a fair way and show how these protocols can be implemented using smart contracts and blockchains. We discuss and evaluate our framework not only from a technical point of view, but also look at the wider context in terms of fair interactions and legal implications.
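Fair-interaction protocols of the kind mentioned above are often built around an escrow pattern. The following is a minimal sketch of such a state machine in plain Python, not the authors' protocol; the participant names, states, and method names are all illustrative stand-ins for what a smart contract would enforce on-chain.

```python
class Escrow:
    """Toy escrow state machine: the buyer locks the price up front,
    and funds are released to the seller only on confirmed delivery."""

    def __init__(self, buyer, seller, price):
        self.buyer, self.seller, self.price = buyer, seller, price
        self.state = "CREATED"
        self.balances = {buyer: 0, seller: 0}

    def deposit(self, sender, amount):
        assert self.state == "CREATED" and sender == self.buyer
        assert amount == self.price, "must lock the full price"
        self.state = "FUNDED"

    def confirm_delivery(self, sender):
        assert self.state == "FUNDED" and sender == self.buyer
        self.balances[self.seller] += self.price   # release to seller
        self.state = "COMPLETE"

    def refund(self, sender):
        assert self.state == "FUNDED" and sender == self.seller
        self.balances[self.buyer] += self.price    # seller backs out
        self.state = "REFUNDED"
```

On a blockchain, the same transitions would be contract methods whose guards are enforced by consensus rather than by `assert`, which is what removes the need for a trusted intermediary.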

    Result Verification and Trust-based Scheduling in Peer-to-Peer Grids

    No full text
    Peer-to-peer Grids that seek to harvest idle cycles available throughout the Internet are vulnerable to hosts that fraudulently accept computational tasks and then maliciously return arbitrary results. Current strategies employed by popular cooperative computing Grids, such as SETI@Home, rely heavily on task replication to check results. However, result verification through replication suffers from two potential shortcomings: (1) susceptibility to collusion, in which a group of malicious hosts conspire to return the same bad results, and (2) high fixed overhead incurred by running redundant copies of the task.
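Both shortcomings show up directly in the standard majority-vote check that replication-based verification relies on. A minimal sketch (a generic illustration, not any particular project's code):

```python
from collections import Counter

def vote(results):
    """Majority vote over replicated task results: returns the most
    common value, or None when there is a tie (no usable quorum)."""
    if not results:
        return None
    (top, n), *rest = Counter(results).most_common()
    if rest and rest[0][1] == n:       # tied vote: cannot certify
        return None
    return top
```

Note that `vote([9, 9, 4])` returns 9 even if 9 is the wrong answer: two colluders outvote one honest host, which is exactly shortcoming (1), while the extra replicas themselves are shortcoming (2).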

    Trusted community: a novel multiagent organisation for open distributed systems

    Get PDF
    [no abstract]

    Distributed Trust Management in Grid Computing Environments

    Get PDF
    Grid computing environments are open distributed systems in which autonomous participants collaborate with each other using specific mechanisms and protocols. In general, the participants have different aims and objectives, can join and leave the Grid environment any time, have different capabilities for offering services, and often do not have sufficient knowledge about their collaboration partners. As a result, it is quite difficult to rely on the outcome of the collaboration process. Furthermore, the overall decision whether to rely at all on a collaboration partner or not may be affected by other non-functional aspects that cannot be generally determined for every possible situation, but should rather be under the control of the user when requesting such a decision. In this thesis, the idea that trust is the major requirement for enabling collaboration among partners in Grid environments is investigated. The probability for a successful future interaction among partners is considered as closely related to the mutual trust values the partners assign to each other. Thus, the level of trust represents the level of intention of Grid participants to collaborate. Trust is classified into two categories: identity trust and behavior trust. Identity trust is concerned with verifying the authenticity of an interaction partner, whereas behavior trust deals with the trustworthiness of an interaction partner. In order to calculate the identity trust, a "small-worlds"-like scheme is proposed. The overall behavior trust of an interaction partner is built up by considering several factors, such as accuracy or reliability. These factors of behavior trust are continuously tested and verified. In this way, a history of past collaborations that is used for future decisions on further collaborations between collaboration partners is collected. This kind of experience is also shared as recommendations to other participants. 
An interesting problem analysed is the difficulty of discovering the "real" behavior of an interaction partner from the "observed" behavior. If there are behavioral deviations, then it is not clear under what circumstances the deviating behavior of a partner is going to be tolerated. Issues involved in managing behavior trust of Grid participants are investigated, and an approach is proposed that uses statistical methods of quality assurance to identify the "real" behavior of a participant during an interaction and to keep the behavior of the participants "in control". Another problem addressed is security in Grid environments. Grids are designed to provide access and control over enormous remote computational resources, storage devices and scientific instruments. The information exchanged, saved or processed can be quite valuable, and thus a Grid is an attractive target for attacks that try to extract this information. Here, the confidentiality of the communication between Grid participants is considered, together with issues related to authorization, integrity, management and non-repudiation. A hybrid message-level encryption scheme for securing the communication between Grid participants is proposed. It is based on a combination of two asymmetric cryptographic techniques: a variant of Public Key Infrastructure (PKI) and Certificateless Public Key Cryptography (CL-PKC). The different approaches to trust management are implemented on a simulation infrastructure. The proposed system architecture can be configured to domain-specific trust requirements through several separate trust profiles covering the entire lifecycle of trust establishment and management. Different experiments illustrate how Grid participants can build, manage and evolve trust between them in order to have a successful collaboration.
Although the approach is basically conceived for Grid environments, it is generic enough to be used for establishing and managing trust in many Grid-like distributed environments.
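The two core mechanisms in this abstract — accumulating behavior trust from a history of interactions and keeping behavior "in control" via statistical quality assurance — can be sketched together. This is a generic illustration under assumed formulas (an exponential moving average for trust, a Shewhart-style control limit for deviations), not the thesis's actual model; all function names and parameters are hypothetical.

```python
from statistics import mean, stdev

def update_trust(current, outcome, alpha=0.2):
    """Exponential-moving-average update of a behavior-trust value in
    [0, 1]; outcome is 1.0 for a successful interaction, 0.0 otherwise."""
    return (1 - alpha) * current + alpha * outcome

def combined_trust(direct, recommendations, w_direct=0.7):
    """Blend first-hand experience with recommendations shared by other
    participants, weighting direct experience more heavily."""
    if not recommendations:
        return direct
    rec = sum(recommendations) / len(recommendations)
    return w_direct * direct + (1 - w_direct) * rec

def in_control(history, latest, k=3.0):
    """Shewhart-style quality-assurance check: the latest observation of
    a participant's behavior counts as 'in control' when it lies within
    k standard deviations of its historical mean."""
    if len(history) < 2:
        return True                    # too little data to judge deviations
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest == mu
    return abs(latest - mu) <= k * sigma
```

An out-of-control observation would be the trigger to distrust the "observed" behavior as a guide to the "real" one, e.g. by resetting or heavily discounting the partner's trust value.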