
    Technical Report: Estimating Reliability of Workers for Cooperative Distributed Computing

    Internet supercomputing is an approach to solving partitionable, computation-intensive problems by harnessing the power of a vast number of interconnected computers. For the problem of using network supercomputing to perform a large collection of independent tasks, prior work introduced a decentralized approach and provided randomized synchronous algorithms that perform all tasks correctly with high probability, while dealing with misbehaving or crash-prone processors. The main weakness of existing algorithms is that they assume either that the \emph{average} probability of a non-crashed processor returning incorrect results is less than $\frac{1}{2}$, or that the probability of returning incorrect results is known to \emph{each} processor. Here we present a randomized synchronous distributed algorithm that tightly estimates the probability of each processor returning correct results. Starting with the set $P$ of $n$ processors, let $F$ be the set of processors that crash. Our algorithm estimates the probability $p_i$ of returning a correct result for each processor $i \in P-F$, making the estimates available to all these processors. The estimation is based on the $(\epsilon, \delta)$-approximation, where each estimate $\tilde{p_i}$ of $p_i$ obeys the bound $\Pr[p_i(1-\epsilon) \leq \tilde{p_i} \leq p_i(1+\epsilon)] > 1 - \delta$, for any constants $\delta > 0$ and $\epsilon > 0$ chosen by the user. An important aspect of this algorithm is that each processor terminates without global coordination. We assess the efficiency of the algorithm in three adversarial models as follows. For the model where the number of non-crashed processors $|P-F|$ is linearly bounded, the time complexity $T(n)$ of the algorithm is $\Theta(\log n)$, the work complexity $W(n)$ is $\Theta(n\log n)$, and the message complexity $M(n)$ is $\Theta(n\log^2 n)$.
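The $(\epsilon, \delta)$-approximation guarantee above can be illustrated with a minimal sampling sketch. This is not the paper's decentralized algorithm (which coordinates estimates across processors and terminates without global coordination); it only shows, under assumed names (`worker`, `p_lower`), how many independent trials a standard Chernoff-bound argument requires so that the empirical success rate $\tilde{p}$ satisfies $\Pr[p(1-\epsilon) \leq \tilde{p} \leq p(1+\epsilon)] > 1-\delta$, assuming the true probability is at least `p_lower`.

```python
import math
import random

def estimate_success_prob(worker, epsilon, delta, p_lower):
    """(epsilon, delta)-approximate a worker's probability of returning a
    correct result by repeated independent sampling. The sample count m
    follows a standard Chernoff-bound argument, assuming the true
    probability is at least p_lower."""
    m = math.ceil(3 * math.log(2 / delta) / (epsilon ** 2 * p_lower))
    successes = sum(worker() for _ in range(m))  # bools sum to an int
    return successes / m, m

# Hypothetical worker that returns a correct result with probability 0.8.
rng = random.Random(42)
worker = lambda: rng.random() < 0.8

p_tilde, m = estimate_success_prob(worker, epsilon=0.1, delta=0.05, p_lower=0.1)
```

Note that the required sample count grows as $1/\epsilon^2$ and only logarithmically in $1/\delta$, which is why tightening the confidence $\delta$ is cheap while tightening the accuracy $\epsilon$ is not.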

    Technical Report: Dealing with Undependable Workers in Decentralized Network Supercomputing

    Internet supercomputing is an approach to solving partitionable, computation-intensive problems by harnessing the power of a vast number of interconnected computers. This paper presents a new algorithm for the problem of using network supercomputing to perform a large collection of independent tasks, while dealing with undependable processors. The adversary may cause the processors to return bogus results for tasks with certain probabilities, and may cause a subset $F$ of the initial set of processors $P$ to crash. The adversary is constrained in two ways. First, for the set of non-crashed processors $P-F$, the \emph{average} probability of a processor returning a bogus result is less than $\frac{1}{2}$. Second, the adversary may crash a subset of processors $F$, provided the size of $P-F$ is bounded from below. We consider two models: the first bounds the size of $P-F$ by a fractional polynomial, the second bounds this size by a poly-logarithm. Both models yield adversaries that are much stronger than those previously studied. Our randomized synchronous algorithm is formulated for $n$ processors and $t$ tasks, with $n \le t$, where, depending on the number of crashes, each live processor is able to terminate dynamically with the knowledge that the problem is solved with high probability. For the adversary constrained by a fractional polynomial, the round complexity of the algorithm is $O(\frac{t}{n^\varepsilon}\log n \log\log n)$, its work is $O(t\log n \log\log n)$, and its message complexity is $O(n\log n \log\log n)$. For the poly-log constrained adversary, the round complexity is $O(t)$, the work is $O(t\,n^{\varepsilon})$, and the message complexity is $O(n^{1+\varepsilon})$. All bounds are shown to hold with high probability.
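The role of the average-bogus-probability-below-$\frac{1}{2}$ constraint can be seen in a minimal plurality-voting sketch. This is not the paper's algorithm (whose processors exchange results over rounds and terminate dynamically); it is a hypothetical single-task illustration, with assumed names (`do_task_with_voting`, `bogus_probs`), of why replicating a task across workers whose bogus probabilities average below $\frac{1}{2}$ makes the correct answer the likely plurality.

```python
import random
from collections import Counter

def do_task_with_voting(task, bogus_probs, k, rng):
    """Replicate one task to k randomly chosen workers and return the
    plurality answer. bogus_probs[i] is worker i's probability of
    returning a bogus result; the correct answer wins the plurality
    with high probability when these probabilities average below 1/2."""
    chosen = rng.sample(bogus_probs, k)
    correct = task * 2  # hypothetical "correct result" of the task
    answers = []
    for p_bogus in chosen:
        # A bogus worker returns a wrong value; an honest one the right value.
        answers.append(correct + 1 if rng.random() < p_bogus else correct)
    return Counter(answers).most_common(1)[0][0]

# Twenty hypothetical workers whose bogus probabilities average 0.3 < 1/2.
bogus_probs = [0.1, 0.2, 0.3, 0.4, 0.5] * 4
result = do_task_with_voting(5, bogus_probs, k=15, rng=random.Random(7))
```

The replication factor `k` trades work against confidence: each extra vote shrinks the chance that bogus answers form a plurality, which is the intuition behind the with-high-probability guarantees stated in the bounds above.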