Technical Report: Estimating Reliability of Workers for Cooperative Distributed Computing
Internet supercomputing is an approach to solving partitionable,
computation-intensive problems by harnessing the power of a vast number of
interconnected computers. For the problem of using network supercomputing to
perform a large collection of independent tasks, prior work introduced a
decentralized approach and provided randomized synchronous algorithms that
perform all tasks correctly with high probability, while dealing with
misbehaving or crash-prone processors. The main weakness of existing
algorithms is that they assume either that the \emph{average} probability of a
non-crashed processor returning incorrect results is less than $\frac{1}{2}$,
or that the probability of returning incorrect results is known to \emph{each}
processor. Here we present a randomized synchronous distributed algorithm that
tightly estimates the probability of each processor returning correct results.
Starting with the set $P$ of $n$ processors, let $F$ be the set of processors
that crash. Our algorithm estimates the probability $p_i$ of returning a
correct result for each processor $i \in P \setminus F$, making the estimates available
to all these processors. The estimation is based on an
$(\varepsilon,\delta)$-approximation, where each estimated probability
$\tilde{p}_i$ of processor $i$ obeys the bound
$\Pr\left[\,|\tilde{p}_i - p_i| \le \varepsilon\, p_i\,\right] \ge 1 - \delta$,
for any constants $\varepsilon > 0$ and $\delta > 0$ chosen by the user. An
important aspect of this algorithm is that each
processor terminates without global coordination. We assess the efficiency of
the algorithm in three adversarial models as follows. For the model where the
number of non-crashed processors is linearly bounded, the time
complexity of the algorithm is , work complexity
is , and message complexity is .
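As a rough, self-contained illustration of the estimation goal (this is not the report's decentralized algorithm: the audit-task interface, the additive Hoeffding-style sample size, and the simulated worker below are all illustrative assumptions), a single worker's probability of answering correctly can be estimated to within an additive error $\varepsilon$ with confidence $1-\delta$ by sampling:

```python
import math
import random

def audit_sample_size(eps: float, delta: float) -> int:
    # Hoeffding bound: m >= ln(2/delta) / (2*eps^2) samples guarantee an
    # additive error of at most eps with probability at least 1 - delta.
    return math.ceil(math.log(2 / delta) / (2 * eps ** 2))

def estimate_reliability(worker, eps: float, delta: float,
                         rng: random.Random) -> float:
    # Issue m audit tasks whose answers are known, and use the fraction
    # answered correctly as the estimate of the worker's reliability.
    m = audit_sample_size(eps, delta)
    correct = sum(1 for _ in range(m) if worker(rng))
    return correct / m

if __name__ == "__main__":
    rng = random.Random(7)
    # Hypothetical worker that answers an audit task correctly with prob. 0.8.
    worker = lambda r: r.random() < 0.8
    print(audit_sample_size(0.1, 0.05))   # 185 audit tasks suffice
    print(estimate_reliability(worker, 0.1, 0.05, rng))
```

The decentralized setting is harder: each processor must obtain such estimates for every other processor without a trusted auditor, which is what the report's algorithm provides.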
Technical Report: Dealing with Undependable Workers in Decentralized Network Supercomputing
Internet supercomputing is an approach to solving partitionable,
computation-intensive problems by harnessing the power of a vast number of
interconnected computers. This paper presents a new algorithm for the problem
of using network supercomputing to perform a large collection of independent
tasks, while dealing with undependable processors. The adversary may cause the
processors to return bogus results for tasks with certain probabilities, and
may cause a subset $F$ of the initial set $P$ of processors to crash. The
adversary is constrained in two ways. First, for the set of non-crashed
processors $P \setminus F$, the \emph{average} probability of a processor returning a
bogus result is less than $\frac{1}{2}$. Second, the adversary may crash a
subset of processors $F$, provided the size of $P \setminus F$ is bounded from below. We
consider two models: the first bounds the size of $P \setminus F$ by a fractional
polynomial, the second bounds this size by a poly-logarithm. Both models yield
adversaries that are much stronger than previously studied. Our randomized
synchronous algorithm is formulated for $n$ processors and $t$ tasks, where,
depending on the number of crashes, each live processor is able
to terminate dynamically with the knowledge that the problem is solved with
high probability. For the adversary constrained by a fractional polynomial, the
round complexity of the algorithm is , its work is , and its message
complexity is . For the poly-log constrained adversary, the round complexity
is , work is , and message complexity is . All bounds are shown to hold
with high probability.
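Because the average bogus-result probability is below $\frac{1}{2}$, replicating each task on several workers and taking a plurality vote recovers the correct result with high probability; the error probability of the vote decays exponentially in the replication factor. The simulation below sketches this effect only (the worker model, the replication factor of 51, and the conservatively colluding bogus value are illustrative assumptions, not the report's algorithm):

```python
import random
from collections import Counter

def plurality_result(correct_answer, error_probs, reps, rng):
    # Replicate one task on `reps` randomly chosen workers.  Each worker
    # independently returns a bogus value with its own error probability;
    # conservatively, all bogus workers collude on the same wrong value.
    chosen = rng.sample(range(len(error_probs)), reps)
    votes = ["bogus" if rng.random() < error_probs[i] else correct_answer
             for i in chosen]
    return Counter(votes).most_common(1)[0][0]

if __name__ == "__main__":
    rng = random.Random(1)
    # 200 workers whose average error probability (0.3) is below 1/2.
    error_probs = [0.3] * 200
    results = [plurality_result(("task", k), error_probs, 51, rng)
               for k in range(100)]
    ok = sum(r == ("task", k) for k, r in enumerate(results))
    print(f"{ok}/100 tasks resolved correctly")
```

The challenge addressed by the report is doing this with bounded work and messages while the adversary crashes most of the processors, rather than the fixed, crash-free replication shown here.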