12,490 research outputs found

    Small Cuts and Connectivity Certificates: A Fault Tolerant Approach

    We revisit classical connectivity problems in the CONGEST model of distributed computing. By using techniques from fault tolerant network design, we show improved constructions, some of which are even "local" (i.e., with O~(1) rounds) for problems that are closely related to hard global problems (i.e., with a lower bound of Omega(Diam + sqrt{n}) rounds).

    Distributed Minimum Cut: Nanongkai and Su presented a randomized algorithm for computing a (1+epsilon)-approximation of the minimum cut using O~(D + sqrt{n}) rounds, where D is the diameter of the graph. For a sufficiently large minimum cut lambda = Omega(sqrt{n}), this is tight due to Das Sarma et al. [FOCS '11] and Ghaffari and Kuhn [DISC '13].

    - Small Cuts: A special setting that remains open is where the graph connectivity lambda is small (i.e., constant). The only lower bound for this case is Omega(D), with a matching upper bound known only for lambda <= 2 due to Pritchard and Thurimella [TALG '11]. Recently, Daga, Henzinger, Nanongkai and Saranurak [STOC '19] raised the open problem of computing the minimum cut in poly(D) rounds for any lambda = O(1). In this paper, we resolve this problem by presenting a surprisingly simple algorithm that takes a completely different approach from the existing algorithms. Our algorithm also has the benefit that it computes all minimum cuts in the graph, and it naturally extends to vertex cuts as well. At the heart of the algorithm is a graph sampling approach usually used in the context of fault tolerant (FT) design (see the sketch after this abstract).

    - Deterministic Algorithms: While the existing distributed minimum cut algorithms are randomized, our algorithm can be made deterministic within the same round complexity. To obtain this, we introduce a novel definition of universal sets along with their efficient computation. This allows us to derandomize the FT graph sampling technique, which might be of independent interest.

    - Computation of all Edge Connectivities: We also consider the more general task of computing the edge connectivity of all the edges in the graph. In the output, it is required that the endpoints u, v of every edge (u,v) learn the cardinality of the u-v cut in the graph. We provide the first sublinear algorithm for this problem for the case of constant connectivity values. Specifically, by using the recent notion of low-congestion cycle cover, combined with the sampling technique, we compute all edge connectivities in poly(D) * 2^{O(sqrt{log n log log n})} rounds.

    Sparse Certificates: For an n-vertex graph G and an integer lambda, a lambda-sparse certificate H is a subgraph H subseteq G with O(lambda n) edges which is lambda-connected iff G is lambda-connected. For D-diameter graphs, constructions of sparse certificates for lambda in {2,3} have been provided by Thurimella [J. Alg. '97] and Dory [PODC '18], respectively, using O~(D) rounds. The problem of devising such certificates with o(D + sqrt{n}) rounds was left open by Dory [PODC '18] for any lambda >= 4. Using connections to fault tolerant spanners, we considerably improve the round complexity for any lambda in [1,n] and epsilon in (0,1), by showing a construction of (1-epsilon)lambda-sparse certificates with O(lambda n) edges using only O(1/epsilon^2 * log^{2+o(1)} n) rounds.
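
    The fault tolerant sampling idea at the heart of the paper can be illustrated with a small centralized sketch. What follows is a toy Monte Carlo version, not the distributed CONGEST algorithm; networkx, the sampling probability p, and the trial count are assumptions made here for illustration. If every edge survives independently with probability p, then all lambda edges of a cut are simultaneously dropped with probability (1-p)^lambda, in which case the sampled subgraph falls apart along that cut and the candidate can be verified exactly.

        import random
        import networkx as nx  # assumed helper library for this sketch

        def sample_small_cuts(G, lam, trials=2000, p=0.5):
            """Monte Carlo sketch of FT graph sampling: drop each edge
            independently; a cut of size lam is fully dropped with
            probability (1-p)**lam, disconnecting the sampled subgraph."""
            cuts = set()
            for _ in range(trials):
                H = nx.Graph()
                H.add_nodes_from(G)
                H.add_edges_from(e for e in G.edges if random.random() < p)
                if nx.is_connected(H):
                    continue
                # Edges of G crossing one side of the sampled split.
                side = next(iter(nx.connected_components(H)))
                crossing = frozenset((u, v) for u, v in G.edges
                                     if (u in side) != (v in side))
                if len(crossing) == lam:  # a verified cut of size lam
                    cuts.add(crossing)
            return cuts

        # Two 5-cliques joined by a single bridge: lambda = 1.
        G = nx.barbell_graph(5, 0)
        print(sample_small_cuts(G, lam=nx.edge_connectivity(G)))

    For constant lambda, each trial exposes a given small cut with constant probability, which is what makes the small-cut regime amenable to sampling.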

    Approximate performability and dependability analysis using generalized stochastic Petri Nets

    Since current-day fault-tolerant and distributed computer and communication systems tend to be large and complex, their corresponding performability models suffer from the same characteristics. Therefore, calculating performability measures from these models is a difficult and time-consuming task.

    To alleviate the largeness and complexity problem to some extent, we use generalized stochastic Petri nets to describe the models and to automatically generate the underlying Markov reward models. Still, however, many models cannot be solved with the current numerical techniques, although they are conveniently and often compactly described.

    In this paper we discuss two heuristic state space truncation techniques that allow us to obtain very good approximations of the steady-state performability while only assessing a few percent of the states of the untruncated model. For a class of reversible models we derive explicit lower and upper bounds on the exact steady-state performability. For a much wider class of models a truncation theorem exists that allows one to obtain bounds on the error made in the truncation. We discuss this theorem in the context of approximate performability models and comment on its applicability. For all the proposed truncation techniques we present examples showing their usefulness.
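
    As a toy illustration of the truncation idea, the sketch below solves the steady-state equations of a small continuous-time Markov chain on a truncated state space 0..K only and renormalizes, reading an approximate performability measure off the truncated distribution. This is a minimal numerical sketch, not the paper's GSPN machinery; the birth-death structure, the rates, and the reward function are assumptions made for illustration.

        import numpy as np  # assumed numerical library for this sketch

        def truncated_steady_state(rate_up, rate_down, K):
            """Steady state of a birth-death CTMC truncated at level K:
            solve pi * Q = 0 with sum(pi) = 1 on states 0..K, ignoring
            the (hopefully negligible) probability mass beyond K."""
            Q = np.zeros((K + 1, K + 1))
            for i in range(K + 1):
                if i < K:
                    Q[i, i + 1] = rate_up(i)    # "birth": i -> i + 1
                if i > 0:
                    Q[i, i - 1] = rate_down(i)  # "death": i -> i - 1
                Q[i, i] = -Q[i].sum()
            A = np.vstack([Q.T, np.ones(K + 1)])  # balance eqs + normalization
            b = np.zeros(K + 2)
            b[-1] = 1.0
            pi, *_ = np.linalg.lstsq(A, b, rcond=None)
            return pi

        # Hypothetical M/M/1-style model: arrival rate 0.8, service rate 1.0.
        pi = truncated_steady_state(lambda i: 0.8, lambda i: 1.0, K=40)
        # Example reward structure: reward 1 per unit time while busy.
        print("approximate utilization:", 1.0 - pi[0])

    Raising K until the answer stabilizes gives a crude empirical handle on the truncation error; the bounds discussed in the paper make this rigorous for the model classes they cover.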

    Storage and Search in Dynamic Peer-to-Peer Networks

    We study robust and efficient distributed algorithms for searching, storing, and maintaining data in dynamic Peer-to-Peer (P2P) networks. P2P networks are highly dynamic networks that experience heavy node churn (i.e., nodes join and leave the network continuously over time). Our goal is to guarantee, despite a high node churn rate, that a large number of nodes in the network can store, retrieve, and maintain a large number of data items. Our main contributions are fast randomized distributed algorithms that guarantee the above with high probability (whp) even under high adversarial churn:

    1. A randomized distributed search algorithm that (whp) guarantees that searches from as many as n - o(n) nodes (n is the stable network size) succeed in O(log n) rounds despite O(n/log^{1+delta} n) churn per round, for any small constant delta > 0. We assume that the churn is controlled by an oblivious adversary (one that has complete knowledge and control of what nodes join and leave and at what time, but is oblivious to the random choices made by the algorithm).

    2. A storage and maintenance algorithm that guarantees (whp) that data items can be efficiently stored (with only Theta(log n) copies of each data item) and maintained in a dynamic P2P network with a churn rate of up to O(n/log^{1+delta} n) per round.

    Our search algorithm together with our storage and maintenance algorithm guarantees that as many as n - o(n) nodes can efficiently store, maintain, and search even under O(n/log^{1+delta} n) churn per round. Our algorithms require only polylogarithmic (in n) bits to be processed and sent per round by each node. To the best of our knowledge, our algorithms are the first known, fully distributed storage and search algorithms that provably work under highly dynamic settings (i.e., high churn rates per step).

    Comment: to appear at SPAA 201
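
    The interplay between Theta(log n) replication and a churn rate of roughly n/log^{1+delta} n per round can be illustrated with a small simulation sketch. The hashing-based placement and all parameters below are assumptions made for illustration; the paper's algorithms are fully distributed and actively maintain replicas, whereas this toy deliberately omits re-replication to show how slowly copies are lost even without it.

        import hashlib
        import math
        import random

        def replicas(key, live_nodes, k):
            """Place k copies of key on pseudo-random live nodes via
            repeated hashing (a centralized stand-in for distributed
            placement)."""
            live = sorted(live_nodes)
            return {live[int(hashlib.sha256(f"{key}:{i}".encode()).hexdigest(), 16)
                         % len(live)]
                    for i in range(k)}

        n = 1 << 12
        delta = 0.5
        nodes = set(range(n))
        k = int(3 * math.log2(n))                      # Theta(log n) copies
        holders = replicas("item-42", nodes, k)

        churn = int(n / math.log2(n) ** (1 + delta))   # churn budget per round
        for _ in range(10):                            # churn, no re-replication
            leaving = set(random.sample(sorted(nodes), churn))
            nodes = (nodes - leaving) | {max(nodes) + j + 1 for j in range(churn)}
            holders -= leaving                         # copies on departed nodes die
        print(len(holders), "of", k, "replica placements survive 10 rounds")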