
    Average-case analysis of dynamic graph algorithms

    We present a model for edge updates with restricted randomness in dynamic graph algorithms and a general technique for analyzing the expected running time of an update operation. This model is able to capture the average case in many applications, since (1) it allows restrictions on the set of edges which can be used for insertions and (2) the type (insertion or deletion) of each update operation is arbitrary, i.e., not random. We use our technique to analyze existing and new dynamic algorithms for the following problems: maximum cardinality matching, minimum spanning forest, connectivity, 2-edge connectivity, $k$-edge connectivity, $k$-vertex connectivity, and bipartiteness. Given a random graph $G$ with $m_0$ edges and $n$ vertices and a sequence of $\ell$ update operations such that the graph contains $m_i$ edges after operation $i$, the expected time for performing the updates for any $\ell$ is $O(\ell \log n + \sum_{i=1}^{\ell} n/\sqrt{m_i})$ in the case of minimum spanning forests, connectivity, 2-edge connectivity, and bipartiteness. The expected time per update operation is $O(n)$ in the case of maximum matching. We also give improved bounds for $k$-edge and $k$-vertex connectivity. Additionally, we give an insertions-only algorithm for maximum cardinality matching with worst-case $O(n)$ amortized time per insertion.
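    To make the stated bound concrete, here is a short worked simplification (our arithmetic, not the paper's; the density assumption is purely illustrative): if every intermediate graph stays dense, say $m_i = \Omega(n^2)$ for all $i$, then each per-update term $n/\sqrt{m_i}$ is a constant, so

        \[
        \sum_{i=1}^{\ell} \frac{n}{\sqrt{m_i}}
        \;\le\; \sum_{i=1}^{\ell} \frac{n}{c \cdot n}
        \;=\; O(\ell),
        \]

    and the total expected time becomes $O(\ell \log n + \ell) = O(\ell \log n)$, i.e., $O(\log n)$ amortized expected time per update.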

    Faster Algorithms for Edge Connectivity via Random 2-Out Contractions

    We provide a simple new randomized contraction approach to the global minimum cut problem for simple undirected graphs. The contractions exploit 2-out edge sampling from each vertex rather than the standard uniform edge sampling. We demonstrate the power of our new approach by obtaining better algorithms for sequential, distributed, and parallel models of computation. Our end results include the following randomized algorithms for computing edge connectivity with high probability:
    -- Two sequential algorithms with complexities $O(m \log n)$ and $O(m + n \log^3 n)$. These improve on a long line of developments, including a celebrated $O(m \log^3 n)$ algorithm of Karger [STOC'96] and the state-of-the-art $O(m \log^2 n (\log \log n)^2)$ algorithm of Henzinger et al. [SODA'17]. Moreover, our $O(m + n \log^3 n)$ algorithm is optimal whenever $m = \Omega(n \log^3 n)$. Within our new time bounds, with high probability, we can also construct the cactus representation of all minimum cuts.
    -- An $\tilde{O}(n^{0.8} D^{0.2} + n^{0.9})$-round distributed algorithm, where $D$ denotes the graph diameter. This improves substantially on a recent breakthrough of Daga et al. [STOC'19], which achieved a round complexity of $\tilde{O}(n^{1-1/353} D^{1/353} + n^{1-1/706})$, hence providing the first sublinear distributed algorithm for exactly computing the edge connectivity.
    -- The first $O(1)$-round algorithm for the massively parallel computation setting with linear memory per machine.
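    To illustrate the sampling primitive in the title, here is a minimal sketch of one round of 2-out contraction (our simplified reading, not the paper's algorithm: the number of repetitions, the recursion on the contracted multigraph, and all probabilistic bookkeeping are omitted). Every vertex samples two incident edges, and the components spanned by the sampled edges are merged with a union-find structure:

        import random

        def two_out_contraction_round(n, adj):
            """One round of 2-out contraction on a simple undirected graph.
            adj[v] is the list of neighbours of vertex v (0..n-1).
            Returns a label per vertex identifying its super-vertex."""
            parent = list(range(n))

            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]  # path halving
                    x = parent[x]
                return x

            for v in range(n):
                if adj[v]:
                    # each vertex samples 2 incident edges ("2-out"),
                    # here independently and with replacement
                    for u in random.choices(adj[v], k=2):
                        ru, rv = find(u), find(v)
                        if ru != rv:
                            parent[ru] = rv

            return [find(v) for v in range(n)]

    A union-find pass like this makes each contraction round near-linear in the graph size; roughly speaking, the paper's speedups come from how few super-vertices survive the contraction and from recursing on the much smaller contracted multigraph.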

    Optimal Error Rates for Interactive Coding II: Efficiency and List Decoding

    We study coding schemes for error correction in interactive communications. Such interactive coding schemes simulate any $n$-round interactive protocol using $N$ rounds over an adversarial channel that corrupts up to $\rho N$ transmissions. Important performance measures for a coding scheme are its maximum tolerable error rate $\rho$, communication complexity $N$, and computational complexity. We give the first coding scheme for the standard setting which performs optimally in all three measures: our randomized non-adaptive coding scheme has near-linear computational complexity and tolerates any error rate $\rho < 1/4$ with a linear $N = \Theta(n)$ communication complexity. This improves over prior results, each of which performed well in only two of these measures. We also give results for other settings of interest, namely, the first computationally and communication-efficient schemes that tolerate $\rho < 2/7$ adaptively, $\rho < 1/3$ if only one party is required to decode, and $\rho < 1/2$ if list decoding is allowed. These are the optimal tolerable error rates for the respective settings, and these coding schemes also have near-linear computational and communication complexity. These results are obtained via two techniques: we give a general black-box reduction which reduces unique decoding, in various settings, to list decoding, and we show how to boost the computational and communication efficiency of any list decoder to near-linear.
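    The unique-versus-list distinction behind the $\rho < 1/2$ result can be seen already in a toy repetition code (a deliberately simple illustration, in no way the paper's interactive scheme): if one symbol is sent $N$ times and an adversary rewrites a $\rho$ fraction of the copies, majority decoding is correct only for $\rho < 1/2$, while outputting the $L$ most frequent symbols keeps the true symbol in the list for any $\rho < L/(L+1)$.

        from collections import Counter

        def unique_decode(received):
            # majority vote: correct whenever fewer than half the
            # copies were corrupted (rho < 1/2)
            return Counter(received).most_common(1)[0][0]

        def list_decode(received, L=2):
            # the true symbol has frequency >= 1 - rho, so it stays
            # among the top L whenever rho < L / (L + 1)
            return [s for s, _ in Counter(received).most_common(L)]

        # adversary corrupts 60 of 100 copies of 'a' (rho = 0.6 > 1/2)
        received = ['a'] * 40 + ['b'] * 60
        print(unique_decode(received))      # 'b'  -- unique decoding fails
        print(list_decode(received, L=2))   # ['b', 'a'] -- 'a' survives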

    Easier Parallel Programming with Provably-Efficient Runtime Schedulers

    Over the past decade, processor manufacturers have pivoted from increasing uniprocessor performance to multicore architectures. However, utilizing this computational power has proved challenging for software developers. Many concurrency platforms and languages have emerged to address parallel-programming challenges, yet writing correct and performant parallel code retains a reputation as one of the hardest tasks a programmer can undertake. This dissertation studies how runtime scheduling systems can be used to make parallel programming easier. We address the difficulty of writing parallel data structures, automatically finding shared-memory bugs, and reproducing non-deterministic synchronization bugs. Each of the systems presented depends on a novel runtime system that provides strong theoretical performance guarantees and performs well in practice.
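    As a minimal sketch of the programming model such runtime schedulers support (illustrative only; the dissertation's systems are Cilk-style work-stealing runtimes, not Python, and the names here are ours), the programmer exposes independent tasks and leaves the mapping of tasks onto workers entirely to the scheduler:

        from concurrent.futures import ProcessPoolExecutor

        def chunk_sum(chunk):
            return sum(chunk)

        def parallel_sum(data, workers=4):
            # split the input into independent tasks; the runtime
            # decides which worker executes each one, and when
            size = (len(data) + workers - 1) // workers
            chunks = [data[i:i + size] for i in range(0, len(data), size)]
            with ProcessPoolExecutor(max_workers=workers) as pool:
                return sum(pool.map(chunk_sum, chunks))

        if __name__ == "__main__":
            print(parallel_sum(list(range(1_000_000))))  # 499999500000

    The appeal of a provably-efficient scheduler is that code written this way retains predictable performance as the number of available workers varies.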

    On the Cost of Post-Compromise Security in Concurrent Continuous Group-Key Agreement

    Continuous Group-Key Agreement (CGKA) allows a group of users to maintain a shared key. It is the fundamental cryptographic primitive underlying group-messaging schemes and related protocols, most notably TreeKEM, the key-agreement protocol underlying the Messaging Layer Security (MLS) protocol, an IETF standard for group messaging. CGKA works in an asynchronous setting where parties only occasionally need to come online, and their messages are relayed by an untrusted server. The most expensive operation provided by CGKA is the one that allows a user to refresh their key material in order to achieve forward secrecy (old messages remain secure when a user is compromised) and post-compromise security (PCS; users can heal from compromise). One caveat of early CGKA protocols is that these update operations had to be performed sequentially: any user wanting to update their key material had to receive and process all previous updates. Later versions of TreeKEM do allow for concurrent updates, at the cost of a communication overhead per update message that is linear in the number of updating parties. This was shown to be necessary for achieving PCS in just two rounds of communication by [Bienstock et al., TCC'20]. The recently proposed protocol CoCoA [Alwen et al., Eurocrypt'22], however, shows that this overhead can be reduced if PCS requirements are relaxed and a logarithmic number of rounds is allowed. The natural question, thus, is whether CoCoA is optimal in this setting. In this work we answer this question, providing a lower bound on the cost (concretely, the amount of data to be uploaded to the server) for CGKA protocols that heal in an arbitrary number $k$ of rounds, which shows that CoCoA is very close to optimal. Additionally, we extend CoCoA to heal in an arbitrary number of rounds, and propose a modification of it with reduced communication cost for certain $k$. We prove our bound in a combinatorial setting where the protocol progresses in rounds and its state in each round is captured by a set system, each set specifying a set of users who share a secret key. We show this combinatorial model is equivalent to a symbolic model capturing building blocks including PRFs and public-key encryption, related to the one used by Bienstock et al. Our lower bound is of order $k \cdot n^{1+1/(k-1)}/\log k$, where $2 \le k \le \log n$ is the number of updates per user the protocol requires to heal. This generalizes the $n^2$ bound for $k=2$ from Bienstock et al., and almost matches the $k \cdot n^{1+2/(k-1)}$ or $k^2 \cdot n^{1+1/(k-1)}$ efficiency achieved by the variants of the CoCoA protocol also introduced in this paper.
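    To see how this lower bound interpolates between its endpoints (our arithmetic, with logarithms base 2, not a calculation from the paper), evaluating $k \cdot n^{1+1/(k-1)}/\log k$ at the two extremes of $2 \le k \le \log n$ gives

        \[
        k = 2:\qquad k \cdot n^{1+1/(k-1)}/\log k \;=\; \Theta(n^{2}),
        \]

        \[
        k = \log n:\qquad n^{1/(k-1)} = 2^{\log n/(\log n - 1)} = \Theta(1)
        \;\Longrightarrow\;
        \frac{k \cdot n^{1+1/(k-1)}}{\log k}
        = \Theta\!\left(\frac{n \log n}{\log \log n}\right),
        \]

    recovering the $n^2$ bound of Bienstock et al. at $k=2$ and dropping to near-linear total communication when users may take $\log n$ rounds to heal.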