25 research outputs found
Average-case analysis of dynamic graph algorithms
We present a model for edge updates with restricted randomness in dynamic graph algorithms and a general technique for analyzing the expected running time of an update operation. This model captures the average case in many applications, since (1) it allows restrictions on the set of edges that can be used for insertions and (2) the type (insertion or deletion) of each update operation is arbitrary, i.e., not random. We use our technique to analyze existing and new dynamic algorithms for the following problems: maximum cardinality matching, minimum spanning forest, connectivity, 2-edge connectivity, k-edge connectivity, k-vertex connectivity, and bipartiteness. Given a random graph G with m_0 edges and n vertices and a sequence of l update operations such that the graph contains m_i edges after operation i, the expected time for performing the updates for any l is O(l log n + Σ_{i=1}^{l} n/√(m_i)) in the case of minimum spanning forests, connectivity, 2-edge connectivity, and bipartiteness. The expected time per update operation is O(n) in the case of maximum matching. We also give improved bounds for k-edge and k-vertex connectivity. Additionally, we give an insertions-only algorithm for maximum cardinality matching with worst-case O(n) amortized time per insertion.
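As a simplified illustration of the insertions-only setting, here is a toy incremental matcher restricted to bipartite graphs, where no blossom handling is needed (the class name and structure are our own, not the paper's). By Berge's lemma, each insertion can grow the matching by at most one, so a single successful augmenting-path search restores maximality:

```python
class IncrementalBipartiteMatching:
    """Toy insertions-only matcher for BIPARTITE graphs.

    After each edge insertion the matching size can grow by at most one
    (Berge's lemma), so one augmenting-path search restores maximality.
    The paper's O(n) amortized bound is for general graphs and requires
    a considerably more careful algorithm; this is illustration only.
    """

    def __init__(self):
        self.adj = {}        # left vertex  -> set of right neighbours
        self.match_l = {}    # left vertex  -> matched right vertex
        self.match_r = {}    # right vertex -> matched left vertex

    def insert(self, u, v):
        """Insert edge (u, v), u on the left side, v on the right."""
        self.adj.setdefault(u, set()).add(v)
        # Try to grow the matching via one augmenting path from a free
        # left vertex (any augmenting path starts at one).
        for free in [w for w in self.adj if w not in self.match_l]:
            if self._augment(free, set()):
                break

    def _augment(self, u, visited):
        """DFS for an augmenting path starting at free left vertex u."""
        for v in self.adj.get(u, ()):
            if v in visited:
                continue
            visited.add(v)
            if v not in self.match_r or self._augment(self.match_r[v], visited):
                self.match_l[u] = v
                self.match_r[v] = u
                return True
        return False

    def size(self):
        return len(self.match_l)
```

For example, inserting ('a','x'), ('b','x'), ('b','y') in that order yields a matching of size 2, with the second insertion correctly failing to augment.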
Faster Algorithms for Edge Connectivity via Random 2-Out Contractions
We provide a simple new randomized contraction approach to the global minimum
cut problem for simple undirected graphs. The contractions exploit 2-out edge
sampling from each vertex rather than the standard uniform edge sampling. We
demonstrate the power of our new approach by obtaining better algorithms for
sequential, distributed, and parallel models of computation. Our end results
include the following randomized algorithms for computing edge connectivity
with high probability:
-- Two sequential algorithms with complexities and . These improve on a long line of developments including a celebrated algorithm of Karger [STOC'96] and the state-of-the-art algorithm of Henzinger et al. [SODA'17]. Moreover, our algorithm is optimal whenever . Within our new time bounds, whp, we can also construct the cactus representation of all minimum cuts.
-- An round distributed algorithm, where D denotes the graph diameter. This improves substantially on a recent breakthrough of Daga et al. [STOC'19], which achieved a round complexity of , hence providing the first sublinear distributed algorithm for exactly computing the edge connectivity.
-- The first round algorithm for the massively parallel computation setting with linear memory per machine.
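A minimal sketch of the core contraction step as we understand it from the abstract (vertex labels, the function name, and the finishing step are our assumptions; the actual algorithm adds further machinery to certify the minimum cut). Each vertex samples up to two of its incident edges, and the connected components of the sampled subgraph are contracted into supernodes:

```python
import random
from collections import defaultdict

def two_out_contract(n, edges, seed=0):
    """Toy sketch of a 2-out contraction step (an illustration under
    our own assumptions, not the paper's full algorithm).

    Vertices are 0..n-1.  Every vertex samples (up to) 2 incident
    edges; components of the sampled subgraph become supernodes.
    """
    rng = random.Random(seed)
    incident = defaultdict(list)
    for u, v in edges:
        incident[u].append((u, v))
        incident[v].append((u, v))

    # 2-out sampling: every vertex picks (up to) 2 incident edges.
    sampled = set()
    for v in range(n):
        k = min(2, len(incident[v]))
        sampled.update(rng.sample(incident[v], k))

    # Contract components of the sampled subgraph with union-find.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    for u, v in sampled:
        parent[find(u)] = find(v)

    # Edges of the contracted multigraph (self-loops dropped).
    contracted = [(find(u), find(v)) for u, v in edges if find(u) != find(v)]
    supernodes = {find(v) for v in range(n)}
    return supernodes, contracted
```

On the contracted multigraph one would then run a standard minimum-cut routine; the point of the sampling, as the abstract suggests, is that the contracted graph is much smaller while non-trivial minimum cuts survive with high probability.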
Optimal Error Rates for Interactive Coding II: Efficiency and List Decoding
We study coding schemes for error correction in interactive communications. Such interactive coding schemes simulate any -round interactive protocol using rounds over an adversarial channel that corrupts up to transmissions. Important performance measures for a coding scheme are its maximum tolerable error rate , communication complexity , and computational complexity.
We give the first coding scheme for the standard setting which performs optimally in all three measures: Our randomized non-adaptive coding scheme has near-linear computational complexity and tolerates any error rate with linear communication complexity. This improves over prior results, each of which performed well in only two of these measures.
We also give results for other settings of interest, namely, the first computationally and communication-efficient schemes that tolerate adaptively, if only one party is required to decode, and if list decoding is allowed. These are the optimal tolerable error rates for the respective settings. These coding schemes also have near-linear computational and communication complexity.
These results are obtained via two techniques: We give a general black-box reduction which reduces unique decoding, in various settings, to list decoding. We also show how to boost the computational and communication efficiency of any list decoder to become near-linear.
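The abstract does not spell out the black-box reduction; as a cartoon of one standard flavor of list-to-unique reduction in interactive coding (exchange a short fingerprint and keep the uniquely matching candidate; all names here are our own, and the paper's construction is more involved), consider:

```python
import hashlib

def pick_unique(candidates, counterpart_fingerprint):
    """From a list decoder's candidate transcripts, keep the one whose
    short hash matches the fingerprint the other party sent over.

    Returns None if zero or several candidates match; a real scheme
    would handle that case (e.g. by retrying with fresh randomness).
    """
    matches = [c for c in candidates
               if hashlib.sha256(c).digest()[:4] == counterpart_fingerprint]
    return matches[0] if len(matches) == 1 else None
```

The idea is that the fingerprint exchange is cheap relative to the transcript, so a list decoder plus a short verification step yields unique decoding at little extra cost.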
Easier Parallel Programming with Provably-Efficient Runtime Schedulers
Over the past decade, processor manufacturers have pivoted from increasing uniprocessor performance to multicore architectures. However, utilizing this computational power has proved challenging for software developers. Many concurrency platforms and languages have emerged to address the challenges of parallel programming, yet writing correct and performant parallel code retains its reputation as one of the hardest tasks a programmer can undertake.
This dissertation will study how runtime scheduling systems can be used to make parallel programming easier. We address the difficulty of writing parallel data structures, of automatically finding shared-memory bugs, and of reproducing non-deterministic synchronization bugs. Each of the systems presented depends on a novel runtime system which provides strong theoretical performance guarantees and performs well in practice.
On the Cost of Post-Compromise Security in Concurrent Continuous Group-Key Agreement
Continuous Group-Key Agreement (CGKA) allows a group of users to maintain a shared key.
It is the fundamental cryptographic primitive underlying group messaging schemes and related protocols, most notably TreeKEM, the underlying key agreement protocol of the Messaging Layer Security (MLS) protocol, a standard for group messaging by the IETF.
CGKA works in an asynchronous setting where parties need only occasionally come online, and their messages are relayed by an untrusted server.
The most expensive operation provided by CGKA is the one that allows a user to refresh their key material in order to achieve forward secrecy (old messages remain secure when a user is compromised) and post-compromise security (PCS; users can heal from compromise).
One caveat of early CGKA protocols is that these update operations had to be performed sequentially, with any user wanting to update their key material first having to receive and process all previous updates.
Later versions of TreeKEM do allow for concurrent updates, at the cost of a communication overhead per update message that is linear in the number of updating parties.
This was shown to indeed be necessary for achieving PCS in just two rounds of communication by [Bienstock et al., TCC '20].
The recently proposed protocol CoCoA [Alwen et al., Eurocrypt '22], however, shows that this overhead can be reduced if the PCS requirement is relaxed so that healing may take a logarithmic number of rounds.
The natural question, thus, is whether CoCoA is optimal in this setting.
In this work we answer this question, providing a lower bound on the cost (concretely, the amount of data to be uploaded to the server) for CGKA protocols that heal in an arbitrary number of rounds, that shows that CoCoA is very close to optimal.
Additionally, we extend CoCoA to heal in an arbitrary number of rounds, and propose a modification of it, with a reduced communication cost for certain .
We prove our bound in a combinatorial setting where the state of the protocol progresses in rounds, and the state of the protocol in each round is captured by a set system, each set specifying a set of users who share a secret key.
We show this combinatorial model is equivalent to a symbolic model capturing building blocks including PRFs and public-key encryption, related to the one used by Bienstock et al.
Our lower bound is of order , where is the number of updates per user the protocol requires to heal.
This generalizes the bound for from Bienstock et al.
This bound almost matches the or efficiency we get for the variants of the CoCoA protocol also introduced in this paper.
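To make the combinatorial model concrete, here is a deliberately crude sketch (the class name, cost proxy, and healing condition are our own simplifications, not the paper's definitions): protocol state is a set system, each set being a group of users sharing one key, and a round of updates refreshes the updaters' secrets.

```python
class SetSystemCGKA:
    """Toy rendering of the set-system model sketched in the abstract.

    Every set in `self.sets` is a group of users sharing one secret
    key.  A user who is compromised taints every key they hold until
    they update.  The group has healed once the full-group key is
    untainted.  Costs and healing rules here are simplifications.
    """

    def __init__(self, users):
        self.users = set(users)
        self.compromised = set()
        # Start with a single group key known to everyone.
        self.sets = [set(users)]

    def compromise(self, user):
        self.compromised.add(user)

    def round_of_updates(self, updaters):
        """One round: each updater refreshes its secrets.  Returns a
        crude upload-cost proxy: one ciphertext per (updater, set
        containing that updater)."""
        cost = 0
        for u in updaters:
            self.compromised.discard(u)
            cost += sum(1 for s in self.sets if u in s)
        return cost

    def healed(self):
        """Healed once the full-group set contains no compromised user."""
        return any(s == self.users and not (s & self.compromised)
                   for s in self.sets)
```

In this toy model, one compromised user in a four-user group heals after a single round in which only that user updates; the lower bound in the paper concerns how much total upload such healing must cost across rounds in the worst case.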