Dfinity Consensus, Explored
We explore a Byzantine consensus protocol called Dfinity Consensus, recently published in a technical report. Dfinity Consensus solves synchronous state machine replication among $2f+1$ replicas with up to $f$ Byzantine faults. We provide a succinct explanation of the core mechanism of Dfinity Consensus to the best of our understanding, and we prove the safety and liveness of the protocol specification we provide. Our complexity analysis of the protocol reveals the following. The protocol achieves expected $O(f\Delta)$ latency against an adaptive adversary (where $\Delta$ is the synchronous bound on message delay), and expected $O(\Delta)$ latency against a mildly adaptive adversary. In either case, the communication complexity is unbounded. We then explain how the protocol can be modified to bound the communication complexity in both cases.
Sync HotStuff: Simple and Practical Synchronous State Machine Replication
Synchronous solutions for Byzantine Fault Tolerance (BFT) can tolerate up to a minority of faulty replicas. In this work, we present Sync HotStuff, a surprisingly simple and intuitive synchronous BFT solution that achieves consensus with a latency of $2\Delta$ in the steady state (where $\Delta$ is a synchronous message delay upper bound). In addition, Sync HotStuff ensures safety in a weaker synchronous model in which the synchrony assumption does not have to hold for all replicas all the time. Moreover, Sync HotStuff has optimistic responsiveness, i.e., it advances at network speed when fewer than one-quarter of the replicas are not responding. Borrowing from practical partially synchronous BFT solutions, Sync HotStuff has a two-phase leader-based structure, and it has been fully prototyped under the standard synchrony assumption. When tolerating a single fault, Sync HotStuff achieves a throughput of over 280 Kops/sec under typical network performance, which is comparable to the best known partially synchronous solution.
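As a rough illustration of the synchronous commit pattern the abstract describes (our own schematic, not the actual Sync HotStuff protocol, and all names below are ours): a replica that votes for a block commits it $2\Delta$ later, unless it detects leader equivocation within that window.

```python
DELTA = 1.0  # assumed upper bound on message delay (the synchrony parameter)

def commit_decision(vote_time: float, equivocation_times: list[float],
                    delta: float = DELTA) -> bool:
    """Schematic synchronous commit rule: having voted for a block at
    vote_time, a replica commits it at vote_time + 2*delta unless it has
    observed an equivocating proposal before that deadline. Waiting two
    message delays gives every correct replica time to relay a conflicting
    proposal, so a committed block cannot conflict with another commit."""
    deadline = vote_time + 2 * delta
    return all(t >= deadline for t in equivocation_times)
```

For example, a replica that voted at time 0 with no equivocation seen commits at time 2; an equivocation observed at time 1.5 cancels the commit, while one observed at time 2.5 arrives too late to matter.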
Verifiable Random Functions with Optimal Tightness
Verifiable random functions (VRFs), introduced by Micali, Rabin and Vadhan (FOCS'99), are the public-key equivalent of pseudorandom functions. A public verification key and proofs accompanying the output enable all parties to verify the correctness of the output. However, all known standard-model VRFs have a reduction loss that is much worse than what one would expect from known optimal constructions of closely related primitives like unique signatures. We show that:
1. Every security proof for a VRF that relies on a non-interactive assumption has to lose a factor of Q, where Q is the number of adversarial queries. To that end, we extend the meta-reduction technique of Bader et al. (EUROCRYPT'16) to also cover VRFs.
2. This raises the question: is this bound optimal? We answer it in the affirmative by presenting the first VRF with a reduction from the non-interactive qDBDHI assumption to the security of the VRF that achieves this optimal loss.
We thus paint a complete picture of the achievability of tight verifiable random functions: we show that a security loss of Q is unavoidable and present the first construction that achieves this bound.
Unique Chain Rule and its Applications
Most existing Byzantine fault-tolerant State Machine Replication (SMR) protocols rely explicitly on either equivocation detection or quorum certificate formations to ensure protocol safety.
These mechanisms inherently require communication overhead among participating servers.
This work proposes the Unique Chain Rule (UCR), a simple rule for hash chains whereby extending a block, i.e., including its hash in the next block, is treated as a vote for the proposed block \textit{and its ancestors}.
When a block obtains a vote from at least one correct server, we can commit the block and its ancestors.
While this idea was used implicitly earlier in conjunction with equivocation detection or quorum certificate generation, this work employs it explicitly to show safety.
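The rule above can be sketched in a toy model (ours, not the paper's implementation): each block carries its parent's hash, an extension counts as a vote for every ancestor, and a block commits once $t+1$ distinct servers have extended it, since with at most $t$ Byzantine servers at least one extender must be correct.

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Block:
    parent_hash: str  # including the parent's hash votes for it and its ancestors
    proposer: str
    payload: str

    def digest(self) -> str:
        h = hashlib.sha256()
        for field in (self.parent_hash, self.proposer, self.payload):
            h.update(field.encode())
        return h.hexdigest()

def committed_prefix(chain: list[Block], t: int) -> list[Block]:
    """Commit every block that t+1 distinct proposers have extended:
    one of those extenders is correct, and a correct server's extension
    vouches for the block and all of its ancestors."""
    committed = []
    for i, block in enumerate(chain):
        extenders = {b.proposer for b in chain[i + 1:]}
        if len(extenders) >= t + 1:
            committed.append(block)
        else:
            break
    return committed

# Build a 3-block chain on top of genesis, each block from a different server.
genesis = Block("", "genesis", "")
b1 = Block(genesis.digest(), "server1", "tx1")
b2 = Block(b1.digest(), "server2", "tx2")
b3 = Block(b2.digest(), "server3", "tx3")
chain = [genesis, b1, b2, b3]
```

With t = 1, genesis and b1 each have at least two distinct extenders and commit, while b2 has only one extender and stays pending.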
We present three applications of UCR. As the first two, we design \emph{Apollo} and \emph{Artemis}, two novel synchronous SMR protocols with linear best-case communication complexity using round-robin and stable leaders, respectively.
Next, we employ UCR in a black-box fashion toward making any SMR commit publicly verifiable: instead of waiting for a number of confirmations on every block that grows with the security parameter and with the number $t$ of Byzantine faults tolerated by the protocol, clients can collect a UCR proof consisting of $t+1$ extensions on a block.
This results in faster syncing times for clients, as the publicly verifiable proofs can also be gossiped with every new block extension confirming a new block.
Classical and Quantum Security of Elliptic Curve VRF, via Relative Indifferentiability
Verifiable random functions (VRFs) are essentially pseudorandom functions for which selected outputs can be proved correct and unique, without compromising the security of other outputs. VRFs have numerous applications across cryptography, and in particular they have recently been used to implement committee selection in the Algorand protocol. Elliptic Curve VRF (ECVRF) is an elegant construction, originally due to Papadopoulos et al., that is now under consideration by the Internet Research Task Force. Prior work proved that ECVRF possesses the main desired security properties of a VRF, under suitable assumptions. However, several recent versions of ECVRF include changes that make some of these proofs inapplicable. Moreover, the prior analysis holds only for *classical* attackers, in the random-oracle model (ROM); it says nothing about whether any of the desired properties hold against *quantum* attacks, in the quantumly accessible ROM. We note that certain important properties of ECVRF, like uniqueness, do *not* rely on assumptions that are known to be broken by quantum computers, so it is plausible that these properties could hold even in the quantum setting.
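As a hedged sketch of the committee-selection use mentioned above: a party evaluates a VRF on the round seed and joins the committee if the output, mapped into [0, 1), falls below a target probability. The keyed hash below is only a stand-in for a real VRF such as ECVRF (it produces no publicly checkable proof), and all names are ours.

```python
import hashlib
import hmac

def vrf_stand_in(secret_key: bytes, alpha: bytes) -> bytes:
    # Stand-in for a real VRF evaluation: a keyed hash is pseudorandom to
    # outsiders, but unlike ECVRF it yields no proof others can verify.
    return hmac.new(secret_key, alpha, hashlib.sha256).digest()

def on_committee(secret_key: bytes, round_seed: bytes, p: float) -> bool:
    """Algorand-style sortition sketch: map the 256-bit output into [0, 1)
    and compare it against the target selection probability p, so that each
    party is selected independently with probability roughly p."""
    y = vrf_stand_in(secret_key, round_seed)
    return int.from_bytes(y, "big") / 2**256 < p
```

In the real protocol the accompanying VRF proof lets everyone else verify a claimed committee membership; the stand-in above only reproduces the selection logic.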
This work provides a multi-faceted security analysis of recent versions of ECVRF, in both the classical and quantum settings. First, we motivate and formally define new security properties for VRFs, like non-malleability and binding, and prove that recent versions of ECVRF satisfy them (under standard assumptions). Second, we identify a subtle obstruction in proving that recent versions of ECVRF have *uniqueness* via prior indifferentiability definitions and theorems, even in the classical setting. Third, we fill this gap by defining a stronger notion called *relative indifferentiability*, and extend prior work to show that a standard domain extender used in ECVRF satisfies this notion, in both the classical and quantum settings. This final contribution is of independent interest and we believe it should be applicable elsewhere.
GRandLine: Adaptively Secure DKG and Randomness Beacon with (Almost) Quadratic Communication Complexity
A randomness beacon is a source of continuous and publicly verifiable randomness, which is of crucial importance for many applications. Existing works on distributed randomness beacons suffer from at least one of the following drawbacks: (i) security only against a static/non-adaptive adversary, (ii) each epoch takes many rounds of communication, or (iii) computationally expensive tools such as Proof-of-Work (PoW) or Verifiable Delay Functions (VDFs). In this paper, we introduce GRandLine, the first adaptively secure randomness beacon protocol that overcomes all these limitations while preserving simplicity and optimal resilience in the synchronous network setting. We achieve our result in two steps. First, we design a novel distributed key generation (DKG) protocol that runs in $O(\lambda n^2 \log n)$ bits of communication but, unlike most conventional DKG protocols, outputs both secret and public keys as group elements. Here, $\lambda$ denotes the security parameter and $n$ the number of parties. Second, following termination of the DKG, parties can use their keys to derive a sequence of randomness beacon values, where each random value costs only a single asynchronous round and $O(\lambda n^2)$ bits of communication. We implement GRandLine and evaluate it using a network of up to 64 parties running in geographically distributed AWS instances. Our evaluation shows that GRandLine can produce about 2 beacon outputs per second in a network of 64 parties. We compare our protocol to the state-of-the-art randomness beacon protocols in the same setting and observe that it vastly outperforms them.