Scalable low latency consensus for blockchains
Master's thesis, Information Security, Universidade de Lisboa, Faculdade de Ciências, 2021.
State machine replication (SMR) is a classical technique to implement consistent and fault-tolerant replicated services. This type of system is usually built on top of consensus protocols that have high throughput but have problems scaling to settings with a large number of participants or wide-area scenarios, due to the number of messages that must be exchanged to reach a consensus.
We propose ProBFT (Probabilistic Byzantine Fault Tolerance), a consensus protocol specifically designed to tackle the scalability problem of BFT protocols. ProBFT is a consensus protocol with optimal latency (three communication steps, as in PBFT) but with a reduced number of messages exchanged in each phase (O(n√n) instead of PBFT's O(n²)). ProBFT is a probabilistic protocol built on top of well-known primitives, such as probabilistic Byzantine quorums and verifiable random functions, and provides high probabilities of safety and liveness when an overwhelming majority of replicas is correct.
We also propose a state machine replication protocol called PROBER (PRObabilistic ByzantinE Replication) that builds on top of two consensus protocols, ProBFT and PBFT. PROBER uses ProBFT to provide fast, probabilistic replies to clients and uses PBFT to eventually commit the history of operations deterministically, guaranteeing that the system will not roll back requests after such a commit. This periodic deterministic commit allows clients to enjoy the low latency provided by ProBFT while still having the guarantees provided by a deterministic protocol.
We provide a detailed description of both protocols and analyse the probabilities for safety and liveness depending on the current number of Byzantine replicas.
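The message reduction in protocols of this kind comes from replacing majority quorums with probabilistic quorums of size O(√n): two uniformly random quorums of size c√n intersect with overwhelming probability as n grows. As a back-of-the-envelope illustration (not the paper's exact analysis; the constant c = 3 here is purely illustrative), the disjointness probability follows from a simple counting argument:

```python
from math import comb, isqrt

def no_intersection_prob(n: int, q: int) -> float:
    """Probability that two independent uniform q-subsets of n replicas
    are disjoint: C(n - q, q) / C(n, q)."""
    if 2 * q > n:
        return 0.0  # pigeonhole: the two quorums must overlap
    return comb(n - q, q) / comb(n, q)

n = 10_000
q = 3 * isqrt(n)  # quorum size c * sqrt(n), with illustrative c = 3
p = 1 - no_intersection_prob(n, q)
print(f"n={n}, quorum size {q}: intersection probability {p:.6f}")
```

Since the disjointness probability decays roughly like exp(-c²), even a modest constant makes two random quorums intersect almost surely, which is what lets every phase contact only O(√n) replicas instead of all n.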
On the Round Complexity of Randomized Byzantine Agreement
We prove lower bounds on the round complexity of randomized Byzantine agreement (BA) protocols, bounding the halting probability of such protocols after one and two rounds. In particular, we prove that:
1) BA protocols resilient against n/3 [resp., n/4] corruptions terminate (under attack) at the end of the first round with probability at most o(1) [resp., 1/2+ o(1)].
2) BA protocols resilient against n/4 corruptions terminate at the end of the second round with probability at most 1-Theta(1).
3) For a large class of protocols (including all BA protocols used in practice) and under a plausible combinatorial conjecture, BA protocols resilient against n/3 [resp., n/4] corruptions terminate at the end of the second round with probability at most o(1) [resp., 1/2 + o(1)].
The above bounds hold even when the parties use a trusted setup phase, e.g., a public-key infrastructure (PKI).
The third bound essentially matches the recent protocol of Micali (ITCS '17) that tolerates up to n/3 corruptions and terminates at the end of the third round with constant probability.
Randomized protocols for asynchronous consensus
The famous Fischer, Lynch, and Paterson impossibility proof shows that it is
impossible to solve the consensus problem in a natural model of an asynchronous
distributed system if even a single process can fail. Since its publication,
two decades of work on fault-tolerant asynchronous consensus algorithms have
evaded this impossibility result by using extended models that provide (a)
randomization, (b) additional timing assumptions, (c) failure detectors, or (d)
stronger synchronization mechanisms than are available in the basic model.
Concentrating on the first of these approaches, we illustrate the history and
structure of randomized asynchronous consensus protocols by giving detailed
descriptions of several such protocols.
Comment: 29 pages; survey paper written for the PODC 20th anniversary issue of Distributed Computing.
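The archetype of the randomized approach the survey describes is Ben-Or-style binary consensus: when no majority value emerges, processes flip independent local coins, so the symmetric deadlock that the FLP result exploits is broken with positive probability each round. A toy, failure-free, lock-step sketch (a big simplification of the asynchronous fault-tolerant setting, kept only to show the coin-flip mechanism):

```python
import random

def randomized_binary_consensus(values, seed=0):
    """Toy sketch of Ben-Or-style randomized binary consensus.

    Each round, every process sees all current values (no faults, lock-step).
    If a strict majority holds v, the protocol decides v; otherwise every
    process flips an independent coin. Termination is only probabilistic:
    each no-majority round ends in agreement with probability > 0.
    """
    rng = random.Random(seed)
    n = len(values)
    rounds = 0
    while True:
        rounds += 1
        ones = sum(values)
        if ones > n // 2:
            return 1, rounds          # majority for 1: decide
        if n - ones > n // 2:
            return 0, rounds          # majority for 0: decide
        values = [rng.randint(0, 1) for _ in range(n)]  # break symmetry

decision, rounds = randomized_binary_consensus([0, 1] * 5)
print(f"decided {decision} after {rounds} round(s)")
```

Note that if all processes start with the same value, a majority exists immediately and the protocol decides in one round, which is the validity property the real protocols also satisfy.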
On the mean square error of randomized averaging algorithms
This paper regards randomized discrete-time consensus systems that preserve
the average "on average". As a main result, we provide an upper bound on the
mean square deviation of the consensus value from the initial average. Then, we
apply our result to systems where few or weakly correlated interactions take
place: these assumptions cover several algorithms proposed in the literature.
For such systems we show that, when the network size grows, the deviation tends
to zero, and the speed of this decay is not slower than the inverse of the
size. Our results are based on a new approach, which is unrelated to the convergence properties of the system.
Comment: 11 pages; to appear as a journal publication.
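A standard example of a scheme that preserves the average only "on average" is broadcast gossip: a single per-step update changes the sum of the values, but the expected change is zero when the wake-up node is chosen uniformly on an undirected graph. A minimal simulation sketch (the ring topology and parameters are illustrative, not from the paper):

```python
import random

def broadcast_gossip(x, neighbors, steps, rng):
    """Broadcast gossip: a uniformly random node wakes up and broadcasts
    its value; its neighbors move halfway toward it. A single step can
    change the sum, but over the random wake-up choice the average is
    preserved in expectation (each undirected edge contributes opposite
    expected shifts that cancel)."""
    x = list(x)
    for _ in range(steps):
        i = rng.randrange(len(x))
        for j in neighbors[i]:
            x[j] = 0.5 * (x[j] + x[i])
    return x

# Ring of n = 8 nodes; initial values 0..7 have average 3.5.
n = 8
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
rng = random.Random(1)
trials = [broadcast_gossip(range(n), ring, 200, rng) for _ in range(2000)]
mean_of_avgs = sum(sum(t) / n for t in trials) / len(trials)
print(f"initial average 3.5, mean of final averages: {mean_of_avgs:.3f}")
```

Within any single run the final consensus value deviates from 3.5; the Monte Carlo mean over many runs recovers it, which is exactly the setting where a bound on the mean square deviation is the relevant accuracy guarantee.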
Novel Multidimensional Models of Opinion Dynamics in Social Networks
Unlike many complex networks studied in the literature, social networks
rarely exhibit unanimous behavior, or consensus. This requires the development of mathematical models that are sufficiently simple to be examined and that, at the same time, capture the complex behavior of real social groups, where opinions and the actions related to them may form clusters of different sizes. One such model, proposed by Friedkin and Johnsen, extends the idea of the conventional consensus algorithm (also referred to as iterative opinion pooling) to take into account the actors' prejudices, which are caused by exogenous factors and lead to disagreement in the final opinions.
In this paper, we offer a novel multidimensional extension, describing the
evolution of the agents' opinions on several topics. Unlike the existing
models, these topics are interdependent, and hence the opinions being formed on
these topics are also mutually dependent. We rigorously examine the stability
properties of the proposed model, in particular, convergence of the agents'
opinions. Although our model assumes synchronous communication among the
agents, we show that the same final opinions may be reached "on average" via
asynchronous gossip-based protocols.
Comment: Accepted by IEEE Transactions on Automatic Control (to be published in May 2017).
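The classic Friedkin-Johnsen update mixes the socially averaged opinion with each agent's fixed prejudice; a multidimensional extension of the kind described here additionally couples the topics through a matrix applied to the averaged opinion vector. A minimal numerical sketch under assumed parameters (the coupling form and all numbers below are illustrative, not taken from the paper):

```python
def fj_multidim_step(x, W, lam, u, C):
    """One step of a multidimensional Friedkin-Johnsen-style update (sketch):
        x_i <- lam_i * C @ (sum_j W[i][j] * x_j) + (1 - lam_i) * u_i
    W: row-stochastic influence weights; lam_i: susceptibility of agent i;
    u_i: agent i's prejudice; C: topic-coupling matrix (C = identity
    recovers independent per-topic FJ dynamics)."""
    n, m = len(x), len(x[0])
    new_x = []
    for i in range(n):
        avg = [sum(W[i][j] * x[j][t] for j in range(n)) for t in range(m)]
        coupled = [sum(C[t][s] * avg[s] for s in range(m)) for t in range(m)]
        new_x.append([lam[i] * coupled[t] + (1 - lam[i]) * u[i][t]
                      for t in range(m)])
    return new_x

# Two agents, two topics (illustrative parameters).
W = [[0.5, 0.5], [0.5, 0.5]]   # influence network (row-stochastic)
lam = [0.5, 0.5]               # susceptibility to social influence
u = [[1.0, 0.0], [0.0, 1.0]]   # prejudices / initial opinions
C = [[1.0, 0.0], [0.0, 1.0]]   # identity coupling = classic per-topic FJ
x = [row[:] for row in u]
for _ in range(200):
    x = fj_multidim_step(x, W, lam, u, C)
print(x)  # converges toward [[0.75, 0.25], [0.25, 0.75]]
```

With lam_i < 1 the agents never fully agree, reaching instead a mixture of the social average and their own prejudices, which is the persistent-disagreement behavior the model is built to capture; a non-identity C would additionally let an agent's stance on one topic pull its stance on another.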
Optimal and Player-Replaceable Consensus with an Honest Majority
We construct a Byzantine Agreement protocol that tolerates t < n/2 corruptions, is very efficient in terms of the number of rounds and the number of bits of communication, and satisfies a strong notion of robustness called player replaceability (defined in [Mic16]). We provide an analysis of our protocol when executed on real-world networks such as the ones employed in the Bitcoin protocol.