A Lower Bound for Adaptively-Secure Collective Coin-Flipping Protocols
In 1985, Ben-Or and Linial (Advances in Computing Research '89) introduced the collective coin-flipping problem, where n parties communicate via a single broadcast channel and wish to generate a common random bit in the presence of adaptive Byzantine corruptions. In this model, the adversary can decide to corrupt a party in the course of the protocol as a function of the messages seen so far. They showed that the majority protocol, in which each player sends a random bit and the output is the majority value, tolerates O(sqrt n) adaptive corruptions. They conjectured that this is optimal for such adversaries.
We prove that the majority protocol is optimal (up to a poly-logarithmic factor) among all protocols in which each party sends a single, possibly long, message.
Previously, such a lower bound was known for protocols in which parties are allowed to send only a single bit (Lichtenstein, Linial, and Saks, Combinatorica '89), or for symmetric protocols (Goldwasser, Kalai, and Park, ICALP '15).
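As a concrete illustration of the model, here is a minimal Python sketch (ours, not from the paper) of the majority protocol facing an adaptive adversary: the adversary watches the broadcast, and whenever it still has corruption budget it corrupts the next party whose honest bit would be 0, forcing a 1. With a budget on the order of sqrt(n), this fixes the output to 1 almost always.

```python
import random

def majority_coin_flip(n, budget, rng):
    """Majority protocol: each of n parties broadcasts a uniform bit and
    the output is the majority value. An adaptive adversary, watching the
    broadcasts, corrupts up to `budget` parties, forcing their bits to 1."""
    ones = 0
    remaining = budget
    for _ in range(n):
        bit = rng.randint(0, 1)
        # Adaptive corruption: if the honest bit would be 0 and budget
        # remains, corrupt this party and broadcast a 1 instead.
        if bit == 0 and remaining > 0:
            bit = 1
            remaining -= 1
        ones += bit
    return 1 if 2 * ones > n else 0
```

With budget 0 the output is a fair coin; with a budget of about 3·sqrt(n) corruptions the output is 1 in essentially every run, matching the Θ(sqrt n) tolerance threshold the abstract describes.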
Sum of Us: Strategyproof Selection from the Selectors
We consider directed graphs over a set of n agents, where an edge (i,j) is
taken to mean that agent i supports or trusts agent j. Given such a graph and
an integer k\leq n, we wish to select a subset of k agents that maximizes the
sum of indegrees, i.e., a subset of k most popular or most trusted agents. At
the same time we assume that each individual agent is only interested in being
selected, and may misreport its outgoing edges to this end. This problem
formulation captures realistic scenarios where agents choose among themselves,
which can be found in the context of Internet search, social networks like
Twitter, or reputation systems like Epinions.
Our goal is to design mechanisms without payments that map each graph to a
k-subset of agents to be selected and satisfy the following two constraints:
strategyproofness, i.e., agents cannot benefit from misreporting their outgoing
edges, and approximate optimality, i.e., the sum of indegrees of the selected
subset of agents is always close to optimal. Our first main result is a
surprising impossibility: for k \in {1,...,n-1}, no deterministic strategyproof
mechanism can provide a finite approximation ratio. Our second main result is a
randomized strategyproof mechanism with an approximation ratio that is bounded
from above by four for any value of k, and approaches one as k grows.
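The flavor of such a randomized mechanism can be sketched in a few lines (our simplification for illustration; the paper's mechanism and analysis differ in details): partition the agents into k groups at random, and select from each group the agent with the most incoming edges that originate outside that group. Because an agent's outgoing edges are never counted inside its own group, no agent can influence whether it is itself selected.

```python
import random

def random_partition_selection(n, edges, k, rng):
    """Sketch of a partition-based strategyproof selection mechanism
    (an assumed simplification, not the paper's exact mechanism).
    `edges` is a list of (i, j) pairs meaning agent i supports agent j.
    Randomly split the n agents into k groups; from each group, select
    the agent with the highest cross-group indegree."""
    group = [rng.randrange(k) for _ in range(n)]
    selected = []
    for g in range(k):
        members = [i for i in range(n) if group[i] == g]
        if not members:
            continue  # a group may be empty; then fewer than k are picked
        # Score each member by incoming edges from OUTSIDE its group only.
        def cross_indegree(j):
            return sum(1 for (a, b) in edges if b == j and group[a] != g)
        selected.append(max(members, key=cross_indegree))
    return selected
```

An agent's report only affects contests in other groups, where it is not a candidate; this is the source of strategyproofness in partition-style mechanisms.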
Optimally-secure Coin-tossing against a Byzantine Adversary
In their seminal work, Ben-Or and Linial (1985) introduced the full information model for collective coin-tossing protocols involving processors with unbounded computational power using a common broadcast channel for all their communications. The design and analysis of coin-tossing protocols in the full information model have close connections to diverse fields like extremal graph theory, randomness extraction, cryptographic protocol design, game theory, distributed protocols, and learning theory. Several works have focused on studying the asymptotically best attacks and optimal coin-tossing protocols in various adversarial settings. While one knows the characterization of the exact or asymptotically optimal protocols in some adversarial settings, for most adversarial settings, the optimal protocol characterization remains open. For the cases where the asymptotically optimal constructions are known, the exact constants or poly-logarithmic multiplicative factors involved are not entirely well-understood.
In this work, we study n-processor coin-tossing protocols where every processor broadcasts an arbitrary-length message once. Note that, in this setting, which processor speaks and its message distribution may depend on the messages broadcast so far. An adaptive Byzantine adversary, based on the messages broadcast so far, can corrupt one processor. A bias-X coin-tossing protocol outputs 1 with probability X and 0 with probability (1 - X). For a coin-tossing protocol, its insecurity is the maximum change in the output distribution (in statistical distance) that an adversarial strategy can cause. Our objective is to identify optimal bias-X coin-tossing protocols with minimum insecurity, for every X.
Lichtenstein, Linial, and Saks (1989) studied bias-X coin-tossing protocols in this adversarial model under the highly restrictive constraint that each party broadcasts an independent and uniformly random bit. The underlying message space is a well-behaved product space, and X can only be an integer multiple of 1/2^n, which makes this a discrete problem. The case where every processor broadcasts only an independent random bit admits simplifications; for example, the collective coin-tossing protocol must be monotone. Surprisingly, for this class of coin-tossing protocols, the objective of reducing an adversary's ability to increase the expected output is equivalent to reducing an adversary's ability to decrease the expected output. Building on these observations, Lichtenstein, Linial, and Saks proved that threshold coin-tossing protocols are optimal for all n and X.
In a sequence of works, Goldwasser, Kalai, and Park (2015), Kalai, Komargodski, and Raz (2018), and (independent of our work) Haitner and Karidi-Heller (2020) prove that k = \mathcal{O}(\sqrt{n} \cdot \mathrm{polylog}(n)) corruptions suffice to fix the output of any bias-X coin-tossing protocol. These results consider parties who send arbitrary-length messages, and each processor has multiple turns to reveal its entire message. However, optimal protocols robust to a large number of corruptions need not bear any a priori relation to the optimal protocol robust to one corruption. Furthermore, to make an informed choice of employing a coin-tossing protocol in practice, for a fixed target tolerance of insecurity, one needs a precise characterization of the minimum insecurity achieved by these coin-tossing protocols.
We rely on an inductive approach to constructing coin-tossing protocols and study a proxy potential function measuring the susceptibility of any bias-X coin-tossing protocol to attacks in our adversarial model. Our technique is inherently constructive and yields protocols that minimize the potential function; it happens that threshold protocols minimize it. We demonstrate that the insecurity of these threshold protocols is a 2-approximation of the optimal protocol in our adversarial model. For any other X that threshold protocols cannot realize, we prove that an appropriate (convex) combination of threshold protocols is a 4-approximation of the optimal protocol.
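To make the single-corruption threat model concrete, here is a small dynamic program (our illustration, not the paper's potential-function technique) that computes, for the threshold protocol, exactly how much an adaptive adversary corrupting at most one party can raise the probability that the output is 1.

```python
from functools import lru_cache

def threshold_insecurity(n, t):
    """For the threshold protocol (n parties each broadcast one uniform
    bit; output 1 iff at least t ones are broadcast), compute the maximum
    increase in Pr[output = 1] achievable by an adaptive adversary that
    may corrupt at most one party, deciding whom to corrupt based on the
    bits broadcast so far."""
    @lru_cache(maxsize=None)
    def val(i, ones, spent):
        if i == n:
            return 1.0 if ones >= t else 0.0
        honest = 0.5 * val(i + 1, ones + 1, spent) + 0.5 * val(i + 1, ones, spent)
        if spent:
            return honest
        # Corrupting the next party and forcing its bit to 1 is the only
        # useful corruption when the goal is to increase the output.
        return max(honest, val(i + 1, ones + 1, True))

    # val(0, 0, True) is the honest (corruption-free) output probability.
    return val(0, 0, False) - val(0, 0, True)
```

For n = 3, t = 2 (majority of three) the honest output is unbiased and a single adaptive corruption shifts it by exactly 1/4; running the DP for growing n traces out how the single-corruption insecurity of threshold protocols behaves.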
Fair Leader Election for Rational Agents in Asynchronous Rings and Networks
We study a game-theoretic model where a coalition of processors might collude
to bias the outcome of the protocol, under the assumption that the processors always
prefer any legitimate outcome over a non-legitimate one. We show that the
problems of Fair Leader Election and Fair Coin Toss are equivalent, and focus
on Fair Leader Election.
Our main focus is on a directed asynchronous ring of processors, where we
investigate the protocol proposed by Abraham et al.
\cite{abraham2013distributed} and studied in Afek et al.
\cite{afek2014distributed}. We show that in general the protocol is resilient
only to sub-linear size coalitions. Specifically, we show that
randomly located processors or
adversarially located processors can force any outcome. We complement this by
showing that the protocol is resilient to any adversarial coalition of size
.
We propose a modification to the protocol, and show that it is resilient to
every coalition of size , by exhibiting both an attack and a
resilience result. For every , we define a family of graphs
that can be simulated by trees where each node in the tree
simulates at most processors. We show that for every graph in
, there is no fair leader election protocol that is
resilient to coalitions of size . Our result generalizes a previous result
of Abraham et al. \cite{abraham2013distributed} that states that for every
graph, there is no fair leader election protocol which is resilient to
coalitions of size .
Comment: 48 pages, PODC 201
High Entropy Random Selection Protocols
We study the two-party problem of randomly selecting a common string among all the strings of length n. We want the protocol to have the property that the output distribution has high Shannon entropy or high min-entropy, even when one of the two parties is dishonest and deviates from the protocol. We develop protocols that achieve high, close to n, Shannon entropy and simultaneously min-entropy close to n/2. In the literature the randomness guarantee is usually expressed in terms of "resilience". The notion of Shannon entropy is not directly comparable to that of resilience, but we establish a connection between the two that allows us to compare our protocols with the existing ones. We construct an explicit protocol that yields Shannon entropy n - O(1) and has O(log* n) rounds, improving over the protocol of Goldreich et al. (SIAM J Comput 27:506-544, 1998) that also achieves this entropy but needs O(n) rounds. Both these protocols need O(n^2) bits of communication. Next we reduce the number of rounds and the length of communication in our protocols. We show the existence, non-explicitly, of a protocol that has 6 rounds, O(n) bits of communication, and yields Shannon entropy n - O(log n) and min-entropy n/2 - O(log n). Our protocol achieves the same Shannon entropy bound as the, also non-explicit, protocol of Gradwohl et al. (in: Dwork (ed) Advances in Cryptology (CRYPTO '06), 409-426, Technical Report , 2006), but achieves much higher min-entropy: n/2 - O(log n) versus O(log n). Finally we exhibit a very simple 3-round explicit "geometric" protocol with communication length O(n). We connect the security parameter of this protocol with the well-studied Kakeya problem.
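The gap between the two entropy guarantees is easy to see numerically. A quick sketch (our illustration, not from the paper): a distribution can have Shannon entropy close to the maximum while its min-entropy, which is governed only by the single most likely outcome, stays at 1 bit.

```python
import math

def shannon_entropy(p):
    """H(p) = -sum_i p_i * log2(p_i)."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def min_entropy(p):
    """H_inf(p) = -log2(max_i p_i)."""
    return -math.log2(max(p))

# One outcome carries half the mass; the rest is uniform on 2**16 - 1 others.
N = 2 ** 16
p = [0.5] + [0.5 / (N - 1)] * (N - 1)
```

Here shannon_entropy(p) is about 9 bits while min_entropy(p) is exactly 1 bit, which mirrors why a protocol can guarantee Shannon entropy near n while a min-entropy guarantee near n/2 requires a separate argument.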
Just How Fair is an Unreactive World?
Fitzi, Garay, Maurer, and Ostrovsky (J. Cryptology 2005) showed that in the presence of a dishonest majority, no primitive of cardinality is complete for realizing an arbitrary -party functionality with guaranteed output delivery. In this work, we show that in the presence of corrupt parties, no unreactive primitive of cardinality is complete for realizing an arbitrary -party functionality with fairness. We show more generally that for , in the presence of malicious parties, no unreactive primitive of cardinality is complete for realizing an arbitrary -party functionality with fairness. We complement this result by noting that -wise fair exchange is complete for realizing an arbitrary -party functionality with fairness. In order to prove our results, we utilize the primitive of fair coin tossing and the notion of predictability. While this notion has been considered in some form in past works, we come up with a novel and non-trivial framework to employ it, one that readily generalizes from the setting of two parties to multiple parties, and also to the setting of unreactive functionalities.
Optimal Impartial Selection
This is the final version of the article. It first appeared from Society for Industrial and Applied Mathematics via http://dx.doi.org/10.1137/140995775
We study a fundamental problem in social choice theory, the selection of a member of a set of agents based on impartial nominations by agents from that set. Studied previously by Alon et al. [Proceedings of TARK, 2011, pp. 101--110] and by Holzman and Moulin [Econometrica, 81 (2013), pp. 173--196], this problem arises when representatives are selected from within a group or when publishing or funding decisions are made based on a process of peer review. Our main result concerns a randomized mechanism that in expectation selects an agent with at least half the maximum number of nominations. This is best possible subject to impartiality and resolves a conjecture of Alon et al. Further results are given for the case where some agent receives many nominations and the case where each agent casts at least one nomination.
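The simplest mechanism in this line of work, as we understand it from the Alon et al. line (the paper's optimal mechanism is more involved and achieves the 1/2 guarantee), can be sketched directly: split the agents randomly into two sides, and let the nominations cast by one side pick a winner on the other. An agent's own nominations are counted only when it sits on the side that cannot win, so no agent can influence its own chance of selection; the mechanism is impartial.

```python
import random

def bipartition_select(n, nominations, rng):
    """Sketch of a simple impartial selection mechanism (illustrative,
    with a weaker constant-fraction guarantee than the paper's 1/2).
    `nominations` is a list of (nominator, nominee) pairs, with
    nominator != nominee. Agents are split into side A and side B; the
    winner is the agent on side B with the most nominations from side A."""
    side_a = {i for i in range(n) if rng.random() < 0.5}
    side_b = [i for i in range(n) if i not in side_a]
    if not side_b:
        return None  # degenerate split; a real mechanism would re-draw
    def score(j):
        return sum(1 for (a, b) in nominations if b == j and a in side_a)
    return max(side_b, key=score)
```

Nominations from side B are discarded entirely, which is exactly the price paid for impartiality and the reason cleverer mechanisms are needed to reach the optimal factor.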
Distributed computing and cryptography with general weak random sources
The use of randomness in computer science is ubiquitous. Randomized protocols have turned out to be much more efficient than their deterministic counterparts. In addition, many problems in distributed computing and cryptography are impossible to solve without randomness. However, these applications typically require uniform random bits, while in practice almost all natural random phenomena are biased. Moreover, even originally uniform random bits can be damaged if an adversary learns some partial information about these bits. In this thesis, we study how to run randomized protocols in distributed computing and cryptography with imperfect randomness. We use the most general model for imperfect randomness where the weak random source is only required to have a certain amount of min-entropy. One important tool here is the randomness extractor. A randomness extractor is a function that takes as input one or more weak random sources, and outputs a distribution that is close to uniform in statistical distance. Randomness extractors are interesting in their own right and are closely related to many other problems in computer science. Giving efficient constructions of randomness extractors with optimal parameters is one of the major open problems in the area of pseudorandomness. We construct network extractor protocols that extract private random bits for parties in a communication network, assuming that they each start with an independent weak random source, and some parties are corrupted by an adversary who sees all communications in the network. These protocols imply fault-tolerant distributed computing protocols and secure multi-party computation protocols where only imperfect randomness is available. The probabilistic method shows that there exists an extractor for two independent sources with logarithmic min-entropy, while known constructions are far from achieving these parameters. 
In this thesis we construct extractors for two independent sources with any linear min-entropy, based on a computational assumption. We also construct the best known extractors for three independent sources and affine sources. Finally we study the problem of privacy amplification. In this model, two parties share a private weak random source and they wish to agree on a private uniform random string through communications in a channel controlled by an adversary, who has unlimited computational power and can change the messages in arbitrary ways. All previous results assume that the two parties have local uniform random bits. We show that this problem can be solved even if the two parties only have local weak random sources. We also improve previous results in various aspects by constructing the first explicit non-malleable extractor and giving protocols based on this extractor.
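One classical object in this area can be made concrete in a single line: the inner-product-mod-2 function of Chor and Goldreich is the textbook two-source extractor, producing one nearly uniform bit whenever the two independent n-bit sources each have min-entropy somewhat above n/2. A minimal sketch:

```python
def inner_product_extractor(x: int, y: int) -> int:
    """GF(2) inner product of the bit representations of x and y:
    AND the two words, then take the parity of the result."""
    return bin(x & y).count("1") % 2
```

For example, inner_product_extractor(0b1011, 0b0011) is 0, since the two inputs share exactly two 1-positions. The min-entropy-rate-1/2 barrier of this construction is precisely what motivates the two-source extractors for lower min-entropy discussed in the thesis.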
Statistical Adjudication: Rights, Justice, and Utility in a World of Process Scarcity
The institution of adjudication is in a state of great upheaval today. Mounting case backlogs and the litigation challenge posed by mass torts are pressuring Congress and courts to experiment with novel adjudication techniques. Some of the results are well known: case tracking, alternative dispute resolution, greater reliance on settlement, and tighter pretrial screening of cases. Taken together, these changes foreshadow a major transformation in the practice and theory of adjudication.
This Article focuses on one particularly remarkable proposal for handling large-scale litigation: adjudication by sampling. This approach uses statistical methods to adjudicate a large population of similarly situated cases. Rather than decide each individual case separately, the court aggregates all the cases and selects a random sample. The court then adjudicates each sample case and statistically combines the sample outcomes to yield results for all cases in the larger population. The sampling procedure is nicely illustrated by the most recent chapter in Judge Robert Parker's struggle with asbestos litigation, Cimino v. Raymark Industries, Inc. After certifying a class action and adjudicating liability, Judge Parker faced the daunting prospect of 2298 hotly contested damage trials. Settlement negotiations had broken down, and defendants made credible threats to contest each case vigorously. Judge Parker worried about the consequences in Cimino as well as in the thousands of pending and future cases that would have to be tried individually at the damages stage unless some aggregative procedure could be devised.
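The procedure the Article describes can be summarized in a few lines of code (a hypothetical illustration with made-up numbers, not the actual Cimino figures): try a random sample of cases, then extrapolate the sample mean award, with a confidence interval, to the full docket.

```python
import random
import statistics

def sample_adjudication(case_damages, sample_size, rng):
    """Adjudication by sampling, sketched: try a random sample of cases,
    then extrapolate the sample mean award to the whole population with a
    rough 95% normal-approximation confidence interval on the total."""
    sample = rng.sample(case_damages, sample_size)
    mean = statistics.fmean(sample)
    sem = statistics.stdev(sample) / sample_size ** 0.5  # standard error
    n = len(case_damages)
    estimate = n * mean
    interval = (n * (mean - 1.96 * sem), n * (mean + 1.96 * sem))
    return estimate, interval
```

With a docket of 2298 hypothetical cases and a sample of a few hundred, the extrapolated total typically lands within a few percent of the true aggregate, which is the statistical trade-off at the heart of the proposal: far fewer trials in exchange for sampling error in individual outcomes.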