
    Sum of Us: Strategyproof Selection from the Selectors

    We consider directed graphs over a set of $n$ agents, where an edge $(i,j)$ means that agent $i$ supports or trusts agent $j$. Given such a graph and an integer $k \leq n$, we wish to select a subset of $k$ agents that maximizes the sum of indegrees, i.e., a subset of the $k$ most popular or most trusted agents. At the same time we assume that each individual agent is only interested in being selected, and may misreport its outgoing edges to this end. This problem formulation captures realistic scenarios where agents choose among themselves, which can be found in the context of Internet search, social networks like Twitter, or reputation systems like Epinions. Our goal is to design mechanisms without payments that map each graph to a $k$-subset of agents to be selected and satisfy the following two constraints: strategyproofness, i.e., agents cannot benefit from misreporting their outgoing edges, and approximate optimality, i.e., the sum of indegrees of the selected subset of agents is always close to optimal. Our first main result is a surprising impossibility: for $k \in \{1, \dots, n-1\}$, no deterministic strategyproof mechanism can provide a finite approximation ratio. Our second main result is a randomized strategyproof mechanism with an approximation ratio that is bounded from above by four for any value of $k$, and approaches one as $k$ grows.
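    The strategyproofness half of this result rests on a partition idea that is easy to miss in the abstract: if winners in a group are chosen using only votes from outside that group, no agent's report can influence its own selection. Below is a minimal sketch of such a random-partition mechanism, assuming a simplified variant ($k$ groups, one winner per group, arbitrary tie-breaking); the function name and grouping details are illustrative, not the paper's exact construction.

```python
import random

def partition_select(edges, n, k, rng=random):
    """Random-partition selection (in the spirit of Alon et al.'s mechanism).

    `edges` is a set of (i, j) pairs meaning agent i supports agent j.
    Agents are assigned to k groups uniformly at random, and from each group
    we pick the agent with the most incoming edges *from other groups*.
    An agent's outgoing report never affects its own group's tally, so no
    agent can change whether it is selected: the mechanism is strategyproof.
    """
    group = {i: rng.randrange(k) for i in range(n)}
    selected = []
    for g in range(k):
        members = [i for i in range(n) if group[i] == g]
        if not members:
            continue  # an unlucky partition may leave a group empty
        indegree = {j: sum(1 for (i, jj) in edges if jj == j and group[i] != g)
                    for j in members}
        selected.append(max(indegree, key=indegree.get))
    return selected

# Example: agent 0 is supported by everyone else.
edges = {(1, 0), (2, 0), (3, 0), (0, 2), (3, 2)}
print(partition_select(edges, n=4, k=2))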

    Optimally-secure Coin-tossing against a Byzantine Adversary

    In their seminal work, Ben-Or and Linial (1985) introduced the full information model for collective coin-tossing protocols involving $n$ processors with unbounded computational power using a common broadcast channel for all their communications. The design and analysis of coin-tossing protocols in the full information model have close connections to diverse fields like extremal graph theory, randomness extraction, cryptographic protocol design, game theory, distributed protocols, and learning theory. Several works have focused on studying the asymptotically best attacks and optimal coin-tossing protocols in various adversarial settings. While one knows the characterization of the exact or asymptotically optimal protocols in some adversarial settings, for most adversarial settings the optimal protocol characterization remains open. For the cases where the asymptotically optimal constructions are known, the exact constants or poly-logarithmic multiplicative factors involved are not entirely well understood. In this work, we study $n$-processor coin-tossing protocols where every processor broadcasts an arbitrary-length message once. Note that, in this setting, which processor speaks and its message distribution may depend on the messages broadcast so far. An adaptive Byzantine adversary, based on the messages broadcast so far, can corrupt $k = 1$ processor. A bias-$X$ coin-tossing protocol outputs 1 with probability $X$ and 0 with probability $1-X$. For a coin-tossing protocol, its insecurity is the maximum change in the output distribution (in statistical distance) that an adversarial strategy can cause. Our objective is to identify optimal bias-$X$ coin-tossing protocols with minimum insecurity, for every $X \in [0,1]$. Lichtenstein, Linial, and Saks (1989) studied bias-$X$ coin-tossing protocols in this adversarial model under the highly restrictive constraint that each party broadcasts an independent and uniformly random bit. The underlying message space is a well-behaved product space, and $X$ can only be an integer multiple of $1/2^n$, which makes the problem discrete. The case where every processor broadcasts only an independent random bit admits simplifications; for example, the collective coin-tossing protocol must be monotone. Surprisingly, for this class of coin-tossing protocols, the objective of reducing an adversary's ability to increase the expected output is equivalent to reducing an adversary's ability to decrease the expected output. Building on these observations, Lichtenstein, Linial, and Saks proved that threshold coin-tossing protocols are optimal for all $n$ and $k$. In a sequence of works, Goldwasser, Kalai, and Park (2015), Kalai, Komargodski, and Raz (2018), and (independently of our work) Haitner and Karidi-Heller (2020) prove that $k = \mathcal{O}(\sqrt{n} \cdot \mathrm{polylog}(n))$ corruptions suffice to fix the output of any bias-$X$ coin-tossing protocol. These results consider parties who send arbitrary-length messages, and each processor has multiple turns to reveal its entire message. However, optimal protocols robust to a large number of corruptions need not bear any a priori relation to the optimal protocol robust to $k = 1$ corruption. Furthermore, to make an informed choice of employing a coin-tossing protocol in practice, for a fixed target tolerance of insecurity, one needs a precise characterization of the minimum insecurity achieved by these coin-tossing protocols.
    We rely on an inductive approach to constructing coin-tossing protocols to study a proxy potential function measuring the susceptibility of any bias-$X$ coin-tossing protocol to attacks in our adversarial model. Our technique is inherently constructive and yields protocols that minimize the potential function; it happens that threshold protocols are the minimizers. We demonstrate that the insecurity of these threshold protocols is within a factor of 2 of the optimal protocol in our adversarial model. For any other $X \in [0,1]$ that threshold protocols cannot realize, we prove that an appropriate (convex) combination of threshold protocols is a 4-approximation of the optimal protocol.
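    To make the objects in this abstract concrete, here is a minimal sketch that computes the insecurity of a small threshold protocol by backward induction, assuming uniform one-bit messages (the Lichtenstein-Linial-Saks setting) and an adversary who must decide whether to corrupt the next processor before it speaks; a stronger adversary that intervenes after seeing the bit needs a slightly different recursion. All names and parameters are illustrative.

```python
from functools import lru_cache

N = 5          # number of processors, each broadcasting one uniform bit
THRESHOLD = 3  # the protocol outputs 1 iff at least THRESHOLD ones appear

@lru_cache(maxsize=None)
def best(i: int, ones: int, budget: int) -> float:
    """Maximum probability of output 1 that an adversary can force, given
    that i processors have spoken, `ones` of their bits were 1, and the
    adversary may still corrupt up to `budget` of the remaining processors."""
    if i == N:
        return 1.0 if ones >= THRESHOLD else 0.0
    # Honest continuation: the next processor broadcasts a uniform bit.
    honest = 0.5 * (best(i + 1, ones, budget) + best(i + 1, ones + 1, budget))
    if budget == 0:
        return honest
    # Corrupt the next processor and choose its bit adversarially.
    corrupt = max(best(i + 1, ones, budget - 1),
                  best(i + 1, ones + 1, budget - 1))
    return max(honest, corrupt)

expected = best(0, 0, 0)   # expected output with no corruption
attacked = best(0, 0, 1)   # best adaptive attack with k = 1 corruption
print(f"E[output] = {expected:.4f}, attacked = {attacked:.4f}, "
      f"deviation = {attacked - expected:.4f}")
```

    Running the symmetric recursion that minimizes the output gives the decrease direction; the protocol's insecurity is the larger of the two deviations.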

    Fair Leader Election for Rational Agents in Asynchronous Rings and Networks

    We study a game-theoretic model where a coalition of processors might collude to bias the outcome of the protocol; we assume that processors always prefer any legitimate outcome over a non-legitimate one. We show that the problems of Fair Leader Election and Fair Coin Toss are equivalent, and focus on Fair Leader Election. Our main focus is a directed asynchronous ring of $n$ processors, where we investigate the protocol proposed by Abraham et al. (2013) and studied by Afek et al. (2014). We show that, in general, the protocol is resilient only to sub-linear-size coalitions. Specifically, we show that $\Omega(\sqrt{n \log n})$ randomly located processors or $\Omega(\sqrt[3]{n})$ adversarially located processors can force any outcome. We complement this by showing that the protocol is resilient to any adversarial coalition of size $O(\sqrt[4]{n})$. We propose a modification to the protocol and show that it is resilient to every coalition of size $\Theta(\sqrt{n})$, by exhibiting both an attack and a resilience result. For every $k \geq 1$, we define a family of graphs $\mathcal{G}_k$ that can be simulated by trees in which each node simulates at most $k$ processors. We show that for every graph in $\mathcal{G}_k$, there is no fair leader election protocol that is resilient to coalitions of size $k$. Our result generalizes a previous result of Abraham et al. (2013) stating that for every graph, there is no fair leader election protocol resilient to coalitions of size $\lceil n/2 \rceil$.
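    For intuition about what a fair leader election protocol must deliver, here is the classic sum-mod-$n$ construction, sketched under the strong (and, on an asynchronous ring, unrealistic) assumption that every contribution is fixed before any is revealed; the whole difficulty addressed by these protocols is enforcing that assumption without a trusted party. Taking $n = 2$ recovers a fair coin toss, which is the equivalence the abstract mentions. Names are illustrative.

```python
import random
from collections import Counter

def elect_leader(n: int, rng=random) -> int:
    """Each processor i contributes a uniform value r_i in {0, ..., n-1};
    the leader is (r_0 + ... + r_{n-1}) mod n. If even one contribution is
    uniform and independent of the rest, the leader index is uniform."""
    contributions = [rng.randrange(n) for _ in range(n)]
    return sum(contributions) % n

# Sanity check: empirically, each of the n processors wins about 1/n of runs.
n, trials = 7, 70_000
counts = Counter(elect_leader(n) for _ in range(trials))
print({i: round(counts[i] / trials, 3) for i in range(n)})
```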

    High Entropy Random Selection Protocols

    We study the two-party problem of randomly selecting a common string among all the strings of length n. We want the protocol to have the property that the output distribution has high Shannon entropy or high min-entropy, even when one of the two parties is dishonest and deviates from the protocol. We develop protocols that achieve Shannon entropy close to n and, simultaneously, min-entropy close to n/2. In the literature the randomness guarantee is usually expressed in terms of “resilience”. The notion of Shannon entropy is not directly comparable to that of resilience, but we establish a connection between the two that allows us to compare our protocols with the existing ones. We construct an explicit protocol that yields Shannon entropy n - O(1) and has O(log* n) rounds, improving over the protocol of Goldreich et al. (SIAM J. Comput. 27:506–544, 1998) that also achieves this entropy but needs O(n) rounds. Both these protocols need O(n^2) bits of communication. Next we reduce the number of rounds and the length of communication in our protocols. We show the existence, non-explicitly, of a protocol that has 6 rounds, O(n) bits of communication, and yields Shannon entropy n - O(log n) and min-entropy n/2 - O(log n). Our protocol achieves the same Shannon entropy bound as the (also non-explicit) protocol of Gradwohl et al. (in: Dwork (ed.), Advances in Cryptology, CRYPTO ’06, pp. 409–426, 2006), but achieves much higher min-entropy: n/2 - O(log n) versus O(log n). Finally, we exhibit a very simple 3-round explicit “geometric” protocol with communication length O(n). We connect the security parameter of this protocol with the well-studied Kakeya problem.
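    A baseline shows why these guarantees are non-trivial: in the naive concatenation protocol, where each party contributes half of the output string, a single dishonest party immediately caps both entropy measures at n/2, whereas the paper's protocols push Shannon entropy up to n - O(1). The sketch below, with illustrative names, computes both measures for that attack.

```python
import math
from collections import Counter
from itertools import product

N = 6  # output length; Alice contributes the first N//2 bits, Bob the rest

def shannon_entropy(dist) -> float:
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def min_entropy(dist) -> float:
    return -math.log2(max(dist.values()))

# Concatenation protocol under attack: a dishonest Bob fixes his half of the
# string (say, to all zeros), while honest Alice's half stays uniform.
dist = Counter()
for alice_half in product("01", repeat=N // 2):
    outcome = "".join(alice_half) + "0" * (N - N // 2)
    dist[outcome] += 1 / 2 ** (N // 2)

print(f"Shannon entropy = {shannon_entropy(dist):.2f} bits (n = {N})")
print(f"min-entropy     = {min_entropy(dist):.2f} bits")
```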

    Just How Fair is an Unreactive World?

    Fitzi, Garay, Maurer, and Ostrovsky (J. Cryptology 2005) showed that in the presence of a dishonest majority, no primitive of cardinality $n - 1$ is complete for realizing an arbitrary $n$-party functionality with guaranteed output delivery. In this work, we show that in the presence of $n - 1$ corrupt parties, no unreactive primitive of cardinality $n - 1$ is complete for realizing an arbitrary $n$-party functionality with fairness. We show more generally that for $t > n/2$, in the presence of $t$ malicious parties, no unreactive primitive of cardinality $t$ is complete for realizing an arbitrary $n$-party functionality with fairness. We complement this result by noting that $(t+1)$-wise fair exchange is complete for realizing an arbitrary $n$-party functionality with fairness. In order to prove our results, we utilize the primitive of fair coin tossing and the notion of predictability. While this notion has been considered in some form in past works, we come up with a novel and non-trivial framework to employ it, one that readily generalizes from the setting of two parties to multiple parties, and also to the setting of unreactive functionalities.

    Optimal Impartial Selection

    We study a fundamental problem in social choice theory, the selection of a member of a set of agents based on impartial nominations by agents from that set. Studied previously by Alon et al. [Proceedings of TARK, 2011, pp. 101-110] and by Holzman and Moulin [Econometrica, 81 (2013), pp. 173-196], this problem arises when representatives are selected from within a group or when publishing or funding decisions are made based on a process of peer review. Our main result concerns a randomized mechanism that in expectation selects an agent with at least half the maximum number of nominations. This is best possible subject to impartiality and resolves a conjecture of Alon et al. Further results are given for the case where some agent receives many nominations and the case where each agent casts at least one nomination.
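    A simpler relative of the paper's optimal mechanism shows where impartiality comes from: split the agents into two random halves and let one half's nominations decide a winner in the other half. The sketch below is that partition-style mechanism (in the spirit of Alon et al.), which is impartial but guarantees only about a quarter of the maximum nominations in expectation; the paper's permutation-based mechanism is what achieves the optimal factor of one half. Names and tie-breaking are illustrative.

```python
import random

def impartial_select(nominations, rng=random):
    """Partition-style impartial selection.

    `nominations` maps each agent to the set of agents it nominates.
    Agents are split uniformly at random into two halves; a winner is drawn
    from one half, counting only nominations cast by the *other* half.
    An agent's own nominations only matter when it sits in the voter half,
    in which case it cannot win, so no agent can influence its own selection.
    """
    agents = list(nominations)
    rng.shuffle(agents)
    half = len(agents) // 2
    groups = [set(agents[:half]), set(agents[half:])]
    voters, candidates = groups if rng.random() < 0.5 else groups[::-1]
    score = {c: sum(1 for v in voters if c in nominations[v]) for c in candidates}
    return max(score, key=score.get)

# Example: agent 0 is nominated by everyone else.
noms = {0: {1}, 1: {0}, 2: {0}, 3: {0}}
print(impartial_select(noms))
```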

    Statistical Adjudication: Rights, Justice, and Utility in a World of Process Scarcity

    The institution of adjudication is in a state of great upheaval today. Mounting case backlogs and the litigation challenge posed by mass torts are pressuring Congress and courts to experiment with novel adjudication techniques. Some of the results are well known: case tracking, alternative dispute resolution, greater reliance on settlement, and tighter pretrial screening of cases. Taken together, these changes foreshadow a major transformation in the practice and theory of adjudication. This Article focuses on one particularly remarkable proposal for handling large-scale litigation: adjudication by sampling. This approach uses statistical methods to adjudicate a large population of similarly situated cases. Rather than decide each individual case separately, the court aggregates all the cases and selects a random sample. The court then adjudicates each sample case and statistically combines the sample outcomes to yield results for all cases in the larger population. The sampling procedure is nicely illustrated by the most recent chapter in Judge Robert Parker's struggle with asbestos litigation, Cimino v. Raymark Industries, Inc. After certifying a class action and adjudicating liability, Judge Parker faced the daunting prospect of 2298 hotly contested damage trials. Settlement negotiations had broken down, and defendants made credible threats to contest each case vigorously. Judge Parker worried about the consequences in Cimino as well as in the thousands of pending and future cases that would have to be tried individually at the damages stage unless some aggregative procedure could be devised.
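    The extrapolation step at the heart of adjudication by sampling is ordinary survey statistics: multiply the sample's mean award by the population size and attach a confidence interval. A minimal sketch with entirely made-up numbers (the award distribution and sample size are illustrative, not drawn from Cimino):

```python
import math
import random

random.seed(1)

# Hypothetical numbers: a population of pending damage claims and a random
# sample of them that actually goes to trial (all figures are illustrative).
population_size = 2298
sample_verdicts = [random.lognormvariate(11, 0.8) for _ in range(160)]

n = len(sample_verdicts)
mean = sum(sample_verdicts) / n
var = sum((v - mean) ** 2 for v in sample_verdicts) / (n - 1)
stderr = math.sqrt(var / n)

# Extrapolate the sample mean to the whole population, with a ~95% interval.
total = population_size * mean
margin = population_size * 1.96 * stderr
print(f"estimated aggregate damages: ${total:,.0f} +/- ${margin:,.0f}")
```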