Secure Multiparty Computation with Partial Fairness
A protocol for computing a functionality is secure if an adversary in the
protocol cannot cause more harm than in an ideal computation, where the
parties give their inputs to a trusted party that returns the output of the
functionality to all of them. In particular, computation in the ideal model
is fair -- all parties get the output. Cleve (STOC 1986) proved that, in
general, fairness is not possible without an honest majority. To overcome
this impossibility, Gordon and Katz (Eurocrypt 2010) suggested a relaxed
definition -- 1/p-secure computation -- which guarantees partial fairness.
For two parties, they construct 1/p-secure protocols for functionalities
whose domain or range is of polynomial size (in the security parameter).
Gordon and Katz ask whether their results can be extended to multiparty
protocols.
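To make the fairness gap concrete, the following minimal Python sketch (an
illustration, not taken from the paper) contrasts the ideal computation, in
which a trusted party returns the output to all parties, with an unfair
real-world run in which the adversary aborts after learning the output. The
equality functionality and the inputs are hypothetical.

```python
def f(x, y):
    # Toy two-party functionality: equality of inputs (hypothetical).
    return int(x == y)

def ideal_computation(x, y):
    # Trusted party computes f and hands the output to BOTH parties,
    # so fairness holds by construction.
    out = f(x, y)
    return out, out

def real_computation_with_abort(x, y, adversary_aborts):
    # Caricature of an unfair real-world run: party 1 (the adversary)
    # learns the output first and may abort before party 2 receives it.
    out = f(x, y)
    if adversary_aborts:
        return out, None  # party 2 never learns the output
    return out, out

print(ideal_computation(3, 3))                  # (1, 1): both learn f
print(real_computation_with_abort(3, 3, True))  # (1, None): unfair
```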
We study 1/p-secure protocols in the multiparty setting for general
functionalities. Our main result is a construction of 1/p-secure protocols
when the number of parties is constant, provided that fewer than 2/3 of the
parties are corrupt. Our protocols require that either (1) the functionality
is deterministic and the size of the domain is polynomial (in the security
parameter), or (2) the functionality may be randomized and the size of the
range is polynomial. If the size of the domain is constant and the
functionality is deterministic, then our protocol is efficient even when the
number of parties is O(log log n) (where n is the security parameter). On
the negative side, we show that 1/p-secure protocols are not possible when
the number of parties is super-constant and the size of the domain is
polynomial.
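The sketch below illustrates the hidden "threshold round" technique that
underlies Gordon-Katz-style 1/p-security; it is a simplified illustration
under stated assumptions, not the authors' multiparty construction. The
equality functionality, the dummy-input distribution, and the obliviously
chosen abort round are all hypothetical.

```python
import random

def f(x, y):
    # Toy deterministic functionality over a small domain (hypothetical).
    return int(x == y)

def run(x_honest, x_adv, p, abort_round):
    """One execution with an adversary that aborts at a fixed round."""
    i_star = random.randint(1, p)            # secret threshold round
    backup = f(random.randint(0, 9), x_adv)  # dummy backup for round 0
    for i in range(1, p + 1):
        prev_backup = backup
        if i < i_star:
            # Before i*: the backup is computed on a fresh random dummy
            # input, so it carries no information about the true output.
            backup = f(random.randint(0, 9), x_adv)
        else:
            backup = f(x_honest, x_adv)      # true output from round i* on
        if i == abort_round:
            # The adversary aborts after seeing round i's value; the honest
            # party outputs its round i-1 backup. The abort is "unfair"
            # (adversary saw the real output, honest party did not)
            # exactly when i == i_star.
            return i == i_star, prev_backup
    return False, backup  # full run: both parties hold the true output

p, trials = 10, 100_000
unfair = sum(run(3, 3, p, abort_round=4)[0] for _ in range(trials))
print(f"unfair fraction: {unfair / trials:.3f} (expected ~1/p = {1 / p:.3f})")
```

In the actual protocols the backup values before round i* are crafted so
that the adversary cannot distinguish them from the real output, which is
why even an adaptive abort strategy gains only on the order of 1/p.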
How to Incentivize Data-Driven Collaboration Among Competing Parties
The availability of vast amounts of data is changing how we can make medical
discoveries, predict global market trends, save energy, and develop
educational strategies. In some settings, such as Genome Wide Association
Studies or deep learning, the sheer size of the data seems critical. When
data is distributed among many parties, they must share it to reap its full
benefits.
One obstacle to this revolution is parties' unwillingness to share data, for
reasons such as loss of privacy or of competitive edge. Cryptographic work
addresses the privacy aspects, but sheds no light on individual parties'
losses and gains when access to data carries tangible rewards.
Even if it is clear that better overall conclusions can be drawn from
collaboration, are individual collaborators better off by collaborating?
Addressing this question is the topic of this paper.
* We formalize a model of n-party collaboration for computing functions over
private inputs in which participants receive their outputs in sequence, and the
order depends on their private inputs. Each output "improves" on preceding
outputs according to a score function.
* We say a mechanism for collaboration achieves collaborative equilibrium if
it ensures a higher reward for every participant when collaborating rather
than working alone (a toy sketch of this condition follows the list). We
show that computing a collaborative equilibrium is NP-complete in general,
yet we design efficient algorithms that compute it in a range of natural
model settings.
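The toy Python sketch below illustrates the sequential-output model and the
equilibrium check; the score function, the additive reward rule, and the
pooled bonus are hypothetical stand-ins, not the paper's mechanism.

```python
def score(value):
    # Hypothetical score: a result's quality equals the input's value.
    return value

def solo_reward(value):
    # Working alone, a party's reward is just its own score.
    return score(value)

def collaborate(inputs):
    # Hypothetical mediator: order participants by the score of their
    # private inputs (best first); reward each output by how much it
    # improves on everything delivered before it, plus an equal share
    # of a pooled bonus for contributing data.
    order = sorted(inputs, key=lambda kv: score(kv[1]), reverse=True)
    pooled_bonus = 0.5 * sum(score(v) for _, v in inputs) / len(inputs)
    rewards, best_so_far = {}, 0.0
    for name, value in order:
        improvement = max(score(value) - best_so_far, 0.0)
        rewards[name] = improvement + pooled_bonus
        best_so_far = max(best_so_far, score(value))
    return rewards

def is_collaborative_equilibrium(inputs):
    # Equilibrium condition: every participant does at least as well
    # collaborating as working alone.
    rewards = collaborate(inputs)
    return all(rewards[n] >= solo_reward(v) for n, v in inputs)

parties = [("A", 4.0), ("B", 3.0), ("C", 1.0)]
print(collaborate(parties))                   # per-party rewards
print(is_collaborative_equilibrium(parties))  # False: B prefers solo
```

Under this particular reward rule the condition fails for B, who earns more
working alone; whether a collaborative equilibrium exists depends on the
mechanism's reward structure, which is precisely what makes computing one
nontrivial.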
Our collaboration mechanisms are in the standard model, and thus require a
central trusted party; however, we show that this assumption is unnecessary
under standard cryptographic assumptions. We show how to implement the
mechanisms in a decentralized way using new extensions of secure multiparty
computation that impose order/timing constraints on output delivery to
different players, in addition to guaranteeing privacy and correctness.
A granular approach to source trustworthiness for negative trust assessment
The problem of determining what information to trust is crucial in many contexts that admit uncertainty and polarization. In this paper, we propose a method to systematically reason about the trustworthiness of sources. While not aiming at establishing their veracity, the metho…