Secure Multiparty Computation with Partial Fairness
A protocol for computing a functionality is secure if an adversary in this
protocol cannot cause more harm than in an ideal computation where parties give
their inputs to a trusted party which returns the output of the functionality
to all parties. In particular, in the ideal model such computation is fair --
all parties get the output. Cleve (STOC 1986) proved that, in general, fairness
is not possible without an honest majority. To overcome this impossibility,
Gordon and Katz (Eurocrypt 2010) suggested a relaxed definition -- 1/p-secure
computation -- which guarantees partial fairness. For two parties, they
construct 1/p-secure protocols for functionalities for which the size of either
their domain or their range is polynomial (in the security parameter). Gordon
and Katz ask whether their results can be extended to multiparty protocols.
We study 1/p-secure protocols in the multiparty setting for general
functionalities. Our main result is a construction of 1/p-secure protocols for
a constant number of parties, provided that fewer than 2/3 of the parties are
corrupt. Our protocols require that either (1) the functionality is
deterministic and the size of the domain is polynomial (in the security
parameter), or (2) the functionality can be randomized and the size of the
range is polynomial. If the size of the domain is constant and the
functionality is deterministic, then our protocol is efficient even when the
number of parties is O(log log n) (where n is the security parameter). On the
negative side, we show that when the number of parties is super-constant,
1/p-secure protocols are not possible when the size of the domain is
polynomial.
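The round-threshold idea behind such 1/p-secure constructions can be illustrated with a toy simulation. The trusted dealer below is an illustrative abstraction: the actual protocols realize the hidden switch round with secret sharing among the parties, and the names here are not from the paper.

```python
import random

def run_protocol(real_output, default_sampler, p, abort_round=None):
    """Toy simulation of the round-threshold idea behind 1/p-security.

    A (hypothetical) trusted dealer picks a secret switch round r*
    uniformly from {1, ..., p}.  Rounds before r* reveal a freshly
    sampled "dummy" output; rounds r* and later reveal the real output.
    An adversary that aborts at `abort_round` keeps the value revealed
    there, while honest parties fall back to the last value they saw.
    """
    r_star = random.randint(1, p)
    honest_view = None
    for i in range(1, p + 1):
        value = real_output if i >= r_star else default_sampler()
        if abort_round is not None and i == abort_round:
            # Adversary quits with this round's value; honest parties
            # keep whatever the previous round gave them.
            return value, honest_view
        honest_view = value
    return honest_view, honest_view  # full run: everyone gets the real output
```

Aborting at a fixed round hands the adversary the real output while denying it to honest parties only when the abort round happens to equal r*, i.e. with probability exactly 1/p, which is where the 1/p bound on the fairness loss comes from.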
How to Incentivize Data-Driven Collaboration Among Competing Parties
The availability of vast amounts of data is changing how we can make medical
discoveries, predict global market trends, save energy, and develop educational
strategies. In some settings such as Genome Wide Association Studies or deep
learning, the sheer size of the data seems critical. When data is held in a
distributed fashion by many parties, they must share it to reap its full
benefits.
One obstacle to this revolution is the lack of willingness of different
parties to share data, due to reasons such as loss of privacy or competitive
edge. Cryptographic works address privacy aspects, but shed no light on
individual parties' losses/gains when access to data carries tangible rewards.
Even if it is clear that better overall conclusions can be drawn from
collaboration, are individual collaborators better off by collaborating?
Addressing this question is the topic of this paper.
* We formalize a model of n-party collaboration for computing functions over
private inputs in which participants receive their outputs in sequence, and the
order depends on their private inputs. Each output "improves" on preceding
outputs according to a score function.
* We say a mechanism for collaboration achieves collaborative equilibrium if
it ensures higher reward for all participants when collaborating (rather than
working alone). We show that in general, computing a collaborative equilibrium
is NP-complete, yet we design efficient algorithms to compute it in a range of
natural model settings.
Our collaboration mechanisms are in the standard model, and thus require a
central trusted party; however, we show this assumption is unnecessary under
standard cryptographic assumptions. We show how to implement the mechanisms in
a decentralized way with new extensions of secure multiparty computation that
impose order/timing constraints on output delivery to different players, as
well as privacy and correctness.
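The collaborative-equilibrium question can be made concrete with a toy model. The quality values, the k/n score function, and the brute-force search below are illustrative assumptions, not the paper's formal definitions; they only show the shape of the check that every party must strictly beat its solo reward.

```python
from itertools import permutations

def collab_reward(order, qualities):
    """Toy reward model: outputs are delivered in sequence and each
    "improves" on the preceding ones, so the k-th recipient captures a
    k/n fraction of the pooled score (an invented score function)."""
    pooled = sum(qualities.values())
    n = len(order)
    return {p: pooled * (k + 1) / n for k, p in enumerate(order)}

def find_equilibrium_order(qualities):
    """Brute-force search for a delivery order in which every party
    strictly beats its solo reward.  Exponential in the number of
    parties, echoing the NP-completeness of the general problem; the
    paper gives efficient algorithms only for structured settings."""
    for order in permutations(qualities):
        rewards = collab_reward(order, qualities)
        if all(rewards[p] > qualities[p] for p in qualities):
            return order, rewards
    return None

qualities = {"A": 1.0, "B": 2.0, "C": 3.0}
result = find_equilibrium_order(qualities)
```

In this toy instance, delivering outputs in order A, B, C gives rewards 2, 4, 6 against solo rewards 1, 2, 3, so every party strictly gains by collaborating.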
An Overview of Fairness Notions in Multi-Party Computation
Secure multi-party computation (MPC) is a cryptographic technique that enables several mutually distrusting parties to jointly compute a function over their private inputs. Fairness in MPC is defined as the property that if any party receives the output, all honest parties receive it. This work addresses the lack of comprehensive surveys of the various fairness notions in MPC.
Complete fairness, often regarded as the ideal, guarantees that either all honest parties obtain a result or none do. Owing to theoretical and contextual restrictions, however, this ideal is in general unattainable. As a consequence, alternative notions have emerged to overcome these limitations.
This work examines various fairness notions in MPC, including complete fairness, partial fairness, Delta-fairness, gradual release, fairness with penalties, and probabilistic fairness. Each concept imposes different requirements and restrictions in real-world scenarios. We find that achieving complete fairness for general functions requires an honest majority unless stronger assumptions are made, such as access to public ledgers, whereas certain functions can be computed with complete fairness even without such assumptions. Other notions, such as Delta-fairness, require secure hardware components. We give an overview of the notions, their interrelations, their trade-offs, and their practical implications. In addition, we summarize the results in a comparative table that offers a compact overview of the protocols satisfying these fairness notions and highlights the trade-offs between security, efficiency, and applicability.
The work identifies assumptions and limitations associated with the various fairness notions and cites protocols from foundational work in the field. It also presents several impossibility results that illustrate the inherent challenges of achieving fairness in MPC. The practical implications of these fairness concepts are examined, providing insight into their applicability and limits in real-world scenarios.
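Among the surveyed notions, gradual release has a particularly simple mechanic that a short sketch can convey: two parties alternately reveal one bit of their secrets, so an abort leaves the cheater with at most a factor-two head start in a brute-force search. The function below is a toy accounting of that invariant, not a protocol from the surveyed literature.

```python
def gradual_exchange(n_bits, b_aborts_after=None):
    """Toy gradual-release exchange of two n-bit secrets.

    In round i, party A reveals bit i of its secret, then party B
    reveals bit i of its own.  Returns the remaining brute-force
    search-space sizes (A's, B's) for the other side's secret.
    """
    a_bits_known_to_b = 0
    b_bits_known_to_a = 0
    for i in range(n_bits):
        a_bits_known_to_b += 1  # A releases its i-th bit first
        if b_aborts_after is not None and i + 1 == b_aborts_after:
            break               # B quits without reciprocating this round
        b_bits_known_to_a += 1  # B releases its i-th bit
    return 2 ** (n_bits - b_bits_known_to_a), 2 ** (n_bits - a_bits_known_to_b)
```

If B aborts in round r, B has r of A's bits while A has r-1 of B's, so A's remaining search space is exactly twice B's: fairness is not complete, but the advantage of aborting is bounded.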
SWIFT: Super-fast and Robust Privacy-Preserving Machine Learning
Performing machine learning (ML) computation on private data while
maintaining data privacy, aka Privacy-preserving Machine Learning~(PPML), is an
emergent field of research. Recently, PPML has seen a visible shift towards the
adoption of the Secure Outsourced Computation~(SOC) paradigm due to the heavy
computation that it entails. In the SOC paradigm, computation is outsourced to
a set of powerful and specially equipped servers that provide service on a
pay-per-use basis. In this work, we propose SWIFT, a robust PPML framework for
a range of ML algorithms in the SOC setting that guarantees output delivery to
the users irrespective of any adversarial behaviour. Robustness, a highly
desirable feature, encourages user participation without fear of denial of
service.
At the heart of our framework lies a highly-efficient, maliciously-secure,
three-party computation (3PC) over rings that provides guaranteed output
delivery (GOD) in the honest-majority setting. To the best of our knowledge,
SWIFT is the first robust and efficient PPML framework in the 3PC setting.
SWIFT is as fast as (and is strictly better in some cases than) the best-known
3PC framework BLAZE (Patra et al. NDSS'20), which only achieves fairness. We
extend our 3PC framework for four parties (4PC). In this regime, SWIFT is as
fast as the best known fair 4PC framework Trident (Chaudhari et al. NDSS'20)
and twice as fast as the best-known robust 4PC framework FLASH (Byali et al.
PETS'20).
We demonstrate our framework's practical relevance by benchmarking popular ML
algorithms such as Logistic Regression and deep Neural Networks such as VGG16
and LeNet, both over a 64-bit ring in a WAN setting. For deep NNs, our results
support our claim that we provide an improved security guarantee while
incurring no additional overhead for 3PC and obtaining a 2x improvement for
4PC.

Comment: This article is the full and extended version of an article to appear
in USENIX Security 202
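The flavor of 3PC over rings can be sketched with generic replicated secret sharing over Z_{2^64}, in which each party holds two of three additive shares. This is a standard construction shown for illustration only; SWIFT's actual sharing semantics and its GOD machinery differ.

```python
import random

MOD = 2 ** 64  # 64-bit ring, matching the paper's benchmark setting

def share(x):
    """Split x into three additive shares s0 + s1 + s2 = x (mod 2^64).
    Party i holds the pair (s_i, s_{i+1 mod 3}), so any two parties
    jointly hold all three shares -- generic replicated sharing, not
    SWIFT's exact scheme."""
    s = [random.randrange(MOD) for _ in range(2)]
    s.append((x - sum(s)) % MOD)
    return [(s[i], s[(i + 1) % 3]) for i in range(3)]

def reconstruct(p0, p1):
    """Recover x from parties 0 and 1: p0 = (s0, s1), p1 = (s1, s2)."""
    return (p0[0] + p0[1] + p1[1]) % MOD

def add_local(pa, pb):
    """Secure addition is non-interactive: each party simply adds its
    share pairs componentwise over the ring."""
    return [((a0 + b0) % MOD, (a1 + b1) % MOD)
            for (a0, a1), (b0, b1) in zip(pa, pb)]
```

Working over a ring rather than a field lets implementations use native 64-bit machine arithmetic with free wraparound, which is one reason frameworks in this line (BLAZE, Trident, SWIFT) favor Z_{2^64}.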
Conclave: secure multi-party computation on big data (extended TR)
Secure Multi-Party Computation (MPC) allows mutually distrusting parties to
run joint computations without revealing private data. Current MPC algorithms
scale poorly with data size, which makes MPC on "big data" prohibitively slow
and inhibits its practical use.
Many relational analytics queries can maintain MPC's end-to-end security
guarantee without using cryptographic MPC techniques for all operations.
Conclave is a query compiler that accelerates such queries by transforming them
into a combination of data-parallel, local cleartext processing and small MPC
steps. When parties trust others with specific subsets of the data, Conclave
applies new hybrid MPC-cleartext protocols to run additional steps outside of
MPC and improve scalability further.
Our Conclave prototype generates code for cleartext processing in Python and
Spark, and for secure MPC using the Sharemind and Obliv-C frameworks. Conclave
scales to data sets between three and six orders of magnitude larger than
state-of-the-art MPC frameworks support on their own. Thanks to its hybrid
protocols, Conclave also substantially outperforms SMCQL, the most similar
existing system.

Comment: Extended technical report for a EuroSys 2019 paper
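The hybrid split can be illustrated with a grouped count: each party collapses its raw rows to per-key partial aggregates in the clear, and only those few pairs cross into the secure step. The data and the `mpc_combine` stand-in below are illustrative; in Conclave the combining step would run under Sharemind or Obliv-C rather than in cleartext.

```python
from collections import Counter

def local_preaggregate(rows):
    """Cleartext, data-parallel step run by each party on its own data:
    collapse raw rows to per-key partial counts."""
    return Counter(rows)

def mpc_combine(partials):
    """Cleartext stand-in for the small MPC step that merges the
    parties' partial aggregates.  Only the tiny (key, count) tables
    reach this step, not the raw rows."""
    total = Counter()
    for c in partials:
        total += c
    return dict(total)

party_a = ["x", "x", "y"] * 1000  # 3,000 raw rows held by party A
party_b = ["y", "z"] * 1000       # 2,000 raw rows held by party B
partials = [local_preaggregate(party_a), local_preaggregate(party_b)]
# Each party contributes only 2 (key, count) pairs to the "MPC" step,
# instead of thousands of raw rows.
result = mpc_combine(partials)
```

Shrinking the secure step's input from thousands of rows to a handful of aggregates is the source of the orders-of-magnitude scaling gains the abstract describes.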
On Fairness in Secure Computation
Secure computation is a fundamental problem in modern cryptography in which multiple parties jointly compute a function of their private inputs without revealing anything beyond the output of the function. A series of very strong results in the 1980s demonstrated that any polynomial-time function can be computed while guaranteeing essentially every desired security property. The only exception is the fairness property, which states that no player should receive their output from the computation unless all players receive their output. While it was shown that fairness can be achieved whenever a majority of the players are honest, it was also shown that fairness is impossible to achieve in general when half or more of the players are dishonest. Indeed, it was proven that even Boolean XOR cannot be computed fairly by two parties.
The fairness property is both natural and important, and as such it was one of the first questions addressed in modern cryptography (in the context of signature exchange). One contribution of this thesis is to survey the many approaches that have been used to guarantee different notions of partial fairness. We then revisit the topic of fairness within a modern security framework for secure computation. We demonstrate that, despite the strong impossibility result mentioned above, certain interesting functions can be computed fairly, even when half (or more) of the parties are malicious. We also provide a new notion of partial fairness, demonstrate feasibility of achieving this notion for a large class of functions, and show impossibility for certain functions outside this class. We consider fairness in the presence of rational adversaries, and, finally, we further study the difficulty of achieving fairness by exploring how much external help is necessary for enabling fair secure computation.