Reducing Complexity Assumptions for Oblivious Transfer
Reducing the minimal assumptions needed to construct various cryptographic primitives is an important and
interesting task in theoretical cryptography. Oblivious Transfer, one of the most basic cryptographic building
blocks, is also studied in this setting. Reducing the minimal assumptions for Oblivious Transfer is not
an easy task, as there are several impossibility results under black-box reductions.
Until recently, it was widely believed that Oblivious Transfer can be constructed from trapdoor permutations but
not from trapdoor functions in general. In this paper, we strengthen previous results and present an
Oblivious Transfer protocol based on a collection of trapdoor functions with some extra properties.
We also justify the addition of the extra properties and argue that the assumptions of the protocol
are nearly minimal.
Towards Communication-Efficient Quantum Oblivious Key Distribution
Oblivious Transfer, a fundamental problem in the field of secure multi-party
computation, is defined as follows: a database DB of N bits held by Bob is
queried by a user Alice who is interested in the bit DB_b in such a way that
(1) Alice learns DB_b and only DB_b and (2) Bob does not learn anything about
Alice's choice b. While solutions to this problem in the classical domain rely
largely on unproven computational complexity theoretic assumptions, it is also
known that perfect solutions that guarantee both database and user privacy are
impossible in the quantum domain. Jakobi et al. [Phys. Rev. A, 83(2), 022301,
Feb 2011] proposed a protocol for Oblivious Transfer using well-known QKD
techniques to establish an Oblivious Key to solve this problem. Their solution
provided a good degree of database and user privacy (using physical principles
like impossibility of perfectly distinguishing non-orthogonal quantum states
and the impossibility of superluminal communication) while being loss-resistant
and implementable with commercial QKD devices (due to the use of SARG04).
However, their Quantum Oblivious Key Distribution (QOKD) protocol requires a
communication complexity of O(N log N). Since modern databases can be extremely
large, it is important to reduce this communication as much as possible. In
this paper, we first suggest a modification of their protocol wherein the
number of qubits that need to be exchanged is reduced to O(N). A subsequent
generalization reduces the quantum communication complexity even further in
such a way that only a few hundred qubits need to be transferred even for
very large databases.
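The 1-out-of-N OT functionality defined above can be made concrete with a minimal sketch. The code below models only the *ideal* functionality, as if a trusted third party existed; a real protocol must reproduce this input/output behavior without any trusted party, which is where the quantum or computational machinery comes in. All names are illustrative.

```python
# Ideal 1-out-of-N Oblivious Transfer functionality, as defined in the
# abstract above, modeled with a trusted third party. This is NOT a
# protocol: it only pins down the required input/output behavior.

def ideal_oblivious_transfer(db: list, b: int) -> int:
    """Bob's input: the N-bit database `db`.
    Alice's input: her choice index `b`.
    Alice learns db[b] and nothing else; Bob learns nothing about b."""
    return db[b]

db = [0, 1, 1, 0, 1]   # Bob's database DB of N = 5 bits
b = 2                  # Alice's secret choice
assert ideal_oblivious_transfer(db, b) == 1
```

Any candidate protocol, quantum or classical, is judged against this functionality: its correctness and privacy requirements are exactly conditions (1) and (2) above.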
Separating Two-Round Secure Computation From Oblivious Transfer
We consider the question of minimizing the round complexity of protocols for secure multiparty computation (MPC) with security against an arbitrary number of semi-honest parties. Very recently, Garg and Srinivasan (Eurocrypt 2018) and Benhamouda and Lin (Eurocrypt 2018) constructed such 2-round MPC protocols from minimal assumptions. This was done by showing a round preserving reduction to the task of secure 2-party computation of the oblivious transfer functionality (OT). These constructions made a novel non-black-box use of the underlying OT protocol. The question remained whether this can be done by only making black-box use of 2-round OT. This is of theoretical and potentially also practical value as black-box use of primitives tends to lead to more efficient constructions.
Our main result proves that such a black-box construction is impossible, namely that non-black-box use of OT is necessary. As a corollary, a similar separation holds when starting with any 2-party functionality other than OT.
As a secondary contribution, we prove several additional results that further clarify the landscape of black-box MPC with minimal interaction. In particular, we complement the separation from 2-party functionalities by presenting a complete 4-party functionality, give evidence for the difficulty of ruling out a complete 3-party functionality and for the difficulty of ruling out black-box constructions of 3-round MPC from 2-round OT, and separate a relaxed "non-compact" variant of 2-party homomorphic secret sharing from 2-round OT.
Defeating classical bit commitments with a quantum computer
It has been recently shown by Mayers that no bit commitment scheme is secure
if the participants have unlimited computational power and technology. However
it was noticed that a secure protocol could be obtained by forcing the cheater
to perform a measurement. Similar situations had been encountered previously in
the design of Quantum Oblivious Transfer. The question is whether a classical
bit commitment could be used for this specific purpose. We demonstrate that,
surprisingly, classical unconditionally concealing bit commitments do not help.
When private set intersection meets big data : an efficient and scalable protocol
Large-scale data processing brings new challenges to the design of privacy-preserving protocols: how to meet the increasing speed and throughput requirements of modern applications, and how to scale up smoothly when the data being protected is big. Efficiency and scalability have become critical criteria for privacy-preserving protocols in the age of Big Data. In this paper, we present a new Private Set Intersection (PSI) protocol that is extremely efficient and highly scalable compared with existing protocols. The protocol is based on a novel approach that we call oblivious Bloom intersection. It has linear complexity and relies mostly on efficient symmetric-key operations. It is highly scalable because most operations can be parallelized easily. The protocol has two versions: a basic protocol and an enhanced protocol; the security of the two variants is analyzed and proved in the semi-honest model and the malicious model, respectively. A prototype of the basic protocol has been built. We report the results of a performance evaluation and compare it against the two previously fastest PSI protocols. Our protocol is orders of magnitude faster than both. To compute the intersection of two million-element sets, our protocol needs only 41 seconds (80-bit security) or 339 seconds (256-bit security) on moderate hardware in parallel mode.
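The combinatorial core of a Bloom-filter intersection can be sketched in plaintext. The secure protocol described above additionally hides the filters behind oblivious transfers; the sketch below shows only the underlying data-structure idea, with hypothetical parameter choices (filter size M, hash count K) that are not taken from the paper.

```python
import hashlib

# Plaintext sketch of Bloom-filter set intersection: build one filter
# per set, AND them bitwise, and keep the elements of one set that
# still match. Parameters M and K are illustrative, not the paper's.
M, K = 1024, 3

def positions(item: str):
    # K index positions derived from K independent hashes of the item.
    return [int(hashlib.sha256(f"{i}|{item}".encode()).hexdigest(), 16) % M
            for i in range(K)]

def bloom(items):
    bits = [0] * M
    for item in items:
        for p in positions(item):
            bits[p] = 1
    return bits

def intersect(set_a, set_b):
    # AND the two filters, then keep A's elements that still match.
    joint = [x & y for x, y in zip(bloom(set_a), bloom(set_b))]
    return {a for a in set_a if all(joint[p] for p in positions(a))}

result = intersect({"alice", "bob", "carol"}, {"bob", "carol", "dave"})
# Always contains "bob" and "carol"; non-members appear only on a
# (rare) Bloom-filter false positive.
```

Note the one-sided error inherited from Bloom filters: every true intersection element is reported, while a non-member slips in only if all K of its bit positions happen to collide with set bits in both filters.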
Why Quantum Bit Commitment And Ideal Quantum Coin Tossing Are Impossible
There have been well-known claims of unconditionally secure quantum protocols
for bit commitment. However, we, and independently Mayers, showed that all
proposed quantum bit commitment schemes are, in principle, insecure because the
sender, Alice, can almost always cheat successfully by using an
Einstein-Podolsky-Rosen (EPR) type of attack and delaying her measurements. One
might wonder if secure quantum bit commitment protocols exist at all. We answer
this question by showing that the same type of attack by Alice will, in
principle, break any bit commitment scheme. The cheating strategy generally
requires a quantum computer. We emphasize the generality of this ``no-go
theorem'': Unconditionally secure bit commitment schemes based on quantum
mechanics---fully quantum, classical or quantum but with measurements---are all
ruled out by this result. Since bit commitment is a useful primitive for
building up more sophisticated protocols such as zero-knowledge proofs, our
results cast very serious doubt on the security of quantum cryptography in the
so-called ``post-cold-war'' applications. We also show that ideal quantum coin
tossing is impossible because of the EPR attack. This no-go theorem for ideal
quantum coin tossing may help to shed some light on the possibility of
non-ideal protocols.
Comment: Accepted for publication in a special issue of Physica D. Extended
version of an earlier manuscript (quant-ph/9605026) which appeared in the
proceedings of PHYSCOMP'9
Converses for Secret Key Agreement and Secure Computing
We consider information theoretic secret key agreement and secure function
computation by multiple parties observing correlated data, with access to an
interactive public communication channel. Our main result is an upper bound on
the secret key length, which is derived using a reduction of binary hypothesis
testing to multiparty secret key agreement. Building on this basic result, we
derive new converses for multiparty secret key agreement. Furthermore, we
derive converse results for the oblivious transfer problem and the bit
commitment problem by relating them to secret key agreement. Finally, we derive
a necessary condition for the feasibility of secure computation by trusted
parties that seek to compute a function of their collective data, using an
interactive public communication that by itself does not give away the value of
the function. In many cases, we strengthen and improve upon previously known
converse bounds. Our results are single-shot and use only the given joint
distribution of the correlated observations. For the case when the correlated
observations consist of independent and identically distributed (in time)
sequences, we derive strong versions of previously known converses.
ARM2GC: Succinct Garbled Processor for Secure Computation
We present ARM2GC, a novel secure computation framework based on Yao's
Garbled Circuit (GC) protocol and the ARM processor. It allows users to develop
privacy-preserving applications using standard high-level programming languages
(e.g., C) and compile them using off-the-shelf ARM compilers (e.g., gcc-arm).
The main enabler of this framework is the introduction of SkipGate, an
algorithm that dynamically omits the communication and encryption cost of the
gates whose outputs are independent of the private data. SkipGate greatly
enhances the performance of ARM2GC by omitting the costs of gates associated
with the instructions of the compiled binary, which is known to both parties
involved in the computation. Our evaluation on benchmark functions demonstrates
that ARM2GC not only outperforms the current GC frameworks that support
high-level languages, it also achieves efficiency comparable to the best prior
solutions based on hardware description languages. Moreover, in contrast to
previous high-level frameworks with domain-specific languages and customized
compilers, ARM2GC relies on the standard ARM compiler, which is rigorously
verified and supports programs written in the standard syntax.
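The SkipGate idea described above can be illustrated with a toy circuit evaluator: a gate whose inputs are all derived from public values (such as the instruction stream of a binary known to both parties) is computed locally and excluded from the garbling cost, while any gate touching private data must be garbled. The gate/wire representation below is hypothetical, not ARM2GC's actual format.

```python
# Toy illustration of the SkipGate principle: count how many gates of a
# Boolean circuit actually incur garbled-circuit cost when public-input
# gates are evaluated in the clear. Representation is illustrative only.

PUBLIC, PRIVATE = "public", "private"

def evaluate(circuit, wires):
    garbled_gates = 0
    for out, op, a, b in circuit:
        va, ta = wires[a]
        vb, tb = wires[b]
        if ta == PUBLIC and tb == PUBLIC:
            # Both inputs public: compute locally, no garbling needed.
            wires[out] = (op(va, vb), PUBLIC)
        else:
            # At least one private input: this gate must be garbled.
            wires[out] = (op(va, vb), PRIVATE)  # value kept for clarity
            garbled_gates += 1
    return garbled_gates

AND = lambda x, y: x & y
XOR = lambda x, y: x ^ y
wires = {"p0": (1, PUBLIC), "p1": (0, PUBLIC), "s0": (1, PRIVATE)}
circuit = [("w0", AND, "p0", "p1"),   # public AND public -> skipped
           ("w1", XOR, "w0", "s0")]   # touches private data -> garbled
assert evaluate(circuit, wires) == 1  # only one gate incurs GC cost
```

In ARM2GC the same classification happens at the scale of a processor circuit executing a public binary, which is why most gates of each emulated instruction can be skipped.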