Probabilistic Polynomials and Hamming Nearest Neighbors
We show how to compute any symmetric Boolean function on $n$ variables over any field (as well as the integers) with a probabilistic polynomial of degree $O(\sqrt{n \log(1/\epsilon)})$ and error at most $\epsilon$. The degree dependence on $n$ and $\epsilon$ is optimal, matching a lower bound of Razborov (1987) and Smolensky (1987) for the MAJORITY function. The proof is constructive: a low-degree polynomial can be efficiently sampled from the distribution.
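To make the notion concrete: a probabilistic polynomial for a Boolean function $f$ is a distribution on low-degree polynomials that, on each fixed input, agrees with $f$ except with probability $\epsilon$. The sketch below samples the classic Razborov-style probabilistic polynomial for OR, of degree $O(\log(1/\epsilon))$; it illustrates the definition only, not the paper's construction for general symmetric functions.

```python
import math
import random

def sample_or_polynomial(n, eps):
    """Sample Razborov's probabilistic polynomial for OR over F_2.

    Each of t = ceil(log2(1/eps)) rounds picks a uniformly random
    subset S_i of the n variables; the sampled polynomial is
        p(x) = 1 - prod_i (1 - sum_{j in S_i} x_j)   (mod 2),
    which has degree t. For any fixed x, Pr[p(x) != OR(x)] <= eps:
    if x = 0 then p(x) = 0 always, and if x != 0 each round's parity
    is 0 with probability 1/2, so all t fail with probability 2^-t.
    """
    t = max(1, math.ceil(math.log2(1 / eps)))
    subsets = [[j for j in range(n) if random.random() < 0.5] for _ in range(t)]

    def p(x):
        prod = 1
        for S in subsets:
            s = sum(x[j] for j in S) % 2
            prod *= (1 - s)
        return 1 - prod  # 0/1-valued, degree <= t

    return p

# Empirical check on a fixed input with OR(x) = 1.
n, eps = 20, 0.05
x = [0] * n
x[3] = 1
errs = sum(sample_or_polynomial(n, eps)(x) != 1 for _ in range(2000))
print("empirical error:", errs / 2000)  # concentrates below eps
```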
This polynomial construction is combined with other algebraic ideas to give the first subquadratic time algorithm for computing a (worst-case) batch of Hamming distances in superlogarithmic dimensions, exactly. To illustrate, let $c(n) \geq 1$. Suppose we are given a database $D$ of $n$ vectors in $\{0,1\}^{c(n) \log n}$ and a collection $Q$ of $n$ query vectors in the same dimension. For all $u \in Q$, we wish to compute a $v \in D$ with minimum Hamming distance from $u$. We solve this problem in $n^{2 - 1/O(c(n) \log^2 c(n))}$ randomized time. Hence, the problem is in "truly subquadratic" time for $O(\log n)$ dimensions, and in subquadratic time for $o\left(\log^2 n / (\log \log n)^2\right)$ dimensions. We apply the algorithm to computing pairs with maximum inner product, closest pair in $\ell_1$ for vectors with bounded integer entries, and pairs with maximum Jaccard coefficients.

Comment: 16 pages. To appear in the 56th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2015).
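For contrast with the subquadratic result, the naive algorithm below solves the same batch problem with $\Theta(n^2)$ distance computations, by packing each vector into an integer and using popcounts. A baseline sketch only (assumes Python 3.10+ for `int.bit_count`); the paper's algorithm instead batch-evaluates a probabilistic polynomial on many pairs at once.

```python
import random

def batch_hamming_nn(database, queries):
    """Naive O(|Q| * |D|) batch Hamming nearest neighbor.

    Vectors are packed into Python ints, so each distance is one XOR
    plus a popcount. This is the quadratic baseline the paper improves
    on for superlogarithmic dimensions.
    """
    out = []
    for u in queries:
        best = min(database, key=lambda v: (u ^ v).bit_count())
        out.append((best, (u ^ best).bit_count()))
    return out

d, n = 64, 1000
D = [random.getrandbits(d) for _ in range(n)]
Q = [random.getrandbits(d) for _ in range(n)]
print(batch_hamming_nn(D, Q)[:2])  # nearest vector and distance per query
```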
Convolutional entanglement distillation
We develop a theory of entanglement distillation that exploits a convolutional coding structure. We provide a method for converting an arbitrary classical binary or quaternary convolutional code into a convolutional entanglement distillation protocol. The yield and error-correcting properties of such a protocol depend, respectively, on the rate and error-correcting properties of the imported classical convolutional code. In a convolutional entanglement distillation protocol, two parties sharing noisy ebits can distill noiseless ebits online as they acquire more noisy ebits, and this online protocol reduces decoding complexity.
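For readers unfamiliar with the classical ingredient: a convolutional encoder slides a short shift register over the input stream, and the distillation protocol's yield and error-correcting behavior is inherited from the rate and distance of this imported code. A minimal sketch of the textbook rate-1/2 binary encoder with generators (7, 5) in octal; this is the classical code only, not the distillation protocol.

```python
def conv_encode(bits, g1=0b111, g2=0b101, k=3):
    """Rate-1/2 binary convolutional encoder, generators (7,5) octal.

    A length-k shift register slides over the input; each input bit
    emits two output bits, the parities of the register positions
    selected by g1 and g2.
    """
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)  # shift in the new bit
        out.append(bin(state & g1).count("1") % 2)   # parity under g1
        out.append(bin(state & g2).count("1") % 2)   # parity under g2
    return out

print(conv_encode([1, 0, 1, 1]))  # 8 output bits for 4 input bits
```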
Analysis of BCNS and NewHope Key-exchange Protocols
Lattice-based cryptographic primitives are believed to offer resilience against attacks by quantum computers. Following increasing interest from both companies and government agencies in building quantum computers, a number of works have proposed instantiations of practical post-quantum key-exchange protocols based on hard problems in lattices, mainly the Ring Learning With Errors (R-LWE) problem.
In this work we present an analysis of Ring-LWE based key-exchange mechanisms and compare two implementations of the Ring-LWE based key-exchange protocol: BCNS and NewHope. This comparison matters because the NewHope implementation outperforms the state-of-the-art elliptic-curve Diffie-Hellman key exchange X25519, showing that quantum-safe key exchange is not only a viable option but also a faster one. Specifically, this thesis compares the two protocols' reconciliation methods, parameter choices, noise sampling algorithms, and performance.
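For orientation, the shape shared by the key exchanges under comparison is the same: both parties publish noisy products with a common ring element and derive approximately equal keys, which reconciliation must make exactly equal. Below is a toy, insecure sketch with tiny parameters and naive rounding standing in for the BCNS/NewHope reconciliation mechanisms.

```python
import random

# Toy Ring-LWE key exchange in R_Q = Z_Q[x]/(x^N + 1) -- illustration only.
# The parameters are tiny and this code is NOT secure; NewHope uses
# N = 1024, Q = 12289, centered-binomial noise, and a careful
# reconciliation mechanism rather than the naive rounding used here.
N, Q = 16, 12289

def polymul(a, b):
    """Multiply in Z_Q[x]/(x^N + 1) (negacyclic convolution)."""
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < N:
                c[k] = (c[k] + ai * bj) % Q
            else:
                c[k - N] = (c[k - N] - ai * bj) % Q  # x^N = -1
    return c

add = lambda a, b: [(x + y) % Q for x, y in zip(a, b)]
noise = lambda: [random.randint(-1, 1) % Q for _ in range(N)]

a = [random.randrange(Q) for _ in range(N)]   # shared public parameter

s1, e1 = noise(), noise()                     # Alice's secret and error
b1 = add(polymul(a, s1), e1)                  # Alice -> Bob

s2, e2 = noise(), noise()                     # Bob's secret and error
b2 = add(polymul(a, s2), e2)                  # Bob -> Alice

k1 = polymul(b2, s1)   # = a*s1*s2 + e2*s1  (approximately shared)
k2 = polymul(b1, s2)   # = a*s1*s2 + e1*s2

# Naive "reconciliation": round each coefficient to the nearest multiple
# of Q/2. Real protocols (BCNS, NewHope) send extra hint bits so both
# sides agree on all coefficients.
bits = lambda v: [1 if Q // 4 <= x < 3 * Q // 4 else 0 for x in v]
agree = sum(x == y for x, y in zip(bits(k1), bits(k2)))
print(f"{agree}/{N} key bits agree (real reconciliation fixes the rest)")
```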
Unconditionally Secure Multiparty Computation for Symmetric Functions with Low Bottleneck Complexity
Bottleneck complexity is an efficiency measure of secure multiparty computation (MPC) introduced by Boyle et al. (ICALP 2018) to achieve load-balancing. Roughly speaking, it is defined as the maximum communication complexity required by any player within the protocol execution. Since it is impossible to achieve bottleneck complexity sublinear in the number $n$ of players for all functions, a prior work constructed MPC protocols with low bottleneck complexity for specific functions, including the AND function and general symmetric functions. However, the previous protocol for a symmetric function needs to assume a computational primitive of garbled circuits. Its unconditionally secure variant has bottleneck complexity exponential in the depth of an arithmetic formula computing the function, which limits the class of symmetric functions the protocol can compute with bottleneck complexity sublinear in $n$. In this paper, we propose for the first time unconditionally secure MPC protocols computing any symmetric function with bottleneck complexity sublinear in $n$. Our first protocol is an application of the one-time truth-table protocol by Ishai et al. (TCC 2013). We devise a novel technique to express the truth table as an array of two or higher dimensions and obtain two other protocols with better trade-offs. We also propose an unconditionally secure protocol with lower bottleneck complexity tailored to the AND function. It avoids the pseudorandom functions used by the previous protocol, preserving bottleneck complexity up to a factor logarithmic in $n$. As an application, we construct an unconditionally secure protocol for private set intersection (PSI), which computes the intersection of players' private sets. This is the first PSI protocol with bottleneck complexity sublinear in $n$; to the best of our knowledge, there has been no such protocol even under cryptographic assumptions.
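The structural fact behind all such protocols is that a symmetric function depends only on the Hamming weight of the inputs, which can be aggregated along a chain so that each player sends a single value in $\mathbb{Z}_{n+1}$, i.e., $O(\log n)$ bits. Below is a toy, semi-honest simulation of that aggregation; it captures the low-bottleneck spirit only, not the paper's one-time truth-table protocols, and its masking is deliberately simplistic.

```python
import random

def chain_symmetric_mpc(inputs, f):
    """Toy semi-honest chain protocol for a symmetric function f.

    f depends only on the Hamming weight w = sum of input bits.
    Player i adds x_i plus a fresh mask r_i to a running total mod n+1
    and forwards it; the masks are (unrealistically, for this sketch)
    assumed known to the last player, who unmasks w and outputs f(w).
    Each message is one element of Z_{n+1}: O(log n) bits per player.
    """
    n = len(inputs)
    m = n + 1
    masks = [random.randrange(m) for _ in range(n)]
    acc = 0
    for x, r in zip(inputs, masks):
        acc = (acc + x + r) % m        # the only message this player sends
    w = (acc - sum(masks)) % m         # last player unmasks the weight
    return f(w)

maj = lambda w: int(w > 2)            # hypothetical f: MAJORITY of 5 bits
print(chain_symmetric_mpc([1, 0, 1, 1, 0], maj))  # -> 1
```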
Smaller ACC0 Circuits for Symmetric Functions
What is the power of constant-depth circuits with $\mathrm{MOD}_m$ gates, that can count modulo $m$? Can they efficiently compute MAJORITY and other symmetric functions? When $m$ is a constant prime power, the answer is well understood: Razborov and Smolensky proved in the 1980s that MAJORITY and $\mathrm{MOD}_q$ require super-polynomial-size $\mathrm{MOD}_m$ circuits, where $q$ is any prime power not dividing $m$. However, relatively little is known about the power of $\mathrm{MOD}_m$ circuits for non-prime-power $m$. For example, it is still open whether every problem in $\mathsf{EXP}$ can be computed by depth-$3$ circuits of polynomial size and only $\mathrm{MOD}_6$ gates.
We shed some light on the difficulty of proving lower bounds for $\mathrm{MOD}_m$ circuits, by giving new upper bounds. We construct $\mathrm{MOD}_m$ circuits computing symmetric functions with non-prime-power $m$, with size-depth tradeoffs that beat the longstanding lower bounds for $\mathrm{MOD}_q$ circuits for prime power $q$. Our size-depth tradeoff circuits have essentially optimal dependence on $m$ and the depth $d$ in the exponent, under a natural circuit complexity hypothesis. For example, we show, for every $\varepsilon > 0$, that every symmetric function can be computed with depth-3 $\mathrm{MOD}_m$ circuits of $2^{O(n^{\varepsilon})}$ size, for a constant $m$ depending only on $\varepsilon$. That is, depth-$3$ $\mathrm{CC}^0$ circuits can compute any symmetric function in \emph{subexponential} size. This demonstrates a significant difference in the power of depth-$3$ $\mathrm{CC}^0$ circuits, compared to other models: for certain symmetric functions, depth-$3$ $\mathrm{AC}^0$ circuits require $2^{\Omega(\sqrt{n})}$ size [Håstad 1986], and depth-$3$ $\mathrm{AC}^0[q]$ circuits (for fixed prime power $q$) require $2^{n^{\Omega(1)}}$ size [Smolensky 1987]. Even for depth-two $\mathrm{MOD}_q \circ \mathrm{MOD}_m$ circuits, exponential lower bounds were known [Barrington, Straubing, Thérien 1990].

Comment: 15 pages; abstract edited to fit arXiv requirements.
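Part of the extra power of non-prime-power moduli comes from the Chinese remainder theorem: a $\mathrm{MOD}_6$ count carries both a mod-2 and a mod-3 count at once, pinning down the Hamming weight more finely than either prime alone. A small numeric check of that fact (intuition only, not the paper's circuit construction):

```python
# CRT intuition for why MOD_6 gates can beat MOD_2 or MOD_3 alone:
# the pair (w mod 2, w mod 3) determines w mod 6, so mod-6 counting
# locates the Hamming weight w within residue classes of size 6.
for w in range(12):
    r2, r3 = w % 2, w % 3
    # CRT reconstruction of w mod 6 from the pair (r2, r3):
    w6 = next(v for v in range(6) if v % 2 == r2 and v % 3 == r3)
    assert w6 == w % 6
print("(w mod 2, w mod 3) determines w mod 6 for all w: OK")
```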
Born and Raised Distributively: Fully Distributed Non-Interactive Adaptively-Secure Threshold Signatures with Short Shares
Threshold cryptography is a fundamental distributed computational paradigm for enhancing the availability and the security of cryptographic public-key schemes. It does so by dividing private keys into shares handed out to distinct servers. In threshold signature schemes, a set of at least $t+1$ servers is needed to produce a valid digital signature. Availability is assured by the fact that any subset of $t+1$ servers can produce a signature when authorized. At the same time, the scheme should remain robust (in the fault-tolerance sense) and unforgeable (cryptographically) against up to $t$ corrupted servers; i.e., it adds quorum control to traditional cryptographic services and introduces redundancy.

Originally, most practical threshold signatures have a number of demerits: they have been analyzed in a static corruption model (where the set of corrupted servers is fixed at the very beginning of the attack), they require interaction, they assume a trusted dealer in the key generation phase (so that the system is not fully distributed), or they suffer from certain overheads in terms of storage (large share sizes).

In this paper, we construct practical fully distributed (the private key is born distributed), non-interactive schemes -- where the servers can compute their partial signatures without communication with other servers -- with adaptive security (i.e., the adversary corrupts servers dynamically based on its full view of the history of the system). Our schemes are very efficient in terms of computation, communication, and scalable storage (with private key shares of size $O(1)$, where certain solutions incur $O(n)$ storage costs at each server). Unlike other adaptively secure schemes, our schemes are erasure-free (reliable erasure is a hard-to-assure and hard-to-administer property in actual systems). To the best of our knowledge, such a fully distributed, highly constrained scheme has been an open problem in the area. In particular, and of special interest, is the fact that Pedersen's traditional distributed key generation (DKG) protocol can be safely employed in the initial key generation phase when the system is born -- although it is well known not to ensure uniformly distributed public keys. An advantage of this is that this protocol takes only one round optimistically (in the absence of faulty players).
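For context on the share-size claim: in Shamir-style threshold schemes each server stores a single field element, which is what constant-size shares refers to. A minimal sketch of $(t+1)$-out-of-$n$ Shamir sharing, the arithmetic substrate of such schemes (illustrative only; this is not the paper's signature scheme or Pedersen's DKG):

```python
import random

P = 2**127 - 1  # a Mersenne prime field; arbitrary choice for this sketch

def share(secret, t, n):
    """Shamir (t+1)-out-of-n sharing: each share is ONE field element."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    evaluate = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(i, evaluate(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 from any t+1 shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

key = random.randrange(P)
shares = share(key, t=2, n=5)
assert reconstruct(shares[:3]) == key   # any 3 of the 5 shares suffice
print("reconstructed OK")
```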