    Probabilistic Polynomials and Hamming Nearest Neighbors

    We show how to compute any symmetric Boolean function on $n$ variables over any field (as well as the integers) with a probabilistic polynomial of degree $O(\sqrt{n \log(1/\epsilon)})$ and error at most $\epsilon$. The degree dependence on $n$ and $\epsilon$ is optimal, matching a lower bound of Razborov (1987) and Smolensky (1987) for the MAJORITY function. The proof is constructive: a low-degree polynomial can be efficiently sampled from the distribution. This polynomial construction is combined with other algebraic ideas to give the first subquadratic time algorithm for computing a (worst-case) batch of Hamming distances in superlogarithmic dimensions, exactly. To illustrate, let $c(n) : \mathbb{N} \rightarrow \mathbb{N}$. Suppose we are given a database $D$ of $n$ vectors in $\{0,1\}^{c(n) \log n}$ and a collection of $n$ query vectors $Q$ in the same dimension. For all $u \in Q$, we wish to compute a $v \in D$ with minimum Hamming distance from $u$. We solve this problem in $n^{2 - 1/O(c(n) \log^2 c(n))}$ randomized time. Hence, the problem is in "truly subquadratic" time for $O(\log n)$ dimensions, and in subquadratic time for $d = o((\log^2 n)/(\log \log n)^2)$. We apply the algorithm to computing pairs with maximum inner product, closest pair in $\ell_1$ for vectors with bounded integer entries, and pairs with maximum Jaccard coefficients. Comment: 16 pages. To appear in the 56th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2015).
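
    As a point of reference for the problem statement, here is a hypothetical brute-force baseline in Python: it computes the full $n \times n$ Hamming distance matrix in quadratic time, which is exactly the bound the paper's probabilistic-polynomial algorithm beats. The function name and variable names are illustrative, not from the paper.

```python
import numpy as np

def hamming_nearest_neighbors(database: np.ndarray, queries: np.ndarray) -> np.ndarray:
    """For each query vector, return the index of a database vector with
    minimum Hamming distance. Brute force: O(n^2 * d) bit comparisons."""
    # distances[i, j] = Hamming distance between queries[i] and database[j]
    distances = (queries[:, None, :] != database[None, :, :]).sum(axis=2)
    return distances.argmin(axis=1)

rng = np.random.default_rng(0)
n = 512
d = int(np.ceil(np.log2(n)))  # d = c(n) log n with c(n) = 1
D = rng.integers(0, 2, size=(n, d), dtype=np.uint8)
Q = rng.integers(0, 2, size=(n, d), dtype=np.uint8)
print(hamming_nearest_neighbors(D, Q)[:10])
```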

    Convolutional entanglement distillation

    We develop a theory of entanglement distillation that exploits a convolutional coding structure. We provide a method for converting an arbitrary classical binary or quaternary convolutional code into a convolutional entanglement distillation protocol. The yield and error-correcting properties of such a protocol depend respectively on the rate and error-correcting properties of the imported classical convolutional code. In a convolutional entanglement distillation protocol, two parties sharing noisy ebits can distill noiseless ebits online as they acquire more noisy ebits, and this online operation reduces decoding complexity. © 2010 IEEE
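
    For readers unfamiliar with the classical ingredient, the following is a minimal sketch of a rate-1/2 binary convolutional encoder with the textbook (7, 5) octal generators, the kind of code the conversion method imports. The distillation protocol itself is not shown, and the function name is hypothetical.

```python
def convolutional_encode(bits, generators=(0b111, 0b101)):
    """Rate-1/2 binary convolutional encoder, constraint length 3.
    Each input bit shifts into a 3-bit register; every generator taps
    a subset of register bits and emits their parity (XOR)."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111  # shift in the new bit, keep 3 bits
        for g in generators:
            out.append(bin(state & g).count("1") % 2)  # parity of tapped bits
    return out

print(convolutional_encode([1, 0, 1, 1]))  # two output bits per input bit
```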

    Analysis of BCNS and NewHope Key-Exchange Protocols

    Lattice-based cryptographic primitives are believed to offer resilience against attacks by quantum computers. Following increasing interest from both companies and government agencies in building quantum computers, a number of works have proposed instantiations of practical post-quantum key-exchange protocols based on hard problems in lattices, mainly the Ring Learning With Errors (R-LWE) problem. In this work we present an analysis of Ring-LWE based key-exchange mechanisms and compare two implementations of Ring-LWE based key-exchange protocols: BCNS and NewHope. This comparison is important because the NewHope implementation outperforms the state-of-the-art elliptic-curve Diffie-Hellman key exchange X25519, showing that quantum-safe key exchange is not only a viable option but also a faster one. Specifically, this thesis compares different reconciliation methods, parameter choices, noise sampling algorithms, and performance.
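
    To make the Ring-LWE key-exchange idea concrete, here is a toy, insecure-by-construction sketch of the Diffie-Hellman-like flow the two protocols share: both parties exchange noisy products with a public polynomial and end up with approximately equal ring elements. The parameter sizes, noise sampler, and rounding step are illustrative stand-ins; BCNS and NewHope differ precisely in these choices and in the reconciliation mechanism that repairs coefficients near the rounding boundary.

```python
import numpy as np

# Toy Ring-LWE key agreement in R_q = Z_q[x] / (x^n + 1).
# Parameters are illustrative only -- far too small to be secure.
n, q = 16, 12289  # NewHope uses n = 1024 and q = 12289; n = 16 keeps this readable

def poly_mul(a, b):
    """Multiply in Z_q[x]/(x^n + 1): schoolbook convolution, negacyclic wrap-around."""
    full = np.convolve(a, b)
    return (full[:n] - np.append(full[n:], 0)) % q  # x^n = -1

rng = np.random.default_rng(1)
small = lambda: rng.integers(-2, 3, size=n)  # narrow noise; stand-in for a binomial sampler

a = rng.integers(0, q, size=n)            # shared public polynomial
s_A, e_A = small(), small()               # Alice's secret and noise
s_B, e_B = small(), small()               # Bob's secret and noise

b_A = (poly_mul(a, s_A) + e_A) % q        # Alice -> Bob
b_B = (poly_mul(a, s_B) + e_B) % q        # Bob -> Alice

k_A = poly_mul(b_B, s_A)                  # = a*s_A*s_B + e_B*s_A  (mod q)
k_B = poly_mul(b_A, s_B)                  # = a*s_A*s_B + e_A*s_B  (mod q)

# Naive extraction: a coefficient near q/2 encodes 1, near 0 encodes 0. Without
# reconciliation, coefficients near the decision boundary occasionally disagree;
# BCNS and NewHope add a reconciliation step to fix exactly those cases.
bits = lambda k: ((q // 4 < k) & (k < 3 * q // 4)).astype(int)
print("matching key bits:", np.mean(bits(k_A) == bits(k_B)))
```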

    Unconditionally Secure Multiparty Computation for Symmetric Functions with Low Bottleneck Complexity

    Bottleneck complexity is an efficiency measure of secure multiparty computation (MPC) introduced by Boyle et al. (ICALP 2018) to achieve load-balancing. Roughly speaking, it is defined as the maximum communication complexity required by any player within the protocol execution. Since it is impossible to achieve sublinear bottleneck complexity in the number of players $n$ for all functions, a prior work constructed MPC protocols with low bottleneck complexity for specific functions including the AND function and general symmetric functions. However, the previous protocol for a symmetric function needs to assume a computational primitive of garbled circuits. Its unconditionally secure variant has exponentially large bottleneck complexity in the depth of an arithmetic formula computing the function, which limits the class of symmetric functions the protocol can compute with sublinear bottleneck complexity in $n$. In this paper, we propose for the first time unconditionally secure MPC protocols computing any symmetric function with sublinear bottleneck complexity in $n$. Our first protocol is an application of the one-time truth-table protocol by Ishai et al. (TCC 2013). We devise a novel technique to express the truth-table as an array of two or higher dimensions and obtain two other protocols with better trade-offs. We also propose an unconditionally secure protocol with lower bottleneck complexity tailored to the AND function. It avoids pseudorandom functions used by the previous protocol, preserving bottleneck complexity up to a logarithmic factor in $n$. As an application, we construct an unconditionally secure protocol for private set intersection (PSI), which computes the intersection of players' private sets. This is the first PSI protocol with sublinear bottleneck complexity in $n$; to the best of our knowledge, there has been no such protocol even under cryptographic assumptions.
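
    The starting observation behind protocols of this kind is that a symmetric function depends only on the Hamming weight of the inputs. The following toy Python sketch, which is not the paper's protocol and is not secure (the designated player learns the full weight, which can reveal more than the output), only illustrates how a masked running sum passed along a chain keeps every player's communication at O(log n) bits.

```python
import secrets

def chain_protocol(inputs, f):
    """Toy chain protocol for a symmetric function f of bits x_1..x_n.
    Symmetric means f(x) depends only on sum(x), so computing the Hamming
    weight suffices. Each player sends exactly one message of O(log n)
    bits, so the bottleneck complexity is O(log n). NOT the paper's
    protocol: player 1 learns the full weight here."""
    n = len(inputs)
    modulus = n + 1
    r = secrets.randbelow(modulus)       # player 1's one-time mask
    acc = (r + inputs[0]) % modulus
    for x in inputs[1:]:
        # Each intermediate player only sees r + (prefix sum) mod (n+1),
        # which is uniformly distributed from its point of view.
        acc = (acc + x) % modulus
    weight = (acc - r) % modulus         # final message returns to player 1
    return f(weight)

majority = lambda w: int(w > 2)          # MAJORITY of 5 bits, as f(weight)
print(chain_protocol([1, 0, 1, 1, 0], majority))  # weight 3 of 5 -> 1
```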

    On Correlation Bounds Against Polynomials

    Smaller ACC0 Circuits for Symmetric Functions

    What is the power of constant-depth circuits with $MOD_m$ gates, which can count modulo $m$? Can they efficiently compute MAJORITY and other symmetric functions? When $m$ is a constant prime power, the answer is well understood: Razborov and Smolensky proved in the 1980s that MAJORITY and $MOD_m$ require super-polynomial-size $MOD_q$ circuits, where $q$ is any prime power not dividing $m$. However, relatively little is known about the power of $MOD_m$ circuits for non-prime-power $m$. For example, it is still open whether every problem in $EXP$ can be computed by depth-3 circuits of polynomial size and only $MOD_6$ gates. We shed some light on the difficulty of proving lower bounds for $MOD_m$ circuits, by giving new upper bounds. We construct $MOD_m$ circuits computing symmetric functions with non-prime-power $m$, with size-depth tradeoffs that beat the longstanding lower bounds for $AC^0[m]$ circuits for prime power $m$. Our size-depth tradeoff circuits have essentially optimal dependence on $m$ and $d$ in the exponent, under a natural circuit complexity hypothesis. For example, we show for every $\varepsilon > 0$ that every symmetric function can be computed with depth-3 $MOD_m$ circuits of $\exp(O(n^{\varepsilon}))$ size, for a constant $m$ depending only on $\varepsilon$. That is, depth-3 $CC^0$ circuits can compute any symmetric function in subexponential size. This demonstrates a significant difference in the power of depth-3 $CC^0$ circuits compared to other models: for certain symmetric functions, depth-3 $AC^0$ circuits require $2^{\Omega(\sqrt{n})}$ size [Håstad 1986], and depth-3 $AC^0[p^k]$ circuits (for fixed prime power $p^k$) require $2^{\Omega(n^{1/6})}$ size [Smolensky 1987]. Even for depth-two $MOD_p \circ MOD_m$ circuits, $2^{\Omega(n)}$ lower bounds were known [Barrington Straubing Thérien 1990]. Comment: 15 pages; abstract edited to fit arXiv requirements.
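
    A hypothetical illustration of why composite moduli behave differently: by the Chinese Remainder Theorem, a single count modulo 6 simultaneously carries a mod-2 and a mod-3 residue, mixing the two prime characteristics that the Razborov-Smolensky approximation method handles one at a time. The snippet below is a sanity check of this CRT fact only, not a circuit construction from the paper.

```python
def mod6_from_residues(r2, r3):
    """Recover s mod 6 from (s mod 2, s mod 3) by the CRT bijection."""
    return next(s for s in range(6) if s % 2 == r2 and s % 3 == r3)

# The map s mod 6 <-> (s mod 2, s mod 3) is a bijection:
for s in range(12):
    assert mod6_from_residues(s % 2, s % 3) == s % 6

# A symmetric function f(x) depends only on w = sum(x), so a single layer of
# MOD_6 counting already pins down both w mod 2 and w mod 3 at once.
x = [1, 1, 0, 1, 1, 1, 0, 1]
w = sum(x)
print(w % 6, (w % 2, w % 3))  # weight 6 -> residue 0, i.e. residues (0, 0)
```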

    Born and Raised Distributively: Fully Distributed Non-Interactive Adaptively-Secure Threshold Signatures with Short Shares

    Threshold cryptography is a fundamental distributed computational paradigm for enhancing the availability and the security of cryptographic public-key schemes. It does so by dividing private keys into $n$ shares handed out to distinct servers. In threshold signature schemes, a set of at least $t+1 \leq n$ servers is needed to produce a valid digital signature. Availability is assured by the fact that any subset of $t+1$ servers can produce a signature when authorized. At the same time, the scheme should remain robust (in the fault tolerance sense) and unforgeable (cryptographically) against up to $t$ corrupted servers; i.e., it adds quorum control to traditional cryptographic services and introduces redundancy. Originally, most practical threshold signatures had a number of demerits: they have been analyzed in a static corruption model (where the set of corrupted servers is fixed at the very beginning of the attack), they require interaction, they assume a trusted dealer in the key generation phase (so that the system is not fully distributed), or they suffer from certain overheads in terms of storage (large share sizes). In this paper, we construct practical fully distributed (the private key is born distributed), non-interactive schemes -- where the servers can compute their partial signatures without communication with other servers -- with adaptive security (i.e., the adversary corrupts servers dynamically based on its full view of the history of the system). Our schemes are very efficient in terms of computation, communication, and scalable storage (with private key shares of size $O(1)$, where certain solutions incur $O(n)$ storage costs at each server). Unlike other adaptively secure schemes, our schemes are erasure-free (reliable erasure is a property that is hard to assure and hard to administer in actual systems). To the best of our knowledge, such a fully distributed, highly constrained scheme has been an open problem in the area. In particular, and of special interest, is the fact that Pedersen's traditional distributed key generation (DKG) protocol can be safely employed in the initial key generation phase when the system is born, although it is well known not to ensure uniformly distributed public keys. An advantage of this is that this protocol only takes one round optimistically (in the absence of faulty players).
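
    For context, the following is a minimal Python sketch of the Shamir secret sharing that underlies any $t+1$-out-of-$n$ threshold scheme: shares are points on a random degree-$t$ polynomial, so each server stores a single field element (size $O(1)$). The field and the function names are illustrative; the paper's DKG and signing protocols are not shown.

```python
import secrets

P = 2**127 - 1  # a Mersenne prime field; illustrative, not a pairing group

def share(secret, t, n):
    """Shamir sharing: a random degree-t polynomial with constant term
    `secret`; any t+1 of the n evaluations reconstruct it, t reveal nothing."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t)]
    def eval_poly(x):
        acc = 0
        for c in reversed(coeffs):  # Horner's rule
            acc = (acc * x + c) % P
        return acc
    return [(i, eval_poly(i)) for i in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(123456789, t=2, n=5)
print(reconstruct(shares[:3]))  # any t+1 = 3 shares suffice -> 123456789
```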