
    Shorter lattice-based zero-knowledge proofs for the correctness of a shuffle

    In an electronic voting procedure, mix networks are used to ensure the anonymity of the cast votes. Each node of the network re-encrypts the input list of ciphertexts and randomly permutes it in a process called a shuffle, and must prove in zero-knowledge that the process was applied honestly. To keep such a process secure in a post-quantum scenario, new proofs are based on different mathematical assumptions, such as lattice-based problems. Nonetheless, the best lattice-based protocols for verifiable shuffling have communication complexity linear in N, the number of shuffled ciphertexts. In this paper we propose the first sub-linear (in N) post-quantum zero-knowledge argument for the correctness of a shuffle, built mainly on two ideas: the arithmetic circuit satisfiability results of Baum et al. (CRYPTO 2018) and Beneš networks to model a permutation of N elements. The communication complexity of our protocol with respect to N is O(√N · log²(N)), but we also highlight its dependency on other important parameters of the underlying lattice ingredients. The work is partially supported by the Spanish Ministerio de Ciencia e Innovación (MICINN) under Project PID2019-109379RB-I00, and by the European Union PROMETHEUS project (Horizon 2020 Research and Innovation Program, grant 780701). The authors thank Tjerand Silde for pointing out an incorrect set of parameters (Section 4.1) proposed in a previous version of the manuscript.
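
    The Beneš-network idea can be made concrete. The following Python sketch (ours, not code from the paper) routes an arbitrary permutation of N = 2^k wires through a Beneš network using the classic looping algorithm; the point is that any permutation is realized by 2·log₂(N) − 1 stages of N/2 binary switches, i.e. only O(N log N) selector bits for a circuit-satisfiability argument to commit to.

        def route_benes(perm):
            """Switch settings realizing perm (perm[i] = j: input wire i -> output j).

            Returns a nested dict: input/output stage settings (True = crossed)
            plus the recursively routed half-size subnetworks.
            """
            n = len(perm)
            assert n >= 2 and n & (n - 1) == 0, "N must be a power of two"
            if n == 2:
                return {"switch": perm[0] == 1}   # a single 2x2 switch
            inv = [0] * n
            for i, j in enumerate(perm):
                inv[j] = i
            # 2-colour the wires: 0 = route via the top half, 1 = via the bottom.
            # Paired input wires (2s, 2s+1) must differ, as must the sources of
            # paired output wires; the constraint graph is a union of even cycles.
            color = [None] * n
            for start in range(n):
                i, c = start, 0
                while color[i] is None:
                    color[i] = c
                    i2 = inv[perm[i] ^ 1]   # source of the sibling output wire
                    color[i2] = 1 - c       # forced into the other half
                    i = i2 ^ 1              # its sibling input gets colour c again
            # Half-size permutations handled by the two recursive subnetworks.
            top, bot = [0] * (n // 2), [0] * (n // 2)
            for s in range(n // 2):
                a, b = (2 * s, 2 * s + 1) if color[2 * s] == 0 else (2 * s + 1, 2 * s)
                top[s], bot[s] = perm[a] // 2, perm[b] // 2
            return {"in":  [color[2 * s] == 1 for s in range(n // 2)],
                    "out": [color[inv[2 * t]] == 1 for t in range(n // 2)],
                    "top": route_benes(top), "bot": route_benes(bot)}

    For example, route_benes([2, 0, 3, 1]) returns the full switch schedule; flattening the nested booleans yields the permutation's witness bits.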

    Arya: Nearly linear-time zero-knowledge proofs for correct program execution

    There have been tremendous advances in reducing interaction, communication, and verification time in zero-knowledge proofs, but it remains an important challenge to make the prover efficient. We construct the first zero-knowledge proof of knowledge for the correct execution of a program on public and private inputs where the prover computation is nearly linear time. This saves a polylogarithmic factor in asymptotic performance compared to current state-of-the-art proof systems. We use the TinyRAM model to capture general-purpose processor computation. An instance consists of a TinyRAM program and public inputs. The witness consists of additional private inputs to the program. The prover can use our proof system to convince the verifier that the program terminates with the intended answer within given time and memory bounds. Our proof system has perfect completeness, statistical special honest-verifier zero-knowledge, and computational knowledge soundness assuming linear-time computable collision-resistant hash functions exist. The main advantage of our new proof system is asymptotically efficient prover computation. The prover's running time is only a superconstant factor larger than the program's running time in an apples-to-apples comparison where the prover uses the same TinyRAM model. Our proof system is also efficient on the other performance parameters: the verifier's running time and the communication are sublinear in the execution time of the program, and we use only a log-logarithmic number of rounds.
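
    As a schematic of the statement being proved, here is a toy register machine of our own (the real TinyRAM ISA is much richer): the instance fixes the program and the public tape, the witness is the private tape, and the claim is that execution reaches a given answer within the time bound.

        def run(program, public_tape, private_tape, max_steps):
            """Toy machine: returns the program's answer, or None on timeout."""
            regs, pc = [0] * 4, 0
            pub, priv = iter(public_tape), iter(private_tape)
            for _ in range(max_steps):
                op, a, b = program[pc]
                if op == "add":                      # regs[a] += regs[b] (mod 2^32)
                    regs[a] = (regs[a] + regs[b]) % 2**32
                elif op == "read":                   # read the next word from a tape
                    regs[a] = next(pub if b == 0 else priv, 0)
                elif op == "cjmp":                   # jump to b if regs[a] != 0
                    pc = b if regs[a] else pc + 1
                    continue
                elif op == "answer":                 # conventionally, accept iff 0
                    return regs[a]
                pc += 1
            return None

    In these terms, the paper's prover shows, in time nearly linear in max_steps, knowledge of a private_tape making run(...) return the intended answer, without revealing the tape.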

    On the Size of Pairing-Based Non-interactive Arguments

    Non-interactive arguments enable a prover to convince a verifier that a statement is true. Recently there has been a lot of progress, both in theory and practice, on constructing highly efficient non-interactive arguments with small size and low verification complexity: so-called succinct non-interactive arguments (SNARGs) and succinct non-interactive arguments of knowledge (SNARKs). Many constructions of SNARGs rely on pairing-based cryptography; in these constructions a proof consists of a number of group elements and verification consists of checking a number of pairing product equations. The question we address in this article is how efficient pairing-based SNARGs can be. Our first contribution is a pairing-based (preprocessing) SNARK for arithmetic circuit satisfiability, which is an NP-complete language. In our SNARK we work with asymmetric pairings for higher efficiency: a proof is only 3 group elements, and verification consists of checking a single pairing product equation using 3 pairings in total. Our SNARK is zero-knowledge and does not reveal anything about the witness the prover uses to make the proof. As our second contribution we answer an open question of Bitansky, Chiesa, Ishai, Ostrovsky and Paneth (TCC 2013) by showing that linear interactive proofs cannot have a linear decision procedure. It follows that SNARGs where the prover and verifier use generic asymmetric bilinear group operations cannot consist of a single group element. This gives the first lower bound for pairing-based SNARGs. It remains an intriguing open problem whether this lower bound can be extended to rule out 2-group-element SNARGs, which would prove the optimality of our 3-element construction.
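
    For orientation, and in our own notation rather than the paper's: a 3-element proof (A, B, C), with A, C in G₁ and B in G₂, can be checked by a single pairing product equation of the following shape, where the term e(g^α, h^β) is independent of the statement and can be precomputed, leaving the 3 online pairings mentioned above.

        \[
          e(A,\,B) \;=\; e\bigl(g^{\alpha},\,h^{\beta}\bigr)\cdot
          e\Bigl(\prod_{i=0}^{\ell} S_i^{\,a_i},\;h^{\gamma}\Bigr)\cdot
          e\bigl(C,\,h^{\delta}\bigr)
        \]
        % a_0, ..., a_l are the public inputs; the S_i in G_1 and the elements
        % g^alpha, h^beta, h^gamma, h^delta come from the preprocessing (CRS).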

    Square Span Programs with Applications to Succinct NIZK Arguments

    We use square span programs (SSPs) to construct succinct non-interactive zero-knowledge arguments of knowledge. For performance, our proof system is defined over Type III bilinear groups; proofs consist of just 4 group elements, verified in just 6 pairings. Concretely, using the Pinocchio libraries, we estimate that proofs will consist of 160 bytes, verified in less than 6 ms.
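
    Our reading of the SSP acceptance condition, illustrated with a toy example of our own (the field and polynomials below are not the paper's): an SSP (v_0, ..., v_m; t) accepts an assignment (a_1, ..., a_m) iff t(X) divides (v_0(X) + Σ_i a_i·v_i(X))² − 1. A minimal sympy sketch:

        from sympy import GF, Poly, symbols

        X = symbols("X")
        F = GF(97)  # toy prime field, far smaller than a pairing-friendly one

        def ssp_accepts(v, t, assignment):
            """v = [v_0, ..., v_m] as Polys over F; assignment = [a_1, ..., a_m]."""
            p = v[0]
            for a_i, v_i in zip(assignment, v[1:]):
                p = p + a_i * v_i
            _, remainder = (p * p - Poly(1, X, domain=F)).div(t)
            return remainder.is_zero

        # This tiny SSP accepts exactly the boolean values a in {0, 1}:
        v0 = Poly(X - 1, X, domain=F)
        v1 = Poly(2*X + 2, X, domain=F)
        t  = Poly(X, X, domain=F)
        assert ssp_accepts([v0, v1], t, [1]) and not ssp_accepts([v0, v1], t, [2])

    The square in the acceptance condition is what lets a single polynomial identity capture each gate, which is where the small proof and pairing counts come from.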

    Sparse multivariate polynomial interpolation in the basis of Schubert polynomials

    Schubert polynomials were discovered by A. Lascoux and M. Schützenberger in the study of cohomology rings of flag manifolds in the 1980s. These polynomials generalize Schur polynomials and form a linear basis of multivariate polynomials. In 2003, Lenart and Sottile introduced skew Schubert polynomials, which generalize skew Schur polynomials and expand in the Schubert basis with the generalized Littlewood-Richardson coefficients. In this paper we initiate the study of these two families of polynomials from the perspective of computational complexity theory. We first observe that skew Schubert polynomials, and therefore Schubert polynomials, are in #P (when evaluated on non-negative integral inputs) and VNP. Our main result is a deterministic algorithm that computes the expansion of a polynomial f of degree d in Z[x_1, ..., x_n] in the basis of Schubert polynomials, assuming an oracle computing Schubert polynomials. This algorithm runs in time polynomial in n, d, and the bit size of the expansion. This generalizes, and derandomizes, the sparse interpolation algorithm for symmetric polynomials in the Schur basis by Barvinok and Fomin (Advances in Applied Mathematics, 18(3):271-285). In fact, our interpolation algorithm is general enough to accommodate any linear basis satisfying certain natural properties. Applications of the above results include a new algorithm that computes the generalized Littlewood-Richardson coefficients.
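
    To make the objects concrete (an illustration of ours, not the paper's interpolation algorithm): Schubert polynomials can be generated by the Lascoux-Schützenberger divided differences ∂_i f = (f − s_i f)/(x_i − x_{i+1}), starting from S_{w0} = x_1^{n−1} x_2^{n−2} ... x_{n−1} for the longest permutation w0 and walking down in length. A symbolic routine of this shape could stand in for the Schubert-polynomial oracle the main result assumes.

        from sympy import cancel, expand, symbols

        def schubert(w):
            """Schubert polynomial of w, in one-line notation, e.g. (1, 3, 2)."""
            n = len(w)
            x = symbols(f"x1:{n + 1}")                    # x1, ..., xn
            if all(w[i] == n - i for i in range(n)):      # w = w0 = (n, n-1, ..., 1)
                out = 1
                for i in range(n):                        # S_{w0} = x1^{n-1}...x_{n-1}
                    out *= x[i] ** (n - 1 - i)
                return out
            i = next(k for k in range(n - 1) if w[k] < w[k + 1])   # first ascent
            w2 = list(w); w2[i], w2[i + 1] = w2[i + 1], w2[i]      # w*s_i is longer
            f = schubert(tuple(w2))
            s_i_f = f.subs({x[i]: x[i + 1], x[i + 1]: x[i]}, simultaneous=True)
            return expand(cancel((f - s_i_f) / (x[i] - x[i + 1])))  # divided difference

        print(schubert((1, 3, 2)))   # x1 + x2: the Schur polynomial s_(1)(x1, x2)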

    Classical simulations of Abelian-group normalizer circuits with intermediate measurements

    Quantum normalizer circuits were recently introduced as generalizations of Clifford circuits [arXiv:1201.4867]: a normalizer circuit over a finite Abelian group G is composed of the quantum Fourier transform (QFT) over G, together with gates which compute quadratic functions and automorphisms. In [arXiv:1201.4867] it was shown that every normalizer circuit can be simulated efficiently classically. This result provides a nontrivial example of a family of quantum circuits that cannot yield exponential speed-ups despite using the QFT, a central quantum algorithmic primitive. Here we extend the aforementioned result in several ways. Most importantly, we show that normalizer circuits supplemented with intermediate measurements can also be simulated efficiently classically, even when the computation proceeds adaptively. This yields a generalization of the Gottesman-Knill theorem (valid for n-qubit Clifford operations [quant-ph/9705052, quant-ph/9807006]) to quantum circuits described by arbitrary finite Abelian groups. Moreover, our simulations are twofold: we present efficient classical algorithms to sample the measurement probability distribution of any adaptive-normalizer computation, as well as to compute the amplitudes of the state vector at every step of it. Finally, we develop a generalization of the stabilizer formalism [quant-ph/9705052, quant-ph/9807006] relative to arbitrary finite Abelian groups: for example, we characterize how to update stabilizers under generalized Pauli measurements, and we provide a normal form for the amplitudes of generalized stabilizer states using quadratic functions and subgroup cosets.
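
    The structure such simulations exploit can be seen numerically in the simplest case (a toy example of ours, with G = Z_12 rather than a general Abelian group): the QFT sends the uniform superposition on a subgroup coset x + H to a superposition supported on the annihilator H^⊥, picking up only linear/quadratic phases, so a (subgroup, offset, phase polynomial) description suffices in place of exponentially many amplitudes.

        import numpy as np

        d, step, x = 12, 3, 2                     # H = <3> in Z_12; coset 2 + H
        H = np.arange(0, d, step)
        state = np.zeros(d, dtype=complex)
        state[(x + H) % d] = 1 / np.sqrt(len(H))  # uniform superposition on 2 + H

        w = np.exp(2j * np.pi / d)                # QFT over Z_d as a d x d matrix
        F = w ** np.outer(np.arange(d), np.arange(d)) / np.sqrt(d)
        out = F @ state

        support = np.nonzero(np.abs(out) > 1e-9)[0]
        print(support)                    # [0 4 8]: the annihilator of <3>
        print(np.round(out[support], 3))  # equal moduli, phases w^(k*x), linear in x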