
    Impossibility of Three Pass Protocol using Public Abelian Groups

    Key transport protocols are designed to transfer a secret key from an initiating principal to other entities in a network. The three-pass protocol is a key transport protocol developed by Adi Shamir in 1980, in which Alice wants to transport a secret message to Bob over an insecure channel although they share no secret information in advance. In this paper, we prove the impossibility of transporting a secret key from one principal to another entity in a network using the three-pass protocol over public Abelian groups. If it were possible to implement the three-pass protocol with public Abelian groups, it could be used in post-quantum cryptography to transport keys with information-theoretic security, without relying on any computationally hard problem.
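    For intuition, here is a minimal sketch of the classical three-pass idea using commuting exponentiation modulo a public prime (a Shamir/Massey-Omura-style instantiation, which is only computationally secure and is not the public-Abelian-group setting the paper rules out); the prime, message, and variable names below are purely illustrative.

```python
import secrets
from math import gcd

P = 2**127 - 1  # a public prime (illustrative); exponentiation mod P commutes,
                # which is exactly what the three passes rely on

def keygen():
    # Pick a locking exponent coprime to P-1 together with its unlocking inverse.
    while True:
        a = secrets.randbelow(P - 3) + 2
        if gcd(a, P - 1) == 1:
            return a, pow(a, -1, P - 1)

m = 123456789          # Alice's secret, encoded as an integer 1 < m < P
a, a_inv = keygen()    # Alice's secret lock/unlock pair
b, b_inv = keygen()    # Bob's secret lock/unlock pair

pass1 = pow(m, a, P)          # pass 1 (Alice -> Bob): m locked by Alice
pass2 = pow(pass1, b, P)      # pass 2 (Bob -> Alice): locked by both parties
pass3 = pow(pass2, a_inv, P)  # pass 3 (Alice -> Bob): Alice removes her lock
assert pow(pass3, b_inv, P) == m   # Bob removes his lock and recovers m
```

    Since all three passes travel in the clear, this instantiation rests on the hardness of discrete logarithms; the paper's result is that swapping the exponentiation for operations in a public Abelian group cannot yield the hoped-for unconditional security.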

    On a New, Efficient Framework for Falsifiable Non-interactive Zero-Knowledge Arguments

    A zero-knowledge proof is a protocol between a prover and a verifier. The prover aims to convince the verifier of the truth of some statement, such as possessing the credentials of a valid credit card, without revealing any private information, such as the credentials themselves. In many applications it is desirable to use NIZK (non-interactive zero-knowledge) proofs, where the prover outputs only a single message that can be verified by many verifiers. As a drawback, secure NIZKs for non-trivial languages can only exist in the presence of a trusted third party that computes a common reference string and makes it available to both the prover and the verifier. When no such party exists, one sometimes relies on non-interactive witness indistinguishability (NIWI), a weaker notion of privacy. The study of efficient and secure NIZKs is a crucial part of cryptography that has been thriving recently due to blockchain applications. In the first paper, we construct a new NIZK for languages consisting of the common zeros of a finite set of polynomials over a finite field. We demonstrate its usefulness through a large number of example applications. Notably, it is possible to go from a high-level description of a language to the definition of the NIZK almost automatically, lessening the need for dedicated cryptographic expertise. In the second paper, we construct a NIWI using a new compiler, and we explore the notion of knowledge soundness (a security notion stronger than soundness) for some NIZK constructions. In the third paper, we extend the work of the first paper by constructing a new set (non-)membership NIZK that allows us to prove that an element belongs, or does not belong, to a given set. Several of the new constructions are more efficient than previously known constructions.
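    As a toy illustration of the kind of language treated in the first paper (an assumed encoding for intuition only, not the paper's construction), a statement can be phrased as the common zeros of a few polynomials over a small prime field:

```python
# Toy statement over a small prime field (illustrative parameters):
# the witness (x, y) must make every polynomial below vanish modulo P.
P = 101

def poly_system(x, y):
    return [
        (x * (x - 1)) % P,  # x*(x-1) = 0  forces x to be a bit
        (y - 3 * x) % P,    # y - 3x  = 0  ties y to x
    ]

def in_language(witness):
    return all(value == 0 for value in poly_system(*witness))

assert in_language((1, 3))       # a valid witness
assert not in_language((2, 6))   # x = 2 is not a bit, so the statement fails
```

    A NIZK for such a language lets the prover convince verifiers that a satisfying (x, y) exists without revealing it.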

    Strong Complementarity and Non-locality in Categorical Quantum Mechanics

    Categorical quantum mechanics studies quantum theory in the framework of dagger-compact closed categories. Using this framework, we establish a tight relationship between two key quantum theoretical notions: non-locality and complementarity. In particular, we establish a direct connection between Mermin-type non-locality scenarios, which we generalise to an arbitrary number of parties, using systems of arbitrary dimension, and performing arbitrary measurements, and a new stronger notion of complementarity which we introduce here. Our derivation of the fact that strong complementarity is a necessary condition for a Mermin scenario provides a crisp operational interpretation for strong complementarity. We also provide a complete classification of strongly complementary observables for quantum theory, something which has not yet been achieved for ordinary complementarity. Since our main results are expressed in the (diagrammatic) language of dagger-compact categories, they can be applied outside of quantum theory, in any setting which supports the purely algebraic notion of strongly complementary observables. We have therefore introduced a method for discussing non-locality in a wide variety of models in addition to quantum theory. The diagrammatic calculus substantially simplifies (and sometimes even trivialises) many of the derivations, and provides new insights. In particular, the diagrammatic computation of correlations clearly shows how local measurements interact to yield a global overall effect. In other words, we depict non-locality.
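    As a concrete, non-diagrammatic check of the simplest Mermin scenario that the abstract generalises (a standard illustration, not the paper's categorical derivation), the following sketch reproduces the GHZ parity correlations that no local assignment of outcomes can satisfy:

```python
import numpy as np

# Pauli observables and the three-qubit GHZ state (|000> + |111>)/sqrt(2).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

def expectation(*obs):
    # <GHZ| O1 (x) O2 (x) O3 |GHZ>
    full = obs[0]
    for o in obs[1:]:
        full = np.kron(full, o)
    return float(np.real(ghz.conj() @ full @ ghz))

correlations = {
    "XXX": expectation(X, X, X),  # +1
    "XYY": expectation(X, Y, Y),  # -1
    "YXY": expectation(Y, X, Y),  # -1
    "YYX": expectation(Y, Y, X),  # -1
}
print(correlations)

# Any local, predetermined +/-1 outcomes would force the product of these four
# correlations to be +1; the quantum value is -1, which is the non-locality.
assert np.isclose(np.prod(list(correlations.values())), -1.0)
```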

    Authentication of Quantum Messages

    Authentication is a well-studied area of classical cryptography: a sender S and a receiver R sharing a classical private key want to exchange a classical message with the guarantee that the message has not been modified by any third party with control of the communication line. In this paper we define and investigate the authentication of messages composed of quantum states. Assuming S and R have access to an insecure quantum channel and share a private, classical random key, we provide a non-interactive scheme that enables S both to encrypt and to authenticate (with unconditional security) an m-qubit message by encoding it into m+s qubits, where the failure probability decreases exponentially in the security parameter s. The classical private key is 2m+O(s) bits. To achieve this, we give a highly efficient protocol for testing the purity of shared EPR pairs. We also show that any scheme to authenticate quantum messages must also encrypt them. (In contrast, one can authenticate a classical message while leaving it publicly readable.) This has two important consequences: on one hand, it allows us to give a lower bound of 2m key bits for authenticating m qubits, which makes our protocol asymptotically optimal. On the other hand, we use it to show that digitally signing quantum states is impossible, even with only computational security.
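    To make the parenthetical contrast concrete: classically, a short shared key can authenticate a message while leaving it readable, for example with a one-time polynomial-evaluation MAC over a prime field. The sketch below (illustrative parameters, not the paper's quantum scheme) shows that classical option, which the paper proves has no quantum analogue.

```python
import secrets

P = 2**61 - 1  # prime modulus; the message itself stays in the clear

def mac(key, message_blocks):
    # One-time polynomial MAC: tag = k + sum(m_i * r^(i+1)) mod P.
    r, k = key
    tag = k
    for i, m in enumerate(message_blocks, start=1):
        tag = (tag + m * pow(r, i, P)) % P
    return tag

key = (secrets.randbelow(P), secrets.randbelow(P))  # shared classical key (r, k)
msg = [314, 159, 265]                                # publicly readable blocks < P
tag = mac(key, msg)

assert mac(key, msg) == tag              # receiver accepts the genuine message
assert mac(key, [314, 159, 266]) != tag  # a tampered message is rejected (w.h.p.)
```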

    Anonymous attestation with user-controlled linkability

    This paper is motivated by the observation that existing security models for direct anonymous attestation (DAA) have problems to the extent that insecure protocols may be deemed secure when analysed under these models. This is particularly disturbing, as DAA is one of the few complex cryptographic protocols resulting from recent theoretical advances that is actually deployed in real life. Moreover, standardization bodies are currently looking into designing the next generation of such protocols. Our first contribution is to identify issues in existing models for DAA and to explain how these errors allow for proving the security of insecure protocols. These issues are exhibited in all deployed and proposed DAA protocols (although they can often be easily fixed). Our second contribution is a new security model for a class of "pre-DAA schemes", that is, DAA schemes where the computation on the user side takes place entirely on the trusted platform. Our model captures more accurately than any previous model the security properties demanded from DAA by the Trusted Computing Group (TCG), the body that maintains the DAA standard. Extending the model from pre-DAA to full DAA is only a matter of refining the trust models on the parties involved. Finally, we present a generic construction of a DAA protocol from new building blocks tailored for anonymous attestation. Some of them are new variations on established ideas and may be of independent interest. We give instantiations of these building blocks that yield a DAA scheme more efficient than the one currently deployed, and as efficient as the one about to be standardized by the TCG, which has no valid security proof.
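    A highly simplified sketch of the user-controlled linkability idea common to DAA-style schemes may help: signatures carry a pseudonym derived from a user-chosen basename, so signatures under the same basename can be linked while signatures under different basenames cannot. This is illustrative only; it omits the issued credential, the TPM/host split, the zero-knowledge proofs, and the pairing-friendly groups used by deployed schemes, and all names and parameters are assumptions.

```python
import hashlib
import secrets

P = 2**127 - 1  # toy prime modulus; real DAA schemes use pairing-friendly groups

def hash_to_group(basename: str) -> int:
    h = int.from_bytes(hashlib.sha256(basename.encode()).digest(), "big")
    return h % P or 1  # map the basename to a nonzero group element

def sign(sk: int, basename: str, message: str) -> dict:
    # Only the linkability-relevant "link token" is sketched; a real DAA signature
    # would also prove in zero knowledge that the same sk underlies a credential.
    nym = pow(hash_to_group(basename), sk, P)
    return {"basename": basename, "nym": nym, "message": message}

def linked(sig1: dict, sig2: dict) -> bool:
    return sig1["basename"] == sig2["basename"] and sig1["nym"] == sig2["nym"]

sk = secrets.randbelow(P - 2) + 1
s1 = sign(sk, "service-A", "attest #1")
s2 = sign(sk, "service-A", "attest #2")
s3 = sign(sk, "service-B", "attest #3")
assert linked(s1, s2)      # same basename: the verifier can link the two signatures
assert not linked(s1, s3)  # different basename: unlinkable (under DDH in a real group)
```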

    NIWI and New Notions of Extraction for Algebraic Languages

    We give an efficient construction of a computational non-interactive witness-indistinguishable (NIWI) proof in the plain model, and investigate notions of extraction for NIZKs for algebraic languages. Our starting point is the recent work of Couteau and Hartmann (CRYPTO 2020), who developed a new framework (the CH framework) for constructing non-interactive zero-knowledge proofs and arguments under falsifiable assumptions for a large class of languages called algebraic languages. In this paper, we construct an efficient NIWI proof in the plain model for algebraic languages based on the CH framework. In the plain model, our NIWI construction is more efficient for algebraic languages than the state-of-the-art Groth-Ostrovsky-Sahai (GOS) NIWI (JACM 2012). Next, we explore knowledge soundness of NIZK systems in the CH framework. We define a notion of strong f-extractability and show that the CH proof system satisfies this notion. We then put forth a new definition of knowledge soundness called semantic extraction. We explore the relationship of semantic extraction with existing knowledge soundness definitions and show that it is a general definition that recovers black-box and non-black-box definitions as special cases. Finally, we show that NIZKs for algebraic languages in the CH framework cannot satisfy semantic extraction. We extend this impossibility to a class of NIZK arguments over algebraic languages, namely quasi-adaptive NIZK arguments constructed from smooth projective hash functions.
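    For orientation, an algebraic language in the Couteau-Hartmann style is parametrized by a matrix and a vector of group elements, and membership means the induced linear system has a solution; the fragment below is an informal rendering with our own notation, not text from the paper.

```latex
% Informal sketch (notation assumed here, not copied from the paper): an algebraic
% language is parametrized by a matrix M(x) and a vector \theta(x) of group elements,
% and membership means the induced linear system has a solution w.
\[
  \mathcal{L}_{M,\theta} \;=\; \bigl\{\, x \;\bigm|\; \exists\, w :\; M(x)\cdot w = \theta(x) \,\bigr\}.
\]
% Example: Diffie-Hellman tuples form such a language, with the exponent b as witness.
\[
  \mathcal{L}_{\mathsf{DH}} \;=\; \bigl\{\, (g,\, g^{a},\, g^{b},\, c) \;\bigm|\; \exists\, b :\; c = (g^{a})^{b} \,\bigr\}.
\]
```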

    Design and Analysis of Opaque Signatures

    Digital signatures were introduced to guarantee the authenticity and integrity of the underlying messages. A digital signature scheme comprises the key generation, signature, and verification algorithms. The key generation algorithm creates the signing and verifying keys, also called the signer's private and public keys, respectively. The signature algorithm, which is run by the signer, produces a signature on the input message. Finally, the verification algorithm, run by anyone who knows the signer's public key, checks whether a purported signature on some message is valid. The last property, namely the universal verification of digital signatures, is undesirable in situations where the signed data is commercially or personally sensitive. Mechanisms were therefore invented which share most properties with digital signatures except for universal verification; we call such mechanisms "opaque signatures". In this thesis, we study signatures whose verification cannot be achieved without the cooperation of a specific entity, namely the signer in the case of undeniable signatures, or the confirmer in the case of confirmer signatures; we make three main contributions. We first study the relationship between two security properties important for public-key encryption, namely data privacy and key privacy. Our study is motivated by the fact that opaque signatures always involve an encryption layer that ensures their opacity. The properties required of this encryption vary according to whether we want to protect the identity (i.e. the key) of the signer or to hide the validity of the signature; it would therefore be convenient to reuse existing results on the encryption scheme in order to derive one notion from the other. Next, we delve into generic constructions of confirmer signatures from basic cryptographic primitives, e.g. digital signatures, encryption, or commitment schemes. Generic constructions give easy-to-understand and easy-to-prove schemes; however, this convenience often comes at the expense of efficiency. In this contribution, which constitutes the core of this thesis, we first analyze the existing constructions; our study concludes that the popular generic constructions of confirmer signatures require strong security assumptions on the building blocks, which negatively impacts the efficiency of the resulting signatures. We then show that a small change in these constructions weakens these assumptions drastically, yielding constructions whose instantiations compete with dedicated realizations of these signatures. Finally, we revisit two early undeniable signature schemes that were proposed with only conjectural security. We disprove the claimed security of the first scheme and provide a fix that achieves strong security properties. We then upgrade the second scheme so that it supports a desirable feature, and we provide a formal security treatment of the new scheme: we prove that it is secure under new, reasonable assumptions on the underlying constituents.
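    To fix ideas about the three algorithms and the universal verification that opaque signatures deliberately remove, here is a toy textbook-RSA sketch (completely insecure, with illustrative numbers; not a scheme from the thesis): anyone holding the public key can run verification, which is precisely the property undeniable and confirmer signatures withhold.

```python
import hashlib

# Toy textbook-RSA signature (tiny parameters, no padding; illustration only).
# Key generation: p = 61, q = 53, n = 3233, e = 17, d = 2753 (e*d = 1 mod lcm(60, 52)).
n, e, d = 3233, 17, 2753

def h(message: bytes) -> int:
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:          # run by the signer, uses the private key d
    return pow(h(message), d, n)

def verify(message: bytes, sig: int) -> bool:  # run by ANYONE who knows (n, e)
    return pow(sig, e, n) == h(message)

msg = b"transfer 100 EUR"
sig = sign(msg)
assert verify(msg, sig)                    # universal verification: no secret needed
assert not verify(b"transfer 999 EUR", sig)
# Opaque signatures (undeniable / confirmer signatures) remove exactly this step:
# verification succeeds only with the cooperation of the signer or a designated confirmer.
```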
    • 

    corecore