    vCNN: Verifiable Convolutional Neural Network based on zk-SNARKs

    As AI systems develop, services based on them are expanding into many applications. The widespread adoption of AI systems relies substantially on the ability to trust their output. It is therefore becoming important for a client to be able to check whether an AI inference service has computed its result correctly. Since the weights of a CNN model are an asset of the service provider, the client should be able to check the correctness of the result without access to the weights. Furthermore, when the result is checked by a third party, it should be possible to verify correctness even without the user's input data. Fortunately, zero-knowledge Succinct Non-interactive ARguments of Knowledge (zk-SNARKs) make it possible to verify the result without the input and weight values. However, proving in zk-SNARKs is far too slow to be applied to real AI applications. This paper proposes a new, efficient verifiable convolutional neural network (vCNN) framework that accelerates proving dramatically. To improve proving performance, we propose a new, efficient relation representation for convolution equations. While the proving complexity of convolution is O(l·n) in existing zk-SNARK approaches, it is reduced to O(l + n) in the proposed approach, where l and n denote the kernel size and the data size of the CNN. Experimental results show that the proposed vCNN improves proving performance 20-fold for a simple MNIST model and 18,000-fold for VGG16. The security of the proposed scheme is proven formally.
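
    The complexity reduction can be understood through the classical identity that the 1-D convolution of a length-l kernel with length-n data equals the coefficient vector of the product of their generating polynomials, so the whole convolution can be checked as a single polynomial identity rather than as roughly n separate inner products. A minimal Python sketch of that identity (an illustration of the underlying algebra only, not the paper's actual circuit encoding):

        import numpy as np

        def conv_as_poly_product(kernel, data):
            """Full 1-D convolution of a length-l kernel with length-n data,
            computed as the coefficients of the product of their polynomials."""
            # Multiplying the two coefficient vectors as polynomials yields a
            # degree (l + n - 2) polynomial whose coefficients are exactly the
            # full convolution of kernel and data.
            return np.polymul(kernel, data)

        kernel = np.array([1, 2, 3])        # l = 3
        data = np.array([4, 5, 6, 7, 8])    # n = 5
        assert np.array_equal(conv_as_poly_product(kernel, data),
                              np.convolve(kernel, data))

    Intuitively, checking one product of polynomials of degree l-1 and n-1 involves only O(l + n) coefficients, whereas expressing every output element as its own inner product costs O(l·n) multiplications.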

    Biscuit: New MPCitH Signature Scheme from Structured Multivariate Polynomials

    This paper describes Biscuit, a new multivariate-based signature scheme derived using the MPCitH approach. The security of Biscuit is related to the problem of solving a structured system of quadratic algebraic equations. These equations are highly compact and can be evaluated using very few multiplications. The core of Biscuit is a rather simple MPC protocol consisting of the parallel execution of a few secure multiplications using standard optimized multiplicative triples. This paper also includes several improvements with respect to the Biscuit submission to the recent NIST PQC standardization process for additional signature schemes. Notably, we introduce a new hypercube variant of Biscuit, refine the security analysis in light of recent third-party attacks, and present a new AVX2 implementation of Biscuit.
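
    The "standard optimized multiplicative triples" are Beaver triples: a preshared triple (a, b, c) with c = a·b lets parties multiply two secret-shared values while opening only masked differences. A minimal Python sketch of one such secure multiplication over additive shares modulo a small prime (purely illustrative; the field, sharing, and optimizations used by Biscuit differ):

        import random

        P = 251  # small prime field for illustration only; Biscuit fixes its own parameters

        def share(x, n):
            """Additively share x among n parties modulo P."""
            parts = [random.randrange(P) for _ in range(n - 1)]
            parts.append((x - sum(parts)) % P)
            return parts

        def open_shares(shares):
            """Reconstruct a shared value by summing all shares modulo P."""
            return sum(shares) % P

        def beaver_multiply(x_sh, y_sh, a_sh, b_sh, c_sh):
            """Multiply shared x and y using a preshared triple (a, b, c) with c = a*b."""
            # Every party opens its share of d = x - a and e = y - b; d and e reveal
            # nothing about x and y because a and b are uniformly random masks.
            d = open_shares([(x - a) % P for x, a in zip(x_sh, a_sh)])
            e = open_shares([(y - b) % P for y, b in zip(y_sh, b_sh)])
            # Local step: z_i = c_i + d*b_i + e*a_i, plus the public term d*e added once.
            z_sh = [(c + d * b + e * a) % P for a, b, c in zip(a_sh, b_sh, c_sh)]
            z_sh[0] = (z_sh[0] + d * e) % P
            return z_sh

        # Demo: three parties multiply the shared values 7 and 11.
        n = 3
        a, b = random.randrange(P), random.randrange(P)
        a_sh, b_sh, c_sh = share(a, n), share(b, n), share(a * b % P, n)
        z_sh = beaver_multiply(share(7, n), share(11, n), a_sh, b_sh, c_sh)
        assert open_shares(z_sh) == (7 * 11) % P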

    ClaimChain: Improving the Security and Privacy of In-band Key Distribution for Messaging

    The social demand for email end-to-end encryption is barely supported by mainstream service providers. Autocrypt is a new community-driven open specification for email encryption that attempts to respond to this demand. In Autocrypt, encryption keys are attached directly to messages, so encryption can be implemented by email clients without any collaboration from the providers. The decentralized nature of this in-band key distribution, however, makes it prone to man-in-the-middle attacks and can leak the social graph of users. To address this problem, we introduce ClaimChain, a cryptographic construction for privacy-preserving authentication of public keys. Users store claims about their identities and keys, as well as their beliefs about others, in ClaimChains. These chains form authenticated decentralized repositories that enable users to prove the authenticity of both their own keys and the keys of their contacts. ClaimChains are encrypted and therefore protect the stored information, such as keys and contact identities, from prying eyes. At the same time, ClaimChain implements mechanisms that provide strong non-equivocation properties, discouraging malicious actors from distributing conflicting or inauthentic claims. We implemented ClaimChain and show that it offers reasonable performance, low overhead, and authenticity guarantees. (Appears in the 2018 Workshop on Privacy in the Electronic Society, WPES'18.)
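
    At its core, a ClaimChain is an append-only, hash-linked log of claims, so a later block cannot silently rewrite what an earlier block asserted. A minimal Python sketch of that hash-chain skeleton (the real construction additionally encrypts claims and adds the non-equivocation machinery described above):

        import hashlib
        import json

        def h(data: bytes) -> str:
            return hashlib.sha256(data).hexdigest()

        class ClaimBlock:
            """One block of a claim chain: commits to the previous block and to
            a dict of claims (e.g. one's own key and beliefs about contacts)."""
            def __init__(self, prev_hash: str, claims: dict):
                self.prev_hash = prev_hash
                self.claims = claims
                payload = json.dumps({"prev": prev_hash, "claims": claims},
                                     sort_keys=True).encode()
                self.block_hash = h(payload)

        # Build a two-block chain; the link makes tampering with history detectable.
        genesis = ClaimBlock(prev_hash="", claims={"key": "PK_v1"})
        update = ClaimBlock(prev_hash=genesis.block_hash,
                            claims={"key": "PK_v2", "contact:bob": "PK_bob"})
        assert update.prev_hash == genesis.block_hash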

    A Touch of Evil: High-Assurance Cryptographic Hardware from Untrusted Components

    The semiconductor industry is fully globalized, and integrated circuits (ICs) are commonly defined, designed, and fabricated in different premises across the world. This reduces production costs but also exposes ICs to supply chain attacks, where insiders introduce malicious circuitry into the final products. Additionally, despite extensive post-fabrication testing, it is not uncommon for ICs with subtle fabrication errors to make it into production systems. While many systems may be able to tolerate a few Byzantine components, this is not the case for cryptographic hardware, which stores and computes on confidential data. For this reason, many error and backdoor detection techniques have been proposed over the years. So far, all attempts have either been quickly circumvented or come with unrealistically high manufacturing costs and complexity. This paper proposes Myst, a practical high-assurance architecture that uses commercial off-the-shelf (COTS) hardware and provides strong security guarantees even in the presence of multiple malicious or faulty components. The key idea is to combine protective redundancy with modern threshold cryptographic techniques to build a system tolerant of hardware Trojans and errors. To evaluate our design, we build a Hardware Security Module that provides the highest level of assurance possible with COTS components. Specifically, we employ more than a hundred COTS secure crypto-coprocessors, verified to FIPS 140-2 Level 4 tamper-resistance standards, and use them to realize high-confidentiality random number generation, key derivation, public-key decryption, and signing. Our experiments show a reasonable computational overhead (less than 1% for both decryption and signing) and an exponential increase in backdoor tolerance as more ICs are added.
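
    One simple instance of this kind of protective redundancy is randomness generation: mixing independent contributions from every coprocessor yields an output that remains unpredictable as long as at least one device is honest. A hedged Python sketch of an XOR combiner in that spirit (illustrative only; Myst's actual protocols rely on threshold cryptography for key generation, decryption, and signing as well):

        import os

        def device_random(n_bytes: int) -> bytes:
            """Stand-in for a single secure coprocessor's RNG output; in the
            deployed system this would come from one of the certified ICs."""
            return os.urandom(n_bytes)

        def combined_random(n_devices: int, n_bytes: int) -> bytes:
            """XOR the contributions of all devices: the result is uniformly
            random as long as at least one contribution is (one honest IC suffices)."""
            combined = bytes(n_bytes)  # all-zero start value
            for _ in range(n_devices):
                contribution = device_random(n_bytes)
                combined = bytes(a ^ b for a, b in zip(combined, contribution))
            return combined

        seed = combined_random(n_devices=100, n_bytes=32)
        assert len(seed) == 32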

    Weakly Extractable One-Way Functions

    A family of one-way functions is extractable if, given a random function in the family, an efficient adversary can only output an element in the image of the function if it knows a corresponding preimage. This knowledge-extraction guarantee is particularly powerful since it does not require interaction. However, extractable one-way functions (EFs) are subject to a strong barrier: assuming indistinguishability obfuscation, no EF can have a knowledge extractor that works against all polynomial-size non-uniform adversaries. This holds even for non-black-box extractors that use the adversary's code. Accordingly, the literature considers either EFs based on non-falsifiable knowledge assumptions, where the extractor is not explicitly given but only assumed to exist, or EFs against a restricted class of adversaries with bounded non-uniform advice. This falls short of cryptography's gold standard of security, which requires an explicit reduction against non-uniform adversaries of arbitrary polynomial size. Motivated by this gap, we put forward a new notion of weakly extractable one-way functions (WEFs) that circumvents the known barrier. We then prove that WEFs are inextricably connected to the long-standing question of three-message zero-knowledge protocols. We show that different flavors of WEFs are sufficient and necessary for three-message zero knowledge to exist. The exact flavor depends on whether the protocol is computational or statistical zero knowledge and whether it is publicly or privately verifiable. Combined with recent progress on constructing three-message zero knowledge, we derive a new connection between keyless multi-collision resistance, the notion of incompressibility, and the feasibility of non-interactive knowledge extraction. Another interesting corollary of our result is that, in order to construct three-message zero-knowledge arguments, it suffices to construct such arguments where the honest prover strategy is unbounded.
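
    In symbols, the knowledge-extraction guarantee sketched above is commonly stated as follows (one standard formulation; the paper's weak variant relaxes this guarantee precisely in order to circumvent the barrier described above):

        % For every polynomial-size adversary A there exists an extractor E_A
        % such that, for every auxiliary input z,
        \Pr_{f \leftarrow \mathcal{F},\; y \leftarrow A(f, z)}
            \bigl[\, y \in \mathrm{Im}(f) \;\wedge\; f\bigl(E_A(f, z)\bigr) \neq y \,\bigr]
            \leq \mathrm{negl}(\lambda).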

    Individual Cryptography

    We initiate a formal study of individual cryptography. Informally speaking, an algorithm Alg is individual if, in every implementation of Alg, there always exists an individual user with full knowledge of the cryptographic data S used by Alg. In particular, it should be infeasible to design implementations of this algorithm that would hide S by distributing it between a group of parties using an MPC protocol or outsourcing it to a trusted execution environment. We define and construct two primitives in this model. The first one, called proofs of individual knowledge, is a tool for proving that a given message is fully known to a single (individual) machine on the Internet, i.e., that it cannot be shared between a group of parties. The second one, dubbed individual secret sharing, is a scheme for sharing a secret S between a group of parties so that the parties have no knowledge of S as long as they do not reconstruct it. The reconstruction ensures that if the shareholders attempt to collude, one of them will learn the secret entirely. Individual secret sharing has applications for preventing collusion in secret sharing. A central technique for constructing individual cryptographic primitives is the concept of MPC hardness. MPC hardness precludes an adversary from completing a cryptographic task in a distributed fashion within a specific time frame.
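
    One plausible way to make a task MPC-hard is to demand many strictly sequential computations that each touch the secret in full, so that meeting a tight deadline forces some single machine to hold S entirely. A purely illustrative Python sketch of such a time-bound task (an assumed example for intuition; the actual constructions and parameters in the paper differ):

        import hashlib

        def sequential_task(secret: bytes, steps: int) -> bytes:
            """A deliberately sequential computation over the full secret.
            Each step rehashes the secret together with the previous digest,
            so no step can start before the previous one finishes and no step
            can be computed without knowing all of `secret`."""
            digest = hashlib.sha256(secret).digest()
            for _ in range(steps):
                digest = hashlib.sha256(secret + digest).digest()
            return digest

        # A verifier can demand the final digest within a short deadline;
        # meeting it is only feasible for a machine holding `secret` in full.
        proof = sequential_task(b"the shared secret S", steps=1_000_000)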

    Success Probability of Multiple/Multidimensional Linear Cryptanalysis Under General Key Randomisation Hypotheses

    This work considers the statistical analysis of attacks on block ciphers that use several linear approximations. A general and unified approach is adopted. To this end, general key randomisation hypotheses for multidimensional and multiple linear cryptanalysis are introduced. Expressions for the success probability in terms of the data complexity and the advantage are obtained under these general key randomisation hypotheses, for both multidimensional and multiple linear cryptanalysis, and in the settings where the plaintexts are sampled with or without replacement. Particularising to the standard/adjusted key randomisation hypotheses gives rise to success probabilities in 16 different cases, of which expressions have previously been reported for only five. Even in these five cases, the expressions for success probabilities that we obtain are more general than those previously known. A crucial step in the analysis is the derivation of the distributions of the underlying test statistics. While we carry out the analysis formally to the extent possible, certain inherently heuristic assumptions need to be made. In contrast to previous works, which have made such assumptions implicitly, we carefully highlight them and discuss why they are unavoidable. Finally, we provide a complete characterisation of the dependence of the success probability on the data complexity.
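
    All such expressions share one shape: the wrong-key distribution of the test statistic fixes the threshold needed to obtain an a-bit advantage, and the right-key distribution evaluated at that threshold gives the success probability. A hedged Python sketch of that relation, with normal distributions as stand-ins (the paper derives the actual distributions, which depend on the data complexity and on the key randomisation hypothesis in each of the 16 cases):

        from statistics import NormalDist

        def success_probability(advantage_bits: float,
                                right: NormalDist, wrong: NormalDist) -> float:
            """P_S = 1 - F_right(q), where q is the (1 - 2^-a)-quantile of the
            wrong-key statistic: the right key must rank in the top 2^-a fraction."""
            q = wrong.inv_cdf(1.0 - 2.0 ** (-advantage_bits))
            return 1.0 - right.cdf(q)

        # Illustrative numbers only: the right-key statistic is shifted by the
        # signal accumulated over N samples, the wrong-key statistic is centred at 0.
        ps = success_probability(advantage_bits=8,
                                 right=NormalDist(mu=4.0, sigma=1.0),
                                 wrong=NormalDist(mu=0.0, sigma=1.0))
        print(f"success probability ~ {ps:.3f}")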