Cryptology ePrint Archive

    Perfectly-secure Network-agnostic MPC with Optimal Resiliency

    We study network-agnostic secure multiparty computation (MPC) with perfect security. Traditionally, MPC is studied assuming the underlying network is either synchronous or asynchronous. In the network-agnostic setting, the parties are unaware of whether the underlying network is synchronous or asynchronous. The feasibility of perfectly-secure MPC in synchronous and asynchronous networks was settled long ago. The landmark work of [Ben-Or, Goldwasser, and Wigderson, STOC'88] shows that n > 3t_s is necessary and sufficient for any n-party MPC protocol over a synchronous network tolerating t_s active corruptions. In another foundational work, [Ben-Or, Canetti, and Goldreich, STOC'93] show that the bound for asynchronous networks is n > 4t_a, where t_a denotes the number of active corruptions. However, the same question has remained unresolved for the network-agnostic setting to date. In this work, we resolve this long-standing question. We show that perfectly-secure network-agnostic n-party MPC tolerating t_s active corruptions when the network is synchronous and t_a active corruptions when the network is asynchronous is possible if and only if n > 2max(t_s, t_a) + max(2t_a, t_s). When t_a ≥ t_s, our bound reduces to n > 4t_a, whose tightness follows from the known feasibility results for asynchronous MPC. When t_s > t_a, our result gives rise to a new bound of n > 2t_s + max(2t_a, t_s). Notably, the previous network-agnostic MPC in this setting [Appan, Chandramouli, and Choudhury, PODC'22] only shows sufficiency for a looser bound of n > 3t_s + t_a. When t_s > 2t_a, our result shows tightness of n > 3t_s, whereas the existing work shows sufficiency only for n > 3t_s + t_a.
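    The resiliency condition above is easy to sanity-check numerically; a minimal sketch (function name ours) computes the smallest admissible n and confirms the two special cases the abstract states:

```python
def network_agnostic_bound(t_s: int, t_a: int) -> int:
    """Smallest n satisfying n > 2*max(t_s, t_a) + max(2*t_a, t_s)."""
    return 2 * max(t_s, t_a) + max(2 * t_a, t_s) + 1

# When t_a >= t_s, the condition collapses to n > 4*t_a:
assert network_agnostic_bound(2, 3) == 4 * 3 + 1
# When t_s > 2*t_a, it collapses to n > 3*t_s:
assert network_agnostic_bound(5, 2) == 3 * 5 + 1
```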

    Trapdoor Hash Functions and PIR from Low-Noise LPN

    Trapdoor hash functions (TDHs) are compressing hash functions with an additional trapdoor functionality: given an encoding key for a function f, a hash on x together with a (small) input encoding allows one to recover f(x). TDHs are a versatile tool and a useful building block for more complex cryptographic protocols. In this work, we propose the first TDH construction assuming the (quasi-polynomial) hardness of the LPN problem with noise rate ε = O(log^{1+β} n / n) for β > 0, i.e., in the so-called low-noise regime. The construction achieves a 2^{Θ(log^{1-β} λ)} compression factor. As an application, we obtain a private information retrieval (PIR) scheme with communication complexity L / 2^{Θ(log^{1-β} L)} for a database of size L. This is the first PIR scheme with non-trivial communication complexity (asymptotically smaller than L) from any code-based assumption.
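    To make the TDH syntax concrete, here is a deliberately insecure toy (all names and the masking trick are ours, not the paper's construction) for the single-bit projection family f_i(x) = bit i of x: a hash of x plus a one-bit input encoding let the trapdoor holder recover f_i(x).

```python
import hashlib

def H(x: bytes) -> bytes:
    """Compressing hash (stands in for the TDH's public hash function)."""
    return hashlib.sha256(x).digest()

def keygen(i: int, secret: bytes):
    """Encoding key for f_i, and the matching trapdoor."""
    return (i, secret), secret

def encode(ek, x: bytes) -> int:
    """Small (1-bit) input encoding: f_i(x) masked by a pad tied to H(x)."""
    i, secret = ek
    bit = (x[i // 8] >> (i % 8)) & 1            # f_i(x)
    pad = hashlib.sha256(secret + H(x)).digest()[0] & 1
    return bit ^ pad

def decode(td: bytes, h: bytes, e: int) -> int:
    """Recover f_i(x) from the hash h = H(x) and the encoding e."""
    pad = hashlib.sha256(td + h).digest()[0] & 1
    return e ^ pad

x = b"example input"
ek, td = keygen(3, b"trapdoor-secret")
assert decode(td, H(x), encode(ek, x)) == (x[0] >> 3) & 1
```

This only demonstrates the interface (hash, encode, decode with a trapdoor); the actual LPN-based construction is what provides compression and security.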

    Blockchain-based Secure D2D localisation with adaptive precision

    In this paper we propose a secure, best-effort methodology for providing localisation information to devices in a heterogeneous network where devices do not have access to GPS-like technology or heavy cryptographic infrastructure. Each device computes its localisation with the highest possible accuracy based solely on the data provided by its neighboring anchors. The security of the localisation is guaranteed by registering the localisation information on a distributed ledger via smart contracts. We prove the security of our solution under the adaptive chosen-message attack model. We furthermore evaluate the effectiveness of our solution by measuring the average location-registration time, failed requests, and total execution time, using Hyperledger Besu with QBFT consensus as the DLT case study.

    CT-LLVM: Automatic Large-Scale Constant-Time Analysis

    Constant-time (CT) is a popular programming discipline for protecting cryptographic libraries against micro-architectural timing attacks. One appeal of the CT discipline lies in its conceptual simplicity: a program is CT iff it has no secret-dependent data flow, control flow, or variable-timing operation. Thanks to this simplicity, the CT discipline is supported by dozens of analysis tools. However, a recent user study demonstrates that these tools are seldom used due to poor usability and maintainability (Jancar et al., IEEE SP 2022). In this paper, we introduce CT-LLVM, a CT analysis tool designed for usability, maintainability, and automatic large-scale analysis. Concretely, CT-LLVM is packaged as an LLVM plugin and is built as a thin layer on top of two standard LLVM analyses: def-use and alias analysis. Besides confirming known CT violations, we demonstrate the usability and scalability of CT-LLVM by automatically analyzing nine cryptographic libraries. On average, CT-LLVM can automatically and soundly analyze 36% of the functions in these libraries, proving that 61% of them are CT. In addition, the large-scale automatic analysis also reveals new vulnerabilities in these libraries. Finally, we demonstrate that CT-LLVM helps systematically mitigate compiler-introduced CT violations, which have been a long-standing issue in CT analysis.
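    The "no secret-dependent control flow" rule that CT tools check is language-agnostic; a minimal sketch (function names ours; real CT analysis like CT-LLVM targets compiled code) contrasts a leaky branch with a branchless constant-time select:

```python
def select_leaky(secret_bit: int, a: int, b: int) -> int:
    # Violates the CT discipline: which branch runs depends on the secret.
    if secret_bit:
        return a
    return b

def select_ct(secret_bit: int, a: int, b: int) -> int:
    # Branchless select: mask is all-ones iff secret_bit == 1 (two's
    # complement), so the same operations execute regardless of the secret.
    mask = -secret_bit
    return (a & mask) | (b & ~mask)

assert select_ct(1, 7, 9) == select_leaky(1, 7, 9) == 7
assert select_ct(0, 7, 9) == select_leaky(0, 7, 9) == 9
```

Both functions compute the same value; only the second has secret-independent control flow, which is the property a sound CT analysis verifies.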

    Symmetric Perceptrons, Number Partitioning and Lattices

    The symmetric binary perceptron (SBP_κ) problem with parameter κ : R_{≥1} → [0,1] is an average-case search problem defined as follows: given a random Gaussian matrix A ~ N(0,1)^{n×m} as input, where m ≥ n, output a vector x ∈ {-1,1}^m such that ||Ax||_∞ ≤ κ(m/n) · √m. The number partitioning problem (NPP_κ) corresponds to the special case n = 1. There is considerable evidence that both problems exhibit large computational-statistical gaps. In this work, we show (nearly) tight average-case hardness for these problems, assuming the worst-case hardness of standard approximate shortest vector problems on lattices. For SBP_κ, statistically, solutions exist with κ(x) = 2^{-Θ(x)} (Aubin, Perkins and Zdeborová, Journal of Physics 2019). For large n, the best that efficient algorithms have been able to achieve is far from the statistical bound, namely κ(x) = Θ(1/√x) (Bansal and Spencer, Random Structures and Algorithms 2020). The problem has been extensively studied in the TCS and statistics communities, and Gamarnik, Kızıldağ, Perkins and Xu (FOCS 2022) conjecture that Bansal-Spencer is tight: namely, κ(x) = Θ̃(1/√x) is the optimal value achieved by computationally efficient algorithms. We prove their conjecture assuming the worst-case hardness of approximating the shortest vector problem on lattices. For NPP_κ, statistically, solutions exist with κ(m) = Θ(2^{-m}) (Karmarkar, Karp, Lueker and Odlyzko, Journal of Applied Probability 1986). Karmarkar and Karp's classical differencing algorithm achieves κ(m) = 2^{-O(log² m)}. We prove that Karmarkar-Karp is nearly tight: namely, no polynomial-time algorithm can achieve κ(m) = 2^{-Ω(log³ m)}, once again assuming the worst-case hardness of approximating the shortest vector problem on lattices to within a subexponential factor. Our hardness results are versatile, and hold with respect to different distributions of the matrix A (e.g., i.i.d. uniform entries from [0,1]) and weaker requirements on the solution vector x.
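    The SBP search constraint is straightforward to verify for a candidate solution; a minimal checker (our own sketch, with a tiny hand-picked instance) implements the definition ||Ax||_∞ ≤ κ(m/n)·√m:

```python
import math

def is_sbp_solution(A, x, kappa) -> bool:
    """Check the SBP constraint ||Ax||_inf <= kappa(m/n) * sqrt(m)."""
    n, m = len(A), len(A[0])
    bound = kappa(m / n) * math.sqrt(m)
    return all(abs(sum(A[i][j] * x[j] for j in range(m))) <= bound
               for i in range(n))

# n = 1 recovers number partitioning: the signs +1/-1 split the entries
# of A[0] into two sets whose sums are close.
A = [[0.3, 0.7, 0.4]]
assert is_sbp_solution(A, [1, 1, -1], lambda a: 0.5)      # |0.3+0.7-0.4| = 0.6 <= 0.5*sqrt(3)
assert not is_sbp_solution(A, [1, 1, 1], lambda a: 0.5)   # |1.4| > 0.5*sqrt(3)
```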

    Arbitrary-Threshold Fully Homomorphic Encryption with Lower Complexity

    Threshold fully homomorphic encryption (ThFHE) enables multiple parties to compute functions over their sensitive data without leaking data privacy. Most existing ThFHE schemes are restricted to full threshold, requiring the participation of all parties to output computing results. Compared with these full-threshold schemes, arbitrary-threshold (ATh) FHE schemes are robust to non-participants and can be a promising solution to many real-world applications. However, existing AThFHE schemes are either too inefficient to apply with a large number of parties N and a large data size K, or unable to tolerate all types of non-participants. In this paper, we propose an AThFHE scheme that handles all types of non-participants with lower complexity than existing schemes. At the core of our scheme is a reduction from AThFHE construction to the design of a new primitive called approximate secret sharing (ApproxSS). In particular, we formulate ApproxSS and prove the correctness and security of AThFHE on top of the properties of arbitrary-threshold (ATh) ApproxSS. This reduction reveals that existing AThFHE schemes implicitly design ATh-ApproxSS following a similar idea, called "noisy share". Nonetheless, their ATh-ApproxSS designs have high complexity and become the performance bottleneck. By developing ATASSES, an ATh-ApproxSS scheme based on a novel "encrypted share" idea, we reduce the computation (resp. communication) complexity from O(N²K) to O(N²+K) (resp. from O(NK) to O(N+K)). We not only theoretically prove the (approximate) correctness and security of ATASSES, but also empirically evaluate its efficiency against existing baselines. In particular, when applied to a system with one thousand parties, ATASSES achieves a speedup of 3.83× to 15.4× over the baselines.
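    A toy unit-cost model (ours, not the paper's benchmark) illustrates why the stated reduction from O(N²K) to O(N²+K) matters once both the number of parties N and the data size K are large:

```python
# Hypothetical unit-cost model of the two asymptotic computation bounds.
def cost_noisy_share(N: int, K: int) -> int:
    return N**2 * K          # O(N^2 * K): N, K costs multiply

def cost_encrypted_share(N: int, K: int) -> int:
    return N**2 + K          # O(N^2 + K): N, K costs add

# At N = 1000 parties and K = 10^6, the model gap is a factor of 500,000.
N, K = 1000, 10**6
assert cost_noisy_share(N, K) // cost_encrypted_share(N, K) == 500_000
```

Constant factors are ignored here, which is why the measured speedups (3.83× to 15.4×) are far smaller than the asymptotic gap.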

    AI for Code-based Cryptography

    We introduce the use of machine learning in the cryptanalysis of code-based cryptography. Our focus is on distinguishing problems related to the security of NIST round-4 McEliece-like cryptosystems, particularly for the Goppa codes used in Classic McEliece and the Quasi-Cyclic Moderate-Density Parity-Check (QC-MDPC) codes used in BIKE. We present DeepDistinguisher, a new transformer-based algorithm for distinguishing structured codes from random linear codes. The results show that the new distinguisher achieves a high level of accuracy in distinguishing Goppa codes, suggesting that their structure may be more recognizable by AI models. Our approach outperforms traditional attacks in distinguishing Goppa codes in certain settings and, using a puncturing technique, generalizes to larger code lengths without further training. We also present the first distinguishing results dedicated to MDPC and QC-MDPC codes.

    A Comprehensive Formal Security Analysis of OPC UA

    OPC UA is a standardized Industrial Control System (ICS) protocol, deployed in critical infrastructures, that aims to ensure security. The forthcoming version 1.05 includes major changes in the underlying cryptographic design, including a Diffie-Hellman-based key exchange, as opposed to the previous RSA-based version. Version 1.05 is supposed to offer stronger security, including Perfect Forward Secrecy (PFS). We perform a formal security analysis of the security protocols specified in OPC UA v1.05 and v1.04, for the RSA-based mode and the new DH-based mode, using the state-of-the-art symbolic protocol verifier ProVerif. Compared to previous studies, our model is much more comprehensive: it includes the new protocol version, the combination of the different sub-protocols for establishing secure channels, sessions, and their management, and covers a large range of possible configurations. This results in one of the largest models ever studied in ProVerif, raising many challenges for its verification, mainly due to the complexity of the state machine. We discuss how we mitigated this complexity to obtain meaningful analysis results. Our analysis uncovered several new vulnerabilities, which have been reported to and acknowledged by the OPC Foundation. We designed and proposed provably secure fixes, most of which are included in the upcoming version of the standard.

    Adaptor Signatures: New Security Definition and A Generic Construction for NP Relations

    An adaptor signature (AS) scheme is an extension of digital signatures that allows the signer to generate a pre-signature for an instance of a hard relation. This pre-signature can later be adapted to a full signature with a corresponding witness. Meanwhile, the signer can extract a witness from both the pre-signature and the signature. AS have recently garnered more attention due to their scalability and interoperability. Dai et al. [INDOCRYPT 2022] proved that AS can be constructed for any NP relation using a generic construction. However, their construction has a shortcoming: the associated witness is exposed by the adapted signature. This flaw limits the applications of AS, even in its motivating setting, i.e., blockchain, where the adapted signature is typically uploaded to the blockchain and is public to everyone. To address this issue, in this work we augment the security definition of AS with a natural property which we call witness hiding. We then prove the existence of AS for any NP relation, assuming the existence of one-way functions. Concretely, we propose a generic construction of witness-hiding AS from signatures and a weak variant of trapdoor commitments, which we term trapdoor commitments with a specific adaptable message. We instantiate the latter based on the Hamiltonian cycle problem. Since the Hamiltonian cycle problem is NP-complete, we obtain witness-hiding adaptor signatures for any NP relation.
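    The pre-sign / adapt / extract flow can be sketched with a Schnorr-style toy over a tiny (insecure) prime-order group. The parameters and structure below are our own illustration of the generic AS interface, not the paper's construction, which works for arbitrary NP relations:

```python
import hashlib

# Tiny toy group: 467 is prime, the squares mod 467 form a subgroup of
# prime order q = 233, and g = 4 = 2^2 generates it. Insecure by design.
p, q, g = 467, 233, 4

def challenge(*parts) -> int:
    h = hashlib.sha256("|".join(map(str, parts)).encode()).digest()
    return int.from_bytes(h, "big") % q

sk, pk = 57, pow(g, 57, p)       # signing key pair
y, Y = 99, pow(g, 99, p)         # witness y for the hard-relation instance Y = g^y

# Pre-sign: the nonce commitment is shifted by Y, so the pre-signature
# verifies only "up to" the unknown witness.
k = 123
R = (pow(g, k, p) * Y) % p
c = challenge(R, pk, "msg")
s_pre = (k + c * sk) % q         # pre-signature

s_full = (s_pre + y) % q         # adapt: anyone holding y completes it
assert (s_full - s_pre) % q == y                      # extract: witness leaks from the pair
assert pow(g, s_full, p) == (R * pow(pk, c, p)) % p   # full signature verifies Schnorr-style
```

Note the last two assertions show exactly the issue the paper addresses: in this classic flow the witness is recoverable from the pre-signature/signature pair, whereas witness hiding requires that the published adapted signature alone reveal nothing about y.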

    Juggernaut: Efficient Crypto-Agnostic Byzantine Agreement

    It is well known that a trusted setup allows one to solve the Byzantine agreement problem in the presence of t < n/2 corruptions, bypassing the setup-free t < n/3 barrier. Alas, the overwhelming majority of protocols in the literature have the caveat that their security crucially hinges on the security of the cryptography and the setup, to the point where, if the cryptography is broken, even a single corrupted party can violate the security of the protocol. Thus these protocols provide higher corruption resilience (n/2 instead of n/3) at the price of increased assumptions. Is this trade-off necessary? We further the study of crypto-agnostic Byzantine agreement among n parties, which answers this question in the negative. Specifically, let t_s and t_i denote two parameters such that (1) 2t_i + t_s < n, and (2) t_i ≤ t_s < n/2. Crypto-agnostic Byzantine agreement ensures agreement among honest parties if (1) the adversary is computationally bounded and corrupts up to t_s parties, or (2) the adversary is computationally unbounded, corrupts up to t_i parties, and is moreover given all secrets of all parties established during the setup. We propose a compiler that transforms any pair of resilience-optimal Byzantine agreement protocols in the authenticated and information-theoretic settings into one that is crypto-agnostic. Our compiler has several attractive qualities, including using only O(λn²) bits over the two underlying Byzantine agreement protocols and preserving round and communication complexity in the authenticated setting. In particular, our results improve the state of the art in bit complexity by at least two factors of n and provide either early stopping (deterministic) or expected constant round complexity (randomized). We therefore provide fallback security for authenticated Byzantine agreement for free for t_i ≤ n/4.
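    The two parameter conditions are simple to check; a minimal sketch (function name ours) with one feasible and one infeasible choice:

```python
def valid_params(n: int, t_s: int, t_i: int) -> bool:
    """Feasible (t_s, t_i) per the two stated conditions:
    (1) 2*t_i + t_s < n, and (2) t_i <= t_s < n/2."""
    return 2 * t_i + t_s < n and t_i <= t_s < n / 2

# With n = 9: authenticated resilience t_s = 4 (< n/2) leaves room for
# unconditional fallback t_i = 2, since 2*2 + 4 = 8 < 9.
assert valid_params(9, 4, 2)
assert not valid_params(9, 4, 3)   # 2*3 + 4 = 10, not < 9
```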
