Cryptology ePrint Archive

    Tempo: ML-KEM to PAKE Compiler Resilient to Timing Attacks

    Recent KEM-to-PAKE compilers follow the Encrypted Key Exchange (EKE) paradigm (or a variant thereof), where the KEM public key is password-encrypted. While constant-time implementations of KEMs typically avoid secret-dependent branches and memory accesses, this requirement does not usually extend to operations that expand the public key, because public keys are generally assumed to be public. A notable example is ML-KEM, which expands a short seed ρ into a large matrix A of polynomial coefficients using rejection sampling, a process that is variable-time but usually does not depend on any secret. However, in PAKE protocols that password-encrypt the compressed public key, this opens the door to timing honest parties and mounting an offline dictionary attack against the measurements. This is particularly concerning given the well-known real-world impact of such attacks on PAKE protocols. In this paper we show two approaches that yield ML-KEM-based PAKEs resistant to timing attacks. First, we explore constant-time alternatives to ML-KEM rejection sampling: one refactors the original SampleNTT algorithm into constant-time style code while preserving its functionality, and two others modify the matrix expansion procedure to abandon rejection sampling and rely instead on large-integer modular arithmetic. All the proposed constant-time algorithms are slower than current rejection-sampling implementations, but they remain reasonably fast in absolute terms. Our conclusion is that adopting constant-time methods implies both performance penalties and difficulties in using off-the-shelf ML-KEM implementations. Alternatively, we present the first ML-KEM-to-PAKE compiler that mitigates this issue by design: our proposal transmits the seed ρ in the clear, decoupling password-dependent runtime variations from the matrix expansion step. This means that vanilla implementations of ML-KEM can be used as a black box. Our new protocol, Tempo, builds on the ideas of CHIC, which considered splitting the KEM public key; it adopts the two-round Feistel approach for password encryption of the non-expandable part of the public key, and leverages the proof techniques of NoIC to show that, despite the malleability permitted by the two-round Feistel, it suffices for password extraction and protocol simulation in the UC framework.
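The variable-time behavior at issue can be seen in a minimal Python sketch of FIPS 203-style SampleNTT rejection sampling (an illustration only, not the paper's constant-time refactoring; the real algorithm feeds SHAKE-128 with ρ plus matrix indices, here abstracted to a single seed). The number of 12-bit candidates rejected, and hence the number of XOF bytes consumed, varies with the seed:

```python
import hashlib

Q = 3329  # the ML-KEM modulus

def sample_ntt(seed: bytes, n: int = 256):
    """Rejection-sample n coefficients in [0, Q) from a SHAKE-128 stream,
    in the style of FIPS 203's SampleNTT. Returns the coefficients and the
    number of XOF bytes consumed -- which varies with the seed, making the
    procedure variable-time even though no secret is involved."""
    stream = hashlib.shake_128(seed).digest(4096)  # ample for n = 256
    coeffs, i = [], 0
    while len(coeffs) < n:
        b0, b1, b2 = stream[i], stream[i + 1], stream[i + 2]
        i += 3
        d1 = b0 + 256 * (b1 % 16)   # first 12-bit candidate
        d2 = (b1 // 16) + 16 * b2   # second 12-bit candidate
        for d in (d1, d2):
            if d < Q and len(coeffs) < n:
                coeffs.append(d)    # accept; candidates >= Q are rejected
    return coeffs, i

coeffs, used = sample_ntt(b"\x00" * 34)
assert len(coeffs) == 256 and all(0 <= c < Q for c in coeffs)
```

In an EKE-style compiler that password-encrypts the expanded key, the byte count `used` (and thus the runtime) becomes correlated with the password guess, which is exactly the leak the paper targets.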

    A Framework for Witness Encryption from Linearly Verifiable SNARKs and Applications

    Witness Encryption (WE) is a powerful cryptographic primitive, enabling applications that would otherwise appear infeasible. While general-purpose WE requires strong cryptographic assumptions and is highly inefficient, recent works have demonstrated that it is possible to design special-purpose WE schemes for targeted applications that can be built from weaker assumptions and can also be concretely efficient. Despite the plethora of constructions in the literature that (implicitly) use witness encryption schemes, there has been no systematic study of special-purpose witness encryption schemes. In this work we make progress towards this goal by designing a modular and extensible framework, which allows us to better understand existing schemes and further enables us to construct new witness encryption schemes. The framework is designed around simple but powerful building blocks that we refer to as gadgets. Gadgets can be thought of as witness encryption schemes for small targeted relations (induced by linearly verifiable arguments), but they can be composed with each other to build larger, more expressive relations that are useful in applications. To highlight the power of our framework we methodically recover past results, improve upon them, and even provide new feasibility results. The first application of our framework is a Registered Attribute-Based Encryption scheme [Hohenberger et al. (Eurocrypt 23)] with a linear-sized common reference string (CRS). Numerous Registered Attribute-Based Encryption (R-ABE) constructions have been introduced, though a black-box R-ABE construction with a CRS linear in the number of users has been a persistent open problem, with the state of the art concretely being N^{1.58} (Garg et al. [GLWW, CRYPTO 24]). Empowered by our witness encryption framework, we provide the first construction of black-box R-ABE with a linear-sized CRS. Our construction is based on a novel realization of encryption for DNF formulas that leverages encryption for set membership. Our second application is a feasibility result for Registered Threshold Encryption (RTE) with succinct ciphertexts. RTE (Branco et al. [ASIACRYPT 2024]) is an analogue of the recently introduced Silent Threshold Encryption (Garg et al. [GKPW, CRYPTO 24]) in the registered setting. We revisit Registered Threshold Encryption and provide an efficient construction, with constant-sized encryption key and ciphertexts, that makes use of our WE framework.

    Introducing two ROS attack variants: breaking one-more unforgeability of BZ blind signatures

    In 2023, Barreto and Zanon proposed a three-round Schnorr-like blind signature scheme, leveraging zero-knowledge proofs to produce one-time signatures as an intermediate step of the protocol. The resulting scheme, called BZ, is proven secure in the discrete-logarithm setting under the one-more discrete logarithm assumption, with (alleged) resistance to the Random inhomogeneities in an Overdetermined Solvable system of linear equations modulo a prime p attack, commonly referred to as the ROS attack. The authors argue that the scheme resists a ROS-based attack by building an adversary whose success depends on extracting the discrete logarithm of the intermediate signing key. In this paper, however, we describe a distinct ROS attack on the BZ scheme, in which a probabilistic polynomial-time attacker can bypass the zero-knowledge proof step to break the one-more unforgeability of the scheme. We also build a BZ variant that, by using one secure hash function instead of two, prevents this particular attack. Unfortunately, though, we show yet another ROS attack that leverages the BZ scheme's structure to break the one-more unforgeability principle again, revealing that this variant is also vulnerable. These results indicate that, like other Schnorr-based strategies, it is hard to build a secure blind signature scheme using BZ's underlying structure.
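For background on the generic problem these attacks target: the polynomial-time ROS solver of Benhamouda et al. (Eurocrypt '21) finds ℓ+1 vectors ρ_j and challenges c with ⟨ρ_j, c⟩ = H(ρ_j) for all j, once ℓ ≥ log2 p sessions are available. The toy Python sketch below (small prime, SHA-256 standing in for the random oracle, all names our own) demonstrates that classical solver; it is not the BZ-specific attacks described in the paper:

```python
import hashlib
import random

P = 65537   # toy prime for illustration; real schemes use ~256-bit p
ELL = 17    # ELL >= log2(P) concurrent sessions suffice

def h_ros(vec):
    """Stand-in for the random oracle H: Z_P^ELL -> Z_P."""
    data = ",".join(map(str, vec)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % P

def ros_solve(seed=1):
    rng = random.Random(seed)
    rho, z = [], []
    for i in range(ELL):
        # Two candidate vectors per session, supported on coordinate i only,
        # letting the attacker choose between two challenge values z[i][b].
        while True:
            x = [rng.randrange(1, P) for _ in range(2)]
            vecs = [[x[b] if j == i else 0 for j in range(ELL)]
                    for b in range(2)]
            zi = [h_ros(vecs[b]) * pow(x[b], -1, P) % P for b in range(2)]
            if zi[0] != zi[1]:
                break
        rho.append(vecs)
        z.append(zi)
    d = [(z[i][1] - z[i][0]) % P for i in range(ELL)]
    # The extra (ELL+1)-th vector encodes a binary decomposition of the target.
    rho_last = [pow(2, i, P) * pow(d[i], -1, P) % P for i in range(ELL)]
    t = h_ros(rho_last)
    k = sum(pow(2, i, P) * z[i][0] * pow(d[i], -1, P)
            for i in range(ELL)) % P
    bits = [((t - k) % P >> i) & 1 for i in range(ELL)]
    c = [z[i][bits[i]] for i in range(ELL)]
    return [rho[i][bits[i]] for i in range(ELL)] + [rho_last], c

vectors, c = ros_solve()
# All ELL + 1 constraints <rho_j, c> = H(rho_j) hold simultaneously.
assert all(sum(r * ci for r, ci in zip(v, c)) % P == h_ros(v)
           for v in vectors)
```

The binary-decomposition trick is the heart of the solver: each session contributes one controllable bit, so ℓ ≥ log2 p bits suffice to steer the inner product of the extra vector onto any hash value.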

    Picking up the Fallen Mask: Breaking and Fixing the RS-Mask Countermeasure

    Physical attacks pose a major challenge to the secure implementation of cryptographic algorithms. Although significant progress has been made in countering passive attacks such as side-channel analysis (SCA), protection against fault attacks is still less developed. One reason for this is the broader and more complex nature of fault attacks, which makes it difficult to create standardized fault evaluation methodologies for countermeasures like those used for SCA. This makes it easier to overlook potential vulnerabilities that attackers could exploit. RS-Mask, published at HOST 2020, is one such countermeasure that has been affected by the absence of a systematic analysis method. The fundamental concept behind the countermeasure is to maintain a uniform distribution of variables, regardless of whether they are faulty or correct. This property is particularly effective against Statistical Ineffective Fault Attacks (SIFA), which exploit the dependency between fault propagation and the secret data. In this work, we present several fault scenarios involving single fault injections on the AES implementation protected with RS-Mask, where the fault propagation depends on the secret data. This happens because the random space mapping used in the RS-Mask countermeasure retains a dependency on the secret data, as it is derived from the S-box input. To address this, we propose a new countermeasure based on the core concept of RS-Mask, implementing a single mapping for all S-box inputs and involving an intrinsic duplication. Next, we evaluate the effectiveness of the new countermeasure against fault attacks by comparing the fault detection rate across all possible fault locations and values for every input. Additionally, we examine the differences between faulty and correct outputs for each input. Our results show that the detection rate is uniform for each input, which ensures security against statistical attacks utilizing both effective and ineffective faults. Moreover, the output differences being uniform for each input ensures security against differential fault attacks.

    SoK: Deep Learning-based Side-channel Analysis Trends and Challenges

    Deep learning-based side-channel analysis (DLSCA) represents a powerful paradigm for running side-channel attacks. State-of-the-art DLSCA can break multiple targets with only a single attack trace, requiring minimal feature engineering. As such, DLSCA also represents an extremely active research domain for both industry and academia. At the same time, this very activity makes it more difficult to understand what the current trends and challenges are. In this systematization of knowledge, we provide a critical outlook on a number of developments in DLSCA in the last year, allowing us to offer concrete suggestions. Moreover, we examine the reproducibility perspective, finding that many works still struggle to provide results that can be used by the community.

    In the Vault, But Not Safe: Exploring the Threat of Covert Password Manager Providers

    Password managers have gained significant popularity and are widely recommended as an effective means of enhancing user security. However, current cloud-based architectures assume that password manager providers are trusted entities. This assumption is never questioned because such password managers are operated by their own designers, who are therefore judge and jury. This exposes users to significant risks, as a malicious provider could perform covert actions without being detected to access or alter users' credentials. Most password managers rely solely on the strength of a user-chosen master password. As a result, a covert adversary could conceivably perform large-scale offline attacks to recover credentials protected by weak master passwords. Even more concerning, some password managers do not encrypt credentials on users' devices, transmitting them in plaintext before encrypting them server-side, e.g., Google in its default configuration. On the other hand, key-protected password managers, e.g., KeePassXC, are less commonly used, as they lack functionality for synchronizing credentials across multiple devices. In this paper, we establish a comprehensive set of security properties that should be guaranteed by any cloud-based password manager. We demonstrate that none of the widely deployed mainstream password managers fulfills these fundamental requirements. Nevertheless, we argue that it is feasible to design a solution that is resilient against covert adversaries while allowing users to synchronize their credentials across devices. To support our claims, we propose a password manager design that fulfills all the required properties.

    ABE Cubed: Advanced Benchmarking Extensions for ABE Squared

    Since attribute-based encryption (ABE) was proposed in 2005, it has established itself as a valuable tool in the enforcement of access control. For practice, it is important that ABE satisfies many desirable properties, such as support for multiple authorities and negations. Nowadays, we can attain these properties simultaneously, but none of the schemes that do have been implemented. Furthermore, although simpler schemes have been optimized extensively on a structural level, there is still much room for improvement in these more advanced schemes. However, even if we had schemes with such structural improvements, we would not have a way to benchmark and compare them fairly to measure the effect of those improvements. The only framework that aims to achieve this goal, ABE Squared (TCHES '22), was designed with simpler schemes in mind. In this work, we propose the ABE Cubed framework, which provides advanced benchmarking extensions for ABE Squared. To motivate our framework, we first apply structural improvements to the decentralized ciphertext-policy ABE scheme supporting negations presented by Riepel, Venema and Verma (ACM CCS '24), which results in five new schemes with the same properties. We use these schemes to uncover and bridge the gaps in the ABE Squared framework. In particular, we observe that advanced schemes depend on more variables that affect the schemes' efficiency in different dimensions. Whereas ABE Squared considered only one dimension (as was sufficient for the schemes considered there), we devise a benchmarking strategy that allows us to analyze the schemes in multiple dimensions. As a result, we obtain a more complete overview of the computational efficiency of the schemes, and ultimately, this allows us to make better-founded choices about which schemes provide the best efficiency trade-offs for practice.

    Randomized Agreement, Verifiable Secret Sharing and Multi-Party Computation in Granular Synchrony

    Granular Synchrony (Giridharan et al., DISC 2024) is a new network model that unifies the classic timing models of synchrony and asynchrony. The network is viewed as a graph consisting of a mixture of synchronous, eventually synchronous, and asynchronous communication links. It has been shown that Granular Synchrony allows deterministic Byzantine agreement protocols to achieve a corruption threshold in between complete synchrony and complete asynchrony if and only if the network graph satisfies the right condition, namely, that no two groups of honest parties of size n-2t can be partitioned from each other. In this work, we show that the same network condition is also tight for Agreement on a Common Subset (ACS), Verifiable Secret Sharing (VSS), and secure Multi-Party Computation (MPC) with guaranteed output delivery, when the corruption threshold is between one-third and one-half. Our protocols are randomized and assume that all links are either synchronous or asynchronous. Our ACS protocol incurs an amortized communication cost of O(n^3 λ) bits per input, and our VSS and MPC protocols incur amortized communication costs of O(n^3) and O(n^4) field elements per secret and per multiplication gate, respectively. To design our protocols, we also construct protocols for Reliable Broadcast and Externally Valid Byzantine Agreement (EVBA), which are of independent interest.

    Compressing steganographic payloads with LLM assistance

    Steganography is the practice of concealing messages or information within other non-secret text or media to avoid detection. A central challenge in steganography is balancing payload size with detectability and media constraints: larger payloads increase the risk of detection and require proportionally larger or higher-capacity carriers. In this paper, we introduce a novel approach that combines Huffman coding, suitable dictionary identification, and large language model (LLM) rephrasing techniques to significantly reduce payload size. This enables more efficient use of limited-capacity carriers, such as images, while minimizing the visual or statistical footprint. Our method allows for the embedding of larger payloads into fixed-size media, addressing a key bottleneck in traditional steganographic systems. By optimizing payload compression prior to encoding, we improve both the stealth and scalability of steganographic communication.
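The dictionary-identification and LLM-rephrasing stages of the pipeline cannot be reproduced from the abstract, but the Huffman stage at its core is standard. A minimal Python sketch (all names our own) of building a prefix-free code from payload symbol frequencies and verifying a lossless round trip:

```python
import heapq
from collections import Counter

def huffman_code(freqs: dict) -> dict:
    """Build a prefix-free Huffman code table from symbol frequencies."""
    # Heap entries: [frequency, tiebreak id, {symbol: code-so-far}].
    heap = [[f, i, {s: ""}] for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    nxt = len(heap)
    while len(heap) > 1:
        f0, _, c0 = heapq.heappop(heap)  # two least frequent subtrees
        f1, _, c1 = heapq.heappop(heap)
        for s in c0:
            c0[s] = "0" + c0[s]          # prepend left-branch bit
        for s in c1:
            c1[s] = "1" + c1[s]          # prepend right-branch bit
        heapq.heappush(heap, [f0 + f1, nxt, {**c0, **c1}])
        nxt += 1
    return heap[0][2]

def huffman_decode(bits: str, code: dict) -> str:
    """Greedy decode; valid because the code is prefix-free."""
    inv = {v: k for k, v in code.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in inv:
            out.append(inv[cur])
            cur = ""
    return "".join(out)

payload = "attack at dawn, attack at dusk"
code = huffman_code(Counter(payload))
bits = "".join(code[s] for s in payload)
# The bitstream is shorter than 8 bits/char and decodes losslessly.
assert huffman_decode(bits, code) == payload
assert len(bits) < 8 * len(payload)
```

Shrinking the bitstream before embedding is what lets a fixed-capacity carrier (e.g. an image with a bounded number of usable bits) hold a larger logical payload, which is the bottleneck the paper addresses.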

    Randomized Distributed Function Computation (RDFC): Ultra-Efficient Semantic Communication Applications to Privacy

    We establish the randomized distributed function computation (RDFC) framework, in which a sender transmits just enough information for a receiver to generate a randomized function of the input data. Describing RDFC as a form of semantic communication, which can essentially be seen as a generalized remote-source-coding problem, we show that security and privacy constraints naturally fit this model, as they generally require a randomization step. Using strong coordination metrics, we ensure (local differential) privacy for every input sequence and prove that such guarantees can be met even when no common randomness is shared between the transmitter and receiver. This work provides lower bounds on Wyner's common information (WCI), which is the communication cost when common randomness is absent, and proposes numerical techniques to evaluate the other corner point of the RDFC rate region for continuous-alphabet random variables with unlimited shared randomness. Experiments illustrate that a sufficient amount of common randomness can reduce the semantic communication rate by up to two orders of magnitude compared to the WCI point, while RDFC without any shared randomness still outperforms lossless transmission by a large margin. A finite blocklength analysis further confirms that the privacy parameter gap between the asymptotic and non-asymptotic RDFC methods closes exponentially fast with input length. Our results position RDFC as an energy-efficient semantic communication strategy for privacy-aware distributed computation systems.
