
    Replacing Probability Distributions in Security Games via Hellinger Distance

    Security of cryptographic primitives is usually proved by assuming "ideal" probability distributions. We need to replace them with approximated "real" distributions in real-world systems without losing the security level. We demonstrate that the Hellinger distance is useful for this problem, whereas the statistical distance is mainly used in the cryptographic literature. First, we show that for preserving λ-bit security of a given security game, closeness of 2^{-λ/2} to the ideal distribution is sufficient for the Hellinger distance, whereas 2^{-λ} is generally required for the statistical distance. The result can be applied to both search and decision primitives through the bit-security framework of Micciancio and Walter (Eurocrypt 2018). We also show that the Hellinger distance gives a tighter evaluation of closeness than the max-log distance when the distance is small. Finally, we show that the leftover hash lemma can be strengthened to the Hellinger distance: a universal family of hash functions gives a strong randomness extractor with optimal entropy loss for the Hellinger distance. Based on these results, a λ-bit entropy loss in randomness extractors is sufficient for preserving λ-bit security; the current understanding based on the statistical distance is that a 2λ-bit entropy loss is necessary.
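
    As a point of reference (not part of the abstract above), the two distance measures being compared can be computed for discrete distributions as follows; the distributions in this sketch are made up for illustration.

```python
import numpy as np

def statistical_distance(p, q):
    """Statistical (total variation) distance: (1/2) * sum_x |p(x) - q(x)|."""
    return 0.5 * np.abs(p - q).sum()

def hellinger_distance(p, q):
    """Hellinger distance: sqrt((1/2) * sum_x (sqrt(p(x)) - sqrt(q(x)))^2)."""
    return np.sqrt(0.5 * ((np.sqrt(p) - np.sqrt(q)) ** 2).sum())

# Toy example: an "ideal" uniform distribution vs. a slightly biased "real" one.
ideal = np.full(4, 0.25)
real = np.array([0.25 + 1e-3, 0.25 - 1e-3, 0.25, 0.25])

print(statistical_distance(ideal, real))  # about 1e-3
print(hellinger_distance(ideal, real))    # also about 1e-3 for this perturbation
```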

    Improved Asymptotic Bounds for Codes Correcting Insertions and Deletions

    This paper studies the cardinality of codes correcting insertions and deletions. We give an asymptotically improved upper bound on code size. The bound is obtained by utilizing the asymmetric property of list decoding for insertions and deletions. (Comment: 9 pages, 2 figures)
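
    For readers unfamiliar with deletion-correcting codes, the sketch below (unrelated to the bounds in the paper) shows the classical single-deletion-correcting Varshamov-Tenengolts code; decoding here is brute force over all possible re-insertions rather than the standard VT decoding rule.

```python
from itertools import product

def vt_syndrome(word):
    """VT syndrome: sum of i * word[i-1] over 1-indexed positions, mod len(word) + 1."""
    return sum(i * b for i, b in enumerate(word, start=1)) % (len(word) + 1)

def vt_codewords(n, a=0):
    """All words of the single-deletion-correcting Varshamov-Tenengolts code VT_a(n)."""
    return [w for w in product((0, 1), repeat=n) if vt_syndrome(w) == a]

def correct_single_deletion(received, n, a=0):
    """Recover the unique VT_a(n) codeword from which one bit was deleted (brute force)."""
    candidates = {received[:i] + (bit,) + received[i:]
                  for i in range(n) for bit in (0, 1)}
    matches = {c for c in candidates if vt_syndrome(c) == a}
    assert len(matches) == 1   # guaranteed by the single-deletion-correcting property
    return matches.pop()

# Example: delete one bit from a codeword and recover it.
n = 8
codeword = vt_codewords(n)[5]
received = codeword[:3] + codeword[4:]   # the bit at index 3 is deleted
assert correct_single_deletion(received, n) == codeword
```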

    Uncorrectable Errors of Weight Half the Minimum Distance for Binary Linear Codes

    A lower bound on the number of uncorrectable errors of weight half the minimum distance is derived for binary linear codes satisfying some condition. The condition is satisfied by some primitive BCH codes, extended primitive BCH codes, Reed-Muller codes, and random linear codes. The bound asymptotically coincides with the corresponding upper bound for Reed-Muller codes and random linear codes. By generalizing the idea of the lower bound, a lower bound on the number of uncorrectable errors for weights larger than half the minimum distance is also obtained, but the generalized lower bound is weak for large weights. The monotone error structure and the related notions of larger halves and trial sets, introduced by Helleseth, Kløve, and Levenshtein, are mainly used to derive the bounds. (Comment: 5 pages, to appear in ISIT 200)
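
    The following brute-force sketch (not from the paper) makes the notion concrete for the small [8,4,4] extended Hamming code: taking a correctable error to be the chosen coset leader of its coset, which is one common convention, it counts how many errors of weight d/2 = 2 are uncorrectable.

```python
from itertools import product

# Generator matrix of the [8,4,4] extended Hamming code (one standard choice).
G = [
    (1, 0, 0, 0, 0, 1, 1, 1),
    (0, 1, 0, 0, 1, 0, 1, 1),
    (0, 0, 1, 0, 1, 1, 0, 1),
    (0, 0, 0, 1, 1, 1, 1, 0),
]

def xor(u, v):
    return tuple(a ^ b for a, b in zip(u, v))

# All 16 codewords: GF(2)-linear combinations of the rows of G.
codewords = set()
for m in product((0, 1), repeat=4):
    cw = (0,) * 8
    for bit, row in zip(m, G):
        if bit:
            cw = xor(cw, row)
    codewords.add(cw)

# A correctable error is the coset leader of its coset: a minimum-weight vector
# in the coset, with ties broken lexicographically here.
leaders, seen = set(), set()
for v in product((0, 1), repeat=8):
    cs = frozenset(xor(v, c) for c in codewords)
    if cs not in seen:
        seen.add(cs)
        leaders.add(min(cs, key=lambda x: (sum(x), x)))

# The code has minimum distance d = 4, so "weight half the minimum distance" is 2.
weight2 = [v for v in product((0, 1), repeat=8) if sum(v) == 2]
uncorrectable = [v for v in weight2 if v not in leaders]
print(len(weight2), len(uncorrectable))   # 28 weight-2 errors, 21 of them uncorrectable
```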

    Public-Key Encryption with Lazy Parties

    In a public-key encryption scheme, if a sender is not concerned about the security of a message and is unwilling to generate costly randomness, the security of the encrypted message can be compromised. In this work, we characterize such "lazy parties", who are regarded as honest parties but are unwilling to perform a costly task when they are not concerned about security. Specifically, we consider a rather simple setting in which the costly task is to generate randomness used in algorithms, and parties can choose either perfect randomness or a fixed string. We model lazy parties as rational players who behave rationally to maximize their utilities, and define a security game between the parties and an adversary. Since a standard secure encryption scheme does not work in this setting, we provide constructions of secure encryption schemes in various settings.
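
    The following toy sketch (not the paper's construction) illustrates why substituting a fixed string for the randomness is harmful: with textbook ElGamal over illustrative parameters, a lazy sender who reuses a fixed value produces identical ciphertexts for repeated messages, so an eavesdropper learns message equality.

```python
import secrets

# Toy textbook ElGamal over Z_p^* (illustrative parameters, NOT secure choices).
p = 2**127 - 1   # a Mersenne prime
g = 3

x = secrets.randbelow(p - 2) + 1   # receiver's secret key
h = pow(g, x, p)                   # receiver's public key

def encrypt(m, r):
    """Encrypt integer m (1 <= m < p) using sender randomness r."""
    return (pow(g, r, p), (m * pow(h, r, p)) % p)

# A diligent sender draws fresh randomness for every message.
fresh = [encrypt(42, secrets.randbelow(p - 2) + 1) for _ in range(2)]
# A lazy sender substitutes a fixed string for the costly randomness.
lazy = [encrypt(42, 123456789) for _ in range(2)]

print(fresh[0] != fresh[1])  # True: repeated messages look unrelated
print(lazy[0] == lazy[1])    # True: repeated messages are exposed as equal
```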

    Practical Card-Based Protocol for Three-Input Majority

    We present a card-based protocol for computing the three-input majority function using six cards. The protocol essentially consists of performing a simple XOR protocol twice. Compared to existing protocols, our protocol does not require private operations other than choosing cards.
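
    The abstract does not spell out the protocol, so no attempt is made to reproduce it here; for reference, the function being computed satisfies the standard identity maj(a, b, c) = a if a = b and c otherwise, which the following brute-force check confirms.

```python
from itertools import product

# Three-input majority and the identity "output a when a == b, else c".
maj = lambda a, b, c: (a & b) | (b & c) | (c & a)
assert all(maj(a, b, c) == (a if a == b else c)
           for a, b, c in product((0, 1), repeat=3))
```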

    Bit-Security Preserving Hardness Amplification

    Hardness amplification is one of the important reduction techniques in cryptography, and it has been extensively studied in the literature. The standard XOR lemma known in the literature evaluates hardness in terms of the probability of correct prediction; the hardness is amplified from mildly hard (correct-prediction probability close to 1) to very hard (close to 1/2 + ε) at the cost of an ε² multiplicative decrease in circuit size. Translating such a statement into the bit-security framework introduced by Micciancio-Walter (EUROCRYPT 2018) and Watanabe-Yasunaga (ASIACRYPT 2021), it may cause a bit-security loss by a factor of log(1/ε). To resolve this issue, we derive a new variant of the XOR lemma in terms of the Rényi advantage, which directly characterizes the bit security. In the course of proving this result, we prove a new variant of the hardcore lemma in terms of the conditional squared advantage; our proof uses a boosting algorithm that may output the ⊥ symbol in addition to 0 and 1, which may be of independent interest.
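
    For context, one standard informal formulation of Yao's XOR lemma, the kind of statement the abstract refers to, is given below; the exact circuit-size loss varies across proofs, and this formulation is not taken from the paper.

```latex
% One common informal statement of Yao's XOR lemma (not from the paper).
% Hypothesis: every circuit of size s predicts f correctly with probability at most 1 - \delta.
% Conclusion: every circuit C' of size roughly \varepsilon^2 \cdot s satisfies
\[
  \Pr_{x_1,\dots,x_k}\bigl[\, C'(x_1,\dots,x_k) = f(x_1) \oplus \cdots \oplus f(x_k) \,\bigr]
  \;\le\; \frac{1}{2} + \varepsilon + (1 - \delta)^k .
\]
```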