
    Some Applications of Coding Theory in Computational Complexity

    Error-correcting codes and related combinatorial constructs play an important role in several recent (and old) results in computational complexity theory. In this paper we survey results on locally-testable and locally-decodable error-correcting codes, and their applications to complexity theory and to cryptography. Locally decodable codes are error-correcting codes with sub-linear time error-correcting algorithms. They are related to private information retrieval (a type of cryptographic protocol), and they are used in average-case complexity and to construct "hard-core predicates" for one-way permutations. Locally testable codes are error-correcting codes with sub-linear time error-detection algorithms, and they are the combinatorial core of probabilistically checkable proofs.
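
    As a concrete illustration of local decodability (not taken from the survey), the sketch below shows the classic two-query local decoder for the Hadamard code, the same inner-product structure behind the Goldreich-Levin hard-core predicate; the function names and the assumption that only a small constant fraction of codeword positions is corrupted are ours.

```python
import random

def hadamard_encode(x_bits):
    # Hadamard codeword of an n-bit message x: one bit <x, a> mod 2 for every a in {0,1}^n.
    x = sum(b << j for j, b in enumerate(x_bits))
    return [bin(x & a).count("1") % 2 for a in range(2 ** len(x_bits))]

def locally_decode_bit(query, n, i, trials=25):
    # Recover message bit x_i from a corrupted codeword with ~2*trials random queries:
    # <x, r> xor <x, r xor e_i> = x_i, so take a majority vote over random r.
    votes = sum(query(r) ^ query(r ^ (1 << i))
                for r in (random.randrange(2 ** n) for _ in range(trials)))
    return int(votes > trials // 2)
```

    If the corruption rate is below 1/4, each trial is correct with probability above 1/2, so the majority vote recovers x_i with high probability.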

    Unified View for Notions of Bit Security

    A theoretical framework for the bit security of cryptographic primitives/games was first introduced in a pioneering work by Micciancio and Walter (Eurocrypt 2018), and an alternative framework was introduced by the authors (Asiacrypt 2021). First, we observe that quantitative results in the latter framework are preserved even if adversaries are allowed to output the failure symbol. With this slight modification, we show that the notion of bit security in the latter framework is equivalent to that in the former framework up to constant bits. We also demonstrate that several existing notions of advantage can be captured in a unified way. Based on this equivalence, we show that the reduction algorithm of Hast (J. Cryptology, 2004) gives a tight reduction of the Goldreich-Levin hard-core predicate to the hardness of one-way functions. These two results resolve previously open problems. Furthermore, in the latter framework, we show that all games we need to care about are decision games: for every search game G, there is a corresponding decision game G′ such that G has λ-bit security if and only if G′ has λ-bit security. The game G′ consists of a real and an ideal game, where attacks in the ideal game are never approved. Such games often appear in game-hopping security proofs, and the result justifies such proofs because they lose no security. Finally, we provide a distribution replacement theorem: if a game using a distribution Q in a black-box manner is λ-bit secure, and the two distributions P and Q are computationally indistinguishable with λ-bit security, then the game in which Q is replaced by P is also λ-bit secure.
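
    For reference, the Goldreich-Levin hard-core predicate at the center of Hast's reduction is just an inner product over GF(2); the Python sketch below is illustrative, with the one-way function f left as a caller-supplied placeholder.

```python
import secrets

def gl_bit(x: int, r: int) -> int:
    # Goldreich-Levin predicate: inner product of x and r over GF(2),
    # i.e. the parity of the bitwise AND.
    return bin(x & r).count("1") % 2

def gl_hardcore_sample(f, n: int):
    # Publish (f(x), r) and keep the hard-core bit <x, r>.  Goldreich-Levin says that
    # predicting this bit from (f(x), r) noticeably better than 1/2 yields an inverter
    # for f; Hast's reduction (cited above) makes this tight in the bit-security sense.
    x = secrets.randbits(n)
    r = secrets.randbits(n)
    return (f(x), r), gl_bit(x, r)
```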

    Analyzing massive datasets with missing entries: models and algorithms

    We initiate a systematic study of computational models for analyzing algorithms on massive datasets with missing or erased entries, and study the relationship of our models to existing algorithmic models for large datasets. We focus on algorithms whose inputs are naturally represented as functions, codewords, or graphs. First, we generalize the property testing model, one of the most widely studied models of sublinear-time algorithms, to account for the presence of adversarially erased function values. We design efficient erasure-resilient property testing algorithms for several fundamental properties of real-valued functions such as monotonicity, the Lipschitz property, convexity, and linearity. We then investigate the problems of local decoding and local list decoding of codewords containing erasures. We show that, in some cases, these problems are strictly easier than the corresponding problems of decoding codewords containing errors. Moreover, we use this understanding to show a separation between our erasure-resilient property testing model and the (error) tolerant property testing model. The philosophical message of this separation is that errors occurring in large datasets are, in general, harder to deal with than erasures. Finally, we develop models and notions to reason about algorithms that are intended to run on large graphs with missing edges. When running algorithms on large graphs with missing edges, it is desirable to output solutions that are close to the solutions obtained when no edges are missing. With this motivation, we define average sensitivity, a robustness metric for graph algorithms. We discuss various useful features of our definition and design approximation algorithms with good average sensitivity bounds for several optimization problems on graphs. We also define a model of erasure-resilient sublinear-time graph algorithms and design an efficient algorithm for testing the connectivity of graphs.
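
    The average sensitivity notion mentioned above can be approximated empirically; the sketch below is a hedged illustration only (the formal definition also accounts for the algorithm's internal randomness and fixes a particular distance on outputs), with `algorithm` and `distance` supplied by the caller.

```python
import random

def estimate_average_sensitivity(algorithm, edges, distance, samples=200):
    # Empirical proxy for average sensitivity: the expected distance between the output
    # on the full edge list and the output when one uniformly random edge is deleted.
    base = algorithm(edges)
    total = 0.0
    for _ in range(samples):
        dropped = random.randrange(len(edges))
        reduced = edges[:dropped] + edges[dropped + 1:]
        total += distance(base, algorithm(reduced))
    return total / samples
```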

    Condensed Unpredictability

    We consider the task of deriving a key with high HILL entropy (i.e., being computationally indistinguishable from a key with high min-entropy) from an unpredictable source. Prior to this work, the only known way to transform unpredictability into a key that is ε-indistinguishable from having min-entropy was via pseudorandomness, for example by Goldreich-Levin (GL) hardcore bits. This approach has the inherent limitation that from a source with k bits of unpredictability entropy one can derive a key of length (and thus HILL entropy) at most k - 2log(1/ε) bits. In many settings, e.g. when dealing with biometric data, such a 2log(1/ε) bit entropy loss is not an option. Our main technical contribution is a theorem stating that in the high entropy regime, unpredictability implies HILL entropy. Concretely, any variable K with |K| - d bits of unpredictability entropy has the same amount of so-called metric entropy (against real-valued, deterministic distinguishers), which is known to imply the same amount of HILL entropy. The loss in circuit size in this argument is exponential in the entropy gap d, and thus this result only applies for small d (i.e., where the size of distinguishers considered is exponential in d). To overcome the above restriction, we investigate whether it is possible to first "condense" unpredictability entropy and make the entropy gap small. We show that any source with k bits of unpredictability can be condensed into a source of length k with k - 3 bits of unpredictability entropy. Our condenser simply "abuses" the GL construction and derives a k-bit key from a source with k bits of unpredictability. The original GL theorem implies nothing when extracting that many bits, but we show that in this regime, GL still behaves like a "condenser" for unpredictability. This result comes with two caveats: (1) the loss in circuit size is exponential in k, and (2) we require that the source we start with has no HILL entropy (equivalently, one can efficiently check if a guess is correct). We leave it as an intriguing open problem to overcome these restrictions or to prove they are inherent.
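
    A minimal sketch of the "GL as a condenser" idea described above, assuming k public random rows play the role of the GL seed; it is illustrative and not the paper's exact construction.

```python
import secrets

def parity(x: int, r: int) -> int:
    # Inner product over GF(2): parity of the bitwise AND of x and r.
    return bin(x & r).count("1") % 2

def gl_condense(x: int, k: int, rows=None):
    # Derive a k-bit key from a k-bit unpredictable source x as the GL bits
    # <x, r_1>, ..., <x, r_k> for independent public rows r_i.  Extracting this many
    # bits is outside the reach of the original GL theorem; the abstract's point is
    # that the output still retains roughly k - 3 bits of unpredictability entropy.
    rows = rows if rows is not None else [secrets.randbits(k) for _ in range(k)]
    return [parity(x, r) for r in rows], rows
```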

    Bit-Security Preserving Hardness Amplification

    Hardness amplification is one of the important reduction techniques in cryptography, and it has been extensively studied in the literature. The standard XOR lemma known in the literature evaluates hardness in terms of the probability of correct prediction; the hardness is amplified from mildly hard (prediction probability close to 1) to very hard (1/2 + ε) by inducing an ε² multiplicative decrease in circuit size. Translated into the bit-security framework introduced by Micciancio-Walter (EUROCRYPT 2018) and Watanabe-Yasunaga (ASIACRYPT 2021), such a statement may cause a bit-security loss by a factor of log(1/ε). To resolve this issue, we derive a new variant of the XOR lemma in terms of the Rényi advantage, which directly characterizes bit security. In the course of proving this result, we prove a new variant of the hardcore lemma in terms of the conditional squared advantage; our proof uses a boosting algorithm that may output the ⊥ symbol in addition to 0 and 1, which may be of independent interest.
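
    The XOR construction underlying the lemma is easy to state in code; the sketch below samples the amplified predicate on t independent inputs, with `predicate` standing in for the mildly hard Boolean function.

```python
import secrets
from functools import reduce

def xor_amplified_sample(predicate, n: int, t: int):
    # Draw t independent n-bit inputs and XOR the predicate's values on them.
    # If the predicate is mildly hard to predict, the XOR bit becomes very hard:
    # the best prediction probability approaches 1/2 exponentially fast in t,
    # at the cost of a loss in the size of the circuits considered.
    xs = [secrets.randbits(n) for _ in range(t)]
    bit = reduce(lambda a, b: a ^ b, (predicate(x) & 1 for x in xs))
    return xs, bit
```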

    Public keys quality

    Master's dissertation in Mathematics and Computation. The RSA cryptosystem, invented by Ron Rivest, Adi Shamir and Len Adleman ([Rivest et al., 1978]), is the most commonly used cryptosystem for providing privacy and ensuring the authenticity of digital data. RSA is usually used in contexts where the security of digital data is a priority: it is used worldwide by web servers and browsers to secure web traffic, to ensure the privacy and authenticity of e-mail, to secure remote login sessions, and to provide secure electronic credit-card payment systems. Given its importance in the protection of digital data, the vulnerabilities of RSA have been analysed by many researchers. The research carried out so far has led to a number of fascinating attacks. Although these attacks helped to improve the security of the cryptosystem, showing that securely implementing RSA is a nontrivial task, none of them was devastating. This master's thesis discusses the RSA cryptosystem and some of its vulnerabilities, and describes several attacks, both recent and old, together with the underlying mathematical tools they use. Although many types of attacks exist, only a few examples are analysed in this thesis. The final attack, based on the batch-GCD algorithm, was implemented and tested on RSA keys produced by a certified Luna SA Hardware Security Module, and the results are discussed. Random and pseudorandom numbers are fundamental to many cryptographic applications, including RSA: the keys must be generated in a suitably random way. The National Institute of Standards and Technology, the entity responsible for specifying security standards, provides a package named "A Statistical Test Suite for Random and Pseudorandom Number Generators for Cryptographic Applications", which was used in this work to test the randomness of the numbers generated by the Luna SA. All the statistical tests were run on numbers of different bit sizes and the results are discussed. The main purpose of this thesis is to study the subjects above and to create an application capable of testing the randomness of the numbers generated by the Luna SA, as well as to evaluate the security of RSA. This work was developed in partnership with the University of Minho and Multicert.
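
    The batch-GCD attack referred to above searches a large collection of RSA moduli for shared prime factors; the Python sketch below shows the standard product-tree/remainder-tree version of the technique, as an illustration rather than the thesis's actual implementation.

```python
from math import gcd

def batch_gcd(moduli):
    # For each modulus N_i, return gcd(N_i, product of all the other moduli).
    # A result greater than 1 means N_i shares a prime with another key and is broken.
    # The product tree and remainder tree avoid computing all pairwise GCDs.
    tree = [list(moduli)]
    while len(tree[-1]) > 1:                      # product tree, root = product of all moduli
        prev = tree[-1]
        tree.append([prev[i] * prev[i + 1] if i + 1 < len(prev) else prev[i]
                     for i in range(0, len(prev), 2)])
    rems = tree[-1]
    for level in reversed(tree[:-1]):             # remainder tree: reduce mod each node squared
        rems = [rems[i // 2] % (n * n) for i, n in enumerate(level)]
    return [gcd(r // n, n) for r, n in zip(rems, moduli)]
```

    Any modulus for which the returned value exceeds 1 shares a prime with another modulus in the batch and can be factored immediately by a single division.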

    Uses of randomness in algorithms and protocols

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Mathematics, 1989. Includes bibliographical references (p. 225-228). By Joe Kilian.