
    Decryption Failure Attacks on Post-Quantum Cryptography

    This dissertation mainly discusses new cryptanalytic results related to the secure implementation of the next generation of asymmetric cryptography, or Public-Key Cryptography (PKC). PKC, as it has been deployed until today, depends heavily on the integer factorization and discrete logarithm problems. Unfortunately, it has been well known since the mid-90s that these mathematical problems can be solved in polynomial time on a quantum computer using Peter Shor's algorithm. The recently accelerated pace of R&D towards quantum computers, eventually of sufficient size and power to threaten cryptography, has led the crypto research community towards a major shift of focus. A project towards standardization of Post-Quantum Cryptography (PQC) was launched by the US-based standardization organization NIST. PQC is the name given to algorithms designed to run on classical hardware/software while being resistant to attacks from quantum computers, which makes PQC well suited to replace the current asymmetric schemes. A primary motivation for the project is to focus publicly available research on finding weaknesses in the proposed next generation of PKC. For public-key encryption (PKE) or digital signature (DS) schemes to be considered secure, they must be shown to rely on well-known mathematical problems, with theoretical proofs of security under established models such as indistinguishability under chosen-ciphertext attack (IND-CCA). They must also withstand serious attack attempts by well-renowned cryptographers, concerning both the theoretical security and the actual software/hardware instantiations. It is well known that security models such as IND-CCA are not designed to capture the intricacies of inner-state leakages. Such leakages are known as side-channels, currently a major topic of interest in the NIST PQC project. This dissertation focuses on two questions: 1) how does the low but non-zero probability of decryption failures affect the cryptanalysis of these new PQC candidates, and 2) how might side-channel vulnerabilities inadvertently be introduced when going from theory to practical software/hardware implementations? Of main concern are PQC algorithms based on lattice theory and coding theory. The primary contributions are the discovery of novel decryption-failure side-channel attacks, improvements on existing attacks, an alternative implementation of part of a PQC scheme, and some more theoretical cryptanalytic results.

    Semantic and effective communications

    Shannon and Weaver categorized communications into three levels of problems: the technical problem, which tries to answer the question "how accurately can the symbols of communication be transmitted?"; the semantic problem, which asks "how precisely do the transmitted symbols convey the desired meaning?"; and the effectiveness problem, which strives to answer "how effectively does the received meaning affect conduct in the desired way?". Traditionally, communication technologies mainly addressed the technical problem, ignoring the semantic and effectiveness problems. Recently, there has been increasing interest in addressing the higher-level semantic and effectiveness problems, with proposals ranging from semantic to goal-oriented communications. In this thesis, we propose to formulate the semantic problem as a joint source-channel coding (JSCC) problem and the effectiveness problem as a multi-agent partially observable Markov decision process (MA-POMDP). For the semantic problem, we propose DeepWiVe, the first-ever end-to-end JSCC video transmission scheme that leverages the power of deep neural networks (DNNs) to directly map video signals to channel symbols, combining the video compression, channel coding, and modulation steps into a single neural transform. We further show that it is possible to use predefined constellation designs as well as to secure the physical-layer communication against eavesdroppers for deep learning (DL) driven JSCC schemes, making such schemes much more viable for real-world deployment. For the effectiveness problem, we propose a novel formulation by considering multiple agents communicating over a noisy channel in order to achieve better coordination and cooperation in a multi-agent reinforcement learning (MARL) framework. Specifically, we consider a MA-POMDP in which the agents, in addition to interacting with the environment, can also communicate with each other over a noisy communication channel. The noisy communication channel is considered explicitly as part of the dynamics of the environment, and the message each agent sends is part of the action that the agent can take. As a result, the agents learn not only to collaborate with each other but also to communicate "effectively" over a noisy channel. Moreover, we show that this framework generalizes both the semantic and technical problems. In both instances, we show that the resultant communication scheme is superior to one where the communication is considered separately from the underlying semantics or goal of the problem.
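
    As a rough illustration of the effectiveness-problem formulation described above, the Python sketch below builds a toy two-agent environment in which each action carries both an environment decision (here, a guess) and a message bit, and the message reaches the other agent only through a binary symmetric channel that is modelled as part of the environment dynamics. The environment, reward, and channel parameters are invented for illustration and are not taken from the thesis.

```python
import random

CHANNEL_FLIP_PROB = 0.1  # assumed BSC crossover probability (illustrative)


def bsc(bit: int, flip_prob: float = CHANNEL_FLIP_PROB) -> int:
    """Binary symmetric channel: flip the transmitted bit with probability flip_prob."""
    return bit ^ 1 if random.random() < flip_prob else bit


class ToyMAPOMDPEnv:
    """Two agents, each holding a hidden bit; reward for guessing the other's bit."""

    def __init__(self):
        self.hidden = [random.randint(0, 1), random.randint(0, 1)]

    def step(self, actions):
        # actions[i] = (guess_i, message_bit_i); the message is part of the action.
        rewards, observations = [], []
        for i, (guess, _message) in enumerate(actions):
            other = 1 - i
            # Reward for guessing the other agent's hidden bit correctly.
            rewards.append(1.0 if guess == self.hidden[other] else 0.0)
            # Next observation for agent i: the other agent's message after the noisy channel.
            observations.append(bsc(actions[other][1]))
        return observations, rewards


env = ToyMAPOMDPEnv()
# Each agent truthfully transmits its own hidden bit and guesses at random;
# a learning agent would instead condition its guess on past noisy observations.
acts = [(random.randint(0, 1), env.hidden[0]), (random.randint(0, 1), env.hidden[1])]
obs, rew = env.step(acts)
print("noisy observations:", obs, "rewards:", rew)
```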

    Fast Computation of Cyclic Convolutions and Their Applications in Code-Based Asymmetric Encryption Schemes

    The development of fast algorithms for key generation, encryption, and decryption does more than increase the efficiency of the corresponding operations. Such fast algorithms, for example for asymmetric cryptosystems based on quasi-cyclic codes, make it possible to experimentally study the dependence of the decryption failure rate on the code parameters for small security levels and to extrapolate these results to larger security levels. In this article, we explore efficient cyclic convolution algorithms designed, among other applications, for use in the encoding and decoding algorithms of quasi-cyclic LDPC and MDPC codes. The corresponding convolutions operate on binary vectors, which can be either sparse or dense. The proposed algorithms achieve high speed by storing sparse vectors compactly, using hardware-supported XOR instructions, and replacing modulo operations with specialized loop transformations. These fast algorithms have potential applications not only in cryptography, but also in other areas where convolutions are used.
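
    The core operation described above, multiplying a sparse binary vector by a dense one modulo x^n - 1 using word-level XORs while avoiding per-coefficient modulo reductions, can be sketched in a few lines of Python. This is only a minimal illustration of the idea (bit-packing the dense operand and rotating it once per nonzero position of the sparse operand); the function name and toy parameters are invented here, and the article's actual implementations are considerably more optimized.

```python
def cyclic_convolve_sparse_dense(sparse_positions, dense_bits, n):
    """Cyclic convolution over GF(2): returns (sparse * dense) mod (x^n - 1).

    sparse_positions: indices of the nonzero coefficients of the sparse vector.
    dense_bits: the dense vector packed into a Python int (bit i = coefficient of x^i).
    """
    mask = (1 << n) - 1
    result = 0
    for shift in sparse_positions:
        # Rotate dense_bits left by `shift` within n bits (two shifts instead of
        # per-index modulo arithmetic), then accumulate with XOR.
        rotated = ((dense_bits << shift) | (dense_bits >> (n - shift))) & mask
        result ^= rotated
    return result


# Tiny usage example with n = 7: sparse = x + x^3, dense = 1 + x^2 + x^6.
n = 7
sparse = [1, 3]
dense = (1 << 0) | (1 << 2) | (1 << 6)
product = cyclic_convolve_sparse_dense(sparse, dense, n)
print(format(product, f"0{n}b")[::-1])  # coefficients of x^0 ... x^6
```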

    SCA-LDPC: A Code-Based Framework for Key-Recovery Side-Channel Attacks on Post-Quantum Encryption Schemes

    Whereas theoretical attacks on standardized crypto primitives rarely lead to actual practical attacks, the situation is different for side-channel attacks, and improvements in their performance are of utmost importance. In this paper, we propose a framework to be used in key-recovery side-channel attacks on CCA-secure post-quantum encryption schemes. The basic idea is to construct chosen-ciphertext queries to a plaintext-checking oracle that collect information on a set of secret variables in a single query. A large number of such queries, each related to a different set of secret variables, is then considered and modeled as a low-density parity-check (LDPC) code. The secret variables are finally determined through efficient iterative decoding methods, such as belief propagation, using soft information. The utilization of LDPC codes offers efficient decoding, source compression, and error-correction benefits. It has been demonstrated that this approach provides significant improvements over previous work by reducing the required number of queries, such as the number of traces in a power attack. The framework is demonstrated and implemented in two different cases. On the one hand, we attack implementations of HQC in a timing attack, lowering the number of required traces considerably compared to previous work. On the other hand, we describe and implement a full attack on a masked implementation of Kyber using power analysis. Using the ChipWhisperer evaluation platform, our real-world attacks recover the long-term secret key of a first-order masked implementation of Kyber-768 with an average of only 12 power traces.
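
    The decoding step the framework relies on is standard soft-input belief propagation over a parity-check matrix. The sketch below is a generic sum-product decoder run on a toy code, written only to make that step concrete; the parity-check matrix, LLR values, and function name are illustrative and unrelated to the actual SCA-LDPC codes or the side-channel-derived soft information.

```python
import math


def belief_propagation(H, llr, iterations=20):
    """Return hard decisions for each variable given channel LLRs and checks H."""
    m, n = len(H), len(llr)
    # msg[c][v]: check-to-variable message, used only on edges where H[c][v] == 1.
    msg = [[0.0] * n for _ in range(m)]
    for _ in range(iterations):
        # Variable-to-check messages: channel LLR plus all other incoming check messages.
        v2c = [[0.0] * n for _ in range(m)]
        for c in range(m):
            for v in range(n):
                if H[c][v]:
                    v2c[c][v] = llr[v] + sum(msg[c2][v] for c2 in range(m)
                                             if H[c2][v] and c2 != c)
        # Check-to-variable messages (tanh rule).
        for c in range(m):
            for v in range(n):
                if H[c][v]:
                    prod = 1.0
                    for v2 in range(n):
                        if H[c][v2] and v2 != v:
                            prod *= math.tanh(v2c[c][v2] / 2.0)
                    prod = max(min(prod, 0.999999), -0.999999)
                    msg[c][v] = 2.0 * math.atanh(prod)
    posterior = [llr[v] + sum(msg[c][v] for c in range(m) if H[c][v])
                 for v in range(n)]
    return [0 if p >= 0 else 1 for p in posterior]


# Toy example: three parity checks on six variables, soft information that weakly
# suggests bit 1 is in error relative to the all-zero codeword (positive LLR = "probably 0").
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]
llr = [2.1, -0.3, 1.7, 0.9, 1.2, 0.4]
print(belief_propagation(H, llr))
```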

    Flexible Quasi-Dyadic Code-Based Public-Key Encryption and Signature

    A drawback of code-based public-key cryptosystems is that their public-key size is large: it takes several hundred KB to several MB for typical parameters. While several attempts have been made to reduce it, most of them have failed, the exception being the Quasi-Dyadic (QD) public key (for large extension degrees). While an attack has been proposed on the QD public key (for small extension degrees), it can be prevented by making the extension degree $m$ larger, specifically by making $q^{m(m-1)}$ large enough, where $q$ is the base field and, for a binary code, $q = 2$. The drawback of QD is, however, that it must hold that $n \ll 2^m - t$ (at least $n \leq 2^{m-1}$), where $n$ and $t$ are the code length and the error-correction capability of the underlying code. If this is not satisfied, key generation fails, since it is performed by trial and error. This condition also prevents QD from generating parameters for code-based digital signatures, since without making $n$ close to $2^m - t$, $2^{mt}/\binom{n}{t}$ cannot be small. To overcome these problems, we propose "Flexible" Quasi-Dyadic (FQD) public keys that can even achieve $n = 2^m - t$ in one shot. Advantages of FQD include: 1) it can reduce the public-key size further, and 2) it can also be applied to code-based digital signatures.
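
    The signature-parameter constraint can be made concrete with a back-of-the-envelope computation of the exponent of $2^{mt}/\binom{n}{t}$, roughly the expected number of decoding attempts per signature in CFS-style constructions. The parameters below are illustrative only, not those of the paper; the point is simply that the exponent drops as $n$ approaches $2^m - t$.

```python
from math import comb, log2


def log2_attempts(m, n, t):
    """log2 of 2^{mt} / C(n, t): roughly the signing-cost exponent."""
    return m * t - log2(comb(n, t))


m, t = 20, 10  # made-up example parameters
for n in (2 ** (m - 1), 2 ** m - 4 * t, 2 ** m - t):
    print(f"n = {n:8d}: log2(2^(mt)/C(n,t)) ~ {log2_attempts(m, n, t):5.1f}")
```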

    Studies on Physical Layer Cryptography in Free Space Optical Communications

    Waseda University diploma number: Shin 7517 (Waseda University).

    Adaptive and Secure Distributed Source Coding for Video and Image Compression

    Distributed Video Coding (DVC) is rapidly gaining popularity as a low-cost, robust video coding solution that reduces video encoding complexity. DVC is built on Distributed Source Coding (DSC) principles, where the correlation between the sources to be compressed is exploited at the decoder side. In the case of DVC, a current frame available only at the encoder is estimated at the decoder with side information (SI) generated from other frames available at the decoder. The inter-frame correlation in DVC is then exploited at the decoder based on the received syndromes of the Wyner-Ziv (WZ) frame and the SI frame. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations.
    Generally, the existing correlation estimation methods in DVC can be classified into two main types: online estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamic, OTF estimation methods usually outperform online estimation techniques, at the cost of increased decoding complexity.
    In order to exploit the robustness of DVC code designs, I integrate particle filtering with standard belief propagation decoding for inference on one joint factor graph to estimate the correlation between the source and the side information. Correlation estimation is performed OTF, as it is carried out jointly with decoding of the graph-based DSC code. Moreover, I demonstrate the proposed scheme within state-of-the-art DVC systems, which are transform-domain based with a feedback channel for rate adaptation. Experimental results show that the proposed system gives a significant performance improvement compared to the benchmark state-of-the-art DISCOVER codec (including its correlation estimation) and to the case without dynamic particle-filter tracking, due to improved knowledge of timely correlation statistics via the combination of joint bit-plane decoding and particle-based BP tracking.
    Although sampling-based (e.g., particle filtering) OTF correlation estimation improves the performance of DVC, it also introduces significant computational overhead and increases the decoding delay. Therefore, I tackle this difficulty through a low-complexity adaptive DVC scheme using deterministic approximate inference, where correlation estimation is again performed OTF jointly with decoding of the factor-graph-based DVC code, but with much lower complexity. The proposed adaptive DVC scheme is based on expectation propagation (EP), which generally offers a better trade-off between accuracy and complexity than other deterministic approximate inference methods. Experimental results show that the proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance with significantly lower complexity compared with the sampling-based method.
    Finally, I extend the concept of DVC (i.e., exploiting inter-frame correlation at the decoder side) to the compression of biomedical imaging data (e.g., CT sequences) in a lossless setup, where each slice of a CT sequence is analogous to a frame of a video sequence.
    Besides compression efficiency, another important concern for biomedical imaging data is privacy and security. Ideally, biomedical data should be kept in a secure manner (i.e., encrypted). An intuitive approach is to compress the encrypted biomedical data directly. Unfortunately, traditional compression algorithms (which remove redundancy by exploiting the structure of the data) fail to handle encrypted data, because encrypted data appear random and lack the structure of the original data. The "best" practice has been to compress the data before encryption; however, this is not appropriate for privacy-related scenarios (e.g., biomedical applications), where one wants to process data while keeping them encrypted and safe. In this dissertation, I develop a Secure Privacy-presERving Medical Image CompRessiOn (SUPERMICRO) framework based on DSC, which makes compression of the encrypted data possible without compromising either security or compression efficiency. The approach guarantees data transmission and storage in a privacy-preserving manner. I tested the proposed framework on two CT image sequences and compared it with state-of-the-art JPEG 2000 lossless compression. Experimental results demonstrate that the SUPERMICRO framework provides enhanced security and privacy protection, as well as high compression performance.
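
    As a toy illustration of the on-the-fly correlation-tracking idea (not the thesis implementation, which performs the estimation jointly with factor-graph decoding), the Python sketch below uses a plain particle filter to follow a slowly drifting crossover probability between co-located WZ and SI bits. All names and parameters are made up for the example.

```python
import random

N_PARTICLES = 200
RANDOM_WALK_STD = 0.01  # assumed drift of the correlation parameter per time step


def propagate(p):
    """Random-walk transition model for the WZ/SI crossover probability."""
    return min(max(p + random.gauss(0.0, RANDOM_WALK_STD), 1e-3), 0.5)


def likelihood(p, disagree):
    """Probability of observing (dis)agreement between co-located WZ and SI bits."""
    return p if disagree else 1.0 - p


# Simulated "truth": the WZ/SI correlation slowly degrades over time.
true_p = [0.05 + 0.002 * t for t in range(100)]
observations = [1 if random.random() < p else 0 for p in true_p]  # 1 = bits disagree

particles = [random.uniform(0.01, 0.3) for _ in range(N_PARTICLES)]
for t, obs in enumerate(observations):
    particles = [propagate(p) for p in particles]           # predict
    weights = [likelihood(p, obs) for p in particles]       # weight by observation
    particles = random.choices(particles, weights=weights, k=N_PARTICLES)  # resample
    if t % 25 == 0:
        est = sum(particles) / len(particles)
        print(f"t={t:3d}  true p={true_p[t]:.3f}  estimated p={est:.3f}")
```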

    Some Notes on Code-Based Cryptography

    This thesis presents new cryptanalytic results in several areas of code-based cryptography. In addition, we investigate the possibility of using convolutional codes in code-based public-key cryptography. The first algorithm we present is an information-set decoding algorithm, aimed at the problem of decoding random linear codes. We apply the generalized birthday technique to information-set decoding, improving the computational complexity over previous approaches. Next, we present a new version of the McEliece public-key cryptosystem based on convolutional codes. The original construction uses Goppa codes, an algebraic code family admitting a well-defined code structure. In the two constructions proposed, large parts of randomly generated parity checks are used. By increasing the entropy of the generator matrix, this presumably makes structured attacks more difficult. Following this, we analyze a McEliece variant based on quasi-cyclic MDPC codes. We show that when the underlying code construction has an even dimension, the system is susceptible to what we call a squaring attack. Our results show that the new squaring attack allows for great complexity improvements over previous attacks on this particular McEliece construction. Then, we introduce two new techniques for finding low-weight polynomial multiples. First, we propose a general technique based on a reduction to the minimum-distance problem in coding theory, which increases the multiplicity of the low-weight codeword by extending the code. We use this algorithm to break some of the instances used by the TCHo cryptosystem. Second, we propose an algorithm for finding weight-4 polynomial multiples. By using the generalized birthday technique in conjunction with increasing the multiplicity of the low-weight polynomial multiple, we obtain a much better complexity than previously known algorithms. Lastly, two new algorithms for the learning parity with noise (LPN) problem are proposed. The first is a general algorithm, applicable to any instance of LPN. It performs favorably compared to previously known algorithms, breaking the 80-bit security of the widely used (512, 1/8) instance. The second focuses on LPN instances over a polynomial ring, when the generator polynomial is reducible. Using this algorithm, we break an 80-bit security instance of the Lapin cryptosystem.
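
    The weight-4 polynomial-multiple search mentioned above can be illustrated with a small birthday-style collision: precompute x^e mod P(x) for a range of exponents, hash the residues of exponent pairs, and a collision between two disjoint pairs yields x^a + x^b + x^c + x^d divisible by P(x). The sketch below is a brute-force toy over a tiny polynomial, with invented names; it shows only the collision idea, not the complexity-optimized algorithm (which additionally exploits the multiplicity of the multiple, as described above).

```python
from itertools import combinations


def poly_mod(a, p):
    """Reduce polynomial a modulo p over GF(2); polynomials are packed into ints."""
    dp = p.bit_length() - 1
    while a.bit_length() - 1 >= dp:
        a ^= p << (a.bit_length() - 1 - dp)
    return a


def weight4_multiple(p, max_exp):
    """Search exponents up to max_exp for x^a + x^b + x^c + x^d divisible by p."""
    residues = [poly_mod(1 << e, p) for e in range(max_exp + 1)]
    seen = {}  # residue of x^a + x^b  ->  (a, b)
    for a, b in combinations(range(max_exp + 1), 2):
        r = residues[a] ^ residues[b]
        # A collision between two pairs with four distinct exponents gives a multiple.
        if r in seen and len({a, b, *seen[r]}) == 4:
            return tuple(sorted(seen[r] + (a, b)))
        seen.setdefault(r, (a, b))
    return None


# Example: P(x) = x^4 + x + 1; prints four exponents of a weight-4 multiple of P.
P = 0b10011
print(weight4_multiple(P, 20))
```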