    On Selection of Samples in Algebraic Attacks and a New Technique to Find Hidden Low Degree Equations

    The best way of selecting samples in algebraic attacks against block ciphers is not well explored and understood. We introduce a simple strategy for selecting plaintexts and demonstrate its strength by breaking reduced-round KATAN32 and LBlock. In both cases, we present a practical attack which outperforms previous attempts at algebraic cryptanalysis, whose complexities were close to exhaustive search. The attack is based on the selection of samples using the cube attack and ElimLin, which was presented at FSE'12, and a new technique called Universal Proning. In the case of LBlock, we break 10 out of 32 rounds. In KATAN32, we break 78 out of 254 rounds. Unlike previous attempts, which break a smaller number of rounds, we do not guess any bit of the key and use only structural properties of the cipher, which lets us break a higher number of rounds with much lower complexity. We show that cube attacks owe their success to the same properties and can therefore be used as a heuristic for selecting the samples in an algebraic attack. The performance of ElimLin is further enhanced by the new Universal Proning technique, which allows us to discover linear equations that are not found by ElimLin alone.
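
    As a rough illustration of the ElimLin step the abstract refers to, the following toy sketch (my reconstruction of the algorithm's skeleton, not the authors' code) alternates two phases over a GF(2) polynomial system: Gaussian elimination to find linear equations hidden in the linear span of the system, then variable elimination by substitution.

```python
# Toy ElimLin over GF(2). A polynomial is a frozenset of monomials; a
# monomial is a frozenset of variable indices (frozenset() = constant 1).

def substitute(poly, var, value):
    """Replace `var` by the polynomial `value` (GF(2) arithmetic)."""
    out = set()
    for mono in poly:
        if var in mono:
            rest = mono - {var}
            for term in value:          # distribute rest * value
                out ^= {rest | term}    # set-toggle == XOR of coefficients
        else:
            out ^= {mono}
    return frozenset(out)

def find_linear(system):
    """One pass of Gaussian elimination; return linear rows in the span."""
    # Order monomials nonlinear-first, so a row led by a monomial of
    # degree <= 1 contains no nonlinear monomial at all.
    monos = sorted({m for p in system for m in p},
                   key=lambda m: (-len(m), tuple(sorted(m))))
    idx = {m: i for i, m in enumerate(monos)}
    basis, linear = {}, []
    for p in system:
        row = sum(1 << idx[m] for m in p)
        while row:
            piv = (row & -row).bit_length() - 1
            if piv in basis:
                row ^= basis[piv]
            else:
                basis[piv] = row
                if len(monos[piv]) <= 1:   # led by x_i or 1: a linear equation
                    linear.append(frozenset(
                        monos[i] for i in range(len(monos)) if (row >> i) & 1))
                break
    return linear

def elimlin(system):
    system = [frozenset(p) for p in system if p]
    solved = []
    while True:
        eq = next((l for l in find_linear(system)
                   if any(len(m) == 1 for m in l)), None)
        if eq is None:
            return solved, system
        var = next(iter(next(m for m in eq if len(m) == 1)))
        value = eq ^ {frozenset([var])}     # x_var = rest of the equation
        solved.append((var, value))
        system = [q for q in (substitute(p, var, value) for p in system) if q]

# x0*x1 + x2 = 0 and x0*x1 + x0 + 1 = 0  ==>  x0 + x2 + 1 is in the span
f1 = {frozenset([0, 1]), frozenset([2])}
f2 = {frozenset([0, 1]), frozenset([0]), frozenset()}
print(elimlin([f1, f2]))
```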

    Algebraic Cryptanalysis of Deterministic Symmetric Encryption

    Deterministic symmetric encryption is widely used in many cryptographic applications. The security of deterministic block and stream ciphers is evaluated using cryptanalysis. Cryptanalysis is divided into two main categories: statistical cryptanalysis and algebraic cryptanalysis. Statistical cryptanalysis is a powerful tool for evaluating security, but it often requires a large number of plaintext/ciphertext pairs, which is not always available in real-life scenarios. Algebraic cryptanalysis requires a smaller number of plaintext/ciphertext pairs, but such attacks are often underestimated compared to statistical methods. In algebraic cryptanalysis, we consider a polynomial system representing the cipher; a solution of this system reveals the secret key used in the encryption. The contribution of this thesis is twofold. Firstly, we evaluate the performance of existing algebraic techniques with respect to the number of plaintext/ciphertext pairs and their selection. We introduce a new strategy for the selection of samples. We build this strategy on cube attacks, a well-known technique in algebraic cryptanalysis. We use cube attacks as a fast heuristic to determine sets of plaintexts for which standard algebraic methods, such as Groebner basis techniques or SAT solvers, are more efficient. Secondly, we develop a new technique for algebraic cryptanalysis which allows us to speed up existing Groebner basis techniques. This is achieved by efficiently finding special polynomials called mutants. Using these mutants in Groebner basis computations and SAT solvers reduces the computational cost of solving the system. Hence, both our methods are designed as tools for building the polynomial system representing a cipher. Both tools can be combined, and they lead to a significant speedup even for very simple algebraic solvers.
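
    The cube-attack heuristic mentioned above can be illustrated with a toy example (the black-box function `f`, the cube indices and all parameters below are my assumptions, not from the thesis): summing the output of a cipher over a "cube" of plaintext bits yields its superpoly in the key bits, and a BLR linearity test plus unit-vector queries recovers the superpoly when it is linear. The same cube of plaintexts is then a candidate sample set for an algebraic attack.

```python
from itertools import product
import random

def f(x, k):
    """Toy 'cipher' output bit; x and k are tuples of bits."""
    return (x[0] & x[1] & k[0]) ^ (x[0] & x[1] & x[2] & k[1]) ^ (x[2] & k[2]) ^ x[3]

def cube_sum(cube, key, nx=4):
    """XOR of f over all assignments of the cube variables (others = 0)."""
    acc = 0
    for bits in product((0, 1), repeat=len(cube)):
        x = [0] * nx
        for i, b in zip(cube, bits):
            x[i] = b
        acc ^= f(tuple(x), key)
    return acc

def linear_superpoly(cube, nk=3, trials=16):
    """BLR linearity test, then read off the superpoly's coefficients."""
    zero = tuple([0] * nk)
    p0 = cube_sum(cube, zero)
    for _ in range(trials):                 # check p(a) + p(b) + p(0) == p(a+b)
        a = tuple(random.randint(0, 1) for _ in range(nk))
        b = tuple(random.randint(0, 1) for _ in range(nk))
        ab = tuple(u ^ v for u, v in zip(a, b))
        if cube_sum(cube, a) ^ cube_sum(cube, b) ^ p0 != cube_sum(cube, ab):
            return None                     # superpoly is not linear
    unit = lambda i: tuple(int(j == i) for j in range(nk))
    coeffs = [cube_sum(cube, unit(i)) ^ p0 for i in range(nk)]
    return coeffs, p0                       # superpoly = sum coeffs[i]*k[i] + p0

print(linear_superpoly([0, 1]))             # -> ([1, 0, 0], 0): superpoly is k0
```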

    Enhancing Electromagnetic Side-Channel Analysis in an Operational Environment

    Side-channel attacks exploit the unintentional emissions from cryptographic devices to determine the secret encryption key. This research identifies methods to make attacks demonstrated in an academic environment more operationally relevant. Algebraic cryptanalysis is used to reconcile redundant information extracted from side-channel attacks on the AES key schedule. A novel thresholding technique is used to select key-byte guesses for a satisfiability solver, resulting in a 97.5% success rate in cases where standard methods fail 100% of the time. Two techniques are developed to compensate for differences in emissions between training and test devices, dramatically improving the effectiveness of cross-device template attacks. Mean and variance normalization improves same-part-number attack success rates from 65.1% to 100% and increases the number of locations at which an attack can be performed by 226%. When normalization is combined with a novel technique to identify and filter signals in collected traces not related to the encryption operation, the number of traces required to perform a successful attack is reduced by 85.8% on average. Finally, software-defined radios are shown to be an effective low-cost method for collecting side-channel emissions in real time, eliminating the need to modify or profile the target encryption device to gain precise timing information.
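
    The mean and variance normalization named in the abstract can be sketched in a few lines (an assumed reconstruction, not the thesis code): standardizing each trace removes the DC offset and gain differences between training and test devices, which is what lets a template built on one device transfer to another.

```python
import numpy as np

def normalize(traces):
    """Zero-mean, unit-variance normalization of each trace (one per row)."""
    mu = traces.mean(axis=1, keepdims=True)
    sigma = traces.std(axis=1, keepdims=True)
    return (traces - mu) / sigma

rng = np.random.default_rng(0)
train = rng.normal(2.0, 0.5, size=(100, 1000))   # device A: offset 2.0, gain 0.5
test  = rng.normal(0.1, 1.5, size=(100, 1000))   # device B: offset 0.1, gain 1.5
# after normalization both devices' traces share the same scale
print(normalize(train).mean(), normalize(test).std())   # ~0 and ~1
```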

    Persistent Homology Tools for Image Analysis

    Topological Data Analysis (TDA) is a new field of mathematics that has emerged rapidly since the first decade of the century from various works in algebraic topology and geometry. The goal of TDA and its main tool, persistent homology (PH), is to provide topological insight into complex and high-dimensional datasets. We take this premise onboard to get more topological insight from digital image analysis and to quantify tiny low-level distortions that are undetectable except possibly by highly trained persons. Such image distortion could be caused intentionally (e.g. by morphing and steganography) or naturally in abnormal human tissue/organ scan images as a result of the onset of cancer or other diseases. The main objective of this thesis is to design new image analysis tools based on persistent homological invariants representing simplicial complexes on sets of pixel landmarks over a sequence of distance resolutions. We first propose innovative automatic techniques to select image pixel landmarks and build a variety of simplicial topologies from a single image. The effectiveness of each image landmark selection is demonstrated by testing on different image tampering problems such as morphed face detection, steganalysis and breast tumour detection. Vietoris-Rips simplicial complexes are constructed from the image landmarks at increasing distance thresholds, and topological (homological) features are computed at each threshold and summarized in a form known as persistent barcodes. We vectorise the space of persistent barcodes using a technique known as persistent binning, whose strength we demonstrate for various image analysis purposes. Different machine learning approaches are adopted to develop automatic detection of tiny texture distortion in many image analysis applications. The homological invariants used in this thesis are the 0- and 1-dimensional Betti numbers. We developed an innovative approach to designing persistent homology (PH) based algorithms for automatic detection of the above types of image distortion. In particular, we developed the first PH detector of morphing attacks on passport face biometric images. We demonstrate the significant accuracy of two such morph detection algorithms with four types of automatically extracted image landmarks: Local Binary Patterns (LBP), 8-neighbour super-pixels (8NSP), Radial-LBP (R-LBP) and centre-symmetric LBP (CS-LBP). Using any of these techniques yields several persistent barcodes that summarise persistent topological features and help gain insight into complex hidden structures not amenable to other image analysis methods. We also demonstrate the significant success of a similarly developed PH-based universal steganalysis tool capable of detecting secret messages hidden inside digital images. We further argue, through a pilot study, that building PH records from digital images can differentiate malignant from benign breast tumours in digital mammographic images. The research presented in this thesis creates new opportunities to build real applications based on TDA and highlights many research challenges in a variety of image processing/analysis tasks. For example, we describe a TDA-based exemplar image inpainting technique (TEBI), superior to existing exemplar algorithms, for the reconstruction of missing image regions.
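
    As a concrete illustration of the pipeline described above, the sketch below (illustrative only; a real pipeline would use a PH library such as ripser) computes the 0-dimensional Vietoris-Rips persistent barcode of a landmark set: every landmark is a component born at threshold 0, and a bar dies at the edge length where its component merges into an older one, which reduces to single-linkage clustering via union-find.

```python
import numpy as np

def h0_barcode(points):
    """0-dimensional Vietoris-Rips persistence of a point cloud."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    n = len(points)
    edges = sorted((d[i, j], i, j) for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))
    def find(x):                         # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    bars = []
    for w, i, j in edges:                # process edges by increasing length
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            bars.append((0.0, w))        # a component dies at threshold w
    bars.append((0.0, float('inf')))     # one component persists forever
    return bars

pts = np.array([[0, 0], [0, 1], [5, 5], [5, 6.2]])
# two short bars (within each cluster), one long bar, one infinite bar
print(h0_barcode(pts))
```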

    Intelligent Numerical Software for MIMD Computers

    For most scientific and engineering problems simulated on computers, solving problems of computational mathematics with approximately given initial data constitutes an intermediate or final stage. Basic problems of computational mathematics include investigating and solving linear algebraic systems, evaluating eigenvalues and eigenvectors of matrices, solving systems of non-linear equations, and numerically integrating initial-value problems for systems of ordinary differential equations.
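
    A small illustration of why approximately given initial data matters (my own example, not from the abstract): the condition number of the matrix bounds how strongly a relative perturbation of the data can be amplified in the solution of a linear algebraic system.

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 1.0001]])    # nearly singular: ill-conditioned
b = np.array([2.0, 2.0001])
x = np.linalg.solve(A, b)

db = np.array([0.0, 1e-4])                   # tiny perturbation of the data
x_pert = np.linalg.solve(A, b + db)

# relative error of the solution divided by relative error of the data
amplification = (np.linalg.norm(x_pert - x) / np.linalg.norm(x)) / \
                (np.linalg.norm(db) / np.linalg.norm(b))
print(np.linalg.cond(A), amplification)      # amplification <= cond(A)
```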

    New Baselines for Local Pseudorandom Number Generators by Field Extensions

    We revisit recent techniques and results on the cryptanalysis of local pseudorandom number generators (PRGs). By doing so, we achieve a new attack on PRGs whose time complexity only depends on the algebraic degree of the PRG. Concretely, for PRGs $F : \{0,1\}^n \rightarrow \{0,1\}^{n^{1+e}}$, we give an algebraic algorithm that distinguishes between random points and image points of $F$, whose time complexity is bounded by $\exp(O(\log(n)^{\deg F/(\deg F - 1)} \cdot n^{1-e/(\deg F - 1)}))$ and whose advantage is at least $1 - o(1)$ in the worst case. To the best of the author's knowledge, this attack outperforms current attacks on the pseudorandomness of local random functions with guaranteed noticeable advantage and gives a new baseline algorithm for local PRGs. Furthermore, this is the first subexponential attack that is applicable to polynomial PRGs of constant degree over fields of any size with a guaranteed noticeable advantage. We extend this distinguishing attack further to achieve a search algorithm that can invert a uniformly random constant-degree map $F : \{0,1\}^n \rightarrow \{0,1\}^{n^{1+e}}$ with high advantage in the average case. This algorithm has the same runtime complexity as the distinguishing algorithm.
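
    The core idea can be conveyed by a heavily simplified toy (my illustration, not the paper's algorithm, which handles any constant degree at subexponential parameters): when a degree-2 map has more output bits than the dimension of the space of degree-2 polynomials in the seed, GF(2)-linear relations among the output bits exist that hold on every image point but only with probability 1/2 each on a random point. We learn such relations from image samples, then test the challenge against them.

```python
import random
random.seed(1)

n, m, N = 4, 16, 128          # seed bits; outputs > 11 = dim of degree-2 space
monos = [()] + [(i,) for i in range(n)] + \
        [(i, j) for i in range(n) for j in range(i + 1, n)]
F_anf = [set(random.sample(monos, random.randint(1, 6))) for _ in range(m)]

def F(x):                     # evaluate the random degree-2 map bit by bit
    return [sum(all(x[v] for v in mo) for mo in bit) % 2 for bit in F_anf]

pack = lambda bits: sum(b << i for i, b in enumerate(bits))

# column j = evaluations of output bit j on N image samples, as an N-bit int
sams = [F([random.randint(0, 1) for _ in range(n)]) for _ in range(N)]
cols = [pack([s[j] for s in sams]) for j in range(m)]

def dependencies(cols):
    """XOR-basis elimination; masks of column sets that sum to zero."""
    basis, deps = {}, []
    for j, v in enumerate(cols):
        comb = 1 << j
        while v:
            p = (v & -v).bit_length() - 1
            if p not in basis:
                basis[p] = (v, comb)
                break
            bv, bc = basis[p]
            v, comb = v ^ bv, comb ^ bc
        else:
            deps.append(comb)  # these output bits XOR to 0 on all samples
    return deps

deps = dependencies(cols)      # at least m - 11 = 5 relations are guaranteed

def looks_like_image(y):
    ym = pack(y)
    return all(bin(c & ym).count('1') % 2 == 0 for c in deps)

print(looks_like_image(F([1, 0, 1, 1])))                         # True
randoms = [[random.randint(0, 1) for _ in range(m)] for _ in range(100)]
print(sum(map(looks_like_image, randoms)))                       # only a few pass
```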

    Adaptive learning and cryptography

    Significant links exist between cryptography and computational learning theory. Cryptographic functions are the usual method of demonstrating significant intractability results in computational learning theory, as they can demonstrate that certain problems are hard in a representation-independent sense. On the other hand, hard learning problems have been used to create efficient cryptographic protocols such as authentication schemes, pseudo-random permutations and functions, and even public-key encryption schemes.
    Learning theory and coding theory also impact cryptography in that they enable cryptographic primitives to deal with noise or bias in their inputs. Several different constructions of fuzzy primitives exist, a fuzzy primitive being a primitive which functions correctly even in the presence of noisy or non-uniform inputs. Some examples of these primitives include error-correcting blockciphers, fuzzy identity-based cryptosystems, fuzzy extractors and fuzzy sketches. Error-correcting blockciphers combine both encryption and error correction in a single function, which results in increased efficiency. Fuzzy identity-based encryption allows the decryption of any ciphertext that was encrypted under a close enough identity. Fuzzy extractors and sketches are methods of reliably (re)producing a uniformly random secret key given an imperfectly reproducible string from a biased source, through a public string that is called the sketch.
    While hard learning problems have many qualities which make them useful in constructing cryptographic protocols, such as their inherent error tolerance and simple algebraic structure, it is often difficult to utilize them to construct very secure protocols due to assumptions they make on the learning algorithm. Due to these assumptions, the resulting protocols often do not have security against various types of adaptive adversaries. To help deal with this issue, we further examine the inter-relationships between cryptography and learning theory by introducing the concept of adaptive learning. Adaptive learning is a rather weak form of learning in which the learner is not expected to closely approximate the concept function in its entirety; rather, it is only expected to answer a query of the learner's choice about the target. Adaptive learning allows for a much weaker learner than in the standard model, while maintaining the positive properties of many learning problems in the standard model, a fact which we feel makes problems that are hard to adaptively learn more useful than standard-model learning problems in the design of cryptographic protocols. We argue that learning parity with noise is hard to do adaptively and use that assumption to construct a related-key secure, efficient MAC as well as an efficient authentication scheme. In addition, we examine the security properties of fuzzy sketches and extractors and demonstrate how these properties can be combined by using our related-key secure MAC. We go on to demonstrate that our extractor can allow a form of related-key hardening for protocols in that, by affecting how the key for a primitive is stored, it renders that protocol immune to related-key attacks.
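
    For reference, the learning parity with noise (LPN) problem underlying the MAC and authentication scheme can be stated in a few lines of code (the parameters below are my own illustrative choices): the adversary sees pairs (a, <a,s> + e) with Bernoulli noise e, and it is the noise that defeats plain Gaussian elimination.

```python
import random
random.seed(0)

n, tau = 32, 0.125                       # secret length and noise rate
s = [random.randint(0, 1) for _ in range(n)]

def lpn_sample():
    """One sample (a, <a,s> + e) over GF(2) with e ~ Bernoulli(tau)."""
    a = [random.randint(0, 1) for _ in range(n)]
    e = int(random.random() < tau)
    b = (sum(ai & si for ai, si in zip(a, s)) % 2) ^ e
    return a, b

# without noise, n independent samples plus Gaussian elimination would
# recover s immediately; with noise the problem is believed to be hard
samples = [lpn_sample() for _ in range(1000)]
wrong = sum(b != sum(ai & si for ai, si in zip(a, s)) % 2 for a, b in samples)
print(wrong / len(samples))              # close to tau = 0.125
```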

    LPN-based Attacks in the White-box Setting

    In white-box cryptography, early protection techniques have fallen to the automated Differential Computation Analysis attack (DCA), leading to new countermeasures and attacks. A standard side-channel countermeasure, Ishai-Sahai-Wagner's masking scheme (ISW, CRYPTO 2003), prevents Differential Computation Analysis but was shown to be vulnerable in the white-box context to the Linear Decoding Analysis attack (LDA). However, recent quadratic and cubic masking schemes by Biryukov-Udovenko (ASIACRYPT 2018) and Seker-Eisenbarth-Liskiewicz (CHES 2021) prevent LDA and force the use of its higher-degree generalizations, which have much higher complexity. In this work, we study the relationship between the security of these and related schemes and the Learning Parity with Noise (LPN) problem, and propose a new automated attack that applies an LPN-solving algorithm to white-box implementations. The attack effectively exploits strong linear approximations of the masking scheme and thus can be seen as a combination of the DCA and LDA techniques. Unlike previous attacks, the complexity of this algorithm depends on the approximation error, thereby allowing new practical attacks on masking schemes that previously resisted automated analysis. We demonstrate this theoretically and experimentally, exposing multiple cases where the LPN-based method significantly outperforms the LDA and DCA methods, including their higher-order variants. This work applies the LPN problem beyond its usual post-quantum cryptography boundary, strengthening its interest in the cryptographic community and expanding the range of automated attacks by presenting a new direction for breaking masking schemes in the white-box model.
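
    The LDA baseline that the new LPN attack generalizes can be sketched as exact linear algebra over GF(2) (a toy reconstruction under assumed names, not the paper's code): collect intermediate bits over many executions as columns, and check whether some XOR of columns equals the predicted key-dependent bit on every execution.

```python
def solve_gf2(cols, target):
    """Find a mask c so that the XOR of the selected columns equals target;
    columns and target are bitvectors packed into integers."""
    basis = {}                               # pivot bit -> (vector, column mask)
    for j, v in enumerate(cols):
        comb = 1 << j
        while v:
            p = (v & -v).bit_length() - 1
            if p not in basis:
                basis[p] = (v, comb)
                break
            bv, bc = basis[p]
            v, comb = v ^ bv, comb ^ bc
    c = 0
    while target:                            # reduce target by the basis
        p = (target & -target).bit_length() - 1
        if p not in basis:
            return None                      # no exact linear decoding exists
        bv, bc = basis[p]
        target, c = target ^ bv, c ^ bc
    return c

import random
random.seed(2)
T = 64                                       # number of recorded executions
x  = [random.randint(0, 1) for _ in range(T)]        # predicted bit per trace
s0 = [random.randint(0, 1) for _ in range(T)]        # first share
s1 = [a ^ b for a, b in zip(x, s0)]                   # second share: x = s0 ^ s1
pack = lambda bits: sum(b << i for i, b in enumerate(bits))
cols = [pack(s0), pack(s1), pack([random.randint(0, 1) for _ in range(T)])]
print(solve_gf2(cols, pack(x)))              # prints 3 (= 0b011): x = s0 ^ s1
```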

    Applied Metaheuristic Computing

    For decades, Applied Metaheuristic Computing (AMC) has been a prevailing optimization technique for tackling perplexing engineering and business problems such as scheduling, routing, ordering, bin packing, assignment and facility layout planning. This is partly because classic exact methods are constrained by prior assumptions, and partly because heuristics are problem-dependent and lack generalization. AMC, by contrast, guides the course of low-level heuristics to search beyond the local optimality that impairs traditional computational methods. This topic series has collected quality papers proposing cutting-edge methodologies and innovative applications that drive the advances of AMC.
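
    As a generic illustration of how a metaheuristic searches beyond local optimality (my sketch, not tied to any paper in the series), simulated annealing accepts a worsening move with probability exp(-delta/T), letting the search leave the basin of a local minimum while the temperature T is still high.

```python
import math, random
random.seed(3)

def cost(x):
    """Rugged 1-D landscape with many local minima."""
    return x * x / 100.0 + 3.0 * math.sin(x)

x, T, best = 40.0, 10.0, 40.0
while T > 1e-3:
    cand = x + random.uniform(-1.0, 1.0)      # local move
    delta = cost(cand) - cost(x)
    # always accept improvements; accept worsening moves with prob e^(-delta/T)
    if delta < 0 or random.random() < math.exp(-delta / T):
        x = cand
        if cost(x) < cost(best):
            best = x
    T *= 0.999                                # geometric cooling schedule
print(best, cost(best))
```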