86 research outputs found

    Applications of cryptanalysis methods to some symmetric key primitives

    Block ciphers and hash functions are important cryptographic primitives that are used to secure the exchange of critical information. With the continuous increase in computational power available to attackers, information security systems, including their underlying primitives, need continuous improvement. Various cryptanalysis methods are used to examine the strengths and weaknesses of hash functions and block ciphers. In this work, we study the Lesamnta-512 and DHA-256 hash functions and the LAC authenticated encryption scheme. In particular, we study the resistance of the underlying block cipher of the Lesamnta-512 hash function against impossible differential attacks and the resistance of the DHA-256 compression function against collision attacks. We also study MAC forgery attacks against LAC. Throughout, we use different automated methods to facilitate our analysis. For the cryptanalysis of Lesamnta-512, two automated methods are studied for finding an impossible differential path of maximum length. Using the obtained impossible differential path, impossible differential cryptanalysis of Lesamnta-512 is performed for 16 rounds. For the DHA-256 hash function, we use an algebraic method to find collisions for its 17-step reduced compression function by deriving difference equations for each step and then solving them when the conditions for collisions are imposed on these equations. For LAC, the differential behavior of the different operations of the cipher is represented as a set of linear equations. Then, a Mixed Integer Linear Programming (MILP) approach is used to find a high-probability characteristic. This characteristic is then used to perform a forgery attack on the LAC authenticated encryption cipher.
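    The per-operation differential behaviour that such MILP models encode can be illustrated with a toy sketch (not the tooling used in the work): an exhaustive estimate of the XOR differential probability of 8-bit modular addition.

```python
# Toy sketch: exhaustively compute the XOR differential probability of
# 8-bit modular addition, the per-operation behaviour that a MILP model
# of an ARX-style primitive encodes as linear constraints.
def diff_prob_add(alpha, beta, gamma, bits=8):
    """P[((x + y) mod 2^bits) ^ (((x^alpha) + (y^beta)) mod 2^bits) == gamma]."""
    mask = (1 << bits) - 1
    hits = 0
    for x in range(1 << bits):
        for y in range(1 << bits):
            out_diff = ((x + y) & mask) ^ (((x ^ alpha) + (y ^ beta)) & mask)
            if out_diff == gamma:
                hits += 1
    return hits / (1 << (2 * bits))

# A zero input difference always gives a zero output difference, and a
# difference confined to the top bit passes through addition carry-free.
print(diff_prob_add(0, 0, 0))        # 1.0
print(diff_prob_add(0x80, 0, 0x80))  # 1.0
```

    A characteristic-search model chains such per-operation probabilities across rounds and asks the solver to maximize the product.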

    Preimages for Step-Reduced SHA-2

    In this paper, we present a preimage attack for 42 step-reduced SHA-256 with time complexity 2^{251.7} and memory requirements of order 2^{12}. The same attack also applies to 42 step-reduced SHA-512 with time complexity 2^{502.3} and memory requirements of order 2^{22}. Our attack is a meet-in-the-middle preimage attack.
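    The meet-in-the-middle idea behind such preimage attacks can be sketched on a toy 16-bit construction. The functions `f` and `g` below are invented for illustration and have nothing to do with SHA-2; the point is that tabulating forward midpoints and walking backwards from the target trades ~2^32 brute force for ~2^16 work plus memory.

```python
MASK = 0xFFFF

def rotl(x, r):
    return ((x << r) | (x >> (16 - r))) & MASK

def f(state, k1):           # forward half of the toy construction
    return ((state ^ k1) + rotl(k1, 3)) & MASK

def g(state, k2):           # backward half, invertible in its state input
    return rotl(state, 5) ^ k2

def g_inv(target, k2):
    return rotl(target ^ k2, 11)   # rotl by 11 undoes rotl by 5 on 16 bits

IV, target = 0x1234, 0xBEEF

# Phase 1: tabulate all forward midpoints (2^16 work and memory).
fwd = {f(IV, k1): k1 for k1 in range(1 << 16)}

# Phase 2: walk backwards from the target until a midpoint matches.
match = None
for k2 in range(1 << 16):
    mid = g_inv(target, k2)
    if mid in fwd:
        match = (fwd[mid], k2)
        break
print(match)
```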

    Improving Data Integrity in Communication Systems by Designing a New Security Hash Algorithm

    The objective of this paper is to design a new secure hash algorithm with a final hash code length of 512 bits. The proposed hash algorithm is based on a combination of the SHA-256 algorithm, with a modified message expansion, and the MD5 algorithm, using a double Davies-Meyer scheme to reduce the weaknesses existing in these functions. The proposed algorithm has been simulated using MATLAB, and hash codes for different messages were obtained using MD5, SHA-256, a combination of MD5 and SHA-256 with a final hash code length of 256 bits, and the proposed algorithm. The hash code of the proposed algorithm differs from the hash codes obtained by MD5, SHA-256, and the combination of MD5 and SHA-256 with a final hash code length of 256 bits for the same messages. An avalanche test, with a one-bit difference and with a more-than-one-bit difference, is applied to SHA-256, the combination of MD5 and SHA-256 with a final hash code length of 256 bits, and the proposed algorithm. The proposed algorithm passed the avalanche test with higher probability than SHA-256 and the combination of MD5 and SHA-256 with a final hash code length of 256 bits, and is more complicated and more secure.
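    A one-bit avalanche test of the kind described can be sketched with a standard-library hash. SHA-256 is used as a stand-in here, since the paper's proposed algorithm is not publicly available.

```python
import hashlib

def bitdiff(a: bytes, b: bytes) -> int:
    """Hamming distance between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def avalanche(msg: bytes, bit: int) -> float:
    """Fraction of SHA-256 output bits that flip when one input bit flips."""
    flipped = bytearray(msg)
    flipped[bit // 8] ^= 1 << (bit % 8)
    h1 = hashlib.sha256(msg).digest()
    h2 = hashlib.sha256(bytes(flipped)).digest()
    return bitdiff(h1, h2) / 256

# A well-behaved hash should flip roughly half of its 256 output bits.
print(avalanche(b"a test message", 0))
```

    Averaging this ratio over many messages and bit positions gives the pass probability the paper compares across algorithms.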

    Approaches to Overpower Proof-of-Work Blockchains Despite Minority

    Blockchain (BC) technology was established in 2009 by Nakamoto, using Proof-of-Work (PoW) to reach consensus in public permissionless networks (Praveen et al., 2020). Since then, several consensus algorithms have been proposed to provide equal (or higher) levels of security, democracy, and scalability, yet with lower levels of energy consumption. However, Nakamoto's model (a.k.a. Bitcoin) still dominates as the most trusted model in the described settings, since alternative solutions might provide lower energy consumption and higher scalability, but they would always require pushing the system towards unrecommended centralization or lower levels of security. That is, Nakamoto's model claims to tolerate up to < 50% of the network being controlled by a dishonest party (a minority), which cannot be realized in alternative solutions without sacrificing the full-decentralization property. In this paper, we investigate this tolerance claim and review several approaches that can be used to undermine/overpower PoW-based BCs, even with a minority. We discuss those BCs taking Bitcoin as a representative application where needed; however, the presented approaches can be applied to any PoW-based BC. Specifically, we discuss technically how a dishonest miner in the minority can take over the network using improved brute-forcing, AI-assisted mining, quantum computing, sharding, partial pre-imaging, and selfish mining, among other approaches. Our review serves as a needed collective technical reference (covering more than 100 references) for practitioners and researchers who either seek a reliable security implementation of PoW-based BC applications, or seek a comparison of PoW-based BCs against other BCs in terms of adversary tolerance.
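    The PoW search that these approaches try to shortcut can be sketched as a simple hash puzzle. The difficulty and header below are toy values, not Bitcoin's actual block format.

```python
import hashlib

def mine(header: bytes, difficulty_bits: int, max_tries: int = 2_000_000):
    """Search nonces until sha256(header || nonce) has the required leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    for nonce in range(max_tries):
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest
    return None

# With 16 difficulty bits, a solution takes ~2^16 hashes on average.
# PoW security rests on this search having no shortcut: brute-forcing
# improvements, AI-assisted mining, or partial pre-imaging all aim to
# beat the expected cost of this loop.
result = mine(b"toy block header", 16)
print(result)
```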

    Secure Session Framework: An Identity-based Cryptographic Key Agreement and Signature Protocol

    This dissertation deals with the method of identity-based encryption, in which the name or identity of a target object is used to encrypt the data. This property makes the method a fitting tool for modern electronic communication, since the identities and endpoint addresses used there must be globally unique. The identity-based key agreement protocol developed in this work offers advantages over existing schemes and opens up new possibilities. One of its main features is the complete independence of the key generators. This independence allows different security domains to set up their own systems; they are no longer forced to coordinate with one another or to exchange secrets. Owing to the properties of the protocol, the systems nevertheless remain mutually compatible, meaning that users of one security domain can communicate with users of another security domain in encrypted form without additional effort. This independence was also carried over to a signature protocol, which allows users of different security domains to sign an object, where the signing process itself can also be independent. Besides the protocol, this work also analyses existing systems. Attacks on established protocols and conjectures were found, showing whether, or in which situations, they should not be used. On the one hand, a completely new approach was found that is based on the (un)definedness of certain objects in discrete spaces; on the other hand, the well-known analysis method of lattice reduction was used and successfully transferred to new areas. Finally, the work presents application scenarios for the protocol in which its advantages are particularly relevant.
The first scenario concerns telephony, where the telephone number of a target person is used as the key. Both GSM telephony and VoIP telephony are examined; implementations were carried out on a current mobile phone, and existing VoIP software was extended. The second application example is IP networks. Using the IP address of a machine as the key is also a good example, but more difficulties arise here than in telephony, for instance dynamic IP addresses or Network Address Translation, where the IP address is replaced. These and further problems were identified, and solutions were developed for each of them.
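    The role of an independent key generator per security domain can be caricatured with a symmetric toy. Real identity-based schemes use pairing-based cryptography; the HMAC construction below is only a hypothetical stand-in to show the identity string itself acting as the public key.

```python
import hashlib
import hmac

# Caricature of a private key generator (PKG): each security domain holds
# its own master secret and derives a user's secret key from that user's
# public identity (phone number, IP address). No cross-domain secret
# exchange is needed for the domain to operate its own PKG.
MASTER_SECRET = b"per-domain PKG master secret (hypothetical value)"

def extract_user_key(identity: str) -> bytes:
    """Key issuance: only the PKG of this domain can compute this value."""
    return hmac.new(MASTER_SECRET, identity.encode(), hashlib.sha256).digest()

# A phone number is globally unique, so it can directly name the key.
alice_key = extract_user_key("+49-30-1234567")
print(alice_key.hex())
```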

    Cryptanalysis of Some AES-based Cryptographic Primitives

    Current information security systems rely heavily on symmetric key cryptographic primitives as one of their basic building blocks. In order to boost the efficiency of security systems, designers of the underlying primitives often tend to avoid the use of provably secure designs. In fact, they adopt ad hoc designs with claimed security assumptions in the hope that they resist known cryptanalytic attacks. Accordingly, the security evaluation of such primitives continually remains an open field. In this thesis, we analyze the security of two cryptographic hash functions and one block cipher. We primarily focus on the recent AES-based designs used in the new Russian Federation cryptographic hashing and encryption suite GOST, because the majority of our work was carried out during the open research competition run by the Russian standardization body TC26 for the analysis of their new cryptographic hash function Streebog. Although there exist security proofs for the resistance of AES-based primitives against standard differential and linear attacks, other cryptanalytic techniques such as integral, rebound, and meet-in-the-middle attacks have proven to be effective. The results presented in this thesis can be summarized as follows. Initially, we analyze various security aspects of the Russian cryptographic hash function GOST R 34.11-2012, also known as Streebog or Stribog. In particular, our work investigates five security aspects of Streebog. Firstly, we present a collision analysis of the compression function and its internal cipher in the form of a series of modified rebound attacks. Secondly, we propose an integral distinguisher for the 7- and 8-round compression function. Thirdly, we investigate the one-wayness of Streebog with respect to two approaches of the meet-in-the-middle attack, where we present a preimage analysis of the compression function and combine the results with a multicollision attack to generate a preimage of the hash function output.
Fourthly, we investigate Streebog in the context of malicious hashing and, by utilizing a carefully tailored differential path, present a backdoored version of the hash function where collisions can be generated with practical complexity. Lastly, we propose a fault analysis attack which retrieves the inputs of the compression function and utilizes them to recover the secret key when Streebog is used in the keyed simple-prefix and secret-IV MACs, HMAC, or NMAC. All the presented results are on reduced-round variants of the function, except for our analysis of the malicious version of Streebog and our fault analysis attack, both of which cover the full-round hash function. Next, we examine the preimage resistance of the AES-based Maelstrom-0 hash function, which is designed to be a lightweight alternative to the ISO-standardized hash function Whirlpool. One of the distinguishing features of the Maelstrom-0 design is the proposal of a new chaining construction called 3CM, which is based on the 3C/3C+ family. In our analysis, we employ a 4-stage approach that uses a modified technique to defeat the 3CM chaining construction and generates preimages of the 6-round reduced Maelstrom-0 hash function. Finally, we provide a key recovery attack on the new Russian encryption standard GOST R 34.12-2015, also known as Kuznyechik. Although Kuznyechik adopts an AES-based design, it exhibits a faster diffusion rate as it employs an optimal diffusion transformation. In our analysis, we propose a meet-in-the-middle attack using the idea of efficient differential enumeration, where we construct a three-round distinguisher and consequently are able to recover 16 bytes of the master key of the reduced 5-round cipher. We also present partial sequence matching, by which we generate, store, and match parts of the compared parameters while maintaining a negligible probability of matching error; thus the overall online time complexity of the attack is reduced.
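    The multicollision step mentioned above follows Joux's construction: chaining t single-block collisions yields 2^t colliding messages for the cost of t collision searches. A toy sketch with a deliberately weakened 16-bit compression function (truncated SHA-256, not Streebog):

```python
import hashlib
from itertools import product

def compress(state: bytes, block: bytes) -> bytes:
    # Deliberately weak 16-bit toy compression function (truncated
    # SHA-256) so that single-block collisions cost only ~2^8 trials.
    return hashlib.sha256(state + block).digest()[:2]

def find_block_collision(state: bytes):
    """Return two distinct blocks mapping `state` to the same next state."""
    seen = {}
    i = 0
    while True:
        block = i.to_bytes(4, "big")
        nxt = compress(state, block)
        if nxt in seen:
            return seen[nxt], block, nxt
        seen[nxt] = block
        i += 1

# Joux's construction: 3 successive collisions give 2^3 colliding messages.
state = b"\x00\x00"
pairs = []
for _ in range(3):
    b0, b1, state = find_block_collision(state)
    pairs.append((b0, b1))

final_states = set()
for choice in product(*pairs):   # all 8 block choices
    s = b"\x00\x00"
    for block in choice:
        s = compress(s, block)
    final_states.add(s)
print("8 messages,", len(final_states), "distinct digest(s)")
```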

    Approximate Modeling of Signed Difference and Digraph based Bit Condition Deduction: New Boomerang Attacks on BLAKE

    The signed difference is a powerful tool for analyzing Addition, XOR, Rotation (ARX) cryptographic primitives. Currently, solving an exact model of signed difference propagation is infeasible. We propose an approximate MILP modeling method capturing the propagation rules of signed differences. Unlike the exact signed difference model, the approximate model focuses only on active bits and ignores the possible bit conditions on inactive bits. To overcome the lower accuracy arising from ignoring bit conditions on inactive bits, we propose an additional tool for deducing all bit conditions automatically. This tool is based on a directed graph capturing the whole computation process of ARX primitives by drawing links among intermediate words and operations. The digraph is also applicable in the MILP model construction process: it enables us to identify the parameters upper-bounding the number of bit conditions so as to define the objective function, and it is further used to connect the boomerang top and bottom signed differential paths by introducing proper constraints to avoid incompatible intersections. Benefiting from the approximate model and the directed-graph-based tool, the solving time of the new MILP model is significantly reduced, enabling us to deduce signed differential paths efficiently and accurately. To show the utility of our method, we propose boomerang attacks on the keyed permutations of three ARX hash functions of the BLAKE family. For the first time, we mount an attack on the full 7 rounds of BLAKE3, with complexity as low as 2^{180}. Our best attack on BLAKE2s improves the previously best result by 0.5 rounds, with lower complexity. The attacks on BLAKE-256 cover the same 8 rounds as the previous best result, but with complexity 2^{16} times lower. All our results are verified practically with round-reduced boomerang quartets.
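    The quartet-verification idea can be demonstrated on a toy ARX permutation (invented for illustration, not BLAKE). With differences confined to the top bit, modular addition produces no carry effect, so both the top and bottom short differentials hold with probability 1 and every boomerang returns with the input difference.

```python
import random

MASK = 0xFFFF

def rotl(x, r):
    return ((x << r) | (x >> (16 - r))) & MASK

def rnd(a, b):           # one toy ARX round: add, rotate, xor
    a = (a + b) & MASK
    b = rotl(b, 7) ^ a
    return a, b

def rnd_inv(a, b):
    b = rotl(b ^ a, 9)   # rotl by 9 undoes rotl by 7 on 16 bits
    a = (a - b) & MASK
    return a, b

def enc(a, b):           # E = E1 . E0, one round each
    return rnd(*rnd(a, b))

def dec(a, b):
    return rnd_inv(*rnd_inv(a, b))

# Top-bit differences pass through 16-bit addition without carries, so
# both differentials are deterministic and the quartet count is maximal.
alpha = delta = (0x8000, 0x8000)
rng = random.Random(7)
hits, trials = 0, 1000
for _ in range(trials):
    p1 = (rng.randrange(1 << 16), rng.randrange(1 << 16))
    p2 = (p1[0] ^ alpha[0], p1[1] ^ alpha[1])
    c1, c2 = enc(*p1), enc(*p2)
    p3 = dec(c1[0] ^ delta[0], c1[1] ^ delta[1])
    p4 = dec(c2[0] ^ delta[0], c2[1] ^ delta[1])
    if (p3[0] ^ p4[0], p3[1] ^ p4[1]) == alpha:
        hits += 1
print(hits, "/", trials, "quartets returned")
```

    On a real ARX primitive the two paths hold only with probabilities p and q, and the quartet returns with probability about p^2 q^2, which is what a practical verification estimates.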

    Learning reliable representations when proxy objectives fail

    Representation learning involves using an objective to learn a mapping from data space to a representation space. When the downstream task for which a mapping must be learned is unknown, or is too costly to cast as an objective, we must rely on proxy objectives for learning. In this Thesis I focus on representation learning for images, and address three cases where proxy objectives fail to produce a mapping that performs well on the downstream tasks. When learning neural network mappings from image space to a discrete hash space for fast content-based image retrieval, a proxy objective is needed which captures the requirement for relevant responses to be nearer to the hash of any query than irrelevant ones. At the same time, it is important to ensure an even distribution of image hashes across the whole hash space for efficient information use and high discrimination. Proxy objectives fail when they do not meet these requirements. I propose composing hash codes in two parts. First a standard classifier is used to predict class labels that are converted to a binary representation for state-of-the-art performance on the image retrieval task. Second, a binary deep decision tree layer (DDTL) is used to model further intra-class differences and produce approximately evenly distributed hash codes. The DDTL requires no discretisation during learning and produces hash codes that enable better discrimination between data in the same class when compared to previous methods, while remaining robust to real-world augmentations in the data space. In the scenario where we require a neural network to partition the data into clusters that correspond well with ground-truth labels, a proxy objective is needed to define how these clusters are formed. One such proxy objective involves maximising the mutual information between cluster assignments made by a neural network from multiple views. 
In this context, views are different augmentations of the same image and the cluster assignments are the representations computed by a neural network. I demonstrate that this proxy objective produces parameters for the neural network that are sub-optimal, in that a better set of parameters can be found using the same objective and a different training method. I introduce deep hierarchical object grouping (DHOG) as a method to learn a hierarchy (in the sense of easy-to-hard orderings, not structure) of solutions to the proxy objective, and show how this improves performance on the downstream task. When there are features in the training data from which it is easier to compute class predictions (e.g., background colour), compared to features for which it is relatively more difficult to compute class predictions (e.g., digit type), standard classification objectives (e.g., cross-entropy) fail to produce robust classifiers. The problem is that if a model learns to rely on 'easy' features it will also ignore 'complex' features (easy versus complex are purely relative in this case). I introduce latent adversarial debiasing (LAD) to decouple easy features from the class labels by first modelling the underlying structure of the training data as a latent representation using a vector-quantised variational autoencoder, and then using a gradient-based procedure to adjust the features in this representation to confuse the predictions of a constrained classifier trained to predict class labels from the same representation. The adjusted representations of the data are then decoded to produce an augmented training dataset that can be used for training in a standard manner. I show in the aforementioned scenarios that proxy objectives can fail, and demonstrate that alternative approaches can mitigate the associated failures. 
I suggest an analytic approach to understanding the limits of proxy objectives for each use case, in order to adjust the data or the objectives and ensure good performance on downstream tasks.
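The two-part hash code idea (class-label bits plus intra-class bits) can be sketched as a Hamming-distance retrieval toy. The labels and the "DDTL" bits below are made-up stand-ins: in the thesis the first part comes from a trained classifier and the second from the decision tree layer.

```python
# Toy sketch of a two-part binary hash code for content-based retrieval:
# the first bits encode a predicted class label, the remaining bits
# (here hypothetical stand-ins for DDTL outputs) separate items within
# a class. Retrieval is a Hamming-distance ranking over the codes.
def class_code(label: int, bits: int = 4):
    """Binary encoding of a class label, least significant bit first."""
    return [(label >> i) & 1 for i in range(bits)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# database entries: (name, class label, intra-class bits)
db = [("cat_1", 3, [0, 0]), ("cat_2", 3, [1, 0]), ("dog_1", 5, [0, 1])]
codes = {name: class_code(label) + extra for name, label, extra in db}

query = class_code(3) + [0, 0]          # a query that resembles cat_1
ranked = sorted(codes, key=lambda name: hamming(codes[name], query))
print(ranked)  # ['cat_1', 'cat_2', 'dog_1']
```

Same-class items sit close in Hamming space because they share the label bits, while the extra bits keep them distinguishable, which is the intra-class discrimination the DDTL is designed to provide.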

    Context-Aware Privacy Protection Framework for Wireless Sensor Networks
