
    On hom-algebras with surjective twisting

    A hom-associative structure is a set A together with a binary operation ⋆ and a self-map α such that an α-twisted version of associativity is fulfilled. In this paper, we assume that α is surjective. We show that in this case, under surprisingly weak additional conditions on the multiplication, the binary operation is a twisted version of an associative operation. As an application, an earlier result by Yael Fregier and the author on weakly unital hom-algebras is recovered with a different proof. In the second section, consequences for the deformation theory of hom-algebras with surjective twisting map are discussed. Comment: 13 pages. Final version submitted for publication.
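
    For orientation, the identity referred to above can be stated compactly. The first display below is the standard hom-associativity law from the literature (notation as in the abstract); the second records the shape of conclusion the abstract alludes to, namely that ⋆ is of Yau-twist type. The precise hypotheses on α and on the multiplication are those given in the paper itself and are not reproduced here.

```latex
% Hom-associativity: the alpha-twisted associativity law satisfied by (A, \star, \alpha).
\[
  \alpha(x) \star (y \star z) \;=\; (x \star y) \star \alpha(z)
  \qquad \text{for all } x, y, z \in A.
\]
% "Twisted version of an associative operation" refers to multiplications of the form
\[
  x \star y \;=\; \alpha(x \cdot y)
  \qquad \text{for some associative product } \cdot \text{ on } A,
\]
% i.e. a presentation of Yau-twist type, up to the paper's precise hypotheses.
```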

    Improving Attacks on Round-Reduced Speck32/64 using Deep Learning

    This paper has four main contributions. First, we calculate the predicted difference distribution of Speck32/64 with one specific input difference under the Markov assumption completely for up to eight rounds and verify that this yields a globally fairly good model of the difference distribution of Speck32/64. Secondly, we show that, contrary to conventional wisdom, machine learning can produce very powerful cryptographic distinguishers: for instance, in a simple low-data chosen-plaintext attack on nine rounds of Speck, we present distinguishers based on deep residual neural networks that achieve a mean key rank roughly five times lower than that of an analogous classical distinguisher using the full difference distribution table. Thirdly, we develop a highly selective key search policy based on a variant of Bayesian optimization which, together with our neural distinguishers, can be used to reduce the remaining security of 11-round Speck32/64 to roughly 38 bits. This is a significant improvement over the previous literature. Lastly, we show that our neural distinguishers successfully use features of the ciphertext pair distribution that are invisible to all purely differential distinguishers even given unlimited data. While our attack is based on a known input difference taken from the literature, we also show that neural networks can be used to rapidly (within a matter of minutes on our machine) find good input differences without using prior human cryptanalysis.
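
    As a concrete illustration of the kind of training data such neural distinguishers operate on, the sketch below implements Speck32/64 and generates labelled ciphertext pairs: label 1 for pairs whose plaintexts differ by a fixed input difference, label 0 for pairs whose second plaintext is independently random. The input difference 0x0040/0x0000 is the one commonly used in this line of work; the function and variable names are ours, and the residual network itself (described in the paper) is not reproduced here.

```python
import numpy as np

# --- Speck32/64: 16-bit words, 4 key words; reduced-round use below ---
WORD = 16
MASK = (1 << WORD) - 1

def rol(x, r): return ((x << r) | (x >> (WORD - r))) & MASK
def ror(x, r): return ((x >> r) | (x << (WORD - r))) & MASK

def enc_one_round(x, y, k):
    """One Speck32/64 round on the left/right words (x, y) with round key k."""
    x = (ror(x, 7) + y) & MASK
    x ^= k
    y = rol(y, 2) ^ x
    return x, y

def expand_key(key, rounds):
    """key: array of shape (4, n) holding the key words (l2, l1, l0, k0) of n keys."""
    ks = [key[3]]
    l = [key[2], key[1], key[0]]
    for i in range(rounds - 1):
        new_l, new_k = enc_one_round(l[i], ks[i], i)  # key schedule reuses the round function
        l.append(new_l)
        ks.append(new_k)
    return ks

def encrypt(p, ks):
    x, y = p
    for k in ks:
        x, y = enc_one_round(x, y, k)
    return x, y

def make_data(n, rounds, diff=(0x0040, 0x0000), seed=0):
    """Labelled data for a differential-neural distinguisher (sketch).
    Label 1: ciphertext pairs whose plaintexts differ by `diff`.
    Label 0: ciphertext pairs whose second plaintext is independently random."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, 2, n)
    keys = rng.integers(0, 1 << WORD, (4, n), dtype=np.uint32)
    p0x = rng.integers(0, 1 << WORD, n, dtype=np.uint32)
    p0y = rng.integers(0, 1 << WORD, n, dtype=np.uint32)
    p1x, p1y = p0x ^ diff[0], p0y ^ diff[1]
    # break the fixed difference for label-0 samples
    p1x = np.where(labels == 1, p1x, rng.integers(0, 1 << WORD, n, dtype=np.uint32))
    p1y = np.where(labels == 1, p1y, rng.integers(0, 1 << WORD, n, dtype=np.uint32))
    ks = expand_key(keys, rounds)
    c0 = encrypt((p0x, p0y), ks)
    c1 = encrypt((p1x, p1y), ks)
    features = np.stack([c0[0], c0[1], c1[0], c1[1]], axis=1)  # raw ciphertext pair words
    return features, labels

if __name__ == "__main__":
    X, Y = make_data(10**5, rounds=5)
    print(X.shape, Y.mean())  # (100000, 4), label balance close to 0.5
```

    In the setting described above, these four 16-bit words would typically be unpacked into 64 bit-level features and fed to a deep residual network; here they are left as raw words.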

    Brute Force Cryptanalysis

    The topic of this contribution is the cryptanalytic use of spurious keys, i.e., non-target keys returned by exhaustive key search. We show that the counting of spurious keys allows the construction of distinguishing attacks against block ciphers that are generically expected to start working at (marginally) lower computational cost than is required to find the target key by exhaustive search. We further show that if a brute force distinguisher does return a strong distinguishing signal, fairly generic optimizations to random key sampling will in many circumstances render the cost of detecting the signal massively lower than the cost of exhaustive search. We then use our techniques to quantitatively characterize various non-Markov properties of round-reduced Speck32/64. We fully compute, for the first time, the ciphertext pair distribution of 3-round Speck32/64 with one input difference Δ without any approximations and show that it differs markedly from Markov model predictions; we design a perfect distinguisher for the output distribution induced by the same input difference for 5-round Speck32/64 that is efficient enough to process millions of samples on an ordinary PC in a few days; we design a generic two-block known-plaintext distinguisher against Speck32/64 and show that it achieves 58 percent accuracy against 5-round Speck, equivalent, e.g., to a linear distinguisher with ≈ 50 percent bias. Turning our attention back to differential cryptanalysis, we show that our known-plaintext distinguisher automatically handles the 5-round output distribution induced by input difference Δ as well as the perfect differential distinguisher, but that no significant additional signal is obtained from knowing the plaintexts. We then apply the known-plaintext brute force distinguisher to 7-round Speck32/64 with fixed input difference Δ, finding that it achieves essentially the same distinguishing advantage as state-of-the-art techniques (neural networks with key averaging). We also show that our techniques can precisely characterize non-Markov properties in longer differential trails for Speck32/64.
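
    To make the spurious-key counting idea concrete, here is a toy experiment of our own. It uses a deliberately weak 16-bit toy cipher (not Speck32/64) so that the full key space can be enumerated instantly, and it illustrates only the counting statistic, not the paper's actual distinguishers: for data produced by the cipher, the number of keys consistent with a known plaintext/ciphertext pair is on average about one higher than for random data, because the target key always contributes in addition to the spurious keys.

```python
import numpy as np

MASK = 0xFFFF

def toy_encrypt(p, k):
    """A deliberately weak 16-bit toy cipher (ours, for illustration only)."""
    x = (p + k) & MASK
    x = (((x << 7) | (x >> 9)) & MASK) ^ k
    return x

rng = np.random.default_rng(1)
all_keys = np.arange(1 << 16, dtype=np.uint32)

def consistent_key_count(p, c):
    """Exhaustive search over the full 16-bit key space: how many keys map p to c?"""
    return int(np.count_nonzero(toy_encrypt(p, all_keys) == c))

def experiment(n_trials=1000):
    real, rand = [], []
    for _ in range(n_trials):
        p = int(rng.integers(0, 1 << 16))
        k = int(rng.integers(0, 1 << 16))
        real.append(consistent_key_count(p, toy_encrypt(p, k)))        # data from the cipher
        rand.append(consistent_key_count(p, int(rng.integers(0, 1 << 16))))  # random data
    return np.mean(real), np.mean(rand)

if __name__ == "__main__":
    m_real, m_rand = experiment()
    # Expected: roughly one more consistent key on average for real data,
    # because the target key always counts in addition to the spurious ones.
    print(f"mean consistent keys, real data:   {m_real:.2f}")
    print(f"mean consistent keys, random data: {m_rand:.2f}")
```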

    An Assessment of Differential-Neural Distinguishers

    Since the introduction of differential-neural cryptanalysis, as the machine-learning-assisted differential cryptanalysis proposed in [Goh19] is now commonly called, a number of follow-up works have been published, showing its applicability to a wide variety of ciphers. In this work, we set out to vet a multitude of the differential-neural distinguishers presented so far, and additionally provide general insights. Firstly, we show for a selection of different ciphers how differential-neural distinguishers for those ciphers can be (automatically) optimized, providing guidance for doing so for other ciphers as well. Secondly, we explore a correlation between a differential-neural distinguisher's accuracy and a standard notion of difference between the two underlying distributions. Furthermore, we show that for a whole (practically relevant) class of ciphers, the differential-neural distinguisher can use differential features only. Lastly, we rectify a common mistake in the current literature and show that, making use of an idea already presented in the foundational work [Goh19], the claimed improvements from using multiple ciphertext pairs at once are at most marginal, if not non-existent.
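
    The [Goh19] idea alluded to in the last sentence is, in essence, score aggregation: if a per-pair distinguisher outputs an estimated probability p_i that ciphertext pair i comes from the "real" distribution, then several independent pairs can be combined by summing log-likelihood ratios rather than by feeding all pairs into the network at once. A minimal sketch (the function name is ours):

```python
import numpy as np

def combine_scores(p, eps=1e-12):
    """Combine per-ciphertext-pair distinguisher outputs p[i] (estimated probabilities
    of the 'real' class) into one score by summing log-likelihood ratios.
    Under independence, a positive combined score favours the 'real' hypothesis."""
    p = np.clip(np.asarray(p, dtype=np.float64), eps, 1.0 - eps)
    return float(np.sum(np.log(p) - np.log(1.0 - p)))

# Example: five weak per-pair scores, each only slightly above 0.5,
# combine into a clearly positive overall score.
print(combine_scores([0.55, 0.57, 0.52, 0.60, 0.54]))
```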

    Breaking Masked Implementations of the Clyde-Cipher by Means of Side-Channel Analysis

    In this paper we present our solution to the CHES Challenge 2020, the task of which was to break masked hardware and software implementations of the lightweight cipher Clyde by means of side-channel analysis. We target the secret cipher state after processing of the first S-box layer. Using the provided trace data, we obtain a strongly biased posterior distribution for the secret-shared cipher state at the targeted point; this enables us to see exploitable biases even before the secret-sharing-based masking is applied. These biases on the unshared state can be evaluated one S-box at a time and combined across traces, which enables us to recover likely key hypotheses S-box by S-box. In order to see the shared cipher state, we employ a deep neural network similar to the one used by Gohr, Jacob and Schindler to solve the CHES 2018 AES challenge. We modify their architecture to predict the exact bit sequence of the secret-shared cipher state. We find that convergence of training on this task is unsatisfactory with the standard encoding of the shared cipher state and therefore introduce a different encoding of the prediction target, which we call the scattershot encoding. In order to further investigate how exactly the scattershot encoding helps to solve the task at hand, we construct a simple synthetic task in which convergence problems very similar to those we observed in our side-channel task appear with the naive target data encoding but disappear with the scattershot encoding. We complete our analysis by showing results that we obtained with a “classical” method (as opposed to an AI-based method), namely the stochastic approach, which we first generalize for this purpose to the setting of shared keys. We show that the neural network draws on a much broader set of features, which may partially explain why the neural-network-based approach massively outperforms the stochastic approach. On the other hand, the stochastic approach provides insights into properties of the implementation, in particular the observation that the S-boxes differ considerably in how easy or hard they are to predict.
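
    The "combined across traces" step mentioned above is, at its core, Bayesian score accumulation per S-box: given, for each trace, a posterior over the candidate values relevant to one S-box, log-posteriors are summed across traces and the candidates ranked. The sketch below shows only that generic accumulation step; the array shapes and names are our assumptions, not the authors' pipeline.

```python
import numpy as np

def rank_candidates(per_trace_posteriors, eps=1e-30):
    """per_trace_posteriors: array of shape (n_traces, n_candidates), where row t holds
    the side-channel-derived posterior probability of each key/state candidate for one
    S-box given trace t. Returns candidate indices ranked from most to least likely,
    assuming independence across traces (so log-posteriors simply add up)."""
    logp = np.log(np.clip(per_trace_posteriors, eps, 1.0))
    scores = logp.sum(axis=0)
    return np.argsort(scores)[::-1], scores

# Toy usage: 3 traces, 16 candidates; candidate 5 gets extra weight in every trace.
post = np.full((3, 16), 1.0 / 16)
post[:, 5] *= 2.0
post /= post.sum(axis=1, keepdims=True)
ranking, scores = rank_candidates(post)
print(ranking[0])  # expected: 5
```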

    Subsampling and Knowledge Distillation On Adversarial Examples: New Techniques for Deep Learning Based Side Channel Evaluations

    This paper has four main goals. First, we show how we solved the CHES 2018 AES challenge in the contest using essentially a linear classifier combined with a SAT solver and a custom error correction method. This part of the paper has previously appeared in a preprint by the current authors (e-print report 2019/094) and later as a contribution to a preprint write-up of the solutions by the three winning teams (e-print report 2019/860). Second, we develop a novel deep neural network architecture for side-channel analysis that completely breaks the AES challenge, allowing for fairly reliable key recovery with just a single trace on the unknown-device part of the CHES challenge (with an expected success rate of roughly 70 percent if about 100 CPU hours are allowed for the equation solving stage of the attack). This solution significantly improves upon all previously published solutions of the AES challenge, including our baseline linear solution. Third, we consider the question of leakage attribution for both the classifier we used in the challenge and for our deep neural network. Direct inspection of the weight vector of our machine learning model yields a lot of information on the implementation for our linear classifier. For the deep neural network, we test three other strategies (occlusion of traces; inspection of adversarial changes; knowledge distillation) and find that these can yield information on the leakage essentially equivalent to that gained by inspecting the weights of the simpler model. Fourth, we study the properties of adversarially generated side-channel traces for our model. Partly reproducing recent work on useful features in adversarial examples in our application domain, we find that a linear classifier generalizing to an unseen device much better than our linear baseline can be trained using only adversarial examples (fresh random keys, adversarially perturbed traces) for our deep neural network. This gives a new way of extracting human-usable knowledge from a deep side channel model while also yielding insights on adversarial examples in an application domain where relatively few sources of spurious correlations between data and labels exist. The experiments described in this paper can be reproduced using code available at https://github.com/agohr/ches2018
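
    Of the three attribution strategies named for the deep network, occlusion is the easiest to summarize in code: blank out one window of the trace at a time, re-score the occluded trace with the trained model, and record the score drop per window as an importance profile. The sketch below is generic; the callable model, the window size, and the mean-value occlusion baseline are our illustrative choices, not details taken from the paper.

```python
import numpy as np

def occlusion_importance(model, trace, window=50):
    """Leakage attribution by occlusion (generic sketch).
    model  : callable mapping a 1-D trace to a scalar score (e.g. a class probability)
    trace  : 1-D array of side-channel samples
    Returns one value per window: the drop in the model's score when that window is
    replaced by its mean value (the occlusion baseline used here)."""
    base = model(trace)
    drops = []
    for start in range(0, len(trace), window):
        occluded = trace.copy()
        occluded[start:start + window] = occluded[start:start + window].mean()
        drops.append(base - model(occluded))
    return np.array(drops)

# Toy usage: a stand-in "model" that is only sensitive to samples 200..249,
# so the fifth window (index 4) should dominate the importance profile.
toy_model = lambda t: float(t[200:250].std())
trace = np.random.default_rng(0).normal(size=1000)
print(int(np.argmax(occlusion_importance(toy_model, trace))))  # -> 4
```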

    CHES 2018 Side Channel Contest CTF - Solution of the AES Challenges

    Alongside CHES 2018, the side-channel contest 'Deep learning vs. classic profiling' was held. Our team won both AES challenges (masked AES implementation), working under the handle AGSJWS. Here we describe and analyse our attack. We can solve the more difficult of the two challenges with 22 to 55 power traces, which is far fewer than were available in the contest. Our attack combines techniques from machine learning with classical techniques. The attack was superior to all classical and deep-learning-based attacks that we tried. Moreover, it provides some insights into the implementation.

    On noncommutative deformations, cohomology of color-commutative algebras and formal smoothness

    The main topic under study in the present work is the deformation theory of color algebras. Color algebras are generalized analogues of associative superalgebras, where the underlying grading can be over an arbitrary abelian group and the Koszul sign is replaced by a bicharacter from the group into the base ring. A special case of particular interest is that of color-commutative algebras, which satisfy a commutation identity similar to (but much more general than) that of supercommutative algebras. Examples of color-commutative algebras include commutative and supercommutative superalgebras, the quaternions and para-quaternions, full matrix algebras over suitable base rings, Clifford algebras, and group rings over certain nonabelian groups. In the present work, Gerstenhaber-type formal deformations of these algebras are studied. In doing so, we extend previous work by Scheunert and provide a different approach to noncommutative deformation theory as introduced by Pinczon and Nadaud. In preparation for developing deformation theory for color algebras, we adapt a number of tools from ungraded Hochschild theory to our setting: among them, we derive an adapted Ext-functor, a color Gerstenhaber bracket, twisted graded versions of pre-Lie algebras and pre-Lie systems, and colored analogs of the classical results linking infinitesimal deformations and obstructions to the extension of deformations to second and third Hochschild cohomology, respectively. Additionally, we discuss the impact of some decisions in the construction of the trivial deformation object (color power series rings of given degree) on the resulting deformation theory. Finally, color-commutative deformations of color-commutative algebras are discussed and a suitable version of Harrison cohomology is developed. Also, the problem of classifying the color-commutative structures compatible with a given ungraded algebra is discussed and one nontrivial example is studied in detail. In support of all of these efforts, a number of structure theorems about color-commutative algebras are shown.
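
    For orientation, the classical ungraded statement that the "colored analogs" mentioned above generalize can be recorded compactly: the first-order term of a Gerstenhaber-type formal deformation of an associative product must be a Hochschild 2-cocycle, and obstructions to extending a deformation order by order live in third Hochschild cohomology. This is the standard ungraded setup; the color/graded refinements are the subject of the work itself.

```latex
% Formal deformation of an associative product on A over k[[t]]:
%   a *_t b = ab + t\,\mu_1(a,b) + t^2 \mu_2(a,b) + \dots
% Associativity of *_t at first order in t forces \mu_1 to be a Hochschild 2-cocycle:
\[
  a\,\mu_1(b,c) \;-\; \mu_1(ab,c) \;+\; \mu_1(a,bc) \;-\; \mu_1(a,b)\,c \;=\; 0
  \qquad \text{for all } a, b, c \in A,
\]
% i.e. \delta\mu_1 = 0 in the Hochschild complex; obstructions to extending a given
% deformation order by order live in the third Hochschild cohomology group HH^3(A,A).
```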