
    Links between Division Property and Other Cube Attack Variants

    A theoretically reliable key-recovery attack should evaluate not only the non-randomness for the correct key guess but also the randomness for the wrong ones. The former has always been the main focus, but neglecting the latter can also lead to self-contradictory results. In fact, the theoretical treatment of wrong key guesses is overlooked in quite a few existing key-recovery attacks, especially the previous cube attack variants based on pure experiments. In this paper, we draw links between the division property and several variants of the cube attack. In addition to the zero-sum property, we further prove that the bias phenomenon, the non-randomness widely utilized in dynamic cube attacks and cube testers, can also be reflected by the division property. Based on such links, we provide several results. Firstly, we give a dynamic cube key-recovery attack on full Grain-128. Compared with Dinur et al.'s original attack, ours is supported by a theoretical analysis of the bias based on a more elaborate assumption. Our attack can recover 3 key bits with complexity 2^{97.86} and an evaluated success probability of 99.83%, so the overall complexity for recovering the full 128 key bits is 2^{125}. Secondly, now that the bias phenomenon can be evaluated efficiently and precisely, we further derive new secure bounds for Grain-like primitives (namely Grain-128, Grain-128a, Grain-V1, Plantlet) against both zero-sum and bias cube testers. Our secure bounds indicate that 256 initialization rounds are not enough to guarantee that Grain-128 resists bias-based cube testers. This provides an efficient tool for determining the number of initialization rounds of newly designed stream ciphers. Thirdly, we improve Wang et al.'s relaxed term enumeration technique proposed at CRYPTO 2018 and extend their results on Kreyvium and ACORN by 1 and 13 rounds (reaching 892 and 763 rounds) with complexities 2^{121.19} and 2^{125.54} respectively. To our knowledge, our results are the current best key-recovery attacks on these two primitives.
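
    As a quick sanity check of the complexity figures quoted above (an illustrative calculation only, not taken from the paper), the total attack cost combines the cube phase that recovers 3 key bits with an exhaustive search over the remaining 125 key bits, and the latter dominates:

        from math import log2

        cube_phase = 2 ** 97.86       # dynamic cube attack recovering 3 key bits
        brute_force = 2 ** (128 - 3)  # exhaustive search over the remaining 125 bits
        total = cube_phase + brute_force

        print(f"total ~ 2^{log2(total):.2f}")  # ~ 2^125.00, dominated by the brute-force phase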

    Approximate Modeling of Signed Difference and Digraph based Bit Condition Deduction: New Boomerang Attacks on BLAKE

    The signed difference is a powerful tool for analyzing Addition, XOR, Rotation (ARX) cryptographic primitives. Currently, solving an accurate model for signed-difference propagation is infeasible. We propose an approximate MILP modeling method capturing the propagation rules of signed differences. Unlike an accurate signed-difference model, the approximate model focuses only on active bits and ignores the possible bit conditions on inactive bits. To overcome the loss of accuracy arising from ignoring bit conditions on inactive bits, we propose an additional tool that deduces all bit conditions automatically. The tool is based on a directed graph capturing the whole computation process of ARX primitives by drawing links among intermediate words and operations. The digraph is also applicable in the MILP model construction process: it enables us to identify the parameters upper-bounding the number of bit conditions so as to define the objective function, and it is further used to connect the top and bottom signed differential paths of the boomerang by introducing proper constraints that avoid incompatible intersections. Benefiting from the approximate model and the directed-graph-based tool, the solving time of the new MILP model is significantly reduced, enabling us to deduce signed differential paths efficiently and accurately. To show the utility of our method, we propose boomerang attacks on the keyed permutations of three ARX hash functions of the BLAKE family. For the first time, we mount an attack on the full 7 rounds of BLAKE3, with complexity as low as 2^{180}. Our best attack on BLAKE2s improves the previous best result by 0.5 rounds, and with lower complexity. The attacks on BLAKE-256 cover the same 8 rounds as the previous best result but with complexity 2^{16} times lower. All our results are verified practically with round-reduced boomerang quartets.
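
    The boomerang quartets used for practical verification can be checked with a generic procedure. Below is a minimal sketch in which a toy 32-bit ARX-style permutation stands in for BLAKE's keyed permutation; the permutation, constants and differences are illustrative assumptions, not values from the paper:

        MASK = 0xFFFFFFFF
        CONSTS = (0x9E3779B9, 0x7F4A7C15, 0x85EBCA6B)

        def rotl(x, r): return ((x << r) | (x >> (32 - r))) & MASK
        def rotr(x, r): return ((x >> r) | (x << (32 - r))) & MASK

        def toy_arx_perm(x):
            # stand-in ARX permutation (addition, rotation, XOR), not BLAKE itself
            for c in CONSTS:
                x = rotl((x + c) & MASK, 7) ^ c
            return x

        def toy_arx_perm_inv(y):
            # exact inverse of toy_arx_perm
            for c in reversed(CONSTS):
                y = (rotr(y ^ c, 7) - c) & MASK
            return y

        def is_boomerang_quartet(x, alpha, delta, perm=toy_arx_perm, perm_inv=toy_arx_perm_inv):
            # encrypt the pair (x, x^alpha), shift both outputs by delta, decrypt,
            # and test whether the input difference alpha reappears
            y0, y1 = perm(x), perm(x ^ alpha)
            x2, x3 = perm_inv(y0 ^ delta), perm_inv(y1 ^ delta)
            return (x2 ^ x3) == alpha

    A boomerang distinguisher then counts how often such quartets close for random inputs, compared with the expectation for a random permutation.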

    Key Filtering in Cube Attacks from the Implementation Aspect

    In cube attacks, key filtering is a basic step that identifies the correct key candidates by referring to the truth tables of superpolies. When superpolies contain massive numbers of terms, the truth-table lookup complexity of key filtering increases significantly. In this paper, we propose the concept of implementation dependency, dividing all cube attacks into two categories: implementation dependent and implementation independent. Implementation dependent cube attacks are only feasible under the assumption that one encryption oracle query is more expensive than one table lookup. On the contrary, implementation independent cube attacks remain feasible in the extreme case where the encryption oracle is implemented as a full codebook, making one encryption query equivalent to one table lookup. From this point of view, we scrutinize existing cube attack results on the stream ciphers Trivium, Grain-128AEAD, Acorn and Kreyvium. As a result, many of them turn out to be implementation dependent. Combining the degree evaluation and divide-and-conquer techniques used for superpoly recovery, we further propose new cube attack results on Kreyvium reduced to 898, 899 and 900 rounds. These new results not only reach the largest number of rounds so far but are also implementation independent.
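
    For intuition, the key-filtering step itself can be sketched as follows, assuming a superpoly and a cube have already been recovered; the oracle, cube positions and superpoly below are placeholders, not those of Kreyvium or the other ciphers discussed:

        from itertools import product

        def cube_sum(oracle, cube_positions):
            # XOR the keystream bit over all 2^|cube| assignments of the cube IV positions;
            # `oracle` queries the cipher under the unknown secret key
            acc = 0
            for assignment in product((0, 1), repeat=len(cube_positions)):
                acc ^= oracle(dict(zip(cube_positions, assignment)))
            return acc

        def filter_keys(oracle, cube_positions, superpoly, key_candidates):
            # keep only those key guesses whose superpoly value (a truth-table lookup
            # in practice) matches the cube summation observed from the oracle
            observed = cube_sum(oracle, cube_positions)
            return [k for k in key_candidates if superpoly(k) == observed]

    Whether such table-lookup-based filtering actually beats querying the encryption oracle for every candidate is exactly the implementation-dependency question raised above.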

    Observations on the Dynamic Cube Attack of 855-Round TRIVIUM from Crypto'18

    Recently, a new kind of dynamic cube attack was proposed by Fu et al. With some key guesses and a transformation of the output bit, they claim that, when the key guesses are correct, the degree of the transformed output bit drops so significantly that cubes of lower dimension cannot exist, making the output bit vulnerable to a zero-sum cube tester using cubes of slightly higher dimension. They applied their method to 855-round TRIVIUM. In order to verify the correctness of their result, they also proposed a practical attack on 721-round TRIVIUM, claiming that the transformed output bit after 721 rounds of initialization does not contain cubes of dimension 31 or below. However, the degree evaluation algorithm used by Fu et al. is innovative and complicated, and its complexity is not given. Their algorithm can only be implemented on huge clusters and cannot be verified with existing theoretic tools. In this paper, we theoretically analyze the dynamic cube attack method given by Fu et al. using the division property and MILP modeling techniques. Firstly, we draw links between the division property and Fu et al.'s dynamic cube attack, so that their method can be described as a theoretically well-founded and computationally economical MILP-aided division-property-based cube attack. With the MILP model drawn according to the division property, we analyze 721-round TRIVIUM in detail and find some interesting results:
    1. The degree evaluation using our MILP method is more accurate than Fu et al.'s. Fu et al. evaluate the degree of the pure output bit z_{721} as 40, while our method gives 29. We practically verified the correctness of our method by trying thousands of random keys, random 30-dimensional cubes and random assignments to the non-cube IVs, finding that the summations are constantly 0.
    2. For the transformed output bit (1+s^{1}_{290})⋅z_{721}, we prove the same degree 31 as Fu et al., and we also find that 32-dimensional cubes have the zero-sum property for correct key guesses. But since the degree of the pure z_{721} is only 29, the 721-round practical attack on TRIVIUM violates the principle of Fu et al.'s work: after the transformation of the output bit, when the key guesses are correct, the degree of the transformed output bit has not dropped but risen.
    3. Now that the degree-theoretic foundation of the 721-round attack has been violated, we also find that the key-recovery attack cannot be carried out either. We theoretically prove and practically verify that, no matter whether the key guesses are correct or incorrect, the summations over 32-dimensional cubes are always 0. So no key bit can be recovered at all.
    All these analyses of 721-round TRIVIUM can be verified practically, and we release our C++ source code for the implementation as well. Secondly, we revisit their 855-round result. Our MILP model reveals that the 855-round result suffers from the same problems as its 721-round counterpart. We provide theoretic evidence that, after their transformation, the degree of the output bit is more likely to rise than to drop. Furthermore, since Fu et al.'s degree evaluation is written in an unclear manner and no complexity analysis is given, we rewrite the algorithm according to their main ideas and supplement a detailed complexity analysis. Our analysis indicates that a precise evaluation of the degree requires complexities far beyond practical reach. We also demonstrate that further abbreviation of our rewritten algorithm can result in wrong evaluations, which might be the reason why Fu et al. give such a degree evaluation; this is an additional argument against Fu et al.'s dynamic cube attack method. Thirdly, the selection of Fu et al.'s cube dimension is also questionable. According to our experiments and existing theoretic results, there is a high risk that the correct key guesses and the wrong ones share the same zero-sum property under Fu et al.'s cube testers. As a remedy, we suggest that concrete cubes satisfying particular conditions should be identified rather than relying on the IV-degree-drop hypothesis. To conclude, Fu et al.'s dynamic cube attack on 855-round TRIVIUM is questionable, and 855-round TRIVIUM, as well as TRIVIUM with 840 or more rounds, should still be open for further convincing cryptanalysis.
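
    For reference, the zero-sum cube tester underlying both the original attack and our verification can be phrased generically; the sketch below uses a placeholder output-bit function rather than TRIVIUM's z_{721}, and the cube positions are an assumption for illustration:

        from itertools import product

        def zero_sum_test(output_bit, key, cube_positions, iv_len):
            # XOR the output bit over all assignments of the cube IV positions, with the
            # non-cube IV bits fixed to 0; a zero sum for the correct key guess (and a
            # random-looking sum for wrong guesses) is what a cube tester looks for
            acc = 0
            for assignment in product((0, 1), repeat=len(cube_positions)):
                iv = [0] * iv_len
                for pos, bit in zip(cube_positions, assignment):
                    iv[pos] = bit
                acc ^= output_bit(key, iv)
            return acc == 0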

    Author Correction: Single-atom Cu anchored catalysts for photocatalytic renewable H2 production with a quantum efficiency of 56%

    Correction to: Nature Communications https://doi.org/10.1038/s41467-021-27698-3, published online 10 January 2022. In Supplementary Fig. 28b in the Supplementary PDF for this article, the figure panel incorrectly read ‘345 mW/cm2’ but should have been ‘34.5 mW/cm2’. In the caption of Supplementary Fig. 20 in the Supplementary PDF for this article, the term ‘isotropic analysis’ should have read ‘isotopic analysis’. In the caption of Supplementary Fig. 21 in the Supplementary PDF for this article, the term ‘isotropic analysis’ should have read ‘isotopic analysis’. In the caption of Supplementary Fig. 28b in the Supplementary PDF for this article, the term ‘isotropic test’ should have read ‘isotopic test’.

    Improved Division Property Based Cube Attacks Exploiting Algebraic Properties of Superpoly

    Characterization of lacustrine harmful algal blooms using multiple biomarkers: Historical processes, driving synergy, and ecological shifts

    11 pages, 5 figures, supplementary materials: https://doi.org/10.1016/j.watres.2023.119916. Harmful algal blooms (HABs) producing toxic metabolites are increasingly threatening environmental and human health worldwide. Unfortunately, the long-term processes and mechanisms triggering HABs remain largely unclear owing to the scarcity of temporal monitoring. Retrospective analysis of sedimentary biomarkers using up-to-date chromatography and mass spectrometry techniques provides a potential means to reconstruct the past occurrence of HABs. By combining aliphatic hydrocarbons, photosynthetic pigments, and cyanotoxins, we herein quantified century-long changes in the abundance, composition, and variability of phototrophs, particularly toxigenic algal blooms, in China's third-largest freshwater lake, Lake Taihu. Our multi-proxy limnological reconstruction revealed an abrupt ecological shift in the 1980s characterized by elevated primary production, Microcystis-dominated cyanobacterial blooms, and exponential microcystin production, in response to nutrient enrichment, climate change, and trophic cascades. The empirical results from ordination analysis and generalized additive models support a synergy of climate warming and eutrophication through nutrient recycling and their feedback through buoyant cyanobacterial proliferation, which sustains bloom-forming potential and further promotes the occurrence of increasingly toxic cyanotoxins (e.g., microcystin-LR) in Lake Taihu. Moreover, the temporal variability of the lake ecosystem, quantified using variance and rate-of-change metrics, rose continuously after the state change, indicating increased ecological vulnerability and declining resilience following blooms and warming. With the persistent legacy effects of lake eutrophication, nutrient reduction efforts mitigating toxic HABs will probably be overwhelmed by climate change effects, emphasizing the need for more aggressive and integrated environmental strategies. This study was financially supported by the National Natural Science Foundation of China (42007284, 42111530229), the National Key Research and Development Program of China (2022YFF0801101), the Natural Science Foundation of Jiangsu Province of China (BK20201099), and the “Innovative and Entrepreneurial Talent Program” of Jiangsu Province (JSSCBS20211389). The author Qi Lin acknowledges the support of the Youth Scientists Group at the Nanjing Institute of Geography and Limnology, Chinese Academy of Sciences (No. 2021NIGLASCJH03). With the institutional support of the ‘Severo Ochoa Centre of Excellence’ accreditation (CEX2019-000928-S). Peer reviewed.