
    Towards Finding the Best Characteristics of Some Bit-oriented Block Ciphers and Automatic Enumeration of (Related-key) Differential and Linear Characteristics with Predefined Properties

    In this paper, we investigate the Mixed-Integer Linear Programming (MILP) modelling of the differential and linear behavior of a wide range of block ciphers. We point out that the differential behavior of an arbitrary S-box can be exactly described by a small system of linear inequalities.

    Based on this observation and the MILP technique, we propose an automatic method for finding high-probability (related-key) differential or linear characteristics of block ciphers. Compared with Sun et al.'s heuristic method presented at Asiacrypt 2014, the new method is exact for most ciphers, in the sense that every feasible 0-1 solution of the MILP model it generates corresponds to a valid characteristic, so there is no need to repeatedly add valid cutting-off inequalities to the MILP model as is done in Sun et al.'s method; the new method is more powerful, allowing us to obtain the exact lower bounds on the number of differentially or linearly active S-boxes; and the new method is more efficient, allowing us to obtain characteristics with higher probability or covering more rounds of a cipher (sometimes with less computational effort).

    Further, by encoding the probability information of an S-box's differentials into its differential patterns, we present a novel MILP modelling technique which can be used to search for the characteristics with the maximal probability, rather than the characteristics with the smallest number of active S-boxes. With this technique, we are able to get tighter security bounds and find better characteristics.

    Moreover, by employing a type of specially constructed linear inequalities which can remove exactly one feasible 0-1 solution from the feasible region of an MILP problem, we propose a method for the automatic enumeration of all (related-key) differential or linear characteristics with some predefined properties, e.g., characteristics with a given input and/or output difference/mask, or with a limited number of active S-boxes. Such a method is very useful in automatic (related-key) differential analysis, truncated (related-key) differential analysis, linear hull analysis, and the automatic construction of (related-key) boomerang/rectangle distinguishers.

    The methods presented in this paper are simple and straightforward. Based on them, we implement a Python framework for automatic cryptanalysis and perform extensive experiments with it. To demonstrate the usefulness of these methods, we apply them to SIMON, PRESENT, Serpent, LBlock and DESL, and obtain some improved cryptanalytic results.
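    The first step of such an S-box model can be sketched in a few lines: compute the difference distribution table (DDT) and collect the possible differential transitions as 0-1 points. Turning those points into a minimal system of linear inequalities (e.g., via a convex-hull computation in SageMath) is the subsequent step and is omitted here; the PRESENT S-box is used only as a convenient example.

```python
# Compute the DDT of a 4-bit S-box and extract the set of possible
# differential patterns as 8-bit 0/1 points -- the input to the
# inequality-generation step of the MILP model described in the abstract.

PRESENT_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def ddt(sbox):
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for x in range(n):
        for a in range(n):
            table[a][sbox[x] ^ sbox[x ^ a]] += 1
    return table

def differential_patterns(sbox):
    """Possible patterns as 0/1 points (a_3..a_0, b_3..b_0)."""
    table = ddt(sbox)
    pts = []
    for a in range(16):
        for b in range(16):
            if table[a][b] > 0:
                bits = [(a >> i) & 1 for i in (3, 2, 1, 0)]
                bits += [(b >> i) & 1 for i in (3, 2, 1, 0)]
                pts.append(tuple(bits))
    return pts

table = ddt(PRESENT_SBOX)
pts = differential_patterns(PRESENT_SBOX)
print("differential uniformity:",
      max(table[a][b] for a in range(1, 16) for b in range(16)))
print("possible patterns:", len(pts), "of 256")
```

    A convex-hull tool would then return linear inequalities satisfied by exactly these points, so that every feasible 0-1 assignment of the MILP model corresponds to a valid transition.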

    Links between Division Property and Other Cube Attack Variants

    A theoretically reliable key-recovery attack should evaluate not only the non-randomness for the correct key guess but also the randomness for the wrong ones. The former has always been the main focus, but the absence of the latter can also cause self-contradictory results. In fact, the theoretical discussion of wrong key guesses is overlooked in quite a few existing key-recovery attacks, especially the previous cube attack variants based on pure experiments. In this paper, we draw links between the division property and several variants of the cube attack. In addition to the zero-sum property, we further prove that the bias phenomenon, the non-randomness widely utilized in dynamic cube attacks and cube testers, can also be reflected by the division property. Based on such links, we provide several results. Firstly, we give a dynamic cube key-recovery attack on full Grain-128. Compared with Dinur et al.'s original attack, ours is supported by a theoretical analysis of the bias based on a more elaborate assumption. Our attack can recover 3 key bits with a complexity of 2^97.86 and an evaluated success probability of 99.83%; thus, the overall complexity for recovering the full 128 key bits is 2^125. Secondly, now that the bias phenomenon can be evaluated efficiently and elaborately, we derive new secure bounds for Grain-like primitives (namely Grain-128, Grain-128a, Grain-v1 and Plantlet) against both zero-sum and bias cube testers. Our secure bounds indicate that 256 initialization rounds cannot guarantee that Grain-128 resists bias-based cube testers. This provides an efficient tool for determining the number of initialization rounds of newly designed stream ciphers. Thirdly, we improve Wang et al.'s relaxed term enumeration technique proposed at CRYPTO 2018 and extend their results on Kreyvium and ACORN by 1 and 13 rounds (reaching 892 and 763 rounds) with complexities 2^121.19 and 2^125.54 respectively. To our knowledge, our results are the current best key-recovery attacks on these two primitives.
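    The two non-randomness notions the paper connects to the division property, the zero-sum property and the bias of cube sums, can be illustrated on a toy scale. The output bit f below is made up for illustration and is not any real cipher.

```python
# Toy cube tester: sum a Boolean output bit over a cube of IV bits and
# inspect the resulting superpoly across all keys.  A superpoly that is
# identically 0 gives a zero-sum; a superpoly like k0*k1 gives biased sums.

from itertools import product

def f(k, v):
    # toy ANF: f = v0*v1*(k0*k1) XOR v0*k2 XOR v2 XOR k0
    return (v[0] & v[1] & k[0] & k[1]) ^ (v[0] & k[2]) ^ v[2] ^ k[0]

def cube_sum(k, cube, n_iv=3):
    """XOR f over all assignments of the cube IV bits (others fixed to 0)."""
    acc = 0
    for bits in product((0, 1), repeat=len(cube)):
        v = [0] * n_iv
        for i, b in zip(cube, bits):
            v[i] = b
        acc ^= f(k, v)
    return acc

keys = list(product((0, 1), repeat=3))
sums_big = [cube_sum(list(k), [0, 1, 2]) for k in keys]  # superpoly == 0
sums_small = [cube_sum(list(k), [0, 1]) for k in keys]   # superpoly == k0*k1
print("cube {v0,v1,v2} zero-sum for all keys:", all(s == 0 for s in sums_big))
print("cube {v0,v1}: Pr[sum = 1] =", sum(sums_small) / len(keys))
```

    The smaller cube's sum equals k0*k1, which is 1 for only a quarter of the keys: exactly the kind of bias a cube tester detects and the paper evaluates via the division property.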

    Mixed Integer Programming Models for Finite Automaton and Its Application to Additive Differential Patterns of Exclusive-Or

    Inspired by Fu et al.'s work on modelling the exclusive-or differential property of modular addition as a mixed-integer programming problem, we propose a method with which any finite automaton can be formulated as a mixed-integer programming model. Using this method, we show how to construct a mixed-integer programming model whose feasible region is the set of all differential patterns $(\alpha, \beta, \gamma)$ such that ${\rm adp}^\oplus(\alpha, \beta \rightarrow \gamma) = {\rm Pr}_{x,y}[((x + \alpha) \oplus (y + \beta)) - (x \oplus y) = \gamma] > 0$. We expect that this may be useful in automatic differential analysis with additive differences.
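    For small word sizes, the quantity adp^⊕ defined above can be checked by direct enumeration; the MILP/automaton model characterises symbolically exactly the triples this brute force finds to have nonzero probability. A sketch for 6-bit words:

```python
# Brute-force evaluation of the additive differential probability of XOR,
#   adp_xor(a, b -> g) = Pr_{x,y}[ ((x+a) XOR (y+b)) - (x XOR y) = g mod 2^n ],
# matching the definition in the abstract, for a small word size n = 6.

N = 6
MASK = (1 << N) - 1

def adp_xor(a, b, g):
    count = 0
    for x in range(1 << N):
        for y in range(1 << N):
            lhs = ((x + a) & MASK) ^ ((y + b) & MASK)
            if (lhs - (x ^ y)) & MASK == g:
                count += 1
    return count / (1 << (2 * N))

print("adp_xor(0,0 -> 0) =", adp_xor(0, 0, 0))   # trivial pattern: prob 1
print("feasible gammas for (a,b) = (1,1):",
      sum(1 for g in range(1 << N) if adp_xor(1, 1, g) > 0))
```

    The feasible-gamma count is the size of the set the paper's MILP feasible region describes for this (alpha, beta) pair.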

    Algorithms for massively parallel generic hp-adaptive finite element methods

    Efficient algorithms for the numerical solution of partial differential equations are required to solve problems on an economically viable timescale. In general, this is achieved by adapting the resolution of the discretization to the investigated problem, as well as by exploiting hardware specifications. For the latter category, parallelization plays a major role on modern multi-core and multi-node architectures, especially in the context of high-performance computing. Using finite element methods, solutions are approximated by discretizing the function space of the problem with piecewise polynomials. With hp-adaptive methods, the polynomial degrees of these basis functions may vary on locally refined meshes. We present algorithms and data structures required for generic hp-adaptive finite element software applicable to both continuous and discontinuous Galerkin methods on distributed-memory systems. Both function space and mesh may be adapted dynamically during the solution process. We cover details concerning the unique enumeration of degrees of freedom with continuous Galerkin methods, the communication of variable-size data, and load balancing. Furthermore, we present strategies to determine the type of adaptation based on error estimation and prediction, as well as smoothness estimation via the decay rate of the coefficients of Fourier and Legendre series expansions. Both refinement and coarsening are considered. A reference implementation in the open-source library deal.II is provided and applied to the Laplace problem on a domain with a reentrant corner which invokes a singularity. With this example, we demonstrate the benefits of the hp-adaptive methods in terms of error convergence and show that our algorithm scales up to 49,152 MPI processes.
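    The smoothness-estimation strategy mentioned in the abstract can be illustrated in one dimension: expand a function in Legendre polynomials, fit the decay rate of the coefficient magnitudes, and prefer p-refinement where the decay is fast (smooth solution) and h-refinement where it is slow. This is a stdlib-only sketch with a plain trapezoidal quadrature, not the deal.II implementation; the sample count, maximum degree, and cut-off threshold are arbitrary choices.

```python
import math

def legendre_coeffs(f, kmax, samples=20001):
    """Approximate c_k = (2k+1)/2 * int_{-1}^{1} f(x) P_k(x) dx with a
    composite trapezoidal rule and the three-term Legendre recurrence."""
    h = 2.0 / (samples - 1)
    xs = [-1.0 + i * h for i in range(samples)]
    p_prev = [1.0] * samples    # P_0
    p_curr = list(xs)           # P_1
    coeffs = []
    for k in range(kmax + 1):
        if k == 0:
            pk = p_prev
        elif k == 1:
            pk = p_curr
        else:                   # k*P_k = (2k-1)*x*P_{k-1} - (k-1)*P_{k-2}
            pk = [((2 * k - 1) * x * pc - (k - 1) * pp) / k
                  for x, pc, pp in zip(xs, p_curr, p_prev)]
            p_prev, p_curr = p_curr, pk
        vals = [f(x) * p for x, p in zip(xs, pk)]
        integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
        coeffs.append((2 * k + 1) / 2.0 * integral)
    return coeffs

def decay_rate(coeffs):
    """Least-squares slope of ln|c_k| versus k; more negative = smoother."""
    pts = [(k, math.log(abs(c))) for k, c in enumerate(coeffs)
           if k >= 1 and abs(c) > 1e-14]
    n = len(pts)
    sx = sum(k for k, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(k * k for k, _ in pts); sxy = sum(k * y for k, y in pts)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

smooth = decay_rate(legendre_coeffs(math.exp, 8))                # analytic
kinked = decay_rate(legendre_coeffs(lambda x: abs(x - 0.3), 8))  # C^0 kink
print("decay slope of exp(x):    %.2f" % smooth)
print("decay slope of |x - 0.3|: %.2f" % kinked)
```

    The analytic function shows a markedly steeper coefficient decay than the function with a kink, which is the signal an hp-strategy uses to choose between raising the polynomial degree and refining the mesh.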

    New Automatic search method for Truncated-differential characteristics: Application to Midori, SKINNY and CRAFT

    In this paper, using Mixed-Integer Linear Programming, a new automatic search tool for truncated differential characteristics is presented. Our method models the problem of finding a maximal-probability truncated differential characteristic, which is able to distinguish the cipher from a pseudorandom permutation. Using this method, we analyse the Midori64, SKINNY64/X and CRAFT block ciphers, for all of which the existing results are improved. In all cases, the truncated differential characteristic is much more efficient than the (upper bound of the) bit-wise differential characteristic proven by the designers, for any number of rounds. More specifically, the highest numbers of rounds for which an efficient differential characteristic can exist for Midori64, SKINNY64/X and CRAFT are 6, 7 and 10 rounds respectively, for which differential characteristics with maximum probabilities of 2^{-60}, 2^{-52} and 2^{-62.61} (may) exist. Using our new method, we introduce new truncated differential characteristics for these ciphers with respective probabilities 2^{-54}, 2^{-4} and 2^{-24} at the same numbers of rounds. Moreover, the longest truncated differential characteristics found for SKINNY64/X and CRAFT have 10 and 12 rounds, respectively. This method can be used as a new tool for the differential analysis of SPN block ciphers.
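    As a toy illustration of the kind of quantity such a search maximises, the following sketch brute-forces one truncated-differential probability for a made-up two-nibble round: parallel 4-bit S-boxes followed by a simple linear mixing layer. The S-box is hypothetical, not one from Midori, SKINNY or CRAFT; a truncated pattern only records which nibbles are active (A) or inactive (0), not the concrete differences.

```python
# Estimate Pr[truncated pattern (A,A) -> (0,A)] for a toy 2-nibble round,
# i.e. the probability that the first output nibble's difference cancels,
# averaged over all concrete active input differences and all inputs.

import math

TOY_SBOX = [0x6, 0x5, 0xC, 0xA, 0x1, 0xE, 0x7, 0x9,
            0xB, 0x0, 0x3, 0xD, 0x8, 0xF, 0x4, 0x2]  # made-up permutation

def round_fn(x0, x1):
    y0, y1 = TOY_SBOX[x0], TOY_SBOX[x1]
    return y0 ^ y1, y0          # toy linear "mixing" layer

hits = total = 0
for d0 in range(1, 16):         # input pattern (A,A): both nibbles active
    for d1 in range(1, 16):
        for x0 in range(16):
            for x1 in range(16):
                a0, a1 = round_fn(x0, x1)
                b0, b1 = round_fn(x0 ^ d0, x1 ^ d1)
                total += 1
                if a0 == b0 and a1 != b1:   # output pattern (0,A)
                    hits += 1
print("Pr[(A,A) -> (0,A)] ~ 2^%.2f" % math.log2(hits / total))
```

    A MILP model of the kind the paper builds searches over such patterns across many rounds instead of enumerating inputs exhaustively.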

    MILP-aided Cryptanalysis of Round Reduced ChaCha

    The inclusion of ChaCha20 and Poly1305 into the list of supported ciphers in TLS 1.3 necessitates a security evaluation of those ciphers with all the state-of-the-art tools and innovative cryptanalysis methodologies. Mixed Integer Linear Programming (MILP) has been successfully applied to find more accurate characteristics of several ciphers such as SIMON and SPECK. In our research, we use MILP-aided cryptanalysis to search for differential characteristics, linear approximations and integral properties of ChaCha. We are able to find differential trails up to 2 rounds and linear trails up to 1 round. However, no integral distinguisher has been found, even for 1 round
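    The primitive such MILP models describe is the ChaCha quarter-round of RFC 8439; a differential or linear trail search tracks how differences or masks propagate through exactly these add/rotate/xor operations.

```python
# The ChaCha quarter-round as specified in RFC 8439, Section 2.1,
# checked against the test vector of Section 2.1.1.

MASK32 = 0xFFFFFFFF

def rotl32(x, n):
    return ((x << n) | (x >> (32 - n))) & MASK32

def quarter_round(a, b, c, d):
    a = (a + b) & MASK32; d = rotl32(d ^ a, 16)
    c = (c + d) & MASK32; b = rotl32(b ^ c, 12)
    a = (a + b) & MASK32; d = rotl32(d ^ a, 8)
    c = (c + d) & MASK32; b = rotl32(b ^ c, 7)
    return a, b, c, d

state = quarter_round(0x11111111, 0x01020304, 0x9b8d6f43, 0x01234567)
assert state == (0xea2a92f4, 0xcb1cf8ce, 0x4581472e, 0x5881c4bb)
print("quarter-round matches the RFC 8439 test vector")
```

    The modular additions are what make ChaCha hard for MILP: unlike an S-box's DDT, the carry chains couple all 32 bit positions, which is consistent with trails being found only for very few rounds.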

    Mind the Gap - A Closer Look at the Security of Block Ciphers against Differential Cryptanalysis

    Resistance against differential cryptanalysis is an important design criterion for any modern block cipher, and most designs rely on finding some upper bound on the probability of single differential characteristics. However, already at EUROCRYPT '91, Lai et al. observed that differential cryptanalysis rather uses differentials instead of single characteristics. In this paper, we consider exactly the gap between these two approaches and investigate it in the context of recent lightweight cryptographic primitives. This shows that for many recent designs like Midori, Skinny or Sparx one has to be careful, as bounds from counting the number of active S-boxes only give an inaccurate evaluation of the best differential distinguishers. For several designs we found new differential distinguishers and show how this gap evolves. We found an 8-round differential distinguisher for Skinny-64 with a probability of 2^{-56.93}, while the best single characteristic only suggests a probability of 2^{-72}. Our approach is integrated into publicly available tools and can easily be used when developing new cryptographic primitives. Moreover, as differential cryptanalysis is critically dependent on the distribution over the keys for the probability of differentials, we provide experiments for some of the new differentials found in order to confirm that our estimates for the probability are correct. While for Skinny-64 the distribution over the keys follows a Poisson distribution, as one would expect, we noticed that Speck-64 follows a bimodal distribution, and the distribution of Midori-64 suggests a large class of weak keys.
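    The gap can be reproduced numerically on a toy scale: over two key-averaged S-box rounds, the probability of a differential a -> c is the sum over all middle differences b of the characteristic probabilities DP(a->b)*DP(b->c), which can exceed the best single characteristic max_b DP(a->b)*DP(b->c). The sketch uses the PRESENT S-box purely as a convenient example; the paper's targets are Midori, Skinny and Sparx.

```python
# Characteristic vs. differential over two independently keyed S-box rounds:
# find the (a, c) pair with the largest ratio between the summed differential
# probability and the best single characteristic through it.

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # PRESENT S-box

DP = [[0.0] * 16 for _ in range(16)]
for x in range(16):
    for a in range(16):
        DP[a][SBOX[x] ^ SBOX[x ^ a]] += 1 / 16

best = None
for a in range(1, 16):
    for c in range(1, 16):
        diff = sum(DP[a][b] * DP[b][c] for b in range(16))  # all trails
        char = max(DP[a][b] * DP[b][c] for b in range(16))  # best trail
        if diff > 0 and (best is None or diff / char > best[0]):
            best = (diff / char, a, c, diff, char)
ratio, a, c, diff, char = best
print("differential 0x%x -> 0x%x: prob %.4f vs best characteristic %.4f"
      % (a, c, diff, char))
```

    On real ciphers the same clustering effect appears across many rounds and S-box positions, which is exactly why active-S-box counts alone can misjudge the best distinguisher.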

    ์‹œ๊ณ„์—ด ๋ฐ์ดํ„ฐ ํŒจํ„ด ๋ถ„์„์„ ์œ„ํ•œ ์ข…๋‹จ ์‹ฌ์ธต ํ•™์Šต๋ง ์„ค๊ณ„ ๋ฐฉ๋ฒ•๋ก 

    Thesis (Ph.D.) -- Seoul National University Graduate School: College of Engineering, Department of Computer Science and Engineering, February 2019. Advisor: Byoung-Tak Zhang. Pattern recognition within time series data became an important avenue of research in artificial intelligence following the paradigm shift of the fourth industrial revolution. A number of related studies have been conducted over the past few years, and research using deep learning techniques is becoming increasingly popular. Due to the nonstationary, nonlinear and noisy nature of time series data, it is essential to design an appropriate model to extract its significant features for pattern recognition. This dissertation not only discusses pattern recognition using various hand-crafted feature engineering techniques on physiological time series signals, but also proposes an end-to-end deep learning design methodology that requires no feature engineering. Time series signals can be classified into signals with periodic and non-periodic characteristics in the time domain, and this thesis proposes one end-to-end deep learning design methodology for each. The first proposed methodology is Deep ECGNet, a design scheme for an end-to-end deep learning model exploiting the periodic characteristics of electrocardiogram (ECG) signals. ECG, recorded from the electrophysiologic patterns of the heart muscle during heartbeats, is a promising candidate biomarker for estimating event-based stress levels. Conventionally, the beat-to-beat alternations of ECG, i.e., heart rate variability (HRV), have been utilized to monitor mental stress status as well as the mortality of cardiac patients; however, HRV parameters have the disadvantage of requiring a measurement period of at least 5 minutes. In this thesis, human stress states were estimated without special hand-crafted feature engineering, using only 10-second intervals of data and the deep learning model.
    The design methodology of this model incorporates the periodic characteristics of the ECG signal: the main parameters of the 1D CNNs and RNNs reflecting these characteristics were updated according to the stress states. The experimental results proved that the proposed method yielded better performance than the existing HRV parameter extraction methods and spectrogram methods. The second proposed methodology is an automatic end-to-end deep learning design methodology using Bayesian optimization for non-periodic signals. Electroencephalogram (EEG) signals are elicited from the central nervous system (CNS) and reflect genuine emotional states, even at the unconscious level. Due to the low signal-to-noise ratio (SNR) of EEG signals, spectral analysis in the frequency domain has conventionally been applied in EEG studies: EEG signals are filtered into several frequency bands using Fourier or wavelet analyses, and these band features are then fed into a classifier. This thesis proposes an end-to-end automatic deep learning design method that dispenses with this basic feature engineering. Bayesian optimization is a popular technique in machine learning for optimizing model hyperparameters, often used to evaluate expensive black-box functions. In this thesis, we propose a method to optimize the full set of model hyperparameters and the structure of 1D CNN and RNN base models via Bayesian optimization, and on this basis propose the Deep EEGNet model for discriminating human emotional states from EEG signals. Experimental results proved that the proposed method outperformed the conventional band-power feature method.
    In conclusion, this thesis has proposed several methodologies for time series pattern recognition problems, ranging from conventional methods based on hand-crafted feature engineering to end-to-end deep learning design methodologies using only raw time series signals. Experimental results showed that the proposed methodologies can be effectively applied to pattern recognition problems using time series data.
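    The outer optimization loop of the second methodology can be sketched as follows. This is a hedged stand-in: the search space and the objective stub are invented for illustration, and plain random search replaces the thesis's Bayesian optimization (which would substitute a surrogate-model suggestion step for the random sampler); in practice the objective would train the network and return its validation loss.

```python
# Hyperparameter search loop for a 1D CNN/RNN model, with random search
# standing in for Bayesian optimization and a stub objective standing in
# for "train the model and return validation loss".

import random

SPACE = {                      # hypothetical search space
    "filter_len":   [16, 32, 64, 128],   # e.g. tied to one signal period
    "hidden_units": [32, 64, 128, 256],
    "num_layers":   [1, 2, 3],
}

def objective(cfg):
    """Stub for: build model from cfg, train, return validation loss."""
    return ((cfg["filter_len"] - 64) ** 2 / 1e3
            + (cfg["hidden_units"] - 128) ** 2 / 1e4
            + abs(cfg["num_layers"] - 2))

def search(n_trials=50, seed=0):
    rng = random.Random(seed)
    best_cfg, best_loss = None, float("inf")
    for _ in range(n_trials):
        cfg = {k: rng.choice(v) for k, v in SPACE.items()}
        loss = objective(cfg)          # a BO variant would pick cfg from a
        if loss < best_loss:           # surrogate's acquisition function
            best_cfg, best_loss = cfg, loss
    return best_cfg, best_loss

cfg, loss = search()
print("best configuration:", cfg, "loss %.3f" % loss)
```

    Bayesian optimization improves on this loop precisely when each objective evaluation is expensive, since the surrogate model concentrates trials on promising regions of the space.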

    Improved Division Property Based Cube Attacks Exploiting Algebraic Properties of Superpoly (Full Version)

    The cube attack is an important technique for the cryptanalysis of symmetric-key primitives, especially stream ciphers. Aiming at recovering some secret key bits, the adversary reconstructs a superpoly involving the secret key bits by summing over a set of plaintexts/IVs called a cube. The traditional cube attack only exploits linear/quadratic superpolies. Moreover, for a long time after its proposal, the size of the cubes was largely confined to an experimental range, e.g., typically 40. These limits were first overcome by the division-property-based cube attacks proposed by Todo et al. at CRYPTO 2017. Based on the MILP-modelled division property, for a cube (index set) I, they identify the small (index) subset J of the secret key bits involved in the resultant superpoly. During the precomputation phase, which dominates the complexity of the cube attack, 2^{|I|+|J|} encryptions are required to recover the superpoly. Therefore, their attacks are only feasible when the restriction |I|+|J| < n is met. In this paper, we introduce several techniques to improve the division-property-based cube attacks by exploiting various algebraic properties of the superpoly. 1. We propose the "flag" technique to enhance the preciseness of MILP models, so that the proper non-cube IV assignments can be identified to obtain a non-constant superpoly. 2. A degree evaluation algorithm is presented to upper bound the degree of the superpoly. With the knowledge of its degree, the superpoly can be recovered without constructing its whole truth table, which enables us to explore larger cubes I even if |I|+|J| ≥ n. 3. We provide a term enumeration algorithm for finding the monomials of the superpoly, so that the complexity of many attacks can be further reduced. As an illustration, we apply our techniques to attack the initialization of several ciphers. To be specific, our key-recovery attacks reach 839-round TRIVIUM, 891-round Kreyvium, 184-round Grain-128a and 750-round ACORN respectively.
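    A miniature version of the superpoly-recovery phase, on a toy Boolean function rather than a real cipher: once the division property says the superpoly of cube I involves only the key bits in J, 2^{|I|+|J|} evaluations rebuild the superpoly's truth table, and a Möbius transform turns that table into the ANF.

```python
# Recover the superpoly of a toy output bit over cube I = {v0, v1} with
# involved key set J = {k0, k1, k2}: 2^|J| cube sums give its truth table,
# and the binary Moebius transform yields its ANF monomials.

from itertools import product

def f(k, v):
    # toy output bit whose superpoly over cube {v0,v1} is k0*k1 XOR k2
    return (v[0] & v[1] & ((k[0] & k[1]) ^ k[2])) ^ (v[0] & k[2]) ^ v[1]

def cube_sum(k):
    acc = 0
    for v0, v1 in product((0, 1), repeat=2):
        acc ^= f(k, [v0, v1])
    return acc

# truth table of the superpoly over the 2^3 assignments of (k0, k1, k2)
tt = [cube_sum([m & 1, (m >> 1) & 1, (m >> 2) & 1]) for m in range(8)]

# Moebius transform: truth table -> ANF coefficients
anf = tt[:]
for i in range(3):
    for m in range(8):
        if m & (1 << i):
            anf[m] ^= anf[m ^ (1 << i)]

monomials = [m for m in range(8) if anf[m]]
print("superpoly monomials (bitmasks over k0,k1,k2):", monomials)
# masks 0b011 and 0b100, i.e. the monomials k0*k1 and k2
```

    The paper's degree and term enumeration algorithms reduce exactly this phase: knowing the degree or the candidate monomials in advance means far fewer than 2^{|J|} truth-table entries need to be computed.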
