
    Searching Model Structures Based on Marginal Model Structures


    Various Security Analysis of a pfCM-MD Hash Domain Extension and Applications based on the Extension

    We propose a new hash domain extension, \textit{prefix-free-Counter-Masking-MD (pfCM-MD)}. Among the security notions for hash functions, we focus on the indifferentiability notion, by which we can check whether the structure of a given hash function has any weakness. We then consider the security of HMAC, two new PRF constructions, the NIST SP 800-56A key derivation function, and the randomized hashing of NIST SP 800-106, all of them based on pfCM-MD. In particular, thanks to its counter, pfCM-MD is secure against all generic second-preimage attacks, such as the Kelsey-Schneier attack \cite{KeSc05} and the attack of Andreeva {\em et al.} \cite{AnBoFoHoKeShZi08}. Our proof technique and most of our notation follow \cite{BeDaPeAs08,Bellare06,BeCaKr96a}.
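    As an illustrative sketch of the counter idea (not the authors' exact pfCM-MD: the compression function, initial value, and padding below are assumed stand-ins, and the prefix-free encoding is omitted), a per-block counter can be fed into a Merkle-Damgard-style iteration so that identical message blocks at different positions yield different chaining inputs:

```python
import hashlib

def compress(state: bytes, block: bytes, counter: int) -> bytes:
    """Stand-in compression function (SHA-256 is only an illustrative
    substitute for the fixed compression function of the paper)."""
    ctr = counter.to_bytes(8, "big")  # per-block counter masks the input
    return hashlib.sha256(state + ctr + block).digest()

def counter_md(message: bytes, block_size: int = 64) -> bytes:
    """Minimal counter-based Merkle-Damgard iteration (assumed padding:
    length suffix then zero-fill; the paper's encoding differs)."""
    padded = message + len(message).to_bytes(8, "big")
    if len(padded) % block_size:
        padded += b"\x00" * (block_size - len(padded) % block_size)
    state = b"\x00" * 32  # assumed initial value
    for i in range(0, len(padded), block_size):
        state = compress(state, padded[i:i + block_size], i // block_size)
    return state
```

    Because the counter pins each block to its position, an attacker cannot splice message blocks across positions, which is the intuition behind resistance to generic second-preimage attacks.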

    Indifferentiable Security Analysis of choppfMD, chopMD, a chopMDP, chopWPH, chopNI, chopEMD, chopCS, and chopESh Hash Domain Extensions

    We provide simple and unified indifferentiability security analyses of the choppfMD, chopMD, chopMDP (where the permutation P is XORed with any non-zero constant), chopWPH (the chopped version of the Wide-Pipe Hash proposed in \cite{Lucks05}), chopEMD, chopNI, chopCS, and chopESh hash domain extensions. Even though security analyses of these exist in the case of no-bit chopping (i.e., s = 0), there has been no unified way to give the security proofs. All our proofs in this paper follow the technique introduced in \cite{BeDaPeAs08}; they are simple and easy to follow.
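    The "chop" in these constructions refers to discarding s output bits of the underlying hash. A minimal sketch (SHA-256 stands in for the inner MD-style hash, which is an assumption; each listed construction chops a different underlying extension):

```python
import hashlib

def chop_md(message: bytes, s: int = 128) -> bytes:
    """chopMD-style truncation: hash with an MD-style inner hash
    (illustratively SHA-256) and drop the last s bits of the digest."""
    digest = hashlib.sha256(message).digest()
    # byte-aligned chop for simplicity; s = 0 means no chopping
    assert s % 8 == 0 and s < 8 * len(digest)
    return digest[: len(digest) - s // 8]
```

    Chopping hides part of the final chaining value from the adversary, which is what makes the indifferentiability bounds of these constructions go through.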

    A Study on Facial Expression Change Detection Using Machine Learning Methods with Feature Selection Technique

    Get PDF
    Along with the fourth industrial revolution, research in biomedical engineering is being actively conducted. Among these fields, brain-computer interface (BCI) research, which studies direct interaction between the brain and external devices, is in the spotlight. However, electroencephalogram (EEG) data measured through a BCI contain a huge number of features, which can make analysis difficult because of the complex relationships between features. For this reason, research on BCIs using EEG data is often insufficient. Therefore, in this study, we develop a methodology for selecting features for a specific type of BCI that predicts whether a person correctly detects facial expression changes by classifying EEG-based features. We also investigate whether specific EEG features affect expression change detection. Various feature selection methods were used to check the influence of each feature on expression change detection, and the best combination was selected using several machine learning classification techniques. The best classification accuracy, 71%, was obtained with XGBoost using 52 features. EEG topography built from the selected major features showed that the detection of changes in facial expression largely engages brain activity in the frontal regions.
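    A filter-style feature-selection step of the kind combined in such studies can be sketched as follows (a pure-Python Fisher-score ranking for two classes; the study's actual selection methods and its XGBoost classifier are not reproduced here):

```python
from statistics import mean, pstdev

def fisher_scores(X, y):
    """Score each feature by |class-mean difference| / summed class spread,
    a simple filter-style criterion (an assumed stand-in, not the paper's)."""
    scores = []
    for j in range(len(X[0])):
        col0 = [row[j] for row, label in zip(X, y) if label == 0]
        col1 = [row[j] for row, label in zip(X, y) if label == 1]
        spread = pstdev(col0) + pstdev(col1) or 1e-12  # avoid divide-by-zero
        scores.append(abs(mean(col1) - mean(col0)) / spread)
    return scores

def select_top_k(X, y, k):
    """Return the indices of the k highest-scoring features, sorted."""
    scores = fisher_scores(X, y)
    ranked = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
    return sorted(ranked[:k])
```

    The selected index set would then be used to slice the EEG feature matrix before training whichever classifier is being evaluated.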

    Estimate-based goodness-of-fit test for large sparse multinomial distributions

    Pearson's chi-squared statistic (X2) does not in general follow a chi-square distribution when it is used for goodness-of-fit testing of a multinomial distribution based on sparse contingency-table data. We explore properties of the D2 statistic of [Zelterman, D., 1987. Goodness-of-fit tests for large sparse multinomial distributions. J. Amer. Statist. Assoc. 82 (398), 624-629], compare them with those of X2, and compare the power of the goodness-of-fit tests based on D2, X2, and the statistic (Lr) proposed by [Maydeu-Olivares, A., Joe, H., 2005. Limited- and full-information estimation and goodness-of-fit testing in 2^n contingency tables: A unified framework. J. Amer. Statist. Assoc. 100 (471), 1009-1020] when the given contingency table is very sparse. We show that the variance of D2 is not larger than the variance of X2 under null hypotheses in which all cell probabilities are positive, that the distribution of D2 becomes more skewed as the multinomial distribution becomes more asymmetric and sparse, and that, for the Lr statistic, the power of the goodness-of-fit test depends on the models selected for testing. A simulation experiment strongly recommends using both D2 and Lr for goodness-of-fit testing with large sparse contingency-table data.
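    For reference, the baseline statistic the paper compares against is Pearson's X2 = sum_i (O_i - E_i)^2 / E_i for a fully specified multinomial (the D2 and Lr statistics are not reproduced here):

```python
def pearson_x2(observed, probs):
    """Pearson's chi-squared statistic for multinomial counts `observed`
    under null cell probabilities `probs` (all assumed positive)."""
    n = sum(observed)
    x2 = 0.0
    for o, p in zip(observed, probs):
        e = n * p  # expected count under the null hypothesis
        x2 += (o - e) ** 2 / e
    return x2
```

    With many cells and few observations per cell, most E_i are small, which is exactly the sparse regime in which X2 departs from its nominal chi-square distribution.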