Exclusive Exponent Blinding May Not Suffice to Prevent Timing Attacks on RSA
The references [9,3,1] treat timing attacks on RSA with CRT and Montgomery's multiplication algorithm in unprotected implementations.
It has been widely believed that exponent blinding would prevent any timing attack on RSA.
At the cost of significantly more timing measurements, this paper extends the aforementioned attacks to RSA with CRT when Montgomery's multiplication algorithm and exponent blinding are applied.
Simulation experiments are conducted, which confirm the theoretical results. Effective countermeasures exist. In particular, the attack efficiency is higher than in the previous version [12], while large parts of both papers coincide.
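Exponent blinding, the countermeasure examined above, replaces the secret exponent d by d + r·φ(N) for a fresh random r on each exponentiation, which leaves the result unchanged because x^φ(N) ≡ 1 (mod N) for gcd(x, N) = 1. A minimal sketch with toy RSA parameters (the function name and key sizes are illustrative; real RSA-CRT code blinds the CRT exponents d_p and d_q instead):

```python
import secrets

def blinded_exponent(d, phi_n, k=32):
    """Return d + r*phi(N) for a fresh random r (exponent blinding).

    Since x**phi(N) == 1 mod N when gcd(x, N) == 1, exponentiating with
    the blinded exponent yields the same result as with d, but each
    call uses a different exponent, complicating timing attacks.
    Illustrative sketch only."""
    r = secrets.randbits(k) | 1   # fresh nonzero blinding factor
    return d + r * phi_n

# toy RSA numbers (insecure sizes, for illustration only)
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)               # private exponent (Python >= 3.8)
x = 42                            # toy "ciphertext", coprime to n
d_blind = blinded_exponent(d, phi)
assert pow(x, d_blind, n) == pow(x, d, n)   # same plaintext either way
```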
Weather Influence and Classification with Automotive Lidar Sensors
Lidar sensors are often used in mobile robots and autonomous vehicles to
complement camera, radar and ultrasonic sensors for environment perception.
Typically, perception algorithms are trained to detect only moving and static
objects and to perform ground estimation, and they intentionally ignore weather
effects to reduce false detections. In this work, we present an in-depth analysis of
automotive lidar performance under harsh weather conditions, i.e. heavy rain
and dense fog. An extensive data set has been recorded for various fog and rain
conditions, which is the basis for the conducted in-depth analysis of the point
cloud under changing environmental conditions. In addition, we introduce a
novel approach to detect and classify rain or fog with lidar sensors only, and
achieve a mean intersection over union (IoU) of 97.14 % for a data set in controlled
environments. The analysis of weather influences on the performance of lidar
sensors and the weather detection are important steps towards improving safety
levels for autonomous driving in adverse weather conditions by providing
reliable information to adapt vehicle behavior.
Comment: 8 pages, will be published in the IEEE IV 2019 Proceedings
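The score reported above is a mean intersection over union (IoU) averaged across classes. A minimal sketch of the metric on per-point class labels (the helper name and the toy classes are illustrative, not the paper's evaluation code):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection over union across classes.

    pred, target: integer arrays of per-point class labels.
    Classes absent from both arrays are skipped."""
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (target == c))
        union = np.sum((pred == c) | (target == c))
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

# toy example with 3 hypothetical classes (clear, rain, fog)
target = np.array([0, 0, 1, 1, 2, 2])
pred   = np.array([0, 0, 1, 2, 2, 2])
print(mean_iou(pred, target, 3))   # (1.0 + 0.5 + 2/3) / 3
```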
Timing attacks and local timing attacks against Barrett’s modular multiplication algorithm
Montgomery’s and Barrett’s modular multiplication algorithms are widely used in modular exponentiation algorithms, e.g. to compute RSA or ECC operations. While Montgomery’s multiplication algorithm has been studied extensively in the literature and many side-channel attacks have been described, to the best of our knowledge no thorough analysis exists for Barrett’s multiplication algorithm. This article closes this gap. For both Montgomery’s and Barrett’s multiplication algorithms, differences in the execution times are caused by conditional integer subtractions, so-called extra reductions. Barrett’s multiplication algorithm allows up to two extra reductions, and this feature increases the mathematical difficulties significantly.
We formulate and analyse a two-dimensional Markov process, from which we deduce relevant stochastic properties of Barrett’s multiplication algorithm within modular exponentiation algorithms. This allows us to transfer the timing attacks and local timing attacks (where a second side-channel attack reveals the execution times of the particular modular squarings and multiplications) on Montgomery’s multiplication algorithm to attacks on Barrett’s algorithm. However, there are also differences. Barrett’s multiplication algorithm requires additional attack substeps, and the attack efficiency is much more sensitive to variations of the parameters. We treat timing attacks on RSA with CRT, on RSA without CRT, and on Diffie-Hellman, as well as local timing attacks against these algorithms in the presence of basis blinding. Experiments confirm our theoretical results.
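The extra-reduction channel in Montgomery's algorithm can be sketched in a few lines: the final conditional subtraction depends on the operands, so its presence or absence modulates the execution time. A minimal sketch with toy parameters (the function name and numbers are illustrative; this is not the attacked implementations):

```python
def montgomery_mult(a, b, n, r, n_prime):
    """Montgomery product a*b*R^{-1} mod n, with r = R = 2^k > n and
    n_prime = -n^{-1} mod R.

    The data-dependent final subtraction (the 'extra reduction') is the
    source of the timing differences exploited by the attacks."""
    t = a * b
    m = (t * n_prime) % r
    u = (t + m * n) // r        # exact division by construction of m
    extra = u >= n              # conditional extra reduction
    if extra:
        u -= n
    return u, extra

# toy parameters: n = 13, R = 16, n_prime = -13^{-1} mod 16 = 11
print(montgomery_mult(7, 9, 13, 16, 11))    # (8, False): no extra reduction
print(montgomery_mult(11, 12, 13, 16, 11))  # (5, True): extra reduction taken
```

Barrett's algorithm differs in that up to two such conditional subtractions can occur, which is what makes its stochastic analysis harder.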
Subsampling and Knowledge Distillation On Adversarial Examples: New Techniques for Deep Learning Based Side Channel Evaluations
This paper has four main goals. First, we show how we solved the CHES 2018 AES challenge in the contest using essentially a linear classifier combined with a SAT solver and a custom error correction method. This part of the paper has previously appeared in a preprint by the current authors (e-print report 2019/094) and later as a contribution to a preprint write-up of the solutions by the three winning teams (e-print report 2019/860).
Second, we develop a novel deep neural network architecture for side-channel analysis that completely breaks the AES challenge, allowing for fairly reliable key recovery with just a single trace on the unknown-device part of the CHES challenge (with an expected success rate of roughly 70 percent if about 100 CPU hours are allowed for the equation solving stage of the attack). This solution significantly improves upon all previously published solutions of the AES challenge, including our baseline linear solution.
Third, we consider the question of leakage attribution for both the classifier we used in the challenge and for our deep neural network. Direct inspection of the weight vector of our machine learning model yields a lot of information on the implementation for our linear classifier. For the deep neural network, we test three other strategies (occlusion of traces; inspection of adversarial changes; knowledge distillation) and find that these can yield information on the leakage essentially equivalent to that gained by inspecting the weights of the simpler model.
Fourth, we study the properties of adversarially generated side-channel traces for our model. Partly reproducing recent work on useful features in adversarial examples in our application domain, we find that a linear classifier generalizing to an unseen device much better than our linear baseline can be trained using only adversarial examples (fresh random keys, adversarially perturbed traces) for our deep neural network. This gives a new way of extracting human-usable knowledge from a deep side channel model while also yielding insights on adversarial examples in an application domain where relatively few sources of spurious correlations between data and labels exist.
The experiments described in this paper can be reproduced using code available at https://github.com/agohr/ches2018
Breaking Masked Implementations of the Clyde-Cipher by Means of Side-Channel Analysis
In this paper we present our solution to the CHES Challenge 2020, the task of which was to break masked hardware and software implementations of the lightweight cipher Clyde by means of side-channel analysis. We target the secret cipher state after processing of the first S-box layer. Using the provided trace data we obtain a strongly biased posterior distribution for the secret-shared cipher state at the targeted point; this enables us to see exploitable biases even before the secret-sharing-based masking. These biases on the unshared state can be evaluated one S-box at a time and combined across traces, which enables us to recover likely key hypotheses S-box by S-box.
In order to see the shared cipher state, we employ a deep neural network similar to the one used by Gohr, Jacob and Schindler to solve the CHES 2018 AES challenge. We modify their architecture to predict the exact bit sequence of the secret-shared cipher state. We find that convergence of training on this task is unsatisfying with the standard encoding of the shared cipher state and therefore introduce a different encoding of the prediction target, which we call the scattershot encoding. In order to further investigate how exactly the scattershot encoding helps to solve the task at hand, we construct a simple synthetic task where convergence problems very similar to those we observed in our side-channel task appear with the naive target data encoding but disappear with the scattershot encoding.
We complete our analysis by showing results that we obtained with a “classical” method (as opposed to an AI-based method), namely the stochastic approach, which we first generalize for this purpose to the setting of shared keys. We show that the neural network draws on a much broader set of features, which may partially explain why the neural-network-based approach massively outperforms the stochastic approach. On the other hand, the stochastic approach provides insights into properties of the implementation, in particular the observation that the S-boxes behave very differently with respect to how easy or hard they are to predict.
CHES 2018 Side Channel Contest CTF - Solution of the AES Challenges
Alongside CHES 2018 the side channel contest 'Deep learning vs. classic profiling' was held.
Our team won both AES challenges (masked AES implementation), working under the handle AGSJWS.
Here we describe and analyse our attack.
We can solve the more difficult of the two challenges with two power traces, which is far less data than was available in the contest.
Our attack combines techniques from machine learning with classical techniques. The attack was superior to all classical and deep-learning-based attacks that we tried. Moreover, it provides some insights on the implementation.
Standardized visual EEG features predict outcome in patients with acute consciousness impairment of various etiologies.
Early prognostication in patients with acute consciousness impairment (ACI) is a challenging but essential task. Current prognostic guidelines vary with the underlying etiology. In particular, electroencephalography (EEG) is the most important paraclinical examination tool in patients with hypoxic ischemic encephalopathy (HIE), whereas it is not routinely used for outcome prediction in patients with traumatic brain injury (TBI).
Data from 364 critically ill patients with acute consciousness impairment (GCS ≤ 11 or FOUR ≤ 12) of various etiologies and without recent signs of seizures from a prospective randomized trial were retrospectively analyzed. Random forest classifiers were trained using 8 visual EEG features (first alone, then in combination with clinical features) to predict survival at 6 months or favorable functional outcome (defined as cerebral performance category 1-2).
The area under the ROC curve was 0.812 for predicting survival and 0.790 for predicting favorable outcome using EEG features. Adding clinical features did not improve the overall performance of the classifier (for survival: AUC = 0.806, p = 0.926; for favorable outcome: AUC = 0.777, p = 0.844). Survival could be predicted in all etiology groups: the AUC was 0.958 for patients with HIE, 0.955 for patients with TBI and other neurosurgical diagnoses, 0.697 for patients with metabolic, inflammatory or infectious causes for consciousness impairment and 0.695 for patients with stroke. Training the classifier separately on subgroups of patients with a given etiology (and thus using less training data) leads to poorer classification performance.
While prognostication was best for patients with HIE and TBI, our study demonstrates that similar EEG criteria can be used in patients with various causes of consciousness impairment, and that the size of the training set is more important than homogeneity of ACI etiology
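The evaluation pipeline described above (a random forest on 8 features, scored by ROC AUC) can be sketched as follows. Everything here is a stand-in: the synthetic data, the cohort-sized arrays, and the hyperparameters are assumptions, since the actual EEG features and outcomes are not reproduced in this abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 364                        # cohort size from the study
X = rng.normal(size=(n, 8))    # stand-in for the 8 visual EEG features
# stand-in binary outcome correlated with the first feature
y = (X[:, 0] + rng.normal(scale=1.0, size=n) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
# out-of-fold predicted probabilities, so the AUC is not optimistic
proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
print(f"AUC = {roc_auc_score(y, proba):.3f}")
```

Cross-validated probabilities keep train and test folds separate, mirroring how an AUC such as 0.812 would be estimated on a single cohort.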
Metal enrichment of the intra-cluster medium by thermally and cosmic-ray driven galactic winds
We investigate the efficiency and time-dependence of thermally and cosmic ray
driven galactic winds for the metal enrichment of the intra-cluster medium
(ICM) using a new analytical approximation for the mass outflow. The spatial
distribution of the metals is studied using radial metallicity profiles and 2D
metallicity maps of the model clusters as they would be observed by X-ray
telescopes like XMM-Newton. Analytical approximations for the mass loss by
galactic winds driven by thermal and cosmic ray pressure are derived from the
Bernoulli equation and implemented in combined N-body/hydrodynamic cosmological
simulations with a semi-analytical galaxy formation model. Observable
quantities like the mean metallicity, metallicity profiles, and 2D metal maps
of the model clusters are derived from the simulations. We find that galactic
winds alone cannot account for the observed metallicity of the ICM. At redshift
the model clusters have metallicities originating from galactic winds
which are almost a factor of 10 lower than the observed values. For massive,
relaxed clusters we find, as in previous studies, a central drop in the
metallicity due to a suppression of the galactic winds by the pressure of the
ambient ICM. Combining ram-pressure stripping and galactic winds we find radial
metallicity profiles of the model clusters which agree qualitatively with
observed profiles. Only in the inner parts of massive clusters are the observed
profiles steeper than in the simulations. Even the combination of galactic
winds and ram-pressure stripping yields too low values for the ICM
metallicities. The slope of the redshift evolution of the mean metallicity in
the simulations agrees reasonably well with recent observations.
Comment: 9 pages, 6 figures, accepted by A&
Inhomogeneous Metal Distribution in the Intra-Cluster Medium
The hot gas that fills the space between galaxies in clusters is rich in
metals. In their large potential wells, galaxy clusters accumulate metals over
the whole cluster history and hence they retain important information on
cluster formation and evolution. We use a sample of 5 cool core clusters to
study the distribution of metals in the ICM. We investigate whether the X-ray
observations yield good estimates for the metal mass and whether the heavy
elements abundances are consistent with a certain relative fraction of SN Ia to
SNCC. We derive detailed metallicity maps of the clusters from XMM-Newton
observations and we use them as a measure for the metal mass in the ICM. We
determine radial profiles for several elements and using population synthesis
and chemical enrichment models, we study the agreement between the measured
abundances and the theoretical yields. We show that even in relaxed clusters
the distribution of metals shows many inhomogeneities. Using metal maps
usually gives a metal mass 10-30% higher than the metal mass computed using a
single extraction region; hence it is expected that most previous metal mass
determinations have underestimated the metal mass. The abundance ratios of
α-elements to Fe, even in the central parts of clusters, are consistent
with an enrichment due to the combination of SN Ia and SNCC.
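The map-versus-single-region effect mentioned above can be illustrated with a toy computation. All numbers are invented; only the mechanism mirrors the text: when metallicity and gas mass vary together across map cells, a cell-by-cell sum exceeds the estimate from one region-averaged metallicity.

```python
import numpy as np

# toy map of four cells (arbitrary units; values are made up)
gas_mass = np.array([4.0, 3.0, 2.0, 1.0])   # gas mass per map cell
Z        = np.array([0.5, 0.4, 0.2, 0.1])   # metallicity per cell (solar)

# metal mass from the 2D map: weight each cell's metallicity by its gas mass
m_map = np.sum(Z * gas_mass)

# single-extraction-region estimate: one average metallicity times total gas
Z_single = np.mean(Z)
m_single = Z_single * gas_mass.sum()

print(m_map, m_single)   # the map-based mass is higher in this toy case
```

In this toy case the map-based mass is about 23% higher, within the 10-30% range quoted above; a real single-region fit would return an emission-weighted rather than a plain mean metallicity, but the direction of the bias is the same.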
Outline of Synthesis of Cognitive and Socio-cultural Foundations of Scientific Knowledge Evolution in Research Programs of Western Philosophy of Science
The article analyses the development of the cognitive sociology of science, in whose object field the connection between cognitive and social structures of science is traced. The role of context in the formation of scientific knowledge is defined. It is stated that the basis for the development of the research program of cognitive sociology of science was a reconsideration of the standard concept of science as a complex of gnoseological, epistemological and methodological interpretations of the nature and morphology of produced scientific knowledge, the methods for its explanation, and its ideals of scientificity. The difference between "strong" and "weak" varieties of scientific knowledge evolution, developed in Western philosophy of science, is considered. "Social studies of science" are reviewed as a form of social constructivism and relativism, exhibiting their specific nature in macro-analytical and micro-analytical strategies for the analysis of scientific knowledge evolution. The thesis that the multidimensionality of science cannot be adequately interpreted by focusing only on the conceptual history of science is proved.