
    Fine-grained timing using genetic programming

    In previous work, we demonstrated that Genetic Programming can be used to minimise the resource consumption of software, such as its power consumption or execution time. In this paper, we investigate the extent to which Genetic Programming can be used to gain fine-grained control over software timing. We introduce the ideas behind our work and carry out experiments, finding that Genetic Programming is indeed able to produce software with unusual and desirable timing properties, where it is not obvious how a manual approach could replicate such results. In general, we find that Genetic Programming is most effective at controlling statistical properties of software timing rather than precisely controlling the timing for individual inputs. This control may find useful application in cryptography and embedded systems.
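
    The listing does not include the authors' code; as a rough, hypothetical illustration of the underlying idea, the sketch below runs a simple evolutionary loop that mutates a timing-related parameter and keeps only changes that reduce the variance of measured execution times. The toy workload, the padding representation, and the fitness function are assumptions for illustration, not the paper's system.

```python
# Hypothetical sketch of the general idea: a (1+1)-style evolutionary loop that
# mutates per-branch "padding" counts and keeps a candidate only if it reduces
# the variance of measured execution times. Purely illustrative; the paper's
# system evolves program code itself, not a padding parameter.
import random
import statistics
import time

def run_variant(padding, secret_bit):
    """Toy workload: two branches with different base costs, each padded
    by a tunable number of dummy loop iterations."""
    start = time.perf_counter()
    if secret_bit:
        acc = sum(i * i for i in range(2000 + padding[0]))
    else:
        acc = sum(i * i for i in range(1000 + padding[1]))
    return time.perf_counter() - start

def fitness(padding, samples=200):
    """Lower is better: timing variance across random secret bits."""
    times = [run_variant(padding, random.getrandbits(1)) for _ in range(samples)]
    return statistics.pvariance(times)

best, best_fit = [0, 0], fitness([0, 0])
for _ in range(50):
    child = [max(0, g + random.randint(-200, 200)) for g in best]
    child_fit = fitness(child)
    if child_fit < best_fit:      # keep the mutant only if it improves fitness
        best, best_fit = child, child_fit

print("best padding:", best, "timing variance:", best_fit)
```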

    Mitigating Branch-Shadowing Attacks on Intel SGX using Control Flow Randomization

    Intel Software Guard Extensions (SGX) is a promising hardware-based technology for protecting sensitive computations from potentially compromised system software. However, recent research has shown that SGX is vulnerable to branch-shadowing -- a side-channel attack that leaks the fine-grained (branch-granularity) control flow of an enclave (SGX-protected code), potentially revealing sensitive data to the attacker. The previously proposed defense mechanism, called Zigzagger, attempted to hide the control flow, but has been shown to be ineffective if the attacker can single-step through the enclave using the recent SGX-Step framework. Taking these stronger attacker capabilities into account, we propose a new defense against branch-shadowing based on control flow randomization. Our scheme is inspired by Zigzagger but provides quantifiable security guarantees with respect to a tunable security parameter. Specifically, we eliminate conditional branches and hide the targets of unconditional branches using a combination of compile-time modifications and run-time code randomization. We evaluated the performance of our approach by measuring the run-time overhead of ten benchmark programs from SGX-Nbench in an SGX environment.
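
    As a loose illustration of one ingredient named in the abstract (eliminating conditional branches that depend on secrets), the sketch below contrasts a branching selection with a branchless, mask-based one. It is written in Python for brevity and is only an assumption about the flavor of the transformation; the paper's scheme operates on enclave machine code and adds run-time randomization on top.

```python
# Illustration of one ingredient named in the abstract -- eliminating
# secret-dependent conditional branches -- shown here as arithmetic selection.
# This is not the paper's compiler pass; real transformations operate on
# enclave machine code, and values are assumed to fit in 32 bits.

def select_branching(secret_bit, a, b):
    # Leaks the secret via the taken branch (what branch-shadowing observes).
    if secret_bit:
        return a
    return b

def select_branchless(secret_bit, a, b):
    # mask is all ones when secret_bit == 1 and all zeros otherwise, so the
    # same instruction sequence executes regardless of the secret.
    mask = -secret_bit & 0xFFFFFFFF
    return (a & mask) | (b & ~mask & 0xFFFFFFFF)

assert select_branchless(1, 7, 9) == select_branching(1, 7, 9) == 7
assert select_branchless(0, 7, 9) == select_branching(0, 7, 9) == 9
```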

    Evaluating Modeling and Validation Strategies for Tooth Loss

    Prediction models learn patterns from available data (training) and are then validated on new data (testing). Prediction modeling is increasingly common in dental research. We aimed to evaluate how different model development and validation steps affect the predictive performance of tooth loss prediction models for patients with periodontitis. Two independent cohorts (627 patients, 11,651 teeth) were followed over a mean ± SD of 18.2 ± 5.6 y (Kiel cohort) and 6.6 ± 2.9 y (Greifswald cohort). Tooth loss and 10 patient- and tooth-level predictors were recorded. The impact of different model development and validation steps was evaluated: 1) model complexity (logistic regression, recursive partitioning, random forest, extreme gradient boosting), 2) sample size (full data set or 10%, 25%, or 75% of cases dropped at random), 3) prediction periods (maximum 10, 15, or 20 y or uncensored), and 4) validation schemes (internal or external by center/time). Tooth loss was generally a rare event (880 teeth were lost). All models showed limited sensitivity but high specificity. Patients' age, tooth loss at baseline, and probing pocket depths showed high variable importance. More complex models (random forest, extreme gradient boosting) had no consistent advantages over simpler ones (logistic regression, recursive partitioning). Internal validation (in sample) overestimated the predictive power (area under the curve up to 0.90), while external validation (out of sample) found lower areas under the curve (range 0.62 to 0.82). Reducing the sample size decreased the predictive power, particularly for the more complex models. Censoring the prediction period had only limited impact. When the model was trained in one period and tested in another, model outcomes were similar to the base case, indicating temporal validation as a valid option. No model showed higher accuracy than the no-information rate. In conclusion, none of the developed models would be useful in a clinical setting, despite their high accuracy. During modeling, rigorous development and external validation should be applied and reported accordingly.
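
    A minimal sketch of the kind of comparison described, assuming hypothetical column names and center labels, might look as follows: one simple and one complex classifier, each scored with internal cross-validation and with external validation on the other center. This is not the study's actual pipeline.

```python
# Minimal sketch of the comparison described above: a simple vs. a complex model,
# each assessed with internal (cross-validated) and external (other-center)
# validation. File name, column names, and center coding are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score

df = pd.read_csv("teeth.csv")                      # hypothetical tooth-level data
predictors = ["age", "baseline_tooth_loss", "probing_pocket_depth"]
X, y = df[predictors], df["tooth_lost"]
train = df["center"] == "Kiel"                     # fit on one center ...

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(n_estimators=500))]:
    internal_auc = cross_val_score(model, X[train], y[train],
                                   cv=5, scoring="roc_auc").mean()
    model.fit(X[train], y[train])                  # ... validate on the other
    external_auc = roc_auc_score(y[~train], model.predict_proba(X[~train])[:, 1])
    print(f"{name}: internal AUC {internal_auc:.2f}, external AUC {external_auc:.2f}")
```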

    Impact of Genetic Polymorphisms on the Smoking-related Risk of Periodontal Disease: the Population-based Study SHIP

    Periodontitis is a bacterial inflammatory disease leading to attachment loss and, ultimately, tooth loss. Its risk pattern is multifactorial, including bacterial challenge, smoking, age, sex, diabetes, and socio-economic and genetic factors. Smoking has the highest impact on the course of the disease, modulated by all the other factors. Here, we report the relationship between smoking and genetic polymorphisms implicated in the pathogenesis.
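
    For illustration only, a gene-by-smoking interaction of this kind is typically examined with a logistic regression containing an interaction term; the sketch below uses hypothetical variable names, not the SHIP study's actual model.

```python
# Hedged sketch of how a gene-by-smoking interaction on periodontitis risk is
# commonly modelled; variable names and file are hypothetical placeholders,
# not the SHIP study's actual coding or analysis.
import pandas as pd
import statsmodels.formula.api as smf

ship = pd.read_csv("ship.csv")            # hypothetical extract of the cohort
model = smf.logit(
    "periodontitis ~ smoking * risk_allele + age + sex + diabetes",
    data=ship,
).fit()
print(model.summary())   # the smoking:risk_allele term captures the interaction
```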

    On EPR paradox, Bell's inequalities and experiments which prove nothing

    This article shows that there is no paradox. Violation of Bell's inequalities should not be identified with a proof of non-locality in quantum mechanics. A number of past experiments are reviewed, and it is concluded that the experimental results should be re-evaluated. The results of the experiments with atomic cascades are shown not to contradict local realism. The article points out flaws in the experiments with down-converted photons. The experiments with a neutron interferometer measuring "contextuality" and Bell-like inequalities are analyzed, and it is shown that the experimental results can be explained without such notions. An alternative experiment is proposed to prove the validity of local realism. Comment: 27 pages, 8 figures. The text and abstract have been lightly edited, and equations (49) and (50) have been corrected.
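
    For context, one standard form of the inequalities at issue is the CHSH inequality: local hidden-variable models bound a particular combination of correlations by 2, while quantum mechanics allows values up to 2√2.

```latex
% CHSH form of Bell's inequality. E(a,b) denotes the correlation of measurement
% outcomes for analyzer settings a and b.
\[
  S = E(a,b) - E(a,b') + E(a',b) + E(a',b'),
  \qquad
  |S| \le 2 \ \text{(local hidden variables)},
  \qquad
  |S| \le 2\sqrt{2} \ \text{(quantum mechanics)}.
\]
```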

    System-level Non-interference for Constant-time Cryptography

    Cache-based attacks are a class of side-channel attacks that are particularly effective in virtualized or cloud-based environments, where they have been used to recover secret keys from cryptographic implementations. One common approach to thwart cache-based attacks is to use constant-time implementations, i.e. implementations that do not branch on secrets and do not perform memory accesses that depend on secrets. However, there is no rigorous proof that constant-time implementations are protected against concurrent cache attacks in virtualization platforms with shared cache; moreover, many prominent implementations are not constant-time. An alternative approach is to rely on system-level mechanisms. One recent such mechanism is stealth memory, which provisions a small amount of private cache for programs to carry potentially leaking computations securely. Stealth memory induces a weak form of constant-time, called S-constant-time, which encompasses some widely used cryptographic implementations. However, there is no rigorous analysis of stealth memory and S-constant-time, and no tool support for checking if applications are S-constant-time. We propose a new information-flow analysis that checks if an x86 application executes in constant-time, or in S-constant-time. Moreover, we prove that constant-time (resp. S-constant-time) programs do not leak confidential information through the cache to other operating systems executing concurrently on virtualization platforms (resp. platforms supporting stealth memory). The soundness proofs are based on new theorems of independent interest, including isolation theorems for virtualization platforms (resp. platforms supporting stealth memory), and proofs that constant-time implementations (resp. S-constant-time implementations) are non-interfering with respect to a strict information-flow policy which disallows control flow and memory accesses that depend on secrets. We formalize our results using the Coq proof assistant and we demonstrate the effectiveness of our analyses on cryptographic implementations, including PolarSSL AES, DES and RC4, SHA256 and Salsa20.
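
    To make the constant-time notion concrete, the sketch below contrasts an early-exit comparison, whose running time depends on the secret, with a constant-time one that does the same work for every input. It is written in Python for readability and is only an illustration of the property; the paper's analysis targets x86 binaries.

```python
# Illustration of the constant-time property referred to above (no secret-
# dependent branches or memory accesses), written in Python for readability;
# the paper itself analyzes x86 binaries. Not the paper's analysis or code.

def leaky_equal(secret: bytes, guess: bytes) -> bool:
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:            # early exit: running time reveals how many
            return False      # leading bytes of the guess are correct
    return True

def constant_time_equal(secret: bytes, guess: bytes) -> bool:
    if len(secret) != len(guess):
        return False
    diff = 0
    for s, g in zip(secret, guess):
        diff |= s ^ g         # same work no matter where a mismatch occurs
    return diff == 0
```

    Python's standard library exposes the same behavior as hmac.compare_digest, which is the usual way to compare secret values in Python code.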

    Quantifying Timing Leaks and Cost Optimisation

    We develop a new notion of security against timing attacks where the attacker is able to simultaneously observe the execution time of a program and the probability of the values of low variables. We then show how to measure the security of a program with respect to this notion via a computable estimate of the timing leakage and use this estimate for cost optimisation. Comment: 16 pages, 2 figures, 4 tables. A shorter version is included in the proceedings of ICICS'08 - the 10th International Conference on Information and Communications Security, 20-22 October 2008, Birmingham, UK.
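
    The paper defines its own computable estimate of the timing leakage; as a generic, hypothetical illustration of quantifying such a leak, the sketch below estimates the mutual information between a secret bit and a discretized running time from observation samples.

```python
# Generic illustration of quantifying a timing leak: a sample-based estimate of
# the mutual information (in bits) between a secret and a discretized running
# time. The measure and the toy data are illustrative, not the paper's estimate.
import math
from collections import Counter

def mutual_information(pairs):
    """pairs: iterable of (secret, timing_bucket) observations."""
    pairs = list(pairs)
    n = len(pairs)
    joint = Counter(pairs)
    p_secret = Counter(s for s, _ in pairs)
    p_time = Counter(t for _, t in pairs)
    mi = 0.0
    for (s, t), c in joint.items():
        p_st = c / n
        mi += p_st * math.log2(p_st / ((p_secret[s] / n) * (p_time[t] / n)))
    return mi

# Toy data: the timing bucket matches the secret bit 80% of the time,
# so roughly 0.28 bits of the secret leak through the observable timing.
observations = [(0, 0)] * 40 + [(0, 1)] * 10 + [(1, 1)] * 40 + [(1, 0)] * 10
print(f"estimated timing leakage: {mutual_information(observations):.3f} bits")
```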

    Leading-effect vs. Risk-taking in Dynamic Tournaments: Evidence from a Real-life Randomized Experiment

    Two 'order effects' may emerge in dynamic tournaments with information feedback. First, participants adjust effort across stages, which could advantage the leading participant, who faces a larger 'effective prize' after an initial victory (leading-effect). Second, participants lagging behind may increase risk at the final stage, as they have 'nothing to lose' (risk-taking). We use a randomized natural experiment in professional two-game soccer tournaments in which the treatment (the order of a stage-specific advantage) and team characteristics, e.g. ability, are independent. We develop an identification strategy to test for leading-effects while controlling for risk-taking. We find no evidence of leading-effects and negligible risk-taking effects.
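
    As a hypothetical illustration of the kind of identification exercise described (not the authors' specification), the regression sketch below asks whether the randomly assigned order of the stage-specific advantage predicts winning the tie once an ability measure is held fixed.

```python
# Minimal sketch of the kind of leading-effect test described above: does the
# randomly assigned order of the stage-specific advantage predict winning the
# tie, holding an ability measure fixed? Data set and variable names are
# hypothetical placeholders, not the paper's specification.
import pandas as pd
import statsmodels.formula.api as smf

ties = pd.read_csv("two_leg_ties.csv")    # hypothetical data on two-game ties
model = smf.logit("won_tie ~ advantage_first + ability_diff", data=ties).fit()
print(model.summary())    # a near-zero coefficient on advantage_first would
                          # indicate no leading-effect
```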