
    Mitigating Branch-Shadowing Attacks on Intel SGX using Control Flow Randomization

    Intel Software Guard Extensions (SGX) is a promising hardware-based technology for protecting sensitive computations from potentially compromised system software. However, recent research has shown that SGX is vulnerable to branch shadowing, a side-channel attack that leaks the fine-grained (branch-granularity) control flow of an enclave (SGX-protected code), potentially revealing sensitive data to the attacker. The previously proposed defense mechanism, called Zigzagger, attempted to hide the control flow, but has been shown to be ineffective if the attacker can single-step through the enclave using the recent SGX-Step framework. Taking these stronger attacker capabilities into account, we propose a new defense against branch shadowing based on control flow randomization. Our scheme is inspired by Zigzagger, but provides quantifiable security guarantees with respect to a tunable security parameter. Specifically, we eliminate conditional branches and hide the targets of unconditional branches using a combination of compile-time modifications and run-time code randomization. We evaluated the performance of our approach by measuring the run-time overhead of ten benchmark programs from SGX-Nbench in an SGX environment.
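    As a point of reference for the first step described above (eliminating conditional branches), the minimal C sketch below shows the general idea of replacing a secret-dependent conditional branch with branchless selection, so that a branch-shadowing attacker observing branch-predictor state learns nothing about the secret. The function names are invented for illustration; this is not the paper's actual compiler pass, and it omits the run-time randomization of unconditional branch targets.

    /* Illustrative sketch only: assumes secret_bit is 0 or 1. */
    #include <stdint.h>

    /* Original enclave code: the taken/not-taken pattern of the branch
     * leaks secret_bit to a branch-shadowing attacker. */
    uint32_t leaky_select(uint32_t secret_bit, uint32_t a, uint32_t b)
    {
        if (secret_bit)                 /* secret-dependent conditional branch */
            return a;
        return b;
    }

    /* Branchless rewrite: the same value is computed with masking, so no
     * conditional branch on the secret remains in the instruction stream. */
    uint32_t branchless_select(uint32_t secret_bit, uint32_t a, uint32_t b)
    {
        uint32_t mask = (uint32_t)0 - (secret_bit & 1u);  /* 0x0 or 0xFFFFFFFF */
        return (a & mask) | (b & ~mask);
    }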

    A Turning Point for Verified Spectre Sandboxing

    Spectre attacks enable an attacker to access restricted data in an application's memory. Both the academic community and industry have developed several mitigations to block Spectre attacks, but to date, very few have been formally vetted; most are "best effort" strategies. Formal guarantees are particularly crucial for protecting isolated environments such as sandboxes against Spectre attacks: in such environments, a subtle flaw in the mitigation would allow untrusted code to break out of the sandbox and access trusted memory regions. In our work, we develop principled foundations for building isolated environments that are resistant to Spectre attacks. We propose a formal framework for reasoning about sandbox execution and Spectre attacks. We formalize properties that sound mitigation strategies must fulfill, and we show how various existing mitigations satisfy (or fail to satisfy!) these properties.
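    For readers unfamiliar with the attack class being formalized, the C fragment below sketches the canonical Spectre-v1 bounds-check-bypass gadget as it might appear in a sandbox runtime, together with the widely deployed index-masking mitigation. The names and sizes are illustrative assumptions; this is not part of the paper's formal framework.

    #include <stddef.h>
    #include <stdint.h>

    #define SANDBOX_SIZE 4096                 /* power of two, by assumption */
    static uint8_t sandbox_mem[SANDBOX_SIZE];
    static uint8_t probe[256 * 64];           /* cache side channel observable by the attacker */
    static volatile uint8_t sink;             /* keeps the probing load from being optimized away */

    /* Vulnerable: under misspeculation the bounds check is bypassed, the
     * out-of-bounds load reads trusted memory, and its value is encoded
     * into the cache via the probe array. */
    void sandbox_read_leaky(size_t idx)
    {
        if (idx < SANDBOX_SIZE) {
            uint8_t v = sandbox_mem[idx];
            sink = probe[v * 64];
        }
    }

    /* Mitigated: branchless index masking confines even speculatively
     * executed loads to the sandbox region. */
    void sandbox_read_masked(size_t idx)
    {
        if (idx < SANDBOX_SIZE) {
            idx &= (size_t)(SANDBOX_SIZE - 1);
            uint8_t v = sandbox_mem[idx];
            sink = probe[v * 64];
        }
    }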

    Verifying constant-time implementations

    The constant-time programming discipline is an effective countermeasure against timing attacks, which can lead to complete breaks of otherwise secure systems. However, adhering to constant-time programming is hard on its own, and extremely hard under additional efficiency and legacy constraints. This makes automated verification of constant-time code an essential component for building secure software. We propose a novel approach for verifying constant-time security of real-world code. Our approach is able to validate implementations that locally and intentionally violate the constant-time policy, when such violations are benign and leak no more information than the public outputs of the computation. Such implementations, which are used in cryptographic libraries to obtain important speedups or to comply with legacy APIs, would be declared insecure by all prior solutions. We implement our approach in a publicly available, cross-platform, and fully automated prototype, ct-verif, that leverages the SMACK and Boogie tools and verifies optimized LLVM implementations. We present verification results obtained over a wide range of constant-time components from the NaCl, OpenSSL, FourQ and other off-the-shelf libraries. The diversity and scale of our examples, as well as the fact that we deal with top-level APIs rather than being limited to low-level leaf functions, distinguishes ct-verif from prior tools. Our approach is based on a simple reduction of constant-time security of a program P to safety of a product program Q that simulates two executions of P. We formalize and verify the reduction for a core high-level language using the Coq proof assistant. The first two authors were funded by Project “TEC4Growth - Pervasive Intelligence, Enhancers and Proofs of Concept with Industrial Impact/NORTE-01-0145-FEDER-000020”, which is financed by the North Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement, and through the European Regional Development Fund (ERDF). The third and fourth authors were supported by projects S2013/ICE2731 N-GREENS Software-CM and ONR Grants N000141210914 (AutoCrypt) and N000141512750 (SynCrypt). The fourth author was also supported by FP7 Marie Curie Actions-COFUND 291803 (Amarout II). We thank Peter Schwabe for providing us with a collection of negative examples. We thank Hovav Shacham, Craig Costello and Patrick Longa for helpful observations on our verification results.
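    The product-program reduction described at the end of the abstract can be pictured on a toy example. The hand-written C sketch below is only illustrative (ct-verif constructs the product automatically at the LLVM level, and this example program is invented): P leaks its secret through an early-exit branch, and its product Q turns that leak into an ordinary assertion failure, so a standard safety verifier suffices.

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    /* P: early-exit comparison. NOT constant time: the loop exits as soon
     * as a byte differs, so control flow depends on the secret. */
    int P(const uint8_t *secret, const uint8_t *pub, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            if (secret[i] != pub[i])      /* secret-dependent branch */
                return 0;
        }
        return 1;
    }

    /* Q: product of two executions of P that share the public inputs
     * (pub, len) but carry independent secrets s1 and s2. Every
     * secret-dependent branch of P becomes an assertion that both runs
     * take the same direction; Q is assertion-safe for all such input
     * pairs if and only if P's control flow is independent of the secret. */
    int Q(const uint8_t *s1, const uint8_t *s2,
          const uint8_t *pub, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            int c1 = (s1[i] != pub[i]);
            int c2 = (s2[i] != pub[i]);
            assert(c1 == c2);             /* fails here: control-flow leak */
            if (c1)
                return 0;
        }
        return 1;
    }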