
    CapablePtrs: Securely Compiling Partial Programs using the Pointers-as-Capabilities Principle

    Capability machines such as CHERI provide memory capabilities that compilers can use to obtain security benefits for compiled code (e.g., memory safety). The C to CHERI compiler, for example, achieves memory safety by following a principle called "pointers as capabilities" (PAC). Informally, PAC says that a compiler should represent a source language pointer as a machine code capability. But the security properties of PAC compilers are not yet well understood. We show that memory safety is only one aspect, and that PAC compilers can provide significant additional security guarantees for partial programs: the compiler can provide guarantees for a compilation unit, even if that compilation unit is later linked to attacker-controlled machine code. This paper is the first to study the security of PAC compilers for partial programs formally. We prove for a model of such a compiler that it is fully abstract. The proof uses a novel proof technique (dubbed TrICL, read "trickle"), which is of broad interest because it reuses and extends the compiler correctness relation in a natural way, as we demonstrate. We implement our compiler on top of the CHERI platform and show that it can compile legacy C code with minimal code changes. We provide performance benchmarks showing that performance overhead is proportional to the number of cross-compilation-unit function calls.
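    To make the pointers-as-capabilities idea concrete, the sketch below models a capability in plain C as a pointer enriched with bounds and permissions, with every dereference checked. This is only an illustration of the principle: cap_t and cap_load are hypothetical names, and on real CHERI hardware the representation and checks are enforced by the ISA, not by software.

        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* Hypothetical software model of a memory capability: a pointer
         * enriched with bounds and permissions. Real CHERI capabilities
         * are hardware-enforced; this struct only illustrates the idea. */
        typedef struct {
            uintptr_t base;    /* lowest address the capability may touch */
            size_t    length;  /* size of the addressable region */
            uintptr_t cursor;  /* current address (the "pointer" part) */
            int       perm_load;
        } cap_t;

        /* A checked load: a PAC compiler would emit a capability load
         * instruction here, and the hardware would perform these checks. */
        static uint8_t cap_load(cap_t c) {
            if (!c.perm_load || c.cursor < c.base ||
                c.cursor >= c.base + c.length) {
                fprintf(stderr, "capability violation\n");
                abort();  /* the hardware traps instead of corrupting memory */
            }
            return *(uint8_t *)c.cursor;
        }

        int main(void) {
            uint8_t buf[16] = {42};
            cap_t p = { (uintptr_t)buf, sizeof buf, (uintptr_t)buf, 1 };
            printf("%d\n", cap_load(p)); /* in bounds: prints 42 */
            p.cursor += 16;              /* one past the end */
            cap_load(p);                 /* out of bounds: traps */
            return 0;
        }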

    On-stack replacement, distilled

    On-stack replacement (OSR) is an essential technology for adaptive optimization, allowing changes to code actively executing in a managed runtime. The engineering aspects of OSR are well known among VM architects, with several implementations available to date. However, OSR has yet to be explored as a general means of transferring execution between related program versions, which could pave the road to unprecedented applications that stretch beyond VMs. We aim to fill this gap with a constructive and provably correct OSR framework, allowing a class of general-purpose transformation functions to yield a special-purpose replacement. We describe and evaluate an implementation of our technique in LLVM. As a novel application of OSR, we present a feasibility study on debugging of optimized code, showing how our techniques can be used to fix variables that hold incorrect values at breakpoints due to optimizations.
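    The sketch below illustrates the core mechanism in C: a long-running loop in a base version of a function reaches an OSR point, a transformation function maps its live state onto the frame layout of a replacement version, and execution resumes there. The names (base_state, osr_map, opt_continue) are hypothetical; the paper's framework operates at the LLVM IR level and generates this kind of glue automatically.

        #include <stdio.h>

        typedef struct { long i, acc; } base_state;
        typedef struct { long i, acc; } opt_state;

        /* The transformation function: maps live variables of the base
         * frame onto the layout expected by the replacement version. */
        static opt_state osr_map(base_state s) {
            opt_state t = { s.i, s.acc };
            return t;
        }

        /* Replacement version: resumes the loop from the mapped state
         * (imagine this body is the aggressively optimized code). */
        static long opt_continue(opt_state s, long n) {
            for (; s.i < n; s.i++)
                s.acc += s.i;
            return s.acc;
        }

        static long sum_base(long n) {
            base_state s = { 0, 0 };
            for (; s.i < n; s.i++) {
                if (s.i == 1000)                        /* OSR point: loop is hot */
                    return opt_continue(osr_map(s), n); /* replace and resume */
                s.acc += s.i;
            }
            return s.acc;
        }

        int main(void) {
            printf("%ld\n", sum_base(100000)); /* same result with or without OSR */
            return 0;
        }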

    Formal Verification of Security Protocol Implementations: A Survey

    Automated formal verification of security protocols has mostly focused on analyzing high-level abstract models which, however, differ significantly from real protocol implementations written in programming languages. Recently, some researchers have started investigating techniques that bring automated formal proofs closer to real implementations. This paper surveys these attempts, focusing on approaches that target the application code that implements the protocol logic, rather than the libraries that implement cryptography. These approaches assume that the libraries correctly implement certain models; the aim is to derive formal proofs that, under this assumption, give assurance about the application code that implements the protocol logic. The two main approaches, model extraction and code generation, are presented, along with the main techniques adopted for each approach.
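    As a small illustration of the setting these approaches target, the C sketch below separates the protocol logic (the verification target) from a cryptographic library that is simply assumed correct. The names enc and dec are hypothetical stand-ins, implemented here as a toy XOR cipher only so the example runs; a proof would treat them as abstract operations satisfying dec(k, enc(k, m)) = m.

        #include <stdio.h>
        #include <string.h>

        /* Assumed-correct crypto library: a toy XOR "cipher" standing in
         * for real primitives, which a proof would model symbolically. */
        static void enc(unsigned char k, const char *m, char *out, size_t n) {
            for (size_t i = 0; i < n; i++) out[i] = m[i] ^ k;
        }
        static void dec(unsigned char k, const char *c, char *out, size_t n) {
            for (size_t i = 0; i < n; i++) out[i] = c[i] ^ k;
        }

        /* Application code implementing the protocol logic: this is the
         * part a model-extraction tool would turn into a verifiable model. */
        int main(void) {
            const char msg[] = "nonce:42";
            char wire[sizeof msg], got[sizeof msg];
            enc(0x5a, msg, wire, sizeof msg);  /* initiator sends */
            dec(0x5a, wire, got, sizeof msg);  /* responder receives */
            printf("%s\n", memcmp(msg, got, sizeof msg) == 0
                               ? "agreed" : "mismatch");
            return 0;
        }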

    Realistic Correct Systems Implementation

    The present article and the forthcoming second part on Trusted Compiler Implementation address the correct construction and functioning of large computer-based systems. In view of so many annoying and dangerous system misbehaviors we ask: can informaticians rightly be held accountable for the incorrectness of systems, and can they justify that systems work correctly as intended? We understand the word justification in the following sense: the design of computer-based systems, the formulation of mathematical models of information flows, and the construction of controlling software are to be such that the expected system effects, the absence of internal failures, and robustness against misuse and malicious external attacks are foreseeable as logical consequences of the models. For more than 40 years, theoretical informatics, software engineering and compiler construction have made important contributions to correct specification and also to correct high-level implementation of compilers. But the third step, translation (bootstrapping) of high-level compiler programs to host machine code by existing host compilers, is just as important. So far there are no realistic recipes to close this correctness gap, although it has been known for some years that trust in executable code can dangerously be compromised by Trojan horses in compiler executables, even if they pass the strongest tests. In the present first article we give a comprehensive motivation and develop a mathematical theory in order to conscientiously prove the correctness of an initial, fully trusted compiler executable. The task is modularized in three steps. The third step, machine-level compiler implementation verification, is the topic of the forthcoming second part on Trusted Compiler Implementation. It closes the implementation gap, not only for compilers but also for correct software-based systems in general. Thus the two articles together give a rather confident answer to the question raised in the title.
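    As a one-line sketch of the kind of correctness at stake (our notation, not necessarily the article's exact formulation), a compiler C from source programs to machine programs is correct when compilation preserves observable semantics:

        % A sketch: sem_M and sem_S are the observable semantics of
        % machine and source programs; the article's notion may be a
        % refinement relation rather than an equality.
        \forall p \in \mathit{Source}:\quad
          \mathit{sem}_M\big(C(p)\big) \;=\; \mathit{sem}_S(p)
        % The bootstrapping gap: this must hold not only for the compiler
        % as a high-level program, but for the machine-code executable
        % obtained by translating it with an existing host compiler.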

    DR.SGX: Hardening SGX Enclaves against Cache Attacks with Data Location Randomization

    Recent research has demonstrated that Intel's SGX is vulnerable to various software-based side-channel attacks. In particular, attacks that monitor CPU caches shared between the victim enclave and untrusted software enable accurate leakage of secret enclave data. Known defenses assume developer assistance, require hardware changes, impose high overhead, or prevent only some of the known attacks. In this paper we propose data location randomization as a novel defensive approach to address the threat of side-channel attacks. Our main goal is to break the link between the cache observations made by the privileged adversary and the actual data accesses made by the victim. We design and implement a compiler-based tool called DR.SGX that instruments enclave code so that data locations are permuted at the granularity of cache lines. We realize the permutation with the CPU's cryptographic hardware-acceleration units, which provide secure randomization. To prevent correlation of repeated memory accesses we continuously re-randomize all enclave data during execution. Our solution effectively protects many (but not all) enclaves from cache attacks and provides a complementary enclave hardening technique that is especially useful against unpredictable information leakage.
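    As an illustration of the idea (not DR.SGX's actual implementation), the C sketch below routes every access to a protected region through a secret cache-line permutation, so the lines an adversary observes being touched in the cache do not reveal which logical data was accessed. Here rand() stands in for the hardware-backed randomness, and the real tool additionally re-randomizes the layout continuously during execution.

        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define LINE   64  /* cache line size in bytes */
        #define NLINES 16  /* cache lines in the protected region */

        static uint8_t region[NLINES * LINE];
        static size_t  perm[NLINES];  /* secret line permutation */

        static void randomize_layout(void) {
            for (size_t i = 0; i < NLINES; i++) perm[i] = i;
            for (size_t i = NLINES - 1; i > 0; i--) {  /* Fisher-Yates shuffle */
                size_t j = (size_t)rand() % (i + 1);
                size_t t = perm[i]; perm[i] = perm[j]; perm[j] = t;
            }
        }

        /* What the instrumentation would emit in place of a plain load:
         * translate the logical line index through the permutation. */
        static uint8_t permuted_load(size_t addr) {
            size_t line = addr / LINE, off = addr % LINE;
            return region[perm[line] * LINE + off];
        }

        int main(void) {
            randomize_layout();
            size_t a = 5 * LINE + 3;
            /* store through the same translation so loads find the data */
            region[perm[a / LINE] * LINE + a % LINE] = 42;
            printf("%d\n", permuted_load(a)); /* prints 42 */
            return 0;
        }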