
    CONFLLVM: A Compiler for Enforcing Data Confidentiality in Low-Level Code

    We present an instrumenting compiler for enforcing data confidentiality in low-level applications (e.g. those written in C) in the presence of an active adversary. In our approach, the programmer marks secret data by writing lightweight annotations on top-level definitions in the source code. The compiler then uses a static flow analysis coupled with efficient runtime instrumentation, a custom memory layout, and custom control-flow integrity checks to prevent data leaks even in the presence of low-level attacks. We have implemented our scheme as part of the LLVM compiler. We evaluate it on the SPEC micro-benchmarks for performance, and on larger, real-world applications (including OpenLDAP, which is around 300 KLoC) for the programmer overhead required to restructure the application when protecting sensitive data such as passwords. We find that the performance overheads introduced by our instrumentation are moderate (12% on average on SPEC), and that the programmer effort to port OpenLDAP is only about 160 LoC.
    Comment: Technical report for CONFLLVM: A Compiler for Enforcing Data Confidentiality in Low-Level Code, appearing at EuroSys 201
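
    To make the programming model concrete, here is a minimal C sketch of what marking a secret might look like. The SECRET macro and the use of Clang's generic annotate attribute are illustrative assumptions, not necessarily CONFLLVM's actual annotation syntax:

        /* Illustrative only: CONFLLVM's real annotation syntax may differ.
           Clang's generic 'annotate' attribute stands in for it here. */
        #define SECRET __attribute__((annotate("secret")))

        SECRET static char password[64];  /* top-level definition marked secret */
        static char log_buf[256];         /* unannotated, hence public          */

        static int check_password(const char *p) { return p[0] != '\0'; }

        void handle_login(void) {
            /* Secret data may flow into other secret locations and branches... */
            if (check_password(password)) { /* ... */ }
            /* ...but a secret-to-public flow such as
                   memcpy(log_buf, password, 8);
               is exactly what the static analysis and instrumentation
               are meant to reject or prevent.                           */
            (void)log_buf;
        }

        int main(void) { handle_login(); return 0; }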

    Proof-Carrying Code for Verifying Confidentiality of Mobile Code through Secure Information Flow Analysis

    The growing dependence of our society and economy on networked information systems makes it essential to protect our confidential data from being leaked by malicious code. Downloading and executing code (possibly from untrusted sources) has become a daily event. Modern operating systems load code for adding new functionalities; web browsers download plug-ins and applets; end-users download untrusted code for doing some useful tasks. Certification that the untrusted code respects the confidentiality of data it manipulates is essential in these situations. Thus it is necessary to analyze how information flows within that program. This thesis presents an approach to enable end-users to determine whether untrusted mobile code will respect pre-specified confidentiality policies by statically analyzing the untrusted code for secure information flow. The approach adapts a well-known technique, proof-carrying code (PCC), to information flow security, basing the security policy of PCC on a security-type system that enforces an information flow policy, namely noninterference, in RISC-style assembly programs. The untrusted code is then analyzed for secure information flow based on the idea of PCC. The proofs that untrusted code does not leak confidential information are generated by the code producer and checked by the code consumer. If the proofs are valid, then the end-user (code consumer) can install and execute the untrusted mobile code safely. The proposed approach benefits from distinctive features that make it very appropriate for security checking. First, it operates directly on object code produced by general-purpose off-the-shelf compilers. Second, it exploits the benefits that both type systems and proof-carrying code approaches offer and combines their strengths. Type systems provide an appealing option for implementing security policies, and thus represent a natural enabling technology for proof-carrying code. Meanwhile, proof-carrying code is an efficient approach for assembly code verification. Third, the explicit machine-checkable proofs serve as a certificate to distrustful users and give them more confidence in the security approach. The proposed security approach represents one point in the design space for mobile code security systems; it is well suited to typical Internet users. It enforces information flow policy with low preparation cost on the part of the code producer and no runtime overhead on the part of the code consumer. The security approach provides end-users with an adequate assurance of protecting the confidentiality of their data.
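
    For flavor, the kind of judgment such a security-type system derives can be written as the standard information-flow assignment rule from the literature (a generic high-level formulation; the thesis's rules operate on RISC-style assembly and differ in detail):

        \[
        \frac{\Gamma \vdash e : \ell \qquad \ell \sqcup \mathit{pc} \sqsubseteq \Gamma(x)}
             {\Gamma, \mathit{pc} \vdash x := e}
        \]

    An assignment is accepted only if the expression's label joined with the program-counter label may flow to the target's label, which blocks both explicit and implicit leaks; the PCC certificate is, in essence, a machine-checkable derivation of such judgments over the untrusted code.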

    A Verified Information-Flow Architecture

    SAFE is a clean-slate design for a highly secure computer system, with pervasive mechanisms for tracking and limiting information flows. At the lowest level, the SAFE hardware supports fine-grained programmable tags, with efficient and flexible propagation and combination of tags as instructions are executed. The operating system virtualizes these generic facilities to present an information-flow abstract machine that allows user programs to label sensitive data with rich confidentiality policies. We present a formal, machine-checked model of the key hardware and software mechanisms used to dynamically control information flow in SAFE and an end-to-end proof of noninterference for this model. We use a refinement proof methodology to propagate the noninterference property of the abstract machine down to the concrete machine level. We use an intermediate layer in the refinement chain that factors out the details of the information-flow control policy and devise a code generator for compiling such information-flow policies into low-level monitor code. Finally, we verify the correctness of this generator using a dedicated Hoare logic that abstracts from low-level machine instructions into a reusable set of verified structured code generators.
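
    Schematically, hardware-level tag propagation for a binary operation can be pictured as joining the operand tags with the current program-counter tag (a generic rule for illustration; SAFE's rules are programmable via its rule cache and considerably richer):

        \[
        \mathsf{add}\;\; r_1@t_1,\; r_2@t_2 \;\longrightarrow\; r_d@(t_1 \sqcup t_2 \sqcup t_{\mathit{pc}})
        \]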

    THE IMPACT OF PROGRAMMING LANGUAGES ON THE SOFTWARE’S SECURITY

    Security is usually defined as the ability of a system to protect itself against accidental or deliberate intrusion. Ensuring integrity, confidentiality, availability, and accountability requirements even in the presence of a determined, malicious opponent is essential for computer security. Sensitive data has to be manipulated and consulted by authorized users only (integrity, confidentiality). Furthermore, the system should resist "denial of service" attacks that attempt to render it unusable (availability). The system also has to ensure that the ownership of prior actions cannot be denied (accountability).

    Confidentiality and Integrity with Untrusted Hosts: Technical Report

    Several security-typed languages have recently been proposed to enforce security properties such as confidentiality or integrity by type checking. We propose a new security-typed language, SPL@, which addresses two important limitations of previous approaches. First, existing languages assume that the underlying execution platform is trusted; this assumption does not scale to distributed computation in which a variety of differently trusted hosts are available to execute programs. Our new approach, secure program partitioning, translates programs written assuming complete trust in a single executing host into programs that execute using a collection of variously trusted hosts to perform computation. As the trust configuration of a distributed system evolves, this translation can be performed as necessary for security. Second, many common program transformations do not work in existing security-typed languages; although they produce equivalent programs, these programs are rejected because of apparent information flows. SPL@ uses a novel mechanism based on ordered linear continuations to permit a richer class of program transformations, including secure program partitioning. This report is the technical companion to [ZM00]. It contains expanded discussion and extensive proofs of both the soundness and noninterference theorems mentioned in Section 3.3 of that work.
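
    As an invented toy illustration of the partitioning idea (not the paper's notation or mechanism): a program written for one trusted host is cut into fragments, each placed on a host its data's owner trusts, with the cut points becoming messages. In a real system, even the single bit returned below could cross hosts only if the owner's policy permits its release:

        #include <stdio.h>

        /* Fragment placed on host A, the only host Alice trusts with her data. */
        static int frag_A(void) {
            int alice_balance = 1000;     /* Alice's secret never leaves A        */
            return alice_balance > 500;   /* one declassified bit crosses the cut */
        }

        /* Fragment placed on host B, which only ever sees the public result. */
        static void frag_B(int public_result) {
            printf("over threshold: %d\n", public_result);
        }

        int main(void) {
            /* The message send at the cut point, simulated here by a call. */
            frag_B(frag_A());
            return 0;
        }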

    Isolation Without Taxation: Near-Zero-Cost Transitions for WebAssembly and SFI

    Software sandboxing or software-based fault isolation (SFI) is a lightweight approach to building secure systems out of untrusted components. Mozilla, for example, uses SFI to harden the Firefox browser by sandboxing third-party libraries, and companies like Fastly and Cloudflare use SFI to safely co-locate untrusted tenants on their edge clouds. While there have been significant efforts to optimize and verify SFI enforcement, context switching in SFI systems remains largely unexplored: almost all SFI systems use heavyweight transitions that are not only error-prone but incur significant performance overhead from saving, clearing, and restoring registers when context switching. We identify a set of zero-cost conditions that characterize when sandboxed code has sufficient structure to guarantee security via lightweight zero-cost transitions (simple function calls). We modify the Lucet Wasm compiler and its runtime to use zero-cost transitions, eliminating the undue performance tax on systems that rely on Lucet for sandboxing (e.g., we speed up image and font rendering in Firefox by up to 29.7% and 10%, respectively). To remove the Lucet compiler and its correct implementation of the Wasm specification from the trusted computing base, we (1) develop a static binary verifier, VeriZero, which (in seconds) checks that binaries produced by Lucet satisfy our zero-cost conditions, and (2) prove the soundness of VeriZero by developing a logical relation that captures when a compiled Wasm function is semantically well-behaved with respect to our zero-cost conditions. Finally, we show that our model is useful beyond Wasm by describing a new, purpose-built SFI system, SegmentZero32, that uses x86 segmentation and LLVM with mostly off-the-shelf passes to enforce our zero-cost conditions; our prototype performs on par with the state-of-the-art Native Client SFI system.
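
    The contrast between the two transition styles can be sketched in portable C (real SFI trampolines do this in assembly over the actual register file; the struct here is a stand-in):

        #include <stdio.h>
        #include <string.h>

        struct regs { long r[16]; };      /* stand-in for the register file */
        static struct regs saved;

        static int sandbox_fn(int x) { return x + 1; }   /* untrusted code */

        /* Heavyweight transition: save, clear, call, restore on every crossing. */
        static int call_heavyweight(struct regs *live, int x) {
            memcpy(&saved, live, sizeof *live);   /* save caller state          */
            memset(live, 0, sizeof *live);        /* clear, so nothing leaks in */
            int ret = sandbox_fn(x);
            memcpy(live, &saved, sizeof *live);   /* restore on the way back    */
            return ret;
        }

        /* Zero-cost transition: once a verifier has checked that the sandbox
         * satisfies the zero-cost conditions, the crossing is just a call. */
        static int call_zerocost(int x) { return sandbox_fn(x); }

        int main(void) {
            struct regs live = { .r = {1, 2, 3} };
            printf("%d %d\n", call_heavyweight(&live, 41), call_zerocost(41));
            return 0;
        }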

    Proceedings of International Workshop "Global Computing: Programming Environments, Languages, Security and Analysis of Systems"

    According to the IST/FET proactive initiative on GLOBAL COMPUTING, the goal is to obtain techniques (models, frameworks, methods, algorithms) for constructing systems that are flexible, dependable, secure, robust and efficient. The dominant concerns are not those of representing and manipulating data efficiently but rather those of handling the co-ordination and interaction, security, reliability, robustness, failure modes, and control of risk of the entities in the system and the overall design, description and performance of the system itself. Completely different paradigms of computer science may have to be developed to tackle these issues effectively. The research should concentrate on systems having the following characteristics:
    • The systems are composed of autonomous computational entities where activity is not centrally controlled, either because global control is impossible or impractical, or because the entities are created or controlled by different owners.
    • The computational entities are mobile, due to the movement of the physical platforms or by movement of the entity from one platform to another.
    • The configuration varies over time. For instance, the system is open to the introduction of new computational entities and likewise their deletion. The behaviour of the entities may vary over time.
    • The systems operate with incomplete information about the environment. For instance, information becomes rapidly out of date and mobility requires information about the environment to be discovered.
    The ultimate goal of the research action is to provide a solid scientific foundation for the design of such systems, and to lay the groundwork for achieving effective principles for building and analysing such systems. This workshop covers the aspects related to languages and programming environments as well as analysis of systems and resources, involving 9 projects (AGILE, DART, DEGAS, MIKADO, MRG, MYTHS, PEPITO, PROFUNDIS, SECURE) out of the 13 funded under the initiative. A year after the start of the projects, the goal of the workshop is to take stock of the state of the art on the topics covered by the two clusters related to programming environments and analysis of systems, as well as to devise strategies and new ideas to profitably continue the research effort towards the overall objective of the initiative. We acknowledge the Dipartimento di Informatica e Tlc of the University of Trento, the Comune di Rovereto, and the project DEGAS for partially funding the event, and the Events and Meetings Office of the University of Trento for the valuable collaboration.

    Types for Information Flow Control: Labeling Granularity and Semantic Models

    Language-based information flow control (IFC) tracks dependencies within a program using sensitivity labels and prohibits public outputs from depending on secret inputs. In particular, the literature has proposed several type systems for tracking these dependencies. On one extreme, there are fine-grained type systems (like Flow Caml) that label all values individually and track dependence at the level of individual values. On the other extreme are coarse-grained type systems (like HLIO) that track dependence coarsely, by associating a single label with an entire computation context and not labeling all values individually. In this paper, we show that, despite their glaring differences, both these styles are, in fact, equally expressive. To do this, we show a semantics- and type-preserving translation from a coarse-grained type system to a fine-grained one and vice-versa. The forward translation isn't surprising, but the backward translation is: it requires a construct to arbitrarily limit the scope of a context label in the coarse-grained type system (e.g., HLIO's "toLabeled" construct). As a separate contribution, we show how to extend work on logical relation models of IFC types to higher-order state. We build such logical relations for both the fine-grained type system and the coarse-grained type system. We use these relations to prove the two type systems and our translations between them sound.
    Comment: 31st IEEE Symposium on Computer Security Foundations (CSF 2018)
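
    To make the scope-limiting construct concrete, here is a toy dynamic analogue in C of a coarse-grained monitor with a single context label and a toLabeled-style operation (HLIO itself is a static Haskell embedding; all names here are invented):

        #include <stdio.h>

        typedef enum { PUBLIC = 0, SECRET = 1 } label;
        static label pc = PUBLIC;         /* one label for the whole context */

        typedef struct { int value; label lab; } labeled_int;  /* fine-grained value */

        /* toLabeled-style scope: run f with the context label free to rise,
         * then restore the outer label and return the result as a labeled
         * value instead of tainting the rest of the computation. */
        static labeled_int to_labeled(int (*f)(void)) {
            label outer = pc;
            int v = f();                  /* f may raise pc internally */
            labeled_int r = { v, pc };    /* capture the inner label   */
            pc = outer;                   /* limit its scope           */
            return r;
        }

        static int read_secret(void) {
            pc = SECRET;                  /* reading a secret taints pc */
            return 42;
        }

        int main(void) {
            labeled_int s = to_labeled(read_secret);
            /* pc is PUBLIC again out here; s carries its own SECRET label */
            printf("pc=%d value-label=%d\n", pc, s.lab);
            return 0;
        }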