23 research outputs found

    Cryptographically Secure Information Flow Control on Key-Value Stores

    Full text link
    We present Clio, an information flow control (IFC) system that transparently incorporates cryptography to enforce confidentiality and integrity policies on untrusted storage. Clio insulates developers from explicitly manipulating keys and cryptographic primitives by leveraging the policy language of the IFC system to automatically use the appropriate keys and correct cryptographic operations. We prove that Clio is secure with a novel proof technique that combines a proof style from cryptography with standard programming-languages results. We present a prototype Clio implementation and a case study that demonstrates Clio's practicality. Comment: Full version of the conference paper appearing in CCS 2017.
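
    To make the core idea concrete, here is a minimal Haskell sketch of label-driven key selection: the IFC label attached to a value, not the programmer, decides whether and under which key it is protected before reaching untrusted storage. All names and the toy XOR cipher are hypothetical illustrations, not Clio's actual design or API.

```haskell
-- Illustrative sketch only: a two-point IFC lattice where the label of a
-- value, not the developer, decides which key protects it on untrusted
-- storage.  Names and the toy "cipher" are hypothetical, not Clio's design.
module Main where

import Data.Bits (xor)
import Data.Char (chr, ord)

data Label = Public | Secret deriving (Eq, Show)

data Labeled a = Labeled { label :: Label, payload :: a } deriving Show

-- Key material is chosen from the label; developers never touch it directly.
keyFor :: Label -> Maybe String
keyFor Secret = Just "k-secret"   -- hypothetical key handle
keyFor Public = Nothing           -- public data is stored in the clear

-- Placeholder cipher (XOR with a repeated key) standing in for real
-- authenticated encryption.
toyCipher :: String -> String -> String
toyCipher key = zipWith (\k c -> chr (ord c `xor` ord k)) (cycle key)

-- Storing a labeled value picks the right protection automatically.
store :: Labeled String -> (Label, String)
store (Labeled l v) = case keyFor l of
  Just k  -> (l, toyCipher k v)
  Nothing -> (l, v)

fetch :: (Label, String) -> Labeled String
fetch (l, blob) = case keyFor l of
  Just k  -> Labeled l (toyCipher k blob)
  Nothing -> Labeled l blob

main :: IO ()
main = do
  let secret = Labeled Secret "PIN=1234"
      onDisk = store secret
  print onDisk          -- ciphertext under the label-derived key
  print (fetch onDisk)  -- round-trips to the original labeled value
```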

    Computational Soundness for Dalvik Bytecode

    Full text link
    Automatically analyzing information flow within Android applications that rely on cryptographic operations, with their computational security guarantees, imposes formidable challenges that existing approaches for understanding an app's behavior struggle to meet. These approaches do not distinguish cryptographic from non-cryptographic operations, and hence do not account for cryptographic protections: f(m) is considered sensitive for a sensitive message m irrespective of any secrecy properties offered by the cryptographic operation f. These approaches therefore provide a safe approximation of the app's behavior, but they mistakenly classify a large fraction of apps as potentially insecure and yield overly pessimistic results. In this paper, we show how cryptographic operations can be faithfully included in existing approaches for automated app analysis. To this end, we first show how cryptographic operations can be expressed as symbolic abstractions within the comprehensive Dalvik bytecode language. These abstractions are accessible to automated analysis, and they can be conveniently added to existing app analysis tools with minor changes in their semantics. Second, we show that our abstractions are faithful by providing the first computational soundness result for Dalvik bytecode, i.e., the absence of attacks against our symbolically abstracted program entails the absence of any attacks against a suitable cryptographic program realization. We cast our computational soundness result in the CoSP framework, which makes the result modular and composable. Comment: Technical report for the ACM CCS 2016 conference paper.
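
    The symbolic treatment of cryptography can be illustrated with a small Dolev-Yao style sketch in Haskell, where encryption is a black-box constructor and a ciphertext reveals its payload only if the key is also derivable. The names are hypothetical and this is not the paper's CoSP abstraction; it only shows why f(m) need not be treated as sensitive when f is encryption under an unknown key.

```haskell
-- Minimal Dolev-Yao style deduction: encryption is a symbolic constructor,
-- so a ciphertext yields its payload only if the key is also derivable.
module Main where

import Data.List (nub)

data Term
  = Atom String             -- atomic data: keys, messages, nonces
  | Pair Term Term
  | Enc Term Term           -- Enc key payload, treated as a black box
  deriving (Eq, Show)

-- One round of decomposition: split pairs, open ciphertexts whose key is known.
step :: [Term] -> [Term]
step known = nub (known ++ concatMap open known)
  where
    open (Pair a b)                 = [a, b]
    open (Enc k m) | k `elem` known = [m]
    open _                          = []

-- Saturate to a fixpoint (terminates: only subterms are ever added).
saturate :: [Term] -> [Term]
saturate known =
  let next = step known
  in if next == known then known else saturate next

derivable :: [Term] -> Term -> Bool
derivable known goal = goal `elem` saturate known

main :: IO ()
main = do
  let k      = Atom "key"
      secret = Atom "secret"
      ciphertextOnly = [Enc k secret]
      keyLeakedToo   = [Enc k secret, k]
  print (derivable ciphertextOnly secret)  -- False: the ciphertext hides m
  print (derivable keyLeakedToo   secret)  -- True: key compromise exposes m
```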

    A Cut Principle for Information Flow

    Full text link
    We view a distributed system as a graph of active locations with unidirectional channels between them, through which they pass messages. In this context, the graph structure of a system constrains the propagation of information through it. Suppose a set of channels is a cut set between an information source and a potential sink. We prove that, if there is no disclosure from the source to the cut set, then there can be no disclosure to the sink. We introduce a new formalization of partial disclosure, called blur operators, and show that the same cut property is preserved for disclosure to within a blur operator. This cut-blur property also implies a compositional principle, which ensures limited disclosure for a class of systems that differ only beyond the cut. Comment: 31 pages.
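
    The structural half of the statement can be sketched as a reachability check: a set of channels is a cut between source and sink precisely when removing those channels disconnects them, so every message path crosses the cut. The Haskell sketch below uses hypothetical names and models only the graph side, not the paper's disclosure semantics or blur operators.

```haskell
-- Sketch of the structural precondition of the cut principle: deleting a
-- cut set of channels makes the sink unreachable from the source, so all
-- information flow from source to sink must pass through the cut.
module Main where

type Node    = String
type Channel = (Node, Node)          -- unidirectional channel

-- All nodes reachable from `start` along the given channels.
reachable :: [Channel] -> Node -> [Node]
reachable edges start = go [start] []
  where
    go []       seen = seen
    go (n:rest) seen
      | n `elem` seen = go rest seen
      | otherwise     = go (succs n ++ rest) (n : seen)
    succs n = [b | (a, b) <- edges, a == n]

-- A channel set is a cut between src and snk if deleting it disconnects them.
isCut :: [Channel] -> Node -> Node -> [Channel] -> Bool
isCut edges src snk cut =
  snk `notElem` reachable [e | e <- edges, e `notElem` cut] src

main :: IO ()
main = do
  let edges = [ ("source","a"), ("a","b"), ("b","sink")
              , ("a","c"), ("c","sink") ]
      cut   = [("a","b"), ("a","c")]
  print (isCut edges "source" "sink" cut)            -- True: every path crosses it
  print (isCut edges "source" "sink" [("b","sink")]) -- False: a path via c remains
```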

    Types for Location and Data Security in Cloud Environments

    Get PDF
    Cloud service providers are often trusted to be genuine: the damage caused by being discovered attacking their own customers outweighs any benefits such attacks could reap. On the other hand, some cloud service users may be expected to be actively malicious. In such an open system, each location may run code that has been developed independently of other locations (and that may be secret). In this paper, we present a typed language which ensures that the access restrictions placed on data at a particular device will be observed by all other devices running typed code. Untyped, compromised devices can still interact with typed devices without being able to violate the policies, except when a policy directly places trust in untyped locations. Importantly, our type system requires neither a middleware layer nor that all users register with a preexisting PKI, and it allows devices to dynamically create new identities. The confidentiality property guaranteed by the language is defined for any kind of intruder: we consider labeled bisimilarity, i.e., an attacker cannot distinguish two scenarios that differ by the change of a protected value. This yields our main result: for a device that runs well-typed code and only places trust in other well-typed devices, programming errors cannot cause a data leakage. Comment: Short version to appear in Computer Security Foundations Symposium (CSF'17), August 2017.
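
    The flavour of location-indexed policies can be sketched as follows. The Haskell names and the Policy type are hypothetical illustrations, not the paper's type system, and in the real language the check is enforced statically by typing rather than at run time.

```haskell
-- Illustrative only: a value may flow to a device only if the policy
-- attached to it names that device as a reader; trusting an untyped
-- device is the one way confidentiality can be given up.
module Main where

newtype Location = Location String deriving (Eq, Show)

data Policy = Policy
  { readers      :: [Location]   -- devices allowed to read the value
  , trustUntyped :: Bool         -- does the policy trust untyped devices?
  } deriving Show

data Protected a = Protected Policy a

-- A typed device only releases the value to permitted destinations.
release :: Protected a -> Location -> Bool -> Maybe a
release (Protected pol v) dest destIsTyped
  | dest `elem` readers pol && (destIsTyped || trustUntyped pol) = Just v
  | otherwise                                                    = Nothing

main :: IO ()
main = do
  let phone  = Location "phone"
      cloud  = Location "cloud"
      secret = Protected (Policy [phone] False) "location history"
  print (release secret phone True  :: Maybe String)  -- Just: named reader, typed
  print (release secret cloud True  :: Maybe String)  -- Nothing: not a reader
  print (release secret phone False :: Maybe String)  -- Nothing: untyped, untrusted
```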

    Refinement Types for Secure Implementations

    No full text
    We present the design and implementation of a typechecker for verifying security properties of the source code of cryptographic protocols and access control mechanisms. The underlying type theory is a λ-calculus equipped with refinement types for expressing pre- and post-conditions within first-order logic. We derive formal cryptographic primitives and represent active adversaries within the type theory. Well-typed programs enjoy assertion-based security properties with respect to a realistic threat model that includes key compromise. The implementation amounts to an enhanced typechecker for the general-purpose functional language F#; typechecking generates verification conditions that are passed to an SMT solver. We describe a series of checked examples. This is the first tool to verify authentication properties of cryptographic protocols by typechecking their source code.
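
    The pipeline described above, in which typechecking reduces to verification conditions discharged by an SMT solver, can be illustrated with a toy sketch. The actual tool is an F# typechecker; the Haskell sketch below uses hypothetical names and only shows how a pre-/post-condition pair for a one-argument function can be turned into an SMT-LIB query.

```haskell
-- Toy flavour of refinement typechecking (not the actual F7 pipeline):
-- a pre/post pair is turned into a first-order verification condition and
-- printed in SMT-LIB form for an external solver to discharge.
module Main where

data Formula
  = Var String
  | IntLit Integer
  | Ge Formula Formula        -- e1 >= e2
  | Add Formula Formula
  | Implies Formula Formula
  deriving Show

-- Refinement "type" for a one-argument function: {x | pre} -> {r | post}.
data Refined = Refined { pre :: Formula, post :: Formula }

smt :: Formula -> String
smt (Var x)       = x
smt (IntLit n)    = show n
smt (Ge a b)      = "(>= " ++ smt a ++ " " ++ smt b ++ ")"
smt (Add a b)     = "(+ " ++ smt a ++ " " ++ smt b ++ ")"
smt (Implies a b) = "(=> " ++ smt a ++ " " ++ smt b ++ ")"

-- VC: pre ==> post[body/r], negated and checked for unsatisfiability.
verificationCondition :: Refined -> Formula -> String
verificationCondition (Refined p q) body =
     "(assert (not (forall ((x Int)) "
  ++ smt (Implies p (substResult body q)) ++ ")))\n(check-sat)"
  where
    substResult b (Var "r")     = b
    substResult b (Ge a c)      = Ge (substResult b a) (substResult b c)
    substResult b (Add a c)     = Add (substResult b a) (substResult b c)
    substResult b (Implies a c) = Implies (substResult b a) (substResult b c)
    substResult _ f             = f

main :: IO ()
main = do
  -- incr : {x | x >= 0} -> {r | r >= 1},   incr x = x + 1
  let ty   = Refined (Ge (Var "x") (IntLit 0)) (Ge (Var "r") (IntLit 1))
      body = Add (Var "x") (IntLit 1)
  putStrLn (verificationCondition ty body)   -- "unsat" means the VC holds
```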

    HasTEE: Programming Trusted Execution Environments with Haskell

    Get PDF
    Trusted Execution Environments (TEEs) are hardware-enforced memory isolation units, emerging as a pivotal security solution for security-critical applications. TEEs such as Intel SGX and ARM TrustZone allow confidential code and data to be isolated within an untrusted host environment, such as the cloud and IoT. Despite strong security guarantees, TEE adoption has been hindered by an awkward programming model that requires manual application partitioning and the use of error-prone, memory-unsafe, and potentially information-leaking low-level C/C++ libraries. We address this with HasTEE, a domain-specific language (DSL) embedded in Haskell for programming TEE applications. HasTEE includes a port of the GHC runtime for the Intel SGX TEE. HasTEE uses Haskell's type system to automatically partition an application and to enforce Information Flow Control on confidential data. Being embedded in Haskell, the DSL allows the use of higher-order functions, monads, and a restricted set of I/O operations to write any standard Haskell application. In contrast to previous work, HasTEE is lightweight, simple, and provided as a simple security library, thus avoiding any GHC modifications. We show the applicability of HasTEE by implementing case studies on federated learning, an encrypted password wallet, and a differentially private data clean room. Comment: To appear in Haskell Symposium 2023.
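
    The general shape of such an enclave-partitioning DSL can be sketched as a monad that marks trusted code, reachable from the untrusted host only through declared entry points. The names (Enclave, gateway) are hypothetical simplifications, not the actual HasTEE API, and no SGX isolation takes place in this toy.

```haskell
-- Toy sketch of an enclave-partitioning DSL: code in the Enclave monad is
-- the part a type system would keep inside the TEE; the untrusted host can
-- only reach it through declared gateways.  Not the real HasTEE API.
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
module Main where

newtype Enclave a = Enclave { runEnclave :: IO a }
  deriving (Functor, Applicative, Monad)

-- Confidential comparison that never leaves the enclave in the clear.
checkPassword :: String -> String -> Enclave Bool
checkPassword stored guess = Enclave (pure (stored == guess))

-- A gateway exposes only the result of an enclave computation to the host.
gateway :: Enclave a -> IO a
gateway = runEnclave

main :: IO ()
main = do
  ok <- gateway (checkPassword "hunter2" "hunter2")
  print ok          -- the host learns a single bit, not the password
```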

    A Survey of Symbolic Methods in Computational Analysis of Cryptographic Systems

    Get PDF
    Since the 1980s, two approaches have been developed for analyzing security protocols. One approach relies on a computational model that considers issues of complexity and probability. This approach captures a strong notion of security, guaranteed against all probabilistic polynomial-time attacks. The other approach relies on a symbolic model of protocol execution in which cryptographic primitives are treated as black boxes. Since the seminal work of Dolev and Yao, it has been recognized that this latter approach enables significantly simpler and often automated proofs. However, the guarantees that it offers have been quite unclear. For more than twenty years the two approaches have coexisted but evolved mostly independently. Recently, significant research efforts have attempted to develop paradigms for the analysis of cryptographic systems that combine the best of both worlds. Two broad directions have been followed. Computational soundness aims to establish sufficient conditions under which results obtained using symbolic models imply security under computational models. The direct approach aims to apply the principles and techniques developed in the context of symbolic models directly to computational ones. In this paper we survey existing results along both of these directions. Our goal is to provide a rather complete summary that can act as a quick reference for researchers who want to contribute to the field, want to make use of existing results, or just want to get a better picture of what results already exist.

    Restricting information flow in security APIs via typing

    Get PDF
    Security APIs are designed to enable the storage and processing of confidential data without that data becoming known to individuals who are not permitted to obtain it. They are central to the operation of Automated Teller Machine (ATM) networks, Electronic Point of Sale (EPOS) terminals, set-top boxes for subscription-based TV, pre-payment utility meters, and electronic ticketing for an increasing number of public transport systems (e.g., Oyster in London). However, since the early 2000s it has become clear that many of the security APIs in widespread use contain subtle flaws that allow malicious individuals to subvert the security restrictions and obtain confidential data that should be protected. In this thesis, we address this problem by presenting a type system in which well-typed security APIs are guaranteed to enforce specific security properties. Since type-checking is a form of static analysis, it does not suffer from the scalability issues associated with approaches that simulate interactions between a security API and one or more malicious individuals. We also show how our type system can be used to model an existing security API and provide the same guarantees of security that the API's authors proved it upholds. This result follows directly from producing a well-typed implementation of the API, and demonstrates how our type system provides security guarantees without requiring additional API-specific proofs.
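
    The idea of restricting an API via typing can be sketched with a phantom secrecy label: confidential inputs such as stored PINs can be consumed by API operations, but only values labelled public can be returned to the caller. The Haskell sketch below is a hypothetical toy, not the thesis's type system or any real HSM API.

```haskell
-- Toy sketch: a phantom secrecy label keeps PIN material from being
-- returned in the clear across the API boundary.
{-# LANGUAGE DataKinds, KindSignatures #-}
module Main where

data Secrecy = Public | Confidential

newtype Tagged (l :: Secrecy) a = Tagged a

-- Only Public values may cross the API boundary back to the terminal.
reveal :: Tagged 'Public a -> a
reveal (Tagged a) = a

-- PIN verification consumes confidential data but returns a public verdict.
verifyPIN :: Tagged 'Confidential Int -> Int -> Tagged 'Public Bool
verifyPIN (Tagged storedPin) enteredPin = Tagged (storedPin == enteredPin)

main :: IO ()
main = do
  let stored = Tagged 1234 :: Tagged 'Confidential Int
  print (reveal (verifyPIN stored 1234))   -- True
  -- print (reveal stored)  -- rejected by the typechecker: Confidential /= Public
```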