1,165 research outputs found

    Information Security as Strategic (In)effectivity

    Security of information flow is commonly understood as preventing any information leakage, regardless of how grave or harmless the consequences of the leakage may be. In this work, we suggest that information security is not a goal in itself, but rather a means of preventing potential attackers from compromising the correct behavior of the system. To formalize this, we first show how two information flows can be compared by looking at the adversary's ability to harm the system. Then, we propose that the information flow in a system is effectively information-secure if it does not allow for more harm than its idealized variant based on the classical notion of noninterference.
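The classical two-run reading of noninterference that this abstract builds on can be sketched as follows (a minimal illustration; the function names are invented, not from the paper):

```python
# Sketch: noninterference as a two-run property. A program is
# noninterfering if two runs that agree on the public (low) input
# produce the same public output, whatever the secret (high) input is.

def noninterfering(prog, low, high_a, high_b):
    """Compare two runs that differ only in the secret input."""
    return prog(low, high_a) == prog(low, high_b)

def safe_prog(low, high):
    return low * 2              # output ignores the secret entirely

def leaky_prog(low, high):
    return low + len(high)      # output depends on the secret's length

print(noninterfering(safe_prog, 21, "aa", "zzzz"))   # True: no leak
print(noninterfering(leaky_prog, 21, "aa", "zzzz"))  # False: leak observed
```

The paper's point is that a violation of this strict property need not matter in itself; what matters is whether the extra flow lets an adversary do more harm.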

    A Verified Information-Flow Architecture

    SAFE is a clean-slate design for a highly secure computer system, with pervasive mechanisms for tracking and limiting information flows. At the lowest level, the SAFE hardware supports fine-grained programmable tags, with efficient and flexible propagation and combination of tags as instructions are executed. The operating system virtualizes these generic facilities to present an information-flow abstract machine that allows user programs to label sensitive data with rich confidentiality policies. We present a formal, machine-checked model of the key hardware and software mechanisms used to dynamically control information flow in SAFE, and an end-to-end proof of noninterference for this model. We use a refinement proof methodology to propagate the noninterference property of the abstract machine down to the concrete machine level. An intermediate layer in the refinement chain factors out the details of the information-flow control policy, and we devise a code generator for compiling such information-flow policies into low-level monitor code. Finally, we verify the correctness of this generator using a dedicated Hoare logic that abstracts from low-level machine instructions into a reusable set of verified structured code generators.
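The core tag-propagation idea can be sketched at a very high level (an illustrative toy; the class and label names are invented, and the real SAFE machine does this in hardware, not Python):

```python
# Sketch of tag propagation: every value carries a label, binary
# operations join the labels of their operands, and a dynamic monitor
# blocks secret-tagged values from leaving on a public channel.

LEVEL = {"public": 0, "secret": 1}

class Tagged:
    def __init__(self, value, label):
        self.value, self.label = value, label

    def __add__(self, other):
        # The result's label is the join (max) of the operands' labels.
        label = max(self.label, other.label, key=LEVEL.get)
        return Tagged(self.value + other.value, label)

def emit_public(t):
    # Monitor: refuse to output anything tainted by a secret.
    if t.label != "public":
        raise PermissionError("secret-tagged value on a public channel")
    return t.value

x = Tagged(3, "public")
k = Tagged(4, "secret")
print(emit_public(x + x))   # prints 6; the sum of two public values is public
# emit_public(x + k) would raise: the secret operand taints the sum.
```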

    Very static enforcement of dynamic policies

    Security policies are naturally dynamic. Reflecting this, there has been a growing interest in studying information-flow properties which change during program execution, including concepts such as declassification, revocation, and role-change. A static verification of a dynamic information flow policy, from a semantic perspective, should only need to concern itself with two things: 1) the dependencies between data in a program, and 2) whether those dependencies are consistent with the intended flow policies as they change over time. In this paper we provide a formal grounding for this intuition. We present a straightforward extension to the principal flow-sensitive type system introduced by Hunt and Sands (POPL’06, ESOP’11) to infer both end-to-end dependencies and dependencies at intermediate points in a program. This allows typings to be applied to verification of both static and dynamic policies. Our extension preserves the principal type system’s distinguishing feature, that type inference is independent of the policy to be enforced: a single, generic dependency analysis (typing) can be used to verify many different dynamic policies of a given program, thus achieving a clean separation between (1) and (2). We also make contributions to the foundations of dynamic information flow. Arguably, the most compelling semantic definitions for dynamic security conditions in the literature are phrased in the so-called knowledge-based style. We contribute a new definition of knowledge-based progress-insensitive security for dynamic policies. We show that the new definition avoids anomalies of previous definitions and enjoys a simple and useful characterisation as a two-run style property.
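The separation between (1) dependency inference and (2) policy checking can be sketched on an invented mini-language (a simplified illustration, not the paper's type system): one generic analysis is run once, and many policies are checked against its result.

```python
# Sketch: (1) a policy-independent dependency analysis, then
# (2) a separate check of the inferred dependencies against a policy.

def analyze(stmts):
    """Infer, per assigned variable, which inputs it may depend on.
    Each statement is (target, [source variable names])."""
    deps = {}
    for target, sources in stmts:
        flow = set()
        for s in sources:
            flow |= deps.get(s, {s})   # names never assigned are inputs
        deps[target] = flow
    return deps

def check(deps, policy):
    """policy maps each output variable to the inputs it may depend on."""
    return all(deps.get(v, set()) <= allowed for v, allowed in policy.items())

prog = [("t", ["hi"]), ("out", ["lo", "t"])]   # hi flows into out via t
deps = analyze(prog)                           # one analysis, many policies
print(check(deps, {"out": {"lo"}}))            # False: hi reaches out
print(check(deps, {"out": {"lo", "hi"}}))      # True: policy allows hi
```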

    Formal Verification of Differential Privacy for Interactive Systems

    Differential privacy is a promising approach to privacy-preserving data analysis with a well-developed theory for functions. Despite recent work on implementing systems that aim to provide differential privacy, the problem of formally verifying that these systems have differential privacy has not been adequately addressed. This paper presents the first results towards automated verification of source code for differentially private interactive systems. We develop a formal probabilistic automaton model of differential privacy for systems by adapting prior work on differential privacy for functions. The main technical result of the paper is a sound proof technique, based on a form of probabilistic bisimulation relation, for proving that a system modeled as a probabilistic automaton satisfies differential privacy. The novelty lies in the way we track quantitative privacy leakage bounds using a relation family instead of a single relation. We illustrate the proof technique on a representative automaton motivated by PINQ, an implemented system that is intended to provide differential privacy. To make our proof technique easier to apply to realistic systems, we prove a form of refinement theorem and apply it to show that a refinement of the abstract PINQ automaton also satisfies our differential privacy definition. Finally, we begin the process of automating our proof technique by providing an algorithm for mechanically checking a restricted class of relations from the proof technique. Comment: 65 pages with 1 figure
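A rough sketch of the PINQ-style interactive behaviour such an automaton models (the class and method names here are invented for illustration; PINQ itself is a C#/LINQ system): each noisy query spends part of a global privacy budget, and a query that would exceed the remaining budget is refused.

```python
import random

class PrivateDataset:
    """Toy epsilon-budget accounting, PINQ-style (illustrative only)."""

    def __init__(self, data, epsilon_budget):
        self.data = data
        self.budget = epsilon_budget

    def noisy_count(self, predicate, epsilon):
        if epsilon > self.budget:
            raise RuntimeError("privacy budget exhausted")
        self.budget -= epsilon
        true_count = sum(1 for x in self.data if predicate(x))
        # Laplace(1/epsilon) noise, sampled as the difference of two
        # exponentials; this gives epsilon-DP for a counting query.
        noise = random.expovariate(epsilon) - random.expovariate(epsilon)
        return true_count + noise

db = PrivateDataset(range(100), epsilon_budget=1.0)
db.noisy_count(lambda x: x < 50, epsilon=0.5)   # allowed; budget now 0.5
# A second query with epsilon=1.0 would now raise RuntimeError.
```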

    Automating quantitative information flow

    Unprecedented quantities of personal and business data are collected, stored, shared, and processed by countless institutions all over the world. Prominent examples include sharing personal data on social networking sites, storing credit card details in every store, tracking customer preferences in supermarket chains, and storing key personal data on biometric passports. Confidentiality issues naturally arise from this global data growth. There are continual reports of private data leaking from confidential sources, with implications ranging from embarrassment to serious personal privacy and business damage. This dissertation addresses the problem of automatically quantifying the amount of leaked information in programs. It presents multiple program analysis techniques of different degrees of automation and scalability. The contributions of this thesis are twofold: a theoretical result and two different methods for inferring and checking quantitative information flows. The theoretical result relates the amount of possible leakage under any probability distribution back to the order relation in Landauer and Redmond’s lattice of partitions [35]. The practical results are split into two analyses: the first precisely infers the information leakage using SAT solving and model counting; the second defines quantitative policies, which are reduced to checking a k-safety problem. A novel feature allows reasoning independent of the secret space. The presented tools are applied to real, existing leakage vulnerabilities in operating system code. This has to be understood and weighed within the context of the information-flow literature, which suffers from an apparent lack of practical examples and applications. This thesis studies such “real leaks”, which could influence future strategies for finding information leaks.
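The partition-based view of leakage mentioned above can be sketched as follows (a simplified illustration assuming a deterministic program and a uniform secret; the function names are invented): the min-entropy leakage is log2 of the number of blocks in the partition the program induces on the secret space, since two secrets are indistinguishable to an observer exactly when they yield the same output.

```python
import math

def leakage_bits(program, secrets):
    """Count the blocks of the partition induced on the secret space:
    secrets are in the same block iff the program outputs agree."""
    blocks = {}
    for s in secrets:
        blocks.setdefault(program(s), []).append(s)
    return math.log2(len(blocks))

secrets = range(256)                               # an 8-bit secret
print(leakage_bits(lambda s: s == 42, secrets))    # 1.0: pass/fail check
print(leakage_bits(lambda s: s & 0x0F, secrets))   # 4.0: low nibble revealed
```

Refining the partition (more distinguishable blocks) can only increase this number, which is the order-relation connection the thesis makes precise.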

    The Anatomy and Facets of Dynamic Policies

    Information flow policies are often dynamic; the security concerns of a program will typically change during execution to reflect security-relevant events. A key challenge is how to best specify, and give proper meaning to, such dynamic policies. A large number of approaches exist that tackle that challenge, each yielding some important, but unconnected, insight. In this work we synthesise existing knowledge on dynamic policies, with an aim to establish a common terminology, best practices, and frameworks for reasoning about them. We introduce the concept of facets to illuminate subtleties in the semantics of policies, and closely examine the anatomy of policies and the expressiveness of policy specification mechanisms. We further explore the relation between dynamic policies and the concept of declassification. Comment: Technical Report of the publication under the same name in Computer Security Foundations (CSF) 201
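The kind of dynamism in question can be sketched with a toy policy object (an invented API, purely illustrative): security-relevant events such as declassification and revocation change, at run time, which flows are currently permitted.

```python
# Sketch: a dynamic policy whose permitted flows change in response
# to security-relevant events (declassification, revocation).

class DynamicPolicy:
    def __init__(self):
        self.allowed = set()                 # initially no downward flows

    def declassify(self, src, dst):
        self.allowed.add((src, dst))         # event: permit src -> dst

    def revoke(self, src, dst):
        self.allowed.discard((src, dst))     # event: forbid it again

    def permits(self, src, dst):
        return src == dst or (src, dst) in self.allowed

p = DynamicPolicy()
print(p.permits("secret", "public"))     # False before any event
p.declassify("secret", "public")
print(p.permits("secret", "public"))     # True after declassification
p.revoke("secret", "public")
print(p.permits("secret", "public"))     # False again after revocation
```

The semantic subtleties the paper's facets expose arise precisely here: what a run that straddles a declassify/revoke boundary is allowed to reveal is not obvious from the event sequence alone.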

    Settlement risk in large-value payments systems

    The phenomenal growth of financial market and trading activities worldwide has led to tremendous growth in large-value payments systems. Large-value payments systems are the electronic networks banks use to transfer large payments among themselves. Payment orders processed in such systems in the United States, for example, are typically well above $1 million. The tremendous growth of payments system use throughout the world has increased both the possibility of settlement failures and the potential impact of such failures. In 1996, the average turnover in a single day exceeded the combined capital of the top 100 U.S. banks. Regulators are especially concerned that payments systems might turn a local financial crisis into a global systemic crisis. Shen examines settlement risk in large-value payments systems and discusses some of the measures available to manage such risk. Keywords: Risk; Payment systems; Electronic funds transfers