    LIBQIF: a quantitative information flow C++ toolkit library

    A fundamental concern in computer security is to control information flow, whether to protect confidential information from being leaked, or to protect trusted information from being tainted. A classic approach is to try to enforce non-interference. Unfortunately, achieving non-interference is often not possible, because there is often a correlation between secrets and observables, either by design or due to some physical feature of the computation (side channels). One promising approach to relaxing non-interference is to develop a quantitative theory of information flow that allows us to reason about how much information is being leaked, thus paving the way to the possibility of tolerating small leaks. In this work, we aim at developing a quantitative information flow C++ toolkit library, implementing several algorithms from the areas of QIF (more specifically from four theories: Shannon Entropy, Min-Entropy, Guessing Entropy and G-Leakage) and Differential Privacy. The library can be used by academics to facilitate research in these areas, as well as by students as a learning tool. A primary use of the library is to compute QIF measures and to generate plots useful for understanding their behavior. Moreover, the library allows users to compute optimal differentially private mechanisms, compare the utility of known mechanisms, compare the leakage of channels, compute gain functions that separate channels, and offers various other functionalities related to QIF. Final degree project (trabajo final de carrera). Sociedad Argentina de Informática e Investigación Operativa (SADIO).
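    To give a feel for the measures this library computes, the following is a minimal sketch (not the libqif API, just an illustration under a discrete prior pi and a channel matrix C where C[x][y] = P(output y | secret x)) of Shannon-entropy leakage (mutual information) and min-entropy leakage.

```python
# Illustrative sketch of two QIF measures; pi is a prior over secrets,
# C[x][y] = P(observe y | secret x). Not the libqif API.
import math

def shannon_mutual_information(pi, C):
    """I(X;Y) = H(Y) - H(Y|X): the Shannon-entropy leakage of channel C."""
    n_y = len(C[0])
    p_y = [sum(pi[x] * C[x][y] for x in range(len(pi))) for y in range(n_y)]
    h_y = -sum(p * math.log2(p) for p in p_y if p > 0)
    h_y_given_x = -sum(pi[x] * C[x][y] * math.log2(C[x][y])
                       for x in range(len(pi)) for y in range(n_y)
                       if C[x][y] > 0)
    return h_y - h_y_given_x

def min_entropy_leakage(pi, C):
    """log2(posterior vulnerability / prior vulnerability)."""
    prior_v = max(pi)
    post_v = sum(max(pi[x] * C[x][y] for x in range(len(pi)))
                 for y in range(len(C[0])))
    return math.log2(post_v / prior_v)

# Example: a binary channel that reveals its input with probability 0.9.
pi = [0.5, 0.5]
C = [[0.9, 0.1], [0.1, 0.9]]
print(shannon_mutual_information(pi, C))  # ~0.531 bits
print(min_entropy_leakage(pi, C))         # ~0.848 bits
```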

    Cautiously Optimistic Program Analyses for Secure and Reliable Software

    Modern computer systems still have various security and reliability vulnerabilities. Well-known dynamic analysis solutions can mitigate them using runtime monitors that serve as lifeguards. But the additional work in enforcing these security and safety properties incurs exorbitant performance costs, and such tools are rarely used in practice. Our work addresses this problem by constructing a novel technique: Cautiously Optimistic Program Analysis (COPA). COPA is optimistic: it infers likely program invariants from dynamic observations, and assumes them in its static reasoning to precisely identify and elide wasteful runtime monitors. The resulting system is fast, but also ensures soundness by recovering to a conservatively optimized analysis on the rare occasions when a likely invariant fails at runtime. COPA is also cautious: by carefully restricting optimizations to only safe elisions, the recovery is greatly simplified. It avoids unbounded rollbacks upon recovery, thereby enabling analysis for live production software. We demonstrate the effectiveness of Cautiously Optimistic Program Analyses in three areas. Information-Flow Tracking (IFT) can help prevent security breaches and information leaks, but it is rarely used in practice due to its high performance overhead (>500% for web/email servers). COPA dramatically reduces this cost by eliding wasteful IFT monitors to make it practical (9% overhead, 4x speedup). Automatic Garbage Collection (GC) in managed languages (e.g. Java) simplifies programming tasks while ensuring memory safety. However, there is no correct GC for weakly-typed languages (e.g. C/C++), and manual memory management is prone to errors that have been exploited in high-profile attacks. We develop the first sound GC for C/C++, and use COPA to optimize its performance (16% overhead). Sequential Consistency (SC) provides intuitive semantics for concurrent programs that simplifies reasoning about their correctness. However, ensuring SC behavior on commodity hardware remains expensive. We use COPA to ensure SC for Java at the language level efficiently, and significantly reduce its cost (from 24% down to 5% on x86). COPA provides a way to realize strong software security, reliability and semantic guarantees at practical costs. PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/170027/1/subarno_1.pd
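    The core optimistic/cautious pattern can be pictured with a small hedged sketch (names and structure below are hypothetical illustrations, not the system described in the thesis): a cheap guard checks a profiled invariant, the expensive monitor is elided on the fast path, and a failed guard falls back to the fully monitored path with no rollback.

```python
# Hypothetical sketch of the COPA idea: elide an expensive runtime monitor
# under a likely invariant inferred from profiling, but keep a cheap guard
# that falls back to the conservative, fully monitored path if the
# invariant ever fails at runtime.

def expensive_monitor_check(x):
    """Stand-in for a costly dynamic check (e.g., full taint propagation)."""
    pass

def do_work(x):
    return x

def likely_invariant_holds(x):
    """Cheap guard for a profiled invariant, e.g., 'x is never tainted here'."""
    return not getattr(x, "tainted", False)

def monitored_op(x):
    expensive_monitor_check(x)       # conservative path: always monitored
    return do_work(x)

def optimized_op(x):
    if likely_invariant_holds(x):    # invariant held in all profiled runs
        return do_work(x)            # monitor safely elided on the fast path
    return monitored_op(x)           # sound recovery: fall back, no rollback needed

print(optimized_op(42))
```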

    Match It or Die: Proving Integrity by Equality

    Cryptographic hash functions are commonly used as modification detection codes. The goal is to provide message integrity assurance by comparing the digest of the original message with the hash of what is thought to be the intended message. This paper generalizes this idea by applying it to general expressions instead of just digests: success of an equality test between a tainted datum and a trusted one can be seen as a proof of high integrity for the former. Secure usage of hash functions is also studied with respect to the confidentiality of digests, by extending the secret-sensitive noninterference of Demange and Sands.
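    A minimal sketch of the "integrity by equality" intuition (hypothetical code, not the paper's formal system): a tainted value whose digest matches a trusted reference digest can be endorsed as high integrity.

```python
# Sketch: endorse tainted data only if its hash equals a trusted digest.
import hashlib

# Trusted (high-integrity) reference digest, e.g., shipped with the program.
TRUSTED_DIGEST = hashlib.sha256(b"expected configuration").hexdigest()

def endorse_if_match(tainted_bytes: bytes) -> bytes:
    """Return the data as high integrity only if its hash matches the trusted digest."""
    if hashlib.sha256(tainted_bytes).hexdigest() == TRUSTED_DIGEST:
        return tainted_bytes   # equality succeeded: treat as high integrity
    raise ValueError("integrity check failed: data remains tainted")

print(endorse_if_match(b"expected configuration"))
```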

    Quantitative information-flow tracking for real systems

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 99-105). An information-flow security policy constrains a computer system's end-to-end use of information, even as it is transformed in computation. For instance, a policy would not just restrict what secret data could be revealed directly, but restrict any output that might allow inferences about the secret. Expressing such a policy quantitatively, in terms of a specific number of bits of information, is often an effective program-independent way of distinguishing which scenarios should be allowed and which disallowed. This thesis describes a family of new techniques for measuring how much information about a program's secret inputs is revealed by its public outputs on a particular execution, in order to check a quantitative policy on realistic systems. Our approach builds on dynamic tainting, tracking at runtime which bits might contain secret information, and also uses static control-flow regions to soundly account for implicit flows via branches and pointer operations. We introduce a new graph model that bounds information flow by the maximum flow between inputs and outputs in a flow network representation of an execution. The flow bounds obtained with maximum flow are much more precise than those based on tainting alone (which is equivalent to graph reachability). The bounds are a conservative estimate of channel capacity: the amount of information that could be transmitted by an adversary making an arbitrary choice of secret inputs. We describe an implementation named Flowcheck, built using the Valgrind framework for x86/Linux binaries, and use it to perform case studies on six real C, C++, and Objective-C programs, three of which have more than 250,000 lines of code. We used the tool to check the confidentiality of a different kind of information appropriate to each program. Its results either verified that the information was appropriately kept secret on the examined executions, or revealed unacceptable leaks, in one case due to a previously unknown bug. By Stephen Andrew McCamant. Ph.D.
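    The max-flow bound can be illustrated with a toy reconstruction (not the Flowcheck implementation; the graph below is a hypothetical execution): edges carry capacities in bits, and the maximum flow from secret inputs to public outputs upper-bounds the leakage of that execution.

```python
# Toy sketch: bound leakage by the maximum flow from "secret" to "output"
# in a flow network built from one execution. Capacities are in bits.
import networkx as nx

G = nx.DiGraph()
# Hypothetical execution: a 32-bit secret feeds two operations, but the
# output only observes a 1-bit comparison and an 8-bit masked value.
G.add_edge("secret", "cmp", capacity=32)
G.add_edge("secret", "mask", capacity=32)
G.add_edge("cmp", "output", capacity=1)    # branch result reveals at most 1 bit
G.add_edge("mask", "output", capacity=8)   # low byte copied to output

leak_bound, _flow = nx.maximum_flow(G, "secret", "output")
print(f"information-flow bound: {leak_bound} bits")   # 9 bits
```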

    Quantifying leakage in the presence of unreliable sources of information

    Belief and min-entropy leakage are two well-known approaches to quantifying information flow in security systems. Both concepts stand as alternatives to the traditional approaches founded on Shannon entropy and mutual information, which were shown to provide inadequate security guarantees. In this paper we unify the two concepts in one model so as to cope with the (potentially inaccurate, misleading or outdated) side information that attackers frequently hold about individuals on social networks, online forums, blogs and other forms of online communication and information sharing. To this end we propose a new metric based on min-entropy that takes into account the adversary's beliefs.
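    A hedged sketch of the underlying intuition (not the paper's exact metric; the distributions below are invented): an adversary who guesses according to an inaccurate belief may succeed far less often than vulnerability computed against the true prior would suggest.

```python
# Sketch: adversary's one-guess success under the true prior pi versus
# under an unreliable belief. Distributions are illustrative only.

def guessing_success(pi, belief):
    """Probability the adversary's single best guess (chosen via its belief)
    is correct under the true distribution pi."""
    guess = max(belief, key=belief.get)   # guess the secret believed most likely
    return pi[guess]

pi     = {"alice": 0.7, "bob": 0.2, "carol": 0.1}   # true prior over secrets
belief = {"alice": 0.1, "bob": 0.6, "carol": 0.3}   # outdated side information

print(guessing_success(pi, pi))       # 0.7: accurate belief
print(guessing_success(pi, belief))   # 0.2: unreliable belief hurts the adversary
```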

    Secure Information Flow via Stripping and Fast Simulation

    Type systems for secure information flow aim to prevent a program from leaking information from H (high) to L (low) variables. Traditionally, bisimulation has been the prevalent technique for proving the soundness of such systems. This work introduces a new proof technique based on stripping and fast simulation, and shows that it can be applied in a number of cases where bisimulation fails. We present a progressive development of this technique over a representative sample of languages, including a simple imperative language (core theory), a multiprocessing nondeterministic language, a probabilistic language, and a language with cryptographic primitives. In the core theory we illustrate the key concepts of this technique in a basic setting. A fast low simulation in the context of transition systems is a binary relation where simulating states can match the moves of simulated states while maintaining the equivalence of low variables; stripping is a function that removes high commands from programs. We show that we can prove secure information flow by arguing that the stripping relation is a fast low simulation. We then extend the core theory to an abstract distributed language under a nondeterministic scheduler. Next, we extend to a probabilistic language with a random assignment command; we generalize fast simulation to the setting of discrete-time Markov chains, and prove approximate probabilistic noninterference. Finally, we introduce cryptographic primitives into the probabilistic language and prove computational noninterference, provided that the underlying encryption scheme is secure.
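    The stripping function is easy to picture with a small hedged sketch (the toy program representation below is hypothetical, not the paper's formal language): stripping removes the commands that only touch high variables while leaving low behaviour intact.

```python
# Sketch: "stripping" removes assignments to H (high) variables from a toy
# program given as a list of (target, expression) pairs; variables whose
# names start with "h" are treated as high.

def is_high(var: str) -> bool:
    return var.startswith("h")

def strip(program):
    """Remove assignments to high variables; keep low assignments unchanged."""
    return [(tgt, expr) for (tgt, expr) in program if not is_high(tgt)]

prog = [("h_key", "random()"), ("l_out", "l_in + 1"), ("h_tmp", "h_key * 2")]
print(strip(prog))   # [('l_out', 'l_in + 1')]
```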

    Fine-grained reasoning about the security and usability trade-off in modern security tools

    Defense techniques detect or prevent attacks based on their ability to model the attacks. A balance between security and usability should always be established in any kind of defense technique. Attacks that exploit the weak points in security tools are very powerful and thus can go undetected. One source of those weak points arises when security is compromised for usability reasons: if a security tool completely secures a system against attacks, the whole system becomes unusable because of the large number of false alarms or the very restrictive policies it creates; if instead the tool decides not to secure the system against certain attacks, those attacks simply and easily succeed. The key contribution of this dissertation is that it digs deeply into modern security tools and reasons about the inherent security and usability trade-offs based on identifying the low-level factors contributing to known issues. This is accomplished by implementing full systems and then testing those systems in realistic scenarios. The thesis that this dissertation tests is that we can reason about security and usability trade-offs in fine-grained ways by building and testing full systems. Furthermore, this dissertation provides practical solutions and suggestions for reaching a good balance between security and usability. We study two modern security tools, Dynamic Information Flow Tracking (DIFT) and antivirus (AV) software, for their importance and wide usage. DIFT is a powerful technique that is used in various aspects of security systems. It works by tagging certain inputs and propagating the tags along with the inputs in the target system. However, current DIFT systems do not track implicit information flow, because if all DIFT propagation rules are directly applied in a conservative way, the target system will be full of tagged data (a problem called overtagging) and thus useless, because the tags tell us very little about the actual information flow of the system. So, current DIFT systems drop some security for usability. In this dissertation, we reason about the sources of the overtagging problem and provide practical ways to deal with it, while previous approaches have focused on abstract descriptions of the main causes of the problem based on limited experiments. The second security tool we consider in this dissertation is antivirus (AV) software. AV is a very important tool that protects systems against worms and viruses by scanning data against a database of signatures. Despite its importance and wide usage, AV has received little attention from the security research community. In this dissertation, we examine the AV internals and reason about the possibility of creating timing channel attacks against AV software. The attacker could infer information about the AV based only on the scanning time the AV spends on benign inputs. The other aspect of AV this dissertation explores is the low-level AV performance impact on systems. Even though the performance overhead of AV is a well-known issue, the exact reasons behind this overhead are not well studied. In this dissertation, we design a methodology that utilizes Event Tracing for Windows (ETW), a technology that accounts for all OS events, to reason about AV performance impact from the OS point of view. We show that the main performance impact of the AV on a task is the extra time the task spends waiting on events.
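    The overtagging problem can be sketched in a few lines (a hypothetical, simplified propagation model, not the dissertation's DIFT system): applying a conservative implicit-flow rule, where everything assigned under a tainted branch inherits the branch's tag, quickly marks most of the program state as tainted even when the assigned values are independent of the secret.

```python
# Sketch of conservative implicit-flow propagation and why it overtags.
tags = {"secret": True, "x": False, "y": False, "z": False}

def assign(dst, srcs, pc_tag=False):
    # Explicit flow: dst gets the union of the source tags.
    # Conservative implicit flow: dst also inherits the branch (pc) tag.
    tags[dst] = any(tags[s] for s in srcs) or pc_tag

# if (secret > 0) { x = 1; y = 2; }  -- constants independent of the secret,
# but the conservative rule tags them anyway.
pc_tag = tags["secret"]
assign("x", [], pc_tag)
assign("y", [], pc_tag)
assign("z", ["x", "y"])   # the spurious tags then spread further
print(tags)               # x, y and z are all tagged
```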