Declassification of Faceted Values in JavaScript
This research addresses the challenge of protecting sensitive information at the language level using information flow control (IFC) mechanisms. Most IFC mechanisms struggle to release sensitive information in a restricted or limited manner. This research uses faceted values, an IFC mechanism that has shown promising flexibility for downgrading confidential information in a secure manner, a process also called declassification.
In this project, we introduce the concept of first-class labels to simplify the declassification of faceted values. To validate the utility of our approach, we show how the combination of faceted values and first-class labels can be used to build various declassification mechanisms.
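To make the idea concrete, here is a minimal sketch of faceted values with first-class labels. All names (`Faceted`, `declassify`, the label `k`) are illustrative inventions, not the project's actual API:

```typescript
// A label identifying who may see the private facet of a value.
type Label = { name: string };

// A faceted value pairs a private facet (visible to observers authorized
// for the label) with a public facet (visible to everyone else).
class Faceted<T> {
  constructor(
    readonly label: Label,
    readonly priv: T,
    readonly pub: T,
  ) {}

  // Project the value for a given set of authorized labels.
  view(authorized: Set<Label>): T {
    return authorized.has(this.label) ? this.priv : this.pub;
  }
}

// With labels as first-class runtime values, declassification is simply
// copying the private facet into the public facet for that label.
function declassify<T>(fv: Faceted<T>, l: Label): Faceted<T> {
  return l === fv.label ? new Faceted(fv.label, fv.priv, fv.priv) : fv;
}

const k: Label = { name: "k" };
const nobody = new Set<Label>();
const salary = new Faceted(k, 70000, 0);

console.log(salary.view(new Set([k]))); // authorized observer sees 70000
console.log(salary.view(nobody));       // public observer sees 0
console.log(declassify(salary, k).view(nobody)); // after declassification: 70000
```

Because the label is an ordinary value, it can be passed to `declassify` like any other argument, which is what simplifies building richer declassification mechanisms on top.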
Deductive Verification of Cryptographic Software
We report on the application of an off-the-shelf verification platform to the RC4 stream cipher cryptographic software implementation (as available in the OpenSSL library), and introduce a deductive verification technique based on self-composition for proving the absence of error propagation.
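The core idea of self-composition is to reduce a two-run property ("runs agreeing on public inputs produce equal public outputs") to an ordinary assertion about a single composed program. A toy sketch of the shape, with our own illustrative names rather than the paper's tool setup:

```typescript
// Toy program under analysis: its observable output should depend only on
// the public input, not on the secret.
function program(pub: number, secret: number): number {
  return pub * 2; // a leaky variant, e.g. `pub * 2 + (secret & 1)`, would fail
}

// Self-composition: run the program on two states that share the public
// input but differ in the secret, and assert the observable results agree.
// A deductive verifier discharges this assertion for ALL inputs; here we
// can only test sample points.
function selfComposed(pub: number, secret1: number, secret2: number): boolean {
  return program(pub, secret1) === program(pub, secret2);
}

console.log(selfComposed(3, 10, 99)); // true: the secret does not influence the output
```

The same composition pattern underlies proofs of the absence of error propagation: one asks whether a fault injected into one copy can change the observable behavior relative to the other.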
Information Security as Strategic (In)effectivity
Security of information flow is commonly understood as preventing any information leakage, regardless of how grave or harmless the consequences of the leakage may be. In this work, we suggest that information security is not a goal in itself, but rather a means of preventing potential attackers from compromising the correct behavior of the system. To formalize this, we first show how two information flows can be compared by looking at the adversary's ability to harm the system. Then, we propose that the information flow in a system is effectively information-secure if it does not allow for more harm than its idealized variant based on the classical notion of noninterference.
Output-sensitive Information flow analysis
Constant-time programming is a countermeasure against cache-based attacks, under which programs must not perform memory accesses that depend on secrets. In some cases this policy can be safely relaxed if one can prove that the program does not leak more information than the public outputs of the computation. We propose a novel approach for verifying constant-time programming based on a new information flow property called output-sensitive noninterference. Noninterference states that a public observer cannot learn anything about the private data. Since real systems need to intentionally declassify some information, this property is too strong in practice. To take public outputs into account we proceed as follows: instead of using complex explicit declassification policies, we partition variables into three sets: input, output, and leakage variables. We then propose a typing system to statically check that leakage variables do not leak more information about the secret inputs than the public normal output does. The novelty of our approach is that we track the dependence of leakage variables not only with respect to the initial values of input variables (as in classical approaches to noninterference), but also with respect to the final values of output variables. We adapted this approach to LLVM IR and developed a prototype to verify LLVM implementations.
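The distinction between strict constant-time and output-sensitive noninterference can be illustrated with a password check. This is our own toy example, not one of the paper's LLVM benchmarks:

```typescript
// Constant-time comparison: the loop always scans the full (public) length,
// so its memory accesses and iteration count do not depend on the secret.
function equalCT(a: Uint8Array, b: Uint8Array): boolean {
  if (a.length !== b.length) return false; // lengths are public
  let diff = 0;
  for (let i = 0; i < a.length; i++) diff |= a[i] ^ b[i];
  return diff === 0;
}

// Branching on `ok` violates strict constant-time (a secret-dependent branch),
// but it is fine under output-sensitive noninterference: the branch condition
// (a leakage variable) reveals only the declared public output, nothing more.
function login(secret: Uint8Array, guess: Uint8Array): string {
  const ok = equalCT(secret, guess);
  if (ok) {
    return "welcome";
  }
  return "denied";
}

const secret = new Uint8Array([1, 2, 3]);
console.log(login(secret, new Uint8Array([1, 2, 3]))); // "welcome"
console.log(login(secret, new Uint8Array([1, 2, 4]))); // "denied"
```

By contrast, an early-exit comparison that returns at the first mismatching byte would leak the position of the mismatch, which is strictly more than the boolean output, and should be rejected.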
Existential Types for Relaxed Noninterference
Information-flow security type systems ensure confidentiality by enforcing noninterference: a program cannot leak private data to public channels. However, in practice, programs need to selectively declassify information about private data. Several approaches have provided a notion of relaxed noninterference supporting selective and expressive declassification while retaining a formal security property. The labels-as-functions approach provides relaxed noninterference by means of declassification policies expressed as functions. The labels-as-types approach expresses declassification policies using type abstraction and faceted types, a pair of types representing the secret and public facets of a value. The original proposal of labels-as-types is formulated in an object-oriented setting where type abstraction is realized by subtyping. The object-oriented approach, however, suffers from limitations due to its receiver-centric paradigm.
In this work, we consider an alternative approach to labels-as-types, applicable in non-object-oriented languages, which allows us to express advanced declassification policies, such as extrinsic policies, based on a different form of type abstraction: existential types. An existential type exposes abstract types and operations on them; we leverage this abstraction mechanism to express secrets that can be declassified using the provided operations. We formalize the approach in a core functional calculus with existential types, define existential relaxed noninterference, and prove that well-typed programs satisfy this form of type-based relaxed noninterference.
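To give a flavor of the mechanism, here is a sketch of the existential encoding "∃T. (T, T → public)". TypeScript has no primitive existential types, so we use the standard rank-2 consumer encoding; the encoding and all names are ours, not the paper's calculus:

```typescript
// A package hides the secret's type T, exposing only the operations the
// declassification policy permits. Clients must work uniformly in T, so
// they can only use the provided operation -- that is the abstraction.
interface SecretPackage {
  unpack<A>(k: <T>(secret: T, lastFour: (s: T) => string) => A): A;
}

// Pack a credit-card number, declassifiable only to its last four digits.
function packCard(card: string): SecretPackage {
  return {
    unpack: (k) => k(card, (s: string) => s.slice(-4)),
  };
}

const pkg = packCard("4242424242424242");

// The only way to learn anything about the secret is through the
// permitted operation; the raw string is statically unreachable.
const visible = pkg.unpack((secret, lastFour) => lastFour(secret));
console.log(visible); // "4242"
```

Because the client sees the secret only at the abstract type `T`, any well-typed use is forced through `lastFour`, which is exactly the declassification policy the package author chose.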
Understanding and Enforcing Opacity
This paper puts a spotlight on the specification and enforcement of opacity, a security policy for protecting sensitive properties of system behavior. We illustrate the fine granularity of the opacity policy with location privacy and privacy-preserving aggregation scenarios. We present a framework for opacity and explore its key differences and formal connections with such well-known information-flow models as noninterference, knowledge-based security, and declassification. Our results are machine-checked and parameterized in the observational power of the attacker, including progress-insensitive, progress-sensitive, and timing-sensitive attackers. We present two approaches to enforcing opacity: a whitebox monitor and a blackbox sampling-based enforcement. We report on experiments with prototypes that utilize state-of-the-art Satisfiability Modulo Theories (SMT) solvers and the random testing tool QuickCheck to establish opacity for the location and aggregation-based scenarios.
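The intuition behind a blackbox, sampling-based opacity check can be sketched as follows. A secret predicate is opaque if every observation produced by a run satisfying it is also produced by some run violating it, so the observer can never conclude the predicate holds. This is our own toy rendering, not the paper's QuickCheck-based tool:

```typescript
type Secret = number;
type Observation = string;

// Toy system: the observable output is the parity of the secret.
function system(s: Secret): Observation {
  return s % 2 === 0 ? "even" : "odd";
}

// Sensitive property the attacker should not be able to establish.
const phi = (s: Secret) => s === 42;

// Sampling check: for each sampled phi-secret, look for a sampled
// non-phi-secret producing the same observation. If one always exists,
// the samples provide no counterexample to opacity of phi.
function sampledOpaque(samples: Secret[]): boolean {
  return samples
    .filter(phi)
    .every((s) => samples.some((t) => !phi(t) && system(t) === system(s)));
}

console.log(sampledOpaque([41, 42, 43, 44])); // true: 44 also looks "even"
console.log(sampledOpaque([42]));             // false: no cover for 42
```

A whitebox monitor would instead track the set of secrets consistent with the observations seen so far and block any step that would shrink that set to phi-satisfying secrets only.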
Type Abstraction for Relaxed Noninterference
Information-flow security typing statically prevents confidential information from leaking to public channels. The fundamental information flow property, known as noninterference, states that a public observer cannot learn anything about private data. As attractive as it is from a theoretical viewpoint, noninterference is impractical: real systems need to intentionally declassify some information, selectively. Among the different information flow approaches to declassification, a particularly expressive approach was proposed by Li and Zdancewic, enforcing a notion of relaxed noninterference by allowing programmers to specify declassification policies that capture the intended manner in which public information can be computed from private data.
This paper shows how we can exploit the familiar notion of type abstraction to support expressive declassification policies in a simpler, yet more expressive manner. In particular, the type-based approach to declassification---which we develop in an object-oriented setting---addresses several issues and challenges with respect to prior work, including a simple notion of label ordering based on subtyping, support for recursive declassification policies, and a local, modular reasoning principle for relaxed noninterference. This work paves the way for integrating declassification policies in practical security-typed languages.
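The labels-as-types idea, with label ordering as subtyping, can be sketched in a few lines. This is our own rendering in TypeScript, not the paper's object-oriented calculus:

```typescript
// Full (private) interface of a secret string.
interface StringLike {
  eq(other: string): boolean;
  hash(): number;
}

// Declassification policy "a password may only be compared for equality":
// the policy is just a supertype exposing the permitted method. Subtyping
// gives label ordering for free: fewer methods = more secret.
type PasswordPolicy = Pick<StringLike, "eq">;

class Str implements StringLike {
  constructor(private readonly s: string) {}
  eq(other: string): boolean {
    return this.s === other;
  }
  hash(): number {
    // Simple 31-based rolling hash for illustration.
    let h = 0;
    for (const c of this.s) h = (h * 31 + c.charCodeAt(0)) | 0;
    return h;
  }
}

// Typing the secret at the policy type IS the security label: public code
// may call eq, but the raw string and hash are statically unreachable.
const password: PasswordPolicy = new Str("hunter2");
console.log(password.eq("hunter2")); // true: a permitted declassification
// password.hash();  // rejected by the type checker: not in the policy
```

Enforcement here is purely static, which is what enables the local, modular reasoning principle: a client typed against `PasswordPolicy` can, by construction, compute from the secret only what the policy permits.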