A Logical Method for Policy Enforcement over Evolving Audit Logs
We present an iterative algorithm for enforcing policies represented in a
first-order logic, which can, in particular, express all transmission-related
clauses in the HIPAA Privacy Rule. The logic has three features that raise
challenges for enforcement --- uninterpreted predicates (used to model
subjective concepts in privacy policies), real-time temporal properties, and
quantification over infinite domains (such as the set of messages containing
personal information). The algorithm operates over audit logs that are
inherently incomplete and evolve over time. In each iteration, the algorithm
provably checks as much of the policy as possible over the current log and
outputs a residual policy that can only be checked when the log is extended
with additional information. We prove correctness and termination properties of
the algorithm. While these results are developed in a general form, accounting
for many different sources of incompleteness in audit logs, we also prove that
for the special case of logs that maintain a complete record of all relevant
actions, the algorithm effectively enforces all safety and co-safety
properties. The algorithm can significantly help automate enforcement of
policies derived from the HIPAA Privacy Rule.
Comment: Carnegie Mellon University CyLab Technical Report. 51 pages.
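As a rough illustration of the iterative checking idea, the sketch below reduces a toy propositional policy against an incomplete log and returns a residual to be re-checked once more entries arrive. It is a minimal sketch under strong simplifying assumptions: the names Atom, And, Or, and reduce, and the three-valued log encoding, are invented for illustration and do not reflect the paper's first-order temporal logic.

# Minimal sketch of iterative policy checking with residuals over an
# evolving audit log. Illustrative only: a propositional fragment with
# three-valued atoms, not the paper's first-order temporal logic.
from dataclasses import dataclass
from typing import Union

TRUE, FALSE, UNKNOWN = "true", "false", "unknown"  # three-valued log entries

@dataclass
class Atom:
    name: str            # e.g. "patient_notified(msg42)"

@dataclass
class And:
    left: "Policy"
    right: "Policy"

@dataclass
class Or:
    left: "Policy"
    right: "Policy"

Policy = Union[Atom, And, Or, bool]

def reduce(policy: Policy, log: dict) -> Policy:
    """Check as much of the policy as the current log allows and
    return the residual that still needs future log entries."""
    if isinstance(policy, bool):
        return policy
    if isinstance(policy, Atom):
        value = log.get(policy.name, UNKNOWN)   # incomplete log: may be unknown
        if value == UNKNOWN:
            return policy                       # defer to a later iteration
        return value == TRUE
    if isinstance(policy, And):
        l, r = reduce(policy.left, log), reduce(policy.right, log)
        if l is False or r is False:
            return False                        # definite violation
        if l is True:
            return r
        if r is True:
            return l
        return And(l, r)
    if isinstance(policy, Or):
        l, r = reduce(policy.left, log), reduce(policy.right, log)
        if l is True or r is True:
            return True                         # definitely satisfied
        if l is False:
            return r
        if r is False:
            return l
        return Or(l, r)

# Each audit iteration re-runs reduce on the residual with the extended log.
residual = And(Atom("disclosure_logged(msg42)"), Atom("patient_notified(msg42)"))
log_v1 = {"disclosure_logged(msg42)": TRUE}
residual = reduce(residual, log_v1)             # -> Atom("patient_notified(msg42)")
log_v2 = {**log_v1, "patient_notified(msg42)": TRUE}
print(reduce(residual, log_v2))                 # -> True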
Shai: Enforcing Data-Specific Policies with Near-Zero Runtime Overhead
Data retrieval systems such as online search engines and online social
networks must comply with the privacy policies of personal and selectively
shared data items, regulatory policies regarding data retention and censorship,
and the provider's own policies regarding data use. Enforcing these policies is
difficult and error-prone. Systematic techniques to enforce policies are either
limited to type-based policies that apply uniformly to all data of the same
type, or incur significant runtime overhead.
This paper presents Shai, the first system that systematically enforces
data-specific policies with near-zero overhead in the common case. Shai's key
idea is to push as many policy checks as possible to an offline, ahead-of-time
analysis phase, often relying on predicted values of runtime parameters such as
the state of access control lists or connected users' attributes. Runtime
interception is used sparingly, only to verify these predictions and to make
any remaining policy checks. Our prototype implementation relies on efficient,
modern OS primitives for sandboxing and isolation. We present the design of
Shai and quantify its overheads on an experimental data indexing and search
pipeline based on the popular search engine Apache Lucene.
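The sketch below illustrates the predict-then-verify pattern in miniature: a policy decision is precomputed offline under a predicted ACL snapshot, and the runtime only checks that the snapshot still matches before serving. All names (full_policy_check, serve, the ACL dictionaries) are hypothetical; Shai itself enforces predictions with OS-level sandboxing and isolation rather than application-level code.

# Minimal sketch of the "predict offline, verify cheaply at runtime" idea.
# Hypothetical data model; the real system works at the OS/sandbox level.

# Offline, ahead-of-time phase: evaluate the policy under a predicted
# snapshot of runtime state (here, an access-control list).
def full_policy_check(user: str, item: str, acl: dict) -> bool:
    return user in acl.get(item, set())          # stand-in for an expensive check

predicted_acl = {"doc1": {"alice", "bob"}}
precomputed = {
    (user, item): full_policy_check(user, item, predicted_acl)
    for item in predicted_acl
    for user in ("alice", "bob", "carol")
}

# Online phase: the common case only verifies that the prediction still
# holds; the expensive check runs only when the prediction was wrong.
def serve(user: str, item: str, current_acl: dict):
    if current_acl.get(item) == predicted_acl.get(item):      # prediction valid
        allowed = precomputed[(user, item)]                    # near-zero overhead
    else:
        allowed = full_policy_check(user, item, current_acl)   # slow path
    return f"{item} for {user}" if allowed else None

print(serve("alice", "doc1", {"doc1": {"alice", "bob"}}))   # fast path, allowed
print(serve("carol", "doc1", {"doc1": {"alice", "carol"}})) # ACL changed: slow path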
Generalizing Permissive-Upgrade in Dynamic Information Flow Analysis
Preventing implicit information flows by dynamic program analysis requires
coarse approximations that result in false positives, because a dynamic monitor
sees only the executed trace of the program. One widely deployed method is the
no-sensitive-upgrade check, which terminates a program whenever a variable's
taint is upgraded (made more sensitive) due to a control dependence on tainted
data. Although sound, this method is restrictive, e.g., it terminates the
program even if the upgraded variable is never used subsequently. To counter
this, Austin and Flanagan introduced the permissive-upgrade check, which allows
a variable upgrade due to control dependence, but marks the variable
"partially-leaked". The program is stopped later if it tries to use the
partially-leaked variable. Permissive-upgrade handles the dead-variable
assignment problem and remains sound. However, Austin and Flanagan develop
permissive-upgrade only for a two-point (low-high) security lattice and
indicate a generalization to pointwise products of such lattices. In this
paper, we develop a non-trivial and non-obvious generalization of
permissive-upgrade to arbitrary lattices. The key difficulty lies in finding a
suitable notion of partial leaks that is both sound and permissive and in
developing a suitable definition of memory equivalence that allows an inductive
proof of soundness.
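A minimal sketch of the two monitors discussed above, restricted to the two-point lattice: the NSU monitor terminates on an upgrade under a tainted program counter, while the permissive-upgrade monitor marks the variable partially leaked and stops only on a later use. The Monitor class, label names, and join function are illustrative; the paper's actual contribution, the generalization to arbitrary lattices, is not captured here.

# Minimal sketch of no-sensitive-upgrade (NSU) vs. permissive-upgrade on a
# two-point lattice (L < H), with a third marker P for "partially leaked".
# Illustrative only; the paper generalizes this to arbitrary lattices.

L, H, P = "L", "H", "P"   # low, high, partially leaked

class MonitorError(Exception):
    pass

def join(a: str, b: str) -> str:
    return H if H in (a, b) else L

class Monitor:
    def __init__(self, permissive: bool):
        self.permissive = permissive
        self.labels = {}          # variable -> label
        self.pc = L               # label of the current control context

    def assign(self, var: str, value_label: str):
        target = self.labels.get(var, L)
        if self.pc == H and target == L:
            if not self.permissive:
                raise MonitorError("NSU: upgrade under high pc")   # terminate
            self.labels[var] = P                                   # mark, don't stop
        else:
            self.labels[var] = join(self.pc, value_label)

    def use(self, var: str):
        if self.labels.get(var, L) == P:
            raise MonitorError("use of partially leaked variable")  # stop on use
        return self.labels.get(var, L)

# Dead-variable assignment: NSU stops the program, permissive-upgrade does not.
nsu, pu = Monitor(permissive=False), Monitor(permissive=True)
pu.pc = nsu.pc = H            # inside a branch on secret data
pu.assign("x", L)             # x becomes partially leaked, execution continues
try:
    nsu.assign("x", L)        # NSU terminates here
except MonitorError as e:
    print("NSU:", e)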
Information Flow Control in WebKit's JavaScript Bytecode
Websites today routinely combine JavaScript from multiple sources, both
trusted and untrusted. Hence, JavaScript security is of paramount importance. A
specific interesting problem is information flow control (IFC) for JavaScript.
In this paper, we develop, formalize and implement a dynamic IFC mechanism for
the JavaScript engine of a production Web browser (specifically, Safari's
WebKit engine). Our IFC mechanism works at the level of JavaScript bytecode and
hence leverages years of industrial effort on optimizing both the source to
bytecode compiler and the bytecode interpreter. We track both explicit and
implicit flows and observe only moderate overhead. Working with bytecode
results in new challenges including the extensive use of unstructured control
flow in bytecode (which complicates lowering of program context taints),
unstructured exceptions (which complicate the matter further) and the need to
make IFC analysis permissive. We explain how we address these challenges,
formally model the JavaScript bytecode semantics and our instrumentation, prove
the standard property of termination-insensitive non-interference, and present
experimental results on an optimized prototype.
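The following toy interpreter sketches the underlying program-counter (pc) label technique for unstructured control flow: each branch pushes a control label paired with a join point taken from a precomputed immediate post-dominator table, and every store joins the current pc label into the target's label. The instruction set, the ipd table, and the two-point lattice are assumptions for illustration and are not WebKit's bytecode or the paper's actual instrumentation.

# Minimal sketch of pc-label tracking for a toy, unstructured bytecode.
# Illustrative only: instruction names, the precomputed immediate
# post-dominator table (ipd), and the two-point lattice are assumptions.

L, H = 0, 1
join = max

# program: branch on a secret, then assign inside the tainted region
#  0  BRANCH secret -> 2        (immediate post-dominator is index 3)
#  1  STORE  x, 1
#  2  STORE  y, 1
#  3  STORE  z, 0
program = [
    ("BRANCH", "secret", 2),
    ("STORE", "x", 1),
    ("STORE", "y", 1),
    ("STORE", "z", 0),
]
ipd = {0: 3}                      # immediate post-dominator of each branch

labels = {"secret": H}            # variable -> label; default L
pc_stack = []                     # (control label, join point) entries

def pc_label():
    return max((lab for lab, _ in pc_stack), default=L)

ip = 0
while ip < len(program):
    # pop control contexts whose join point has been reached
    while pc_stack and pc_stack[-1][1] == ip:
        pc_stack.pop()
    op, a, b = program[ip]
    if op == "BRANCH":
        pc_stack.append((join(pc_label(), labels.get(a, L)), ipd[ip]))
        ip = b                                   # take the branch (toy choice)
    elif op == "STORE":
        labels[a] = join(pc_label(), L)          # implicit flow: join in pc label
        ip += 1

print(labels)   # {'secret': 1, 'y': 1, 'z': 0}: y picks up the secret context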
Types for Information Flow Control: Labeling Granularity and Semantic Models
Language-based information flow control (IFC) tracks dependencies within a
program using sensitivity labels and prohibits public outputs from depending on
secret inputs. In particular, literature has proposed several type systems for
tracking these dependencies. On one extreme, there are fine-grained type
systems (like Flow Caml) that label all values individually and track
dependence at the level of individual values. On the other extreme are
coarse-grained type systems (like HLIO) that track dependence coarsely, by
associating a single label with an entire computation context and not labeling
all values individually.
In this paper, we show that, despite their glaring differences, both these
styles are, in fact, equally expressive. To do this, we show a semantics- and
type-preserving translation from a coarse-grained type system to a fine-grained
one and vice-versa. The forward translation isn't surprising, but the backward
translation is: It requires a construct to arbitrarily limit the scope of a
context label in the coarse-grained type system (e.g., HLIO's "toLabeled"
construct). As a separate contribution, we show how to extend work on logical
relation models of IFC types to higher-order state. We build such logical
relations for both the fine-grained type system and the coarse-grained type
system. We use these relations to prove the two type systems and our
translations between them sound.
Comment: 31st IEEE Symposium on Computer Security Foundations (CSF 2018).
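As a rough illustration of the coarse-grained style and of why a scope-limiting construct matters, the sketch below keeps a single label on the whole computation context and provides a toLabeled-like operation that confines a raised label to the result value. It is written in Python purely for illustration; HLIO is a Haskell library, and its real API and typing discipline differ.

# Minimal sketch of the coarse-grained style: one "current label" for the
# whole computation context, plus a toLabeled-like construct that limits
# how far a raised label propagates. Illustrative only.

from dataclasses import dataclass

L, H = 0, 1

@dataclass
class Labeled:
    label: int
    value: object

class IFCContext:
    def __init__(self):
        self.current = L                 # single label on the whole context

    def unlabel(self, lv: Labeled):
        self.current = max(self.current, lv.label)   # taint the context
        return lv.value

    def to_labeled(self, thunk):
        """Run thunk in a sub-context; only its result carries the raised
        label, and the enclosing context label is restored afterwards."""
        saved = self.current
        result = thunk()
        lab, self.current = self.current, saved
        return Labeled(lab, result)

    def output_public(self, value):
        if self.current > L:
            raise RuntimeError("context too sensitive for a public output")
        print(value)

ctx = IFCContext()
secret = Labeled(H, 42)

boxed = ctx.to_labeled(lambda: ctx.unlabel(secret) + 1)  # context taint is scoped
ctx.output_public("still low after toLabeled")           # allowed: current == L
print(boxed.label)                                       # 1: result stays labeled H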