Shai: Enforcing Data-Specific Policies with Near-Zero Runtime Overhead
Data retrieval systems such as online search engines and online social
networks must comply with the privacy policies of personal and selectively
shared data items, regulatory policies regarding data retention and censorship,
and the provider's own policies regarding data use. Enforcing these policies is
difficult and error-prone. Systematic techniques to enforce policies are either
limited to type-based policies that apply uniformly to all data of the same
type, or incur significant runtime overhead.
This paper presents Shai, the first system that systematically enforces
data-specific policies with near-zero overhead in the common case. Shai's key
idea is to push as many policy checks as possible to an offline, ahead-of-time
analysis phase, often relying on predicted values of runtime parameters such as
the state of access control lists or connected users' attributes. Runtime
interception is used sparingly, only to verify these predictions and to perform
any remaining policy checks. Our prototype implementation relies on efficient,
modern OS primitives for sandboxing and isolation. We present the design of
Shai and quantify its overheads on an experimental data indexing and search
pipeline based on the popular search engine Apache Lucene.
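The predict-then-verify idea can be illustrated with a minimal sketch (all names here are hypothetical, not Shai's actual API): the offline phase evaluates a policy under a predicted snapshot of runtime state such as an ACL, and the runtime path only confirms that prediction before reusing the cached decision.

```python
# Minimal sketch of the predict-then-verify idea (hypothetical names).
# Offline: evaluate a data-specific policy under a predicted ACL snapshot.
# Runtime: if the live ACL still matches the prediction, reuse the cached
# decision; otherwise fall back to a full policy check.

def offline_analysis(predicted_acl, policy):
    """Offline phase: precompute the decision under predicted state."""
    return {"predicted_acl": frozenset(predicted_acl),
            "decision": policy(predicted_acl)}

def runtime_check(live_acl, cached, policy):
    """Runtime: cheap verification of the prediction; full check on a miss."""
    if frozenset(live_acl) == cached["predicted_acl"]:
        return cached["decision"]       # common case: near-zero overhead
    return policy(live_acl)             # prediction failed: full policy check

policy = lambda acl: "alice" in acl     # toy data-specific policy
cached = offline_analysis({"alice", "bob"}, policy)
print(runtime_check({"alice", "bob"}, cached, policy))  # True (fast path)
print(runtime_check({"bob"}, cached, policy))           # False (slow path)
```

In the real system the fast path is enforced by OS sandboxing primitives rather than an in-process comparison; the sketch only shows the division of work between the two phases.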
Towards an aspect weaving BPEL engine
This position paper proposes the use of dynamic aspects and
the visitor design pattern to obtain a highly configurable and
extensible BPEL engine. Using these two techniques, the
core of this infrastructural software can be customised to
meet new requirements and add features such as debugging,
execution monitoring, or changing to another Web Service
selection policy. Additionally, it can easily be extended to
cope with customer-specific BPEL extensions. We propose
the use of dynamic aspects not only on the engine itself
but also on the workflow in order to tackle the problems of
Web Service hot deployment and hot fixes to long running
processes. In this way, composing aWeb Service "on-the-fly"
means weaving its choreography interface into the workflow
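The combination of a visitor-based core with layered behaviour can be sketched as follows (hypothetical node and class names, not the paper's BPEL implementation): the engine core is a visitor over workflow activities, and a feature such as execution monitoring is added by deriving a new visitor rather than modifying the core.

```python
# Sketch of a visitor-pattern workflow core with a monitoring layer
# (hypothetical names; the paper targets BPEL activities).

class Invoke:
    """A workflow activity that calls one Web Service."""
    def __init__(self, service): self.service = service
    def accept(self, visitor): return visitor.visit_invoke(self)

class Sequence:
    """A workflow activity that runs its steps in order."""
    def __init__(self, *steps): self.steps = steps
    def accept(self, visitor): return visitor.visit_sequence(self)

class Executor:
    """Core engine behaviour: execute each activity."""
    def visit_invoke(self, node): return [f"call {node.service}"]
    def visit_sequence(self, node):
        out = []
        for step in node.steps:
            out.extend(step.accept(self))
        return out

class TracingExecutor(Executor):
    """A monitoring 'aspect' layered on the core without changing it."""
    def visit_invoke(self, node):
        return [f"trace {node.service}"] + super().visit_invoke(node)

wf = Sequence(Invoke("inventory"), Invoke("billing"))
print(wf.accept(TracingExecutor()))
# ['trace inventory', 'call inventory', 'trace billing', 'call billing']
```

Dynamic aspects go further than this static subclassing: they allow such a tracing layer to be woven in and removed while the engine is running, which is what enables hot fixes to long-running processes.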
A Build System for Benchmarking and Comparison of Competing System Implementations
When developing a hardware or software system, the problem at hand may lend itself to multiple solutions. During the implementation process for such systems, it can be helpful to prototype multiple versions that use distinct paradigms, and determine the efficiency of each according to some metric, such as execution time. This paper presents a portable, lightweight build system designed for easy benchmarking and verification of competing implementations of an algorithm. Also presented is a sample project that uses this system to compare the performance and correctness of CPU, GPU, and FPGA implementations of a signal recovery algorithm.
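The benchmark-and-verify workflow can be illustrated with a small harness (an illustrative sketch, not the paper's build system): each competing implementation is timed on the same input and its output checked against a reference result.

```python
# Illustrative harness: benchmark and verify competing implementations
# of the same algorithm (not the paper's actual build system).
import time

def bench(impls, case, reference):
    """Time each implementation and check its output against a reference."""
    results = {}
    for name, fn in impls.items():
        start = time.perf_counter()
        out = fn(case)
        elapsed = time.perf_counter() - start
        assert out == reference, f"{name} produced a wrong result"
        results[name] = elapsed
    return results

def insertion_sort(xs):
    """A deliberately naive competitor to the built-in sort."""
    for i in range(1, len(xs)):
        j = i
        while j > 0 and xs[j - 1] > xs[j]:
            xs[j - 1], xs[j] = xs[j], xs[j - 1]
            j -= 1
    return xs

impls = {
    "builtin": sorted,
    "insertion": lambda xs: insertion_sort(list(xs)),
}

case = [5, 3, 1, 4, 2]
times = bench(impls, case, reference=[1, 2, 3, 4, 5])
print(sorted(times, key=times.get))  # names ordered fastest to slowest
```

A real build system would additionally handle compilation of heterogeneous targets (CPU, GPU, FPGA), but the verify-then-time loop is the core idea.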
ISML: an interface specification meta-language
In this paper we present an abstract metaphor model situated within a model-based user interface framework. The inclusion of metaphors in graphical user interfaces is a well-established, but mostly craft-based, design strategy. A substantial body of notations and tools can be found within the model-based user interface design literature; however, an explicit treatment of metaphor and its mappings to other design views has yet to be addressed. We introduce the Interface Specification Meta-Language (ISML) framework and demonstrate its use in comparing the semantic and syntactic features of an interactive system. Challenges facing this research are outlined and further work is proposed.
Invariant discovery and refinement plans for formal modelling in Event-B
The continuous growth of complex systems makes the development of correct software
increasingly challenging. In order to address this challenge, formal methods offer rigorous
mathematical techniques to model and verify the correctness of systems. Refinement
is one of these techniques. By allowing a developer to incrementally introduce design
details, refinement provides a powerful mechanism for mastering the complexities that
arise when formally modelling systems. Here the focus is on a posit-and-prove style of
refinement, where a design is developed as a series of abstract models introduced via
refinement steps. Each refinement step generates proof obligations which must be discharged
in order to verify its correctness – typically requiring a user to understand the
relationship between modelling and reasoning.
This thesis focuses on techniques to aid refinement-based formal modelling, specifically,
when a user requires guidance in order to overcome a failed refinement step. An integrated
approach has been followed: combining the complementary strengths of bottom-up
theory formation, in which theories about domains are built based on basic background
information; and top-down planning, in which meta-level reasoning is used to guide the
search for correct models.
From the theory formation perspective, we developed a technique for the automatic discovery
of invariants. Refinement requires the definition of properties, called invariants,
which relate to the design. Formulating correct and meaningful invariants can be a
tedious and challenging task. A heuristic approach to the automatic discovery of invariants has
been developed building upon simulation, proof-failure analysis and automated theory
formation. This approach exploits the close interplay between modelling and reasoning
in order to provide systematic guidance in tailoring the search for invariants for a given
model.
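The role of simulation in invariant discovery can be shown with a toy sketch (illustrative only, not the thesis's tool): candidate relations over the state variables are proposed, and those falsified by any state reached during simulation are discarded.

```python
# Toy illustration of simulation-based invariant discovery
# (hypothetical model; the thesis works with Event-B machines).

def simulate(steps=20):
    """A toy machine: move tokens between two accounts of fixed total."""
    a, b = 10, 0
    states = [(a, b)]
    for _ in range(steps):
        move = min(1, a)            # move one token while any remain
        a, b = a - move, b + move
        states.append((a, b))
    return states

# Candidate invariants over the state variables a and b.
candidates = {
    "a + b == 10": lambda a, b: a + b == 10,   # holds: total is conserved
    "a > b":       lambda a, b: a > b,          # fails once b catches up
}

states = simulate()
invariants = [name for name, pred in candidates.items()
              if all(pred(a, b) for a, b in states)]
print(invariants)  # ['a + b == 10']
```

The thesis's approach is considerably richer, combining proof-failure analysis and automated theory formation to generate and rank candidates, but filtering candidates against simulated traces is the common starting point.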
From the planning perspective, we propose a new technique called refinement plans.
Refinement plans provide a basis for automatically generating modelling guidance when
a step fails but is close to a known pattern of refinement. This technique combines both
modelling and reasoning knowledge, and, contrary to traditional pattern techniques, allows
the analysis of failure and partial matching. Moreover, when the guidance is only partially
instantiated, and where suitable, refinement plans provide specialised knowledge to further
tailor the theory formation process in an attempt to fully instantiate the guidance.
We also report on a series of experiments undertaken in order to evaluate the approaches
and on the implementation of both techniques into prototype tools. We believe
the techniques presented here allow the developer to focus on design decisions rather than
on analysing low-level proof failures.
Comprehensive and Practical Policy Compliance in Data Retrieval Systems
Data retrieval systems such as online search engines and online social networks process many data items coming from different sources, each subject to its own data use policy. Ensuring compliance with these policies in a large and fast-evolving system presents a significant technical challenge, since bugs, misconfigurations, or operator errors can cause (accidental) policy violations. To prevent such violations, researchers and practitioners develop policy compliance systems. Existing policy compliance systems, however, are either not comprehensive or not practical.
To be comprehensive, a compliance system must be able to enforce users' policies regarding their personal privacy preferences, the service provider's own policies regarding data use such as auditing and personalization, and regulatory policies such as data retention and censorship. To be practical, a compliance system needs to meet stringent requirements: (1) runtime overhead must be low; (2) existing applications must run with few modifications; and (3) bugs, misconfigurations, or actions by unprivileged operators must not cause policy violations.
In this thesis, we present the design and implementation of two comprehensive and practical compliance systems: Thoth and Shai. Thoth relies on pure runtime monitoring: it tracks data flows by intercepting processes' I/O, and then it checks the associated policies to allow only policy-compliant flows at runtime. Shai, on the other hand, combines offline analysis and lightweight runtime monitoring: it pushes as many policy checks as possible to an offline (flow) analysis by predicting the policies that data-handling processes will be subject to at runtime, and then it compiles those policies into a set of fine-grained I/O capabilities that can be enforced directly by the underlying operating system.
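Pure runtime monitoring in the style attributed to Thoth can be sketched as a mediation layer (hypothetical API, for illustration only): every read passes through an interceptor that consults the policy attached to the data item before allowing the flow.

```python
# Minimal sketch of runtime I/O mediation (hypothetical API): a policy
# travels with each data item, and every read is checked against it.

class PolicyError(PermissionError):
    """Raised when a requested flow would violate the item's policy."""

class InterceptedStore:
    """Wraps a key-value store; allows only policy-compliant flows."""
    def __init__(self):
        self._data, self._policies = {}, {}

    def write(self, key, value, policy):
        self._data[key] = value
        self._policies[key] = policy    # policy stays bound to the item

    def read(self, key, principal):
        if not self._policies[key](principal):
            raise PolicyError(f"{principal} may not read {key}")
        return self._data[key]

store = InterceptedStore()
store.write("profile:alice", {"email": "a@example.org"},
            policy=lambda who: who in {"alice", "auditor"})
print(store.read("profile:alice", "alice"))    # allowed: alice is on the ACL
# store.read("profile:alice", "mallory")       # would raise PolicyError
```

The contrast with Shai is where the check happens: here every I/O pays the interception cost, whereas Shai precomputes most decisions offline and compiles them into OS-enforced capabilities so the common-case runtime path does no policy evaluation at all.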