Equivalence-based Security for Querying Encrypted Databases: Theory and Application to Privacy Policy Audits
Motivated by the problem of simultaneously preserving confidentiality and
usability of data outsourced to third-party clouds, we present two different
database encryption schemes that largely hide data but reveal enough
information to support a wide range of relational queries. We provide a
security definition for database encryption that captures confidentiality based
on a notion of equivalence of databases from the adversary's perspective. As a
specific application, we adapt an existing algorithm for finding violations of
privacy policies to run on logs encrypted under our schemes and observe low to
moderate overheads.
Comment: CCS 2015 paper technical report, in progress.
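The idea of revealing just enough information to support equality-based relational queries can be illustrated with deterministic pseudonymization. The sketch below is not the paper's scheme; it uses a keyed HMAC as a stand-in deterministic "encryption", so that equal plaintexts map to equal tokens and selections/joins on the hidden column still work (the key, column names, and log rows are invented for illustration):

```python
import hmac
import hashlib

def det_token(key: bytes, value: str) -> str:
    # Deterministic pseudonym: equal plaintexts yield equal tokens,
    # so equality selections and joins still work over hidden values.
    return hmac.new(key, value.encode(), hashlib.sha256).hexdigest()

key = b"demo-key"
log = [{"user": det_token(key, u), "action": a}
       for u, a in [("alice", "read"), ("bob", "write"), ("alice", "write")]]

# Equality query over the "encrypted" column: all actions by one (hidden) user.
alice_tok = det_token(key, "alice")
actions = [row["action"] for row in log if row["user"] == alice_tok]
print(actions)  # ['read', 'write']
```

Note the trade-off the abstract alludes to: determinism leaks equality patterns to the adversary, which is exactly the kind of disclosure a database-equivalence security definition must account for.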
JWalk: a tool for lazy, systematic testing of Java classes by design introspection and user interaction
Popular software testing tools, such as JUnit, allow frequent retesting of modified code; yet the manually created test scripts are often seriously incomplete. A unit-testing tool called JWalk has therefore been developed to address the need for systematic unit testing within the context of agile methods. The tool operates directly on the compiled code for Java classes and uses a new lazy method for inducing the changing design of a class on the fly. This is achieved partly through introspection, using Java’s reflection capability, and partly through interaction with the user, constructing and saving test oracles on the fly. Predictive rules reduce the number of oracle values that must be confirmed by the tester. Without human intervention, JWalk performs bounded exhaustive exploration of the class’s method protocols and may be directed to explore the space of algebraic constructions, or the intended design state-space of the tested class. With some human interaction, JWalk performs up to the equivalent of fully automated state-based testing, from a specification that was acquired incrementally.
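JWalk itself works on compiled Java classes via reflection; as a loose analogy only, the same "discover the protocol by introspection, then exhaustively explore bounded method sequences" idea can be sketched in Python with the `inspect` module. The `Stack` class and `explore` function are invented for illustration and are not part of JWalk:

```python
import inspect
from itertools import product

class Stack:
    def __init__(self): self._items = []
    def push(self, x=1): self._items.append(x)
    def pop(self): return self._items.pop() if self._items else None
    def size(self): return len(self._items)

def explore(cls, depth=2):
    # Discover the public protocol by introspection (cf. Java reflection).
    methods = [n for n, m in inspect.getmembers(cls, inspect.isfunction)
               if not n.startswith("_")]
    traces = {}
    # Bounded exhaustive exploration: every method sequence up to `depth`,
    # each run against a fresh object; the results are candidate oracles.
    for seq in product(methods, repeat=depth):
        obj = cls()
        traces[seq] = [getattr(obj, name)() for name in seq]
    return traces

traces = explore(Stack, depth=2)
print(traces[("push", "pop")])  # [None, 1]: push returns None, pop returns 1
```

In JWalk proper, the recorded results would be presented to the user for confirmation as oracles, with predictive rules reducing how many must be confirmed by hand.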
Explicit linear kernels via dynamic programming
Several algorithmic meta-theorems on kernelization have appeared in recent
years, starting with the result of Bodlaender et al. [FOCS 2009] on graphs of
bounded genus, then generalized by Fomin et al. [SODA 2010] to graphs excluding
a fixed minor, and by Kim et al. [ICALP 2013] to graphs excluding a fixed
topological minor. Typically, these results guarantee the existence of linear
or polynomial kernels on sparse graph classes for problems satisfying some
generic conditions but, mainly due to their generality, it is not clear how to
derive from them constructive kernels with explicit constants. In this paper we
make a step toward a fully constructive meta-kernelization theory on sparse
graphs. Our approach is based on a more explicit protrusion replacement
machinery that, instead of expressibility in CMSO logic, uses dynamic
programming, which allows us to find an explicit upper bound on the size of the
derived kernels. We demonstrate the usefulness of our techniques by providing
the first explicit linear kernels for $r$-Dominating Set and $r$-Scattered Set
on apex-minor-free graphs, and for Planar-$\mathcal{F}$-Deletion on graphs
excluding a fixed (topological) minor in the case where all the graphs in
$\mathcal{F}$ are connected.
Comment: 32 pages.
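For flavor, an explicit kernel with concrete constants can be shown on a much simpler, classical example: Buss's kernelization for Vertex Cover, which bounds the reduced instance by $k^2$ edges. This is emphatically not the paper's protrusion-replacement machinery, just a minimal illustration of what "explicit kernel" means; the function and its input are invented for the sketch:

```python
def buss_kernel(edges, k):
    # Classical Buss kernelization for Vertex Cover (illustrative only).
    # Rule: a vertex of degree > remaining budget must be in any cover.
    edges = set(map(frozenset, edges))
    forced = set()
    changed = True
    while changed:
        changed = False
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        for v, d in deg.items():
            if d > k - len(forced):
                forced.add(v)                       # take the high-degree vertex
                edges = {e for e in edges if v not in e}
                changed = True
                break
    if len(edges) > (k - len(forced)) ** 2:
        return None  # kernel too large: no vertex cover of size k exists
    return forced, edges

# Star graph: center 0 joined to leaves 1..5; budget k = 2.
forced, rest = buss_kernel([(0, i) for i in range(1, 6)], k=2)
print(forced, rest)  # {0} set(): the center is forced, nothing remains
```

The explicit size bound ($\leq (k - |{\rm forced}|)^2$ edges) is what the meta-theorems discussed above guarantee abstractly but, as the abstract notes, do not make constructive in general.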
The Homeostasis Protocol: Avoiding Transaction Coordination Through Program Analysis
Datastores today rely on distribution and replication to achieve improved
performance and fault-tolerance. But correctness of many applications depends
on strong consistency properties - something that can impose substantial
overheads, since it requires coordinating the behavior of multiple nodes. This
paper describes a new approach to achieving strong consistency in distributed
systems while minimizing communication between nodes. The key insight is to
allow the state of the system to be inconsistent during execution, as long as
this inconsistency is bounded and does not affect transaction correctness. In
contrast to previous work, our approach uses program analysis to extract
semantic information about permissible levels of inconsistency and is fully
automated. We then employ a novel homeostasis protocol to allow sites to
operate independently, without communicating, as long as any inconsistency is
governed by appropriate treaties between the nodes. We discuss mechanisms for
optimizing treaties based on workload characteristics to minimize
communication, as well as a prototype implementation and experiments that
demonstrate the benefits of our approach on common transactional benchmarks.
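The notion of a "treaty" can be made concrete with a toy example (a sketch under invented names, not the paper's protocol): a global stock counter is split across sites, and each site may sell from its local allocation without any communication, because the local condition implies the global invariant. Only when a site's allocation is exhausted would coordination be needed:

```python
class Site:
    # Minimal treaty sketch: a site may act locally as long as
    # its local predicate (local_stock >= 0) continues to hold.
    def __init__(self, allocation):
        self.local_stock = allocation

    def try_sell(self, n):
        if self.local_stock - n >= 0:   # treaty still satisfied locally
            self.local_stock -= n
            return True
        return False                    # would break the treaty: coordinate first

total = 10
sites = [Site(5), Site(5)]              # global stock split into per-site treaties

assert sites[0].try_sell(3)
assert sites[1].try_sell(5)
assert not sites[1].try_sell(1)         # local allocation exhausted
print(sum(s.local_stock for s in sites))  # 2: global stock >= 0 held throughout
```

The sites' states are mutually inconsistent views of the global stock during execution, but the inconsistency is bounded by the treaties, so transaction correctness is preserved with zero cross-site messages.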
Tactics for Reasoning modulo AC in Coq
We present a set of tools for rewriting modulo associativity and
commutativity (AC) in Coq, solving a long-standing practical problem. We use
two building blocks: first, an extensible reflexive decision procedure for
equality modulo AC; second, an OCaml plug-in for pattern matching modulo AC. We
handle associative-only operations, neutral elements, uninterpreted function
symbols, and user-defined equivalence relations. By relying on type-classes for
the reification phase, we can infer these properties automatically, so that
end-users do not need to specify which operation is A or AC, or which constant
is a neutral element.
Comment: 16
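The core of any decision procedure for equality modulo AC is normalization: flatten nested applications of the AC symbol (associativity) and sort the arguments (commutativity), then compare syntactically. The Coq/OCaml tooling above is far richer, but the idea can be sketched in Python with an invented term representation:

```python
# Terms: ("+", [args...]) for the AC symbol, or atomic strings.
def ac_normal(t, ac_op="+"):
    if isinstance(t, tuple) and t[0] == ac_op:
        flat = []
        for a in t[1]:
            a = ac_normal(a, ac_op)
            if isinstance(a, tuple) and a[0] == ac_op:
                flat.extend(a[1])                   # associativity: flatten
            else:
                flat.append(a)
        return (ac_op, sorted(flat, key=repr))      # commutativity: sort args
    return t

lhs = ("+", [("+", ["x", "y"]), "z"])   # (x + y) + z
rhs = ("+", ["z", ("+", ["y", "x"])])   # z + (y + x)
print(ac_normal(lhs) == ac_normal(rhs))  # True
```

In the reflexive Coq setting, the analogue of `ac_normal` is proved sound once and for all, so each use of the tactic reduces to evaluating the normalizer inside the proof assistant.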
Labeled Directed Acyclic Graphs: a generalization of context-specific independence in directed graphical models
We introduce a novel class of labeled directed acyclic graph (LDAG) models
for finite sets of discrete variables. LDAGs generalize earlier proposals for
allowing local structures in the conditional probability distribution of a
node, such that unrestricted label sets determine which edges can be deleted
from the underlying directed acyclic graph (DAG) for a given context. Several
properties of these models are derived, including a generalization of the
concept of Markov equivalence classes. Efficient Bayesian learning of LDAGs is
enabled by introducing an LDAG-based factorization of the Dirichlet prior for
the model parameters, such that the marginal likelihood can be calculated
analytically. In addition, we develop a novel prior distribution for the model
structures that can appropriately penalize a model for its labeling complexity.
A non-reversible Markov chain Monte Carlo algorithm combined with a greedy hill
climbing approach is used for illustrating the useful properties of LDAG models
for both real and synthetic data sets.
Comment: 26 pages, 17 figures.
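Context-specific independence, which the labels on LDAG edges encode, is easy to see in a tiny conditional probability table (the variables and numbers below are invented for illustration): with parents $A$ and $B$ of node $Y$, a label on the edge $B \to Y$ saying it is deleted in the context $A = 0$ just means the rows for $A = 0$ do not depend on $B$:

```python
# P(Y=1 | A, B). In the context A=0, the edge B -> Y is "deleted":
# both A=0 rows are identical, so Y is independent of B given A=0.
cpt = {
    (0, 0): 0.3, (0, 1): 0.3,   # A=0: same value regardless of B
    (1, 0): 0.2, (1, 1): 0.9,   # A=1: B genuinely matters
}

def p_y1(a, b):
    return cpt[(a, b)]

print(p_y1(0, 0) == p_y1(0, 1))  # True: context-specific independence
print(p_y1(1, 0) == p_y1(1, 1))  # False: the edge is present for A=1
```

Exploiting such repeated rows is what lets the LDAG-based Dirichlet factorization pool parameters across contexts and keep the marginal likelihood analytically computable.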