Set-Based Pre-Processing for Points-To Analysis
We present set-based pre-analysis: a virtually universal optimization technique for flow-insensitive points-to analysis. Points-to analysis computes a static abstraction of how object values flow through a program's variables. Set-based pre-analysis relies on the observation that much of this reasoning can take place at the set level rather than the value level. Computing constraints at the set level results in significant optimization opportunities: we can rewrite the input program into a simplified form with the same essential points-to properties. This rewrite results in removing both local variables and instructions, thus simplifying the subsequent value-based points-to computation. Effectively, set-based pre-analysis puts the program in a normal form optimized for points-to analysis.
Compared to other techniques for off-line optimization of points-to analyses in the literature, the new elements of our approach are the ability to eliminate statements, and not just variables, as well as its modularity: set-based pre-analysis can be performed on the input just once, e.g., allowing the pre-optimization of libraries that are subsequently reused many times and for different analyses. In experiments with Java programs, set-based pre-analysis eliminates 30% of the program's local variables and 30% or more of computed context-sensitive points-to facts, over a wide set of benchmarks and analyses, resulting in a 20% average speedup (max: 110%, median: 18%).
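The set-level idea can be sketched concretely. Below is a minimal flow-insensitive (Andersen-style) points-to fixpoint together with a toy pre-pass that collapses single-source copy variables before the value-level computation; all names are hypothetical, and this illustrates only the general idea, not the paper's implementation.

```python
# Minimal flow-insensitive points-to analysis over two statement forms:
# x = new A (allocation) and x = y (copy). Hypothetical sketch, not the
# paper's algorithm.

def points_to(allocs, copies):
    """allocs: list of (var, alloc_site); copies: list of (dst, src)."""
    pts = {}
    for v, site in allocs:
        pts.setdefault(v, set()).add(site)
    changed = True
    while changed:                      # naive fixpoint iteration
        changed = False
        for dst, src in copies:
            new = pts.get(src, set()) - pts.get(dst, set())
            if new:
                pts.setdefault(dst, set()).update(new)
                changed = True
    return pts

def collapse_copies(allocs, copies):
    """Set-level pre-pass: a variable defined by exactly one copy and
    never the target of an allocation must have the same points-to set
    as its source, so it can be merged away before the value-level
    fixpoint. Assumes copy chains among merged variables are acyclic."""
    defs = {}
    for dst, src in copies:
        defs.setdefault(dst, []).append(src)
    alloc_vars = {v for v, _ in allocs}
    rep = {dst: srcs[0] for dst, srcs in defs.items()
           if len(srcs) == 1 and dst not in alloc_vars}
    def find(v):
        while v in rep:
            v = rep[v]
        return v
    new_copies = {(find(d), find(s)) for d, s in copies}
    new_copies = {(d, s) for d, s in new_copies if d != s}
    return allocs, sorted(new_copies)
```

On a chain like `x = new A; y = x; z = y`, the pre-pass merges `y` and `z` into `x`, leaving no copy constraints at all for the value-level phase.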
Beyond Worst-Case Analysis for Joins with Minesweeper
We describe a new algorithm, Minesweeper, that is able to satisfy stronger
runtime guarantees than previous join algorithms (colloquially, `beyond
worst-case guarantees') for data in indexed search trees. Our first
contribution is developing a framework to measure this stronger notion of
complexity, which we call {\it certificate complexity}, that extends notions of
Barbay et al. and Demaine et al.; a certificate is a set of propositional
formulae that certifies that the output is correct. This notion captures a
natural class of join algorithms. In addition, the certificate allows us to
define a strictly stronger notion of runtime complexity than traditional
worst-case guarantees. Our second contribution is to develop a dichotomy
theorem for the certificate-based notion of complexity. Roughly, we show that
Minesweeper evaluates β-acyclic queries in time linear in the certificate
plus the output size, while for any β-cyclic query there is some instance
that takes superlinear time in the certificate (and for which the output is no
larger than the certificate size). We also extend our certificate-complexity
analysis to queries with bounded treewidth and the triangle query. Comment: This is the full version of our PODS'2014 paper.
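For intuition about certificate-sensitive runtimes, consider the simplest case: intersecting two sorted indexes. A galloping intersection touches only as many index positions as a small set of comparison facts requires to certify the answer, rather than scanning the full inputs. The sketch below is purely illustrative and is not the Minesweeper algorithm.

```python
from bisect import bisect_left

def adaptive_intersect(a, b):
    """Intersect two sorted lists by leapfrogging: repeatedly binary-
    search the current key of one list in the other. On skewed inputs
    this performs far fewer comparisons than a full merge, mirroring the
    idea that runtime can track a certificate rather than the data size."""
    out = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i]); i += 1; j += 1
        elif a[i] < b[j]:
            i = bisect_left(a, b[j], i + 1)   # gallop a forward past b[j]
        else:
            j = bisect_left(b, a[i], j + 1)   # gallop b forward past a[i]
    return out
```

Intersecting `[1, 2, 3, 100]` with `[100, 200]` takes one seek per list here; a merge would walk every element of the first list.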
Optimal Orchestration of Virtual Network Functions
The emergence of Network Functions Virtualization (NFV) is bringing a set of
novel algorithmic challenges in the operation of communication networks. NFV
introduces volatility in the management of network functions, which can be
dynamically orchestrated, i.e., placed, resized, etc. Virtual Network Functions
(VNFs) can belong to VNF chains, where nodes in a chain can serve multiple
demands coming from the network edges. In this paper, we formally define the
VNF placement and routing (VNF-PR) problem, proposing a versatile linear
programming formulation that is able to accommodate specific features and
constraints of NFV infrastructures, and that is substantially different from
existing virtual network embedding formulations in the state of the art. We
also design a math-heuristic able to scale with multiple objectives and large
instances. By extensive simulations, we draw conclusions on the trade-off
achievable between classical traffic engineering (TE) and NFV infrastructure
efficiency goals, evaluating both Internet access and Virtual Private Network
(VPN) demands. We also quantitatively compare the performance of our VNF-PR
heuristic with the classical Virtual Network Embedding (VNE) approach proposed
for NFV orchestration, showing the computational differences and how our
approach can provide a more stable and closer-to-optimum solution.
Exponential Time Complexity of the Permanent and the Tutte Polynomial
We show conditional lower bounds for well-studied #P-hard problems:
(a) The number of satisfying assignments of a 2-CNF formula with n variables
cannot be counted in time exp(o(n)), and the same is true for computing the
number of all independent sets in an n-vertex graph.
(b) The permanent of an n x n matrix with entries 0 and 1 cannot be computed
in time exp(o(n)).
(c) The Tutte polynomial of an n-vertex graph cannot be computed in time
exp(o(n)) at most evaluation points (x,y) in the case of multigraphs, and it
cannot be computed in time exp(o(n/polylog n)) in the case of simple graphs.
Our lower bounds are relative to (variants of) the Exponential Time
Hypothesis (ETH), which says that the satisfiability of n-variable 3-CNF
formulas cannot be decided in time exp(o(n)). We relax this hypothesis by
introducing its counting version #ETH, namely that the satisfying assignments
cannot be counted in time exp(o(n)). In order to use #ETH for our lower bounds,
we transfer the sparsification lemma for d-CNF formulas to the counting
setting.
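Spelled out (paraphrasing the abstract), the two hypotheses read:

```latex
% ETH (decision): satisfiability of n-variable 3-CNF formulas cannot be
% decided in time exp(o(n)); equivalently:
\mathrm{ETH}:\quad \exists\,\varepsilon > 0 \ \text{such that 3-SAT cannot be decided in time } O(2^{\varepsilon n}).

% #ETH (the counting relaxation used here): the satisfying assignments
% cannot even be counted in time exp(o(n)):
\#\mathrm{ETH}:\quad \exists\,\varepsilon > 0 \ \text{such that \#3-SAT cannot be computed in time } O(2^{\varepsilon n}).
```

Since counting satisfying assignments is at least as hard as deciding satisfiability, #ETH is a weaker (hence safer) assumption than ETH.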
Memoization for Unary Logic Programming: Characterizing PTIME
We give a characterization of deterministic polynomial time computation based
on an algebraic structure called the resolution semiring, whose elements can be
understood as logic programs or sets of rewriting rules over first-order terms.
More precisely, we study the restriction of this framework to terms (and logic
programs, rewriting rules) using only unary symbols. We prove it is complete
for polynomial time computation, using an encoding of pushdown automata. We
then introduce an algebraic counterpart of the memoization technique in order
to show its PTIME soundness. We finally relate our approach and complexity
results to complexity of logic programming. As an application of our
techniques, we show a PTIME-completeness result for a class of logic
programming queries which use only unary function symbols. Comment: Submitted to LICS 201
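As a toy illustration of why memoization is the engine behind the PTIME bound (this is not the resolution-semiring construction itself): a recursive predicate over unary numerals re-derives the same subgoals exponentially often, and caching subgoal results makes evaluation polynomial.

```python
from functools import lru_cache

def num(n):
    """Build the unary numeral s(s(...s(z)...)) as a nested tuple,
    the kind of term a unary logic program manipulates."""
    t = "z"
    for _ in range(n):
        t = ("s", t)
    return t

@lru_cache(maxsize=None)   # memoize subgoal -> result, as in tabling
def fib(term):
    """fib over unary numerals: naively exponential, polynomial once
    each subgoal is resolved at most once thanks to the cache."""
    if term == "z":                  # fib(z) = 0
        return 0
    if term[1] == "z":               # fib(s(z)) = 1
        return 1
    pred = term[1]                   # the numeral n-1
    return fib(pred) + fib(pred[1])  # fib(n) = fib(n-1) + fib(n-2)
```

Without the cache, `fib(num(30))` would resolve on the order of a million subgoals; with it, each of the 31 distinct subgoals is resolved once.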
Point-width and Max-CSPs
The complexity of (unbounded-arity) Max-CSPs under structural restrictions is poorly understood. The two most general hypergraph properties known to ensure tractability of Max-CSPs, β-acyclicity and bounded (incidence) MIM-width, are incomparable and lead to very different algorithms. We introduce the framework of point decompositions for hypergraphs and use it to derive a new sufficient condition for the tractability of (structurally restricted) Max-CSPs, which generalises both bounded MIM-width and β-acyclicity. On the way, we give a new characterisation of bounded MIM-width and discuss other hypergraph properties which are relevant to the complexity of Max-CSPs, such as β-hypertreewidth.
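For reference, the problem whose structure is being restricted can be stated as a brute-force search; the hypothetical sketch below is exponential in general, which is exactly the cost that structural conditions such as β-acyclicity or bounded MIM-width avoid.

```python
from itertools import product

def max_csp(variables, domain, constraints):
    """Exhaustive Max-CSP: find an assignment maximizing the number of
    satisfied constraints. Each constraint is (scope, predicate), where
    scope is a tuple of variable names. Exponential in the number of
    variables; structural restrictions on the constraint hypergraph
    identify instances solvable in polynomial time instead."""
    best, best_assign = -1, None
    for values in product(domain, repeat=len(variables)):
        assign = dict(zip(variables, values))
        score = sum(1 for scope, pred in constraints
                    if pred(*(assign[v] for v in scope)))
        if score > best:
            best, best_assign = score, assign
    return best, best_assign
```

For example, three binary disequality constraints forming a triangle (Max-Cut on K3) admit at most two simultaneously satisfied constraints.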
Reachability Analysis of Time Basic Petri Nets: a Time Coverage Approach
We introduce a technique for reachability analysis of Time-Basic (TB) Petri
nets, a powerful formalism for real-time systems where time constraints are
expressed as intervals, representing possible transition firing times, whose
bounds are functions of marking's time description. The technique consists of
building a symbolic reachability graph relying on a sort of time coverage, and
overcomes the limitations of the only available analyzer for TB nets, based in
turn on a time-bounded inspection of a (possibly infinite) reachability-tree.
The graph construction algorithm has been automated by a tool-set, briefly
described in the paper together with its main functionality and analysis
capability. A running example is used throughout the paper to sketch the
symbolic graph construction. A use case describing a small real system - that
the running example is an excerpt from - has been employed to benchmark the
technique and the tool-set. The main outcomes of this test are also presented in
the paper. Ongoing work, in the perspective of integrating with a
model-checking engine, is briefly discussed. Comment: 8 pages, submitted to a conference for publication.
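The interval-based firing rule can be sketched as follows. The names and encoding are hypothetical (this is not the paper's tool-set): a transition is enabled when every input place holds a token, and it may fire at any time within an interval whose bounds are functions of the timestamps of the tokens it consumes, as in TB nets.

```python
# Minimal sketch of interval-timed firing in the spirit of TB nets.
# A marking maps each place to a FIFO list of token timestamps.

def enabled(marking, transition):
    """A transition is enabled if each input place holds a token."""
    return all(marking.get(p) for p in transition["inputs"])

def firing_interval(marking, transition):
    """Bounds are functions of the marking's token times: here, a fixed
    delay interval added to the latest timestamp among the head tokens
    that the transition would consume."""
    latest = max(marking[p][0] for p in transition["inputs"])
    lo, hi = transition["delay"]
    return (latest + lo, latest + hi)

def fire(marking, transition, at):
    """Fire at time 'at', which must lie inside the allowed interval;
    produced tokens are timestamped with the firing time."""
    lo, hi = firing_interval(marking, transition)
    assert lo <= at <= hi, "firing time outside the allowed interval"
    new = {p: ts[:] for p, ts in marking.items()}
    for p in transition["inputs"]:
        new[p].pop(0)                     # consume one token per input
    for p in transition["outputs"]:
        new.setdefault(p, []).append(at)  # emit token with timestamp 'at'
    return new
```

A symbolic reachability analysis, as in the paper, would explore these intervals as a whole rather than enumerating individual firing times.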