A survey on algorithmic aspects of modular decomposition
Modular decomposition is a technique that applies to, but is not restricted to, graphs. The notion of a module appears naturally in the proofs of many graph-theoretic theorems. Computing the modular decomposition tree is an important preprocessing step for solving a large number of combinatorial optimization problems. Since the first polynomial-time algorithm in the early 1970s, the algorithmics of modular decomposition has seen substantial development. This paper surveys the ideas and techniques that arose from this line of research.
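For illustration, a minimal sketch of what a module is (the names below are ours, not from the survey, and the survey's algorithms compute the decomposition in linear time rather than by the brute force shown): a vertex set M is a module if every vertex outside M is adjacent either to all of M or to none of M.

```python
# Brute-force illustration of the module property on a small undirected
# graph; it only clarifies the definition, the modular-decomposition
# algorithms surveyed above run in linear time.
from itertools import combinations

def is_module(adj, candidate):
    """A set M is a module if every vertex outside M sees all of M or none of M."""
    m = set(candidate)
    for v in adj:
        if v in m:
            continue
        seen = adj[v] & m
        if seen and seen != m:
            return False
    return True

def all_modules(adj):
    vertices = list(adj)
    return [set(c)
            for r in range(1, len(vertices) + 1)
            for c in combinations(vertices, r)
            if is_module(adj, c)]

if __name__ == "__main__":
    # 1-2-3 path with 4 and 5 both attached to 3 and to each other:
    # {4, 5} is a non-trivial module, since 4 and 5 have the same
    # neighbourhood outside the set (namely {3}).
    adj = {1: {2}, 2: {1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
    for m in all_modules(adj):
        print(sorted(m))
```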
Liveness of Randomised Parameterised Systems under Arbitrary Schedulers (Technical Report)
We consider the problem of verifying liveness for systems with a finite, but
unbounded, number of processes, commonly known as parameterised systems.
Typical examples of such systems include distributed protocols (e.g., for the dining philosophers problem). Unlike the case of verifying safety, proving
liveness is still considered extremely challenging, especially in the presence
of randomness in the system. In this paper we consider liveness under arbitrary
(including unfair) schedulers, which is often considered a desirable property
in the literature of self-stabilising systems. We introduce an automatic method
of proving liveness for randomised parameterised systems under arbitrary
schedulers. Viewing liveness as a two-player reachability game (between
Scheduler and Process), our method is a CEGAR approach that synthesises a
progress relation for Process that can be symbolically represented as a
finite-state automaton. The method is incremental and exploits both
Angluin-style L*-learning and SAT-solvers. Our experiments show that our
algorithm is able to prove liveness automatically for well-known randomised
distributed protocols, including Lehmann-Rabin Randomised Dining Philosopher
Protocol and randomised self-stabilising protocols (such as the Israeli-Jalfon
Protocol). To the best of our knowledge, this is the first fully-automatic method that can prove liveness for randomised protocols.
Comment: Full version of CAV'16 paper
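As a toy illustration of the game view only (not of the paper's CEGAR/L* synthesis, and with names of our own choosing), the following computes the states from which the Process player can force a visit to a target set in a finite two-player reachability game:

```python
# Attractor computation for a two-player reachability game on an explicit
# finite game graph. The paper instead synthesises a symbolic progress
# relation (a finite-state automaton) via L*-learning and SAT solving;
# this sketch only shows the underlying game formulation of liveness.

def attractor(states, edges, owner, target):
    """states: iterable; edges: dict state -> set of successors;
    owner: dict state -> "process" or "scheduler"; target: set of goal states."""
    win = set(target)
    changed = True
    while changed:
        changed = False
        for s in states:
            if s in win or not edges.get(s):
                continue
            succ = edges[s]
            if owner[s] == "process":
                good = any(t in win for t in succ)   # Process picks one winning move
            else:
                good = all(t in win for t in succ)   # Scheduler cannot avoid winning states
            if good:
                win.add(s)
                changed = True
    return win

if __name__ == "__main__":
    edges = {"s0": {"s1", "s2"}, "s1": {"s3"}, "s2": {"s0"}, "s3": set()}
    owner = {"s0": "scheduler", "s1": "process", "s2": "process", "s3": "process"}
    print(attractor(edges.keys(), edges, owner, {"s3"}))   # expected: {'s1', 's3'}
```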
Fixed-Parameter Algorithms for Computing Kemeny Scores - Theory and Practice
The central problem in this work is to compute a ranking of a set of elements
which is "closest to" a given set of input rankings of the elements. We define
"closest to" in an established way as having the minimum sum of Kendall-Tau
distances to each input ranking. Unfortunately, the resulting problem, Kemeny consensus, is NP-hard for instances with n input rankings, n being an even integer greater than three. Nevertheless, this problem plays a central role in many rank aggregation problems. It was shown that one can compute the corresponding Kemeny consensus list in f(k) + poly(n) time, where f(k) is a computable function in one of the parameters "score of the consensus", "maximum distance between two input rankings", "number of candidates", and "average pairwise Kendall-Tau distance", and poly(n) is a polynomial in the input size. This work demonstrates the practical usefulness of the corresponding algorithms by applying them to randomly generated data and to several real-world data sets. Thus, we
show that these fixed-parameter algorithms are not only of theoretical
interest. In a more theoretical part of this work we develop an improved fixed-parameter algorithm for the parameter "score of the consensus" with a better upper bound on the running time than previous algorithms.
Comment: Studienarbeit (student research paper)
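For reference, a small self-contained sketch of the two definitions the abstract relies on (not one of the fixed-parameter algorithms; the helpers below are ours): the Kendall-Tau distance counts pairs of candidates that two rankings order differently, and a Kemeny consensus minimises the sum of these distances. The brute force over all permutations shown here is exactly what the fixed-parameter algorithms avoid.

```python
from itertools import combinations, permutations

def kendall_tau(r1, r2):
    """Number of candidate pairs that the two rankings order differently."""
    pos1 = {c: i for i, c in enumerate(r1)}
    pos2 = {c: i for i, c in enumerate(r2)}
    return sum(1 for a, b in combinations(r1, 2)
               if (pos1[a] < pos1[b]) != (pos2[a] < pos2[b]))

def kemeny_score(candidate, rankings):
    """Sum of Kendall-Tau distances from `candidate` to every input ranking."""
    return sum(kendall_tau(candidate, r) for r in rankings)

def kemeny_consensus(rankings):
    """Exhaustive search over all permutations of the candidates (exponential)."""
    return min(permutations(rankings[0]),
               key=lambda p: kemeny_score(p, rankings))

if __name__ == "__main__":
    votes = [["a", "b", "c", "d"],
             ["a", "c", "b", "d"],
             ["b", "a", "c", "d"]]
    best = kemeny_consensus(votes)
    print(best, kemeny_score(best, votes))
```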
A Simple and Scalable Static Analysis for Bound Analysis and Amortized Complexity Analysis
We present the first scalable bound analysis that achieves amortized
complexity analysis. In contrast to earlier work, our bound analysis is not
based on general purpose reasoners such as abstract interpreters, software
model checkers or computer algebra tools. Rather, we derive bounds directly
from abstract program models, which we obtain from programs by comparatively
simple invariant generation and symbolic execution techniques. As a result, we
obtain an analysis that is more predictable and more scalable than earlier
approaches. Our experiments demonstrate that our analysis is fast and at the
same time able to compute bounds for challenging loops in a large real-world
benchmark. Technically, our approach is based on lossy vector addition systems with states (VASS). Our bound analysis first computes a lexicographic ranking function that proves the termination of a VASS, and then derives a bound from this ranking function. Our methodology achieves amortized analysis based on a new insight into how lexicographic ranking functions can be used for bound analysis.
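A compact illustration of the certificate the abstract mentions (a toy encoding of our own that ignores the control states and the lossiness of the VASS, and nothing like the paper's symbolic analysis): if the counters can be ordered so that every transition's update vector has a negative entry before any positive one, then the counter tuple decreases lexicographically on every step, which proves termination; the paper then extracts explicit bounds from such ranking functions.

```python
# Search for a counter ordering that makes the tuple of counters a
# lexicographic ranking function for a set of VASS update vectors.
from itertools import permutations

def is_lex_ranking(order, updates):
    """order: tuple of counter indices; updates: one update vector per transition."""
    for d in updates:
        reordered = [d[i] for i in order]
        first = next((x for x in reordered if x != 0), None)
        if first is None or first > 0:
            return False   # this transition does not strictly decrease the lex tuple
    return True

def find_lex_ranking(updates):
    for order in permutations(range(len(updates[0]))):
        if is_lex_ranking(order, updates):
            return order
    return None

if __name__ == "__main__":
    # Counters (x, y): the first transition decreases x while transferring
    # work into y, the second only decreases y -- the typical amortised
    # nested-loop pattern.
    updates = [(-1, 3), (0, -1)]
    print(find_lex_ranking(updates))   # expected: (0, 1), i.e. rank by (x, y)
```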
The earlier the better: a theory of timed actor interfaces
Programming embedded and cyber-physical systems requires attention not only to functional behavior and correctness, but also to non-functional aspects, specifically timing and performance. A structured, compositional, model-based approach based on stepwise refinement and abstraction techniques can support the development process, increase its quality, and reduce development time through automation of synthesis, analysis, or verification. Toward this, we introduce a theory of timed actors whose notion of refinement is based on the principle of worst-case design that permeates the world of performance-critical systems. This is in contrast with classical behavioral and functional refinements based on restricting sets of behaviors. Our refinement allows time-deterministic abstractions to be made of time-non-deterministic systems, improving efficiency and reducing the complexity of formal analysis. We show how our theory relates to, and can be used to reconcile, existing time and performance models and their established theories.
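One way to read the worst-case-design refinement on concrete traces (our own simplification, not the paper's actor-interface formalism): an implementation refines a specification if it produces the same sequence of events, each no later than the specification allows.

```python
# Toy "earlier the better" check on timed output traces.

def refines(impl_trace, spec_trace):
    """Both traces are lists of (event, timestamp); the implementation refines
    the specification if events match in order and each implementation
    timestamp is no later than the corresponding specification timestamp."""
    if len(impl_trace) != len(spec_trace):
        return False
    return all(ei == es and ti <= ts
               for (ei, ti), (es, ts) in zip(impl_trace, spec_trace))

if __name__ == "__main__":
    spec = [("out", 10), ("out", 20)]
    fast = [("out", 7), ("out", 20)]    # earlier outputs: refinement holds
    slow = [("out", 12), ("out", 20)]   # a later output: refinement fails
    print(refines(fast, spec), refines(slow, spec))   # True False
```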
Lost in Abstraction: Monotonicity in Multi-Threaded Programs (Extended Technical Report)
Monotonicity in concurrent systems stipulates that, in any global state,
extant system actions remain executable when new processes are added to the
state. This concept is not only natural and common in multi-threaded software,
but also useful: if every thread's memory is finite, monotonicity often
guarantees the decidability of safety property verification even when the
number of running threads is unknown. In this paper, we show that the act of
obtaining finite-data thread abstractions for model checking can be at odds
with monotonicity: Predicate-abstracting certain widely used monotone software
results in non-monotone multi-threaded Boolean programs - the monotonicity is
lost in the abstraction. As a result, well-established sound and complete
safety checking algorithms become inapplicable; in fact, safety checking turns
out to be undecidable for the obtained class of unbounded-thread Boolean
programs. We demonstrate how the abstract programs can be modified into
monotone ones, without affecting safety properties of the non-monotone
abstraction. This significantly improves on earlier approaches that enforce monotonicity via overapproximation.
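To make the property itself concrete (a brute-force check of our own over explicitly listed transitions, not the paper's abstraction or its repair construction): with counter vectors recording how many threads occupy each local state, a transition relation is monotone if every step from a state can be matched from any state that has at least as many threads everywhere.

```python
# Naive monotonicity check over an explicitly enumerated transition relation
# on counter vectors; only states that occur as transition sources are tested.

def covers(a, b):
    return all(x >= y for x, y in zip(a, b))

def is_monotone(transitions):
    """transitions: set of (source, target) pairs of equal-length tuples."""
    sources = {s for s, _ in transitions}
    for s, t in transitions:
        for s2 in sources:
            if covers(s2, s) and not any(
                    src == s2 and covers(tgt, t) for src, tgt in transitions):
                return False
    return True

if __name__ == "__main__":
    # Two local states; "one thread moves from state 0 to state 1",
    # instantiated for a few thread populations: monotone.
    ok = {((1, 0), (0, 1)), ((2, 0), (1, 1)), ((1, 1), (0, 2))}
    # A step enabled only when exactly one thread is in state 0: adding a
    # thread disables it, so monotonicity is lost in this relation.
    bad = {((1, 0), (0, 1)), ((2, 0), (2, 0))}
    print(is_monotone(ok), is_monotone(bad))   # True False
```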
Generalizing the Paige-Tarjan Algorithm by Abstract Interpretation
The Paige and Tarjan algorithm (PT) for computing the coarsest refinement of
a state partition which is a bisimulation on some Kripke structure is well
known. It is also well known in model checking that bisimulation is equivalent
to strong preservation of CTL, or, equivalently, of Hennessy-Milner logic.
Drawing on these observations, we analyze the basic steps of the PT algorithm
from an abstract interpretation perspective, which allows us to reason on
strong preservation in the context of generic inductively defined (temporal)
languages and of possibly non-partitioning abstract models specified by
abstract interpretation. This leads us to design a generalized Paige-Tarjan
algorithm, called GPT, for computing the minimal refinement of an abstract
interpretation-based model that strongly preserves some given language. It
turns out that PT is a direct instance of GPT on the domain of state partitions for the case of strong preservation of Hennessy-Milner logic. We
provide a number of examples showing that GPT is of general use. We first show
how a well-known efficient algorithm for computing stuttering equivalence can
be viewed as a simple instance of GPT. We then instantiate GPT in order to
design a new efficient algorithm for computing simulation equivalence that is
competitive with the best available algorithms. Finally, we show how GPT allows us to compute new strongly preserving abstract models by providing an efficient algorithm that computes the coarsest refinement of a given partition that strongly preserves the language generated by the reachability operator.
Comment: Keywords: Abstract interpretation, abstract model checking, strong preservation, Paige-Tarjan algorithm, refinement algorithm
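As a point of comparison, a compact sketch of the classical partition-refinement computation that PT speeds up (a naive quadratic version of our own, not PT's O(|E| log |V|) implementation and not GPT itself): split blocks by whether their states have a successor in a splitter block, until the partition is stable.

```python
# Naive coarsest-bisimulation computation on a finite Kripke structure.

def coarsest_bisimulation(states, edges, labelling):
    """states: iterable; edges: dict state -> set of successors;
    labelling: dict state -> frozenset of atomic propositions."""
    # Initial partition: group states with identical labellings.
    blocks = {}
    for s in states:
        blocks.setdefault(labelling[s], set()).add(s)
    partition = list(blocks.values())

    changed = True
    while changed:
        changed = False
        for splitter in list(partition):
            refined = []
            for block in partition:
                # Split by whether a state can step into the splitter.
                hit = {s for s in block if edges.get(s, set()) & splitter}
                miss = block - hit
                if hit and miss:
                    refined.extend([hit, miss])
                    changed = True
                else:
                    refined.append(block)
            partition = refined
    return partition

if __name__ == "__main__":
    edges = {"a": {"b"}, "b": {"c"}, "c": {"c"}, "d": {"c"}}
    labelling = {"a": frozenset({"p"}), "b": frozenset({"p"}),
                 "c": frozenset(), "d": frozenset({"p"})}
    for block in coarsest_bisimulation(edges.keys(), edges, labelling):
        print(sorted(block))   # expected blocks: ['a'], ['b', 'd'], ['c']
```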