On Approximate Compressions for Connected Minor-Hitting Sets
In the Connected F-Deletion problem, F is a fixed finite family of graphs and the objective is to compute a minimum set of vertices (or a vertex set of size at most k for some given k) such that (a) this set induces a connected subgraph of the given graph and (b) deleting this set results in a graph which excludes every graph in F as a minor. In the area of kernelization, this problem is well known to exclude a polynomial kernel subject to standard complexity hypotheses, even in very special cases such as F = {K_2}, i.e., Connected Vertex Cover.
In this work, we give a (2+ε)-approximate polynomial compression for the Connected F-Deletion problem when F contains at least one planar graph. This is the first approximate polynomial compression result for this generic problem. As a corollary, we obtain the first approximate polynomial compression result for the special case of Connected η-Treewidth Deletion.
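To make the simplest member of this family concrete: for F = {K_2}, the problem is Connected Vertex Cover, which admits a classic combinatorial 2-approximation (due to Savage): the internal vertices of any DFS tree form a connected vertex cover of at most twice the optimum size. The sketch below illustrates that special case only; it is not the compression algorithm from the abstract.

```python
# Classic 2-approximation for Connected Vertex Cover (the F = {K_2} case):
# the internal (non-leaf) vertices of a DFS tree cover every edge, induce a
# connected subgraph, and number at most twice the optimum.

def connected_vertex_cover_2apx(adj):
    """adj: dict mapping each vertex of a connected graph to its neighbour set.
    Returns the set of internal vertices of a DFS tree."""
    start = min(adj)                 # deterministic root choice
    visited = set()
    internal = set()

    def dfs(v):
        visited.add(v)
        for u in sorted(adj[v]):     # sorted for reproducibility
            if u not in visited:
                internal.add(v)      # v gains a tree child, so it is internal
                dfs(u)

    dfs(start)
    return internal
```

On the 4-cycle {0,1,2,3}, for example, the routine returns a cover of three vertices, which is in fact the optimum connected vertex cover size for that graph.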
Grundy Distinguishes Treewidth from Pathwidth
Structural graph parameters, such as treewidth, pathwidth, and clique-width,
are a central topic of study in parameterized complexity. A main aim of
research in this area is to understand the "price of generality" of these
widths: as we transition from more restrictive to more general notions, which
are the problems that see their complexity status deteriorate from
fixed-parameter tractable to intractable? This type of question is by now very
well-studied, but, somewhat strikingly, the algorithmic frontier between the
two (arguably) most central width notions, treewidth and pathwidth, is still
not understood: currently, no natural graph problem is known to be W-hard for
one but FPT for the other. Indeed, a surprising development of the last few
years has been the observation that for many of the most paradigmatic problems,
their complexities for the two parameters actually coincide exactly, despite
the fact that treewidth is a much more general parameter. It would thus appear
that the extra generality of treewidth over pathwidth often comes "for free".
Our main contribution in this paper is to uncover the first natural example
where this generality comes with a high price. We consider Grundy Coloring, a
variation of coloring where one seeks to calculate the worst possible coloring
that could be assigned to a graph by a greedy First-Fit algorithm. We show that
this well-studied problem is FPT parameterized by pathwidth; however, it
becomes significantly harder (W[1]-hard) when parameterized by treewidth.
Furthermore, we show that Grundy Coloring makes a second complexity jump for
more general widths, as it becomes para-NP-hard for clique-width. Hence, Grundy
Coloring nicely captures the complexity trade-offs between the three most
well-studied parameters. Completing the picture, we show that Grundy Coloring
is FPT parameterized by modular-width.
Comment: To be published in the proceedings of ESA 2020.
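For readers unfamiliar with the problem, the quantity at stake can be stated in a few lines of code: First-Fit colours vertices in a given order with the smallest colour not used by an already-coloured neighbour, and the Grundy number is the worst outcome over all orders. The brute-force search below is exponential and serves only to pin down the definition; it is not one of the parameterized algorithms from the paper.

```python
from itertools import permutations


def first_fit_coloring(adj, order):
    """Greedy First-Fit: scan vertices in `order`, giving each the smallest
    positive colour unused by its already-coloured neighbours."""
    colour = {}
    for v in order:
        used = {colour[u] for u in adj[v] if u in colour}
        c = 1
        while c in used:
            c += 1
        colour[v] = c
    return colour


def grundy_number(adj):
    """Worst number of colours First-Fit can be forced to use,
    maximised over all vertex orders (exponential brute force)."""
    return max(max(first_fit_coloring(adj, p).values())
               for p in permutations(adj))
```

For instance, on the path 0-1-2-3 the order (0, 3, 2, 1) forces three colours (vertex 1 sees colours 1 and 2 on its neighbours), so the Grundy number of P4 is 3, even though its chromatic number is 2.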
Orthogonal dissection into few rectangles
We describe a polynomial time algorithm that takes as input a polygon with
axis-parallel sides but irrational vertex coordinates, and outputs a set of as
few rectangles as possible into which it can be dissected by axis-parallel cuts
and translations. The number of rectangles is the rank of the Dehn invariant of
the polygon. The same method can also be used to dissect an axis-parallel
polygon into a simple polygon with the minimum possible number of edges. When
rotations or reflections are allowed, we can approximate the minimum number of
rectangles to within a factor of two.
Comment: 18 pages, 8 figures. This version adds results on dissection with rotations and reflections.
On the connection of probabilistic model checking, planning, and learning for system verification
This thesis presents approaches using techniques from the model checking, planning, and learning communities to make systems more reliable and perspicuous. First, two heuristic search and dynamic programming algorithms are adapted to check extremal reachability probabilities, expected accumulated rewards, and their bounded versions on general Markov decision processes (MDPs), considerably enlarging the problem space these algorithms can solve. Correctness and optimality proofs for the adapted algorithms are given, and a comprehensive case study on established benchmarks shows that the implementation, called Modysh, is competitive with state-of-the-art model checkers and even outperforms them on very large state spaces. Second, Deep Statistical Model Checking (DSMC) is introduced for quality assessment and learning-pipeline analysis of systems that incorporate trained decision-making agents, such as neural networks (NNs). The idea of DSMC is to use statistical model checking to assess NNs that resolve nondeterminism in systems modeled as MDPs. The versatility of DSMC is exemplified in a number of case studies on Racetrack, an MDP benchmark designed for this purpose that flexibly models the autonomous driving challenge. A comprehensive scalability study demonstrates that DSMC is a lightweight technique tackling the complexity of NN analysis in combination with the state-space explosion problem.
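The central quantity mentioned above, the extremal (here: maximal) reachability probability of an MDP, can be illustrated with plain value iteration. The sketch below is only a textbook fixed-point computation under an assumed data layout (each state maps to a list of actions, each action to a probability distribution over successors); the thesis's adapted algorithms instead explore the state space with heuristic search, which this exhaustive version does not attempt to reproduce.

```python
# Plain value iteration for maximal reachability probabilities on an MDP.
# Starting from V = 1 on targets and 0 elsewhere and iterating the Bellman
# operator converges from below to the least fixed point, which equals
# max over policies of Pr[eventually reach a target state].

def max_reach_prob(transitions, targets, n_states, eps=1e-10):
    """transitions[s]: list of actions; each action is a list of
    (successor, probability) pairs summing to 1.
    Returns V with V[s] = maximal reachability probability from s."""
    V = [1.0 if s in targets else 0.0 for s in range(n_states)]
    while True:
        delta = 0.0
        for s in range(n_states):
            if s in targets:
                continue                      # targets stay at value 1
            best = max(sum(p * V[t] for t, p in dist)
                       for dist in transitions[s])
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V
```

For example, in a three-state MDP where state 2 is the target, state 1 is a sink, and state 0 can choose between reaching the target with probability 0.5 or 0.8, the computed value of state 0 is 0.8: the maximum picks the better action.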
Semantic discovery and reuse of business process patterns
Patterns currently play an important role in modern information systems (IS) development, though their use has mainly been restricted to the design and implementation phases of the development lifecycle. Given the increasing significance of business modelling in IS development, patterns have the potential to provide a viable solution for promoting the reuse of recurrent generalized models in the very early stages of development. As a statement of research in progress, this paper focuses on business process patterns and proposes an initial methodological framework for the discovery and reuse of business process patterns within the IS development lifecycle. The framework borrows ideas from the domain engineering literature and proposes the use of semantics to drive both the discovery of patterns and their reuse.