167 research outputs found
Efficient parameterized algorithms on structured graphs
In classical complexity theory, the worst-case running times of algorithms depend solely on the size of the input. In parameterized complexity the goal is to refine the analysis of the running time of an algorithm by additionally considering a parameter that measures some kind of structure in the input. A parameterized algorithm then utilizes the structure described by the parameter and achieves a running time that is faster than the best general (unparameterized) algorithm for instances of low parameter value.
In the first part of this thesis, we continue the research in this direction and investigate the influence of several parameters on the running times of well-known tractable problems.
Several presented algorithms are adaptive algorithms, meaning that they match the running time of a best unparameterized algorithm for worst-case parameter values. Thus, an adaptive parameterized algorithm is asymptotically never worse than the best unparameterized algorithm, while it outperforms the best general algorithm already for slightly non-trivial parameter values.
As illustrated in the first part of this thesis, for many problems there exist efficient parameterized algorithms regarding multiple parameters, each describing a different kind of structure.
In the second part of this thesis, we explore how to combine such homogeneous structures to more general and heterogeneous structures.
Using algebraic expressions, we define new combined graph classes of heterogeneous structure in a clean and robust way, and we showcase this for the heterogeneous merge of the parameters tree-depth and modular-width by presenting parameterized algorithms on such heterogeneous graph classes whose running times match the homogeneous cases throughout.
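Though not taken from the thesis itself, the notion of an adaptive algorithm can be illustrated with a textbook example: natural merge sort, parameterized by the number r of maximal sorted runs. It runs in O(n log r) time, which matches the O(n log n) of a best unparameterized sorting algorithm when r is close to n, and beats it as soon as the input is partially sorted. A minimal Python sketch:

```python
# Natural merge sort: adaptive in the number r of maximal sorted runs.
# Runtime O(n log r): matches O(n log n) mergesort in the worst case
# (r close to n) and approaches O(n) for nearly sorted inputs (small r).
from heapq import merge

def runs(a):
    """Split a into maximal non-decreasing runs."""
    out, cur = [], [a[0]]
    for x in a[1:]:
        if x >= cur[-1]:
            cur.append(x)
        else:
            out.append(cur)
            cur = [x]
    out.append(cur)
    return out

def natural_merge_sort(a):
    if not a:
        return []
    rs = runs(a)
    # Pairwise merging: ceil(log2 r) rounds, each touching n elements.
    while len(rs) > 1:
        rs = [list(merge(rs[i], rs[i + 1])) if i + 1 < len(rs) else rs[i]
              for i in range(0, len(rs), 2)]
    return rs[0]
```

On an already sorted input r = 1 and the while loop never merges, so the algorithm degenerates to a single linear scan, exactly the adaptive behavior described above.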
Open Problems in (Hyper)Graph Decomposition
Large networks are useful in a wide range of applications. Sometimes problem
instances are composed of billions of entities. Decomposing and analyzing these
structures helps us gain new insights about our surroundings. Even if the final
application concerns a different problem (such as traversal, finding paths,
trees, and flows), decomposing large graphs is often an important subproblem
for complexity reduction or parallelization. This report is a summary of
discussions that happened at Dagstuhl seminar 23331 on "Recent Trends in Graph
Decomposition" and presents currently open problems and future directions in
the area of (hyper)graph decomposition.
LIPIcs, Volume 261, ICALP 2023, Complete Volume
LIPIcs, Volume 261, ICALP 2023, Complete Volume
Resolving Prime Modules: The Structure of Pseudo-cographs and Galled-Tree Explainable Graphs
The modular decomposition of a graph G is a natural construction to capture
key features of G in terms of a labeled tree T whose vertices are
labeled as "series" (1), "parallel" (0) or "prime". However, full
information of G is provided by its modular decomposition tree T only
if G is a cograph, i.e., if G does not contain prime modules. In this case,
T explains G, i.e., {x, y} is an edge of G if and only if the lowest common
ancestor of x and y has label "1". Pseudo-cographs,
or, more generally, GaTEx graphs are graphs that can be explained by labeled
galled-trees, i.e., labeled networks that are obtained from the modular
decomposition tree T of G by replacing the prime vertices in T by
simple labeled cycles. GaTEx graphs can be recognized, and labeled galled-trees
that explain these graphs can be constructed, in linear time.
In this contribution, we provide a novel characterization of GaTEx graphs in
terms of a set of 25 forbidden induced subgraphs.
This characterization, in turn, allows us to show that GaTEx graphs are closely
related to many other well-known graph classes such as P4-sparse and
P4-reducible graphs, weakly-chordal graphs, perfectly orderable graphs,
comparability and permutation graphs, murky graphs as well as interval
graphs, Meyniel graphs or very strongly-perfect and brittle graphs. Moreover,
we show that every GaTEx graph has twin-width at most 1.
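The "explains" relation described above is easy to state operationally: a labeled tree explains a graph exactly when two vertices are adjacent if and only if the label of their lowest common ancestor is "1" (series). A small Python sketch of that check, with hypothetical class and function names (this is an illustration of the definition, not code from the paper):

```python
# A labeled tree T "explains" a graph G when {x, y} is an edge of G exactly
# if the lowest common ancestor of leaves x and y in T carries label 1
# ("series"); label 0 means "parallel" (non-adjacent).

class Node:
    def __init__(self, label=None, children=(), leaf=None):
        self.label, self.children, self.leaf = label, list(children), leaf

def lca_label(root, x, y):
    """Label of the lowest common ancestor of leaves x and y."""
    def contains(node, v):
        if node.leaf is not None:
            return node.leaf == v
        return any(contains(c, v) for c in node.children)
    node = root
    while node.leaf is None:
        below = [c for c in node.children if contains(c, x) and contains(c, y)]
        if not below:
            return node.label  # x and y split here: node is the LCA
        node = below[0]
    return None  # x == y: no proper LCA

def explains(root, vertices, edges):
    return all((lca_label(root, x, y) == 1) == ({x, y} in edges)
               for i, x in enumerate(vertices) for y in vertices[i + 1:])
```

For example, the cotree with a series root over leaf a and a parallel node over leaves b and c explains the star with edges ab and ac (b and c meet at the parallel node, so they are non-adjacent).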
Computing Well-Covered Vector Spaces of Graphs using Modular Decomposition
A graph is well-covered if all its maximal independent sets have the same
cardinality. This well-studied concept was introduced by Plummer in 1970 and
naturally generalizes to the weighted case. Given a graph G, a real-valued
vertex weight function is said to be a well-covered weighting of G if all
its maximal independent sets are of the same weight. The set of all
well-covered weightings of a graph G forms a vector space over the field of
real numbers, called the well-covered vector space of G. Since the problem of
recognizing well-covered graphs is coNP-complete, the
problem of computing the well-covered vector space of a given graph is
coNP-hard. Levit and Tankus showed in 2015 that the
problem admits a polynomial-time algorithm in the class of claw-free graphs. In
this paper, we give two general reductions for the problem, one based on
anti-neighborhoods and one based on modular decomposition, combined with
Gaussian elimination. Building on these results, we develop a polynomial-time
algorithm for computing the well-covered vector space of a given fork-free
graph, generalizing the result of Levit and Tankus. Our approach implies that
well-covered fork-free graphs can be recognized in polynomial time and also
generalizes some known results on cographs.
Algebraic, Block and Multiplicative Preconditioners based on Fast Tridiagonal Solves on GPUs
This thesis contributes to the field of sparse linear algebra, graph applications, and preconditioners for Krylov iterative solvers of sparse linear equation systems by providing a (block) tridiagonal solver library, a generalized sparse matrix-vector implementation, a linear forest extraction, and a multiplicative preconditioner based on tridiagonal solves. The tridiagonal library, which supports (scaled) partial pivoting, outperforms cuSPARSE's tridiagonal solver by a factor of five while fully utilizing the available GPU memory bandwidth. For performance-optimized solving of multiple right-hand sides, the explicit factorization of the tridiagonal matrix can be computed. The extraction of a weighted linear forest (a union of disjoint paths) from a general graph is used to build algebraic (block) tridiagonal preconditioners and deploys the generalized sparse matrix-vector implementation of this thesis for preconditioner construction. During linear forest extraction, a new parallel bidirectional scan pattern, which can operate on doubly-linked list structures, identifies the path ID and the position of a vertex. The algebraic preconditioner construction is also used to build more advanced preconditioners, which contain multiple tridiagonal factors, based on generalized ILU factorizations. Additionally, other preconditioners based on tridiagonal factors are presented and evaluated in comparison to ILU and ILU incomplete sparse approximate inverse (ILU-ISAI) preconditioners for the solution of large sparse linear equation systems from the Sparse Matrix Collection. For all problems presented in this thesis, an efficient parallel algorithm and its CUDA implementation for single-GPU systems are provided.
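The basic building block throughout, a tridiagonal solve, can be illustrated with the classic sequential Thomas algorithm. This is a plain CPU sketch without the pivoting or GPU parallelization that the thesis's library provides:

```python
# Thomas algorithm: solve a tridiagonal linear system in O(n).
# lower[i] is the subdiagonal entry in row i (lower[0] unused),
# diag[i] the diagonal entry, upper[i] the superdiagonal (upper[n-1] unused).
# Sequential sketch without pivoting; a production GPU solver (like the one
# described above) additionally needs pivoting and parallel decomposition.

def thomas_solve(lower, diag, upper, rhs):
    n = len(diag)
    c, d = [0.0] * n, [0.0] * n
    c[0] = upper[0] / diag[0]          # forward sweep: eliminate subdiagonal
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):
        denom = diag[i] - lower[i] * c[i - 1]
        c[i] = upper[i] / denom if i < n - 1 else 0.0
        d[i] = (rhs[i] - lower[i] * d[i - 1]) / denom
    x = [0.0] * n                      # back substitution
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x
```

The forward sweep is inherently sequential, which is precisely why GPU tridiagonal solvers resort to different decompositions (e.g. cyclic reduction or partitioning) to expose parallelism.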
Approximate CFTs and Random Tensor Models
A key issue in both the field of quantum chaos and quantum gravity is an
effective description of chaotic conformal field theories (CFTs), that is CFTs
that have a quantum ergodic limit. We develop a framework incorporating the
constraints of conformal symmetry and locality, allowing the definition of
ensembles of `CFT data'. These ensembles take on the same role as the ensembles
of random Hamiltonians in more conventional quantum ergodic phases of many-body
quantum systems. To describe individual members of the ensembles, we introduce
the notion of approximate CFT, defined as a collection of `CFT data' satisfying
the usual CFT constraints approximately, i.e. up to small deviations. We show
that they generically exist by providing concrete examples. Ensembles of
approximate CFTs are very natural in holography, as every member of the
ensemble is indistinguishable from a true CFT for low-energy probes that only
have access to information from semi-classical gravity. To specify these
ensembles, we impose successively higher moments of the CFT constraints.
Lastly, we propose a theory of pure gravity in AdS as a random
matrix/tensor model implementing approximate CFT constraints. This tensor model
is the maximum ignorance ensemble compatible with conformal symmetry, crossing
invariance, and a primary gap to the black-hole threshold. The resulting theory
is a random matrix/tensor model governed by the Virasoro 6j-symbol.
LIPIcs, Volume 274, ESA 2023, Complete Volume
LIPIcs, Volume 274, ESA 2023, Complete Volume
Stabilizing reinforcement learning control: A modular framework for optimizing over all stable behavior
We propose a framework for the design of feedback controllers that combines
the optimization-driven and model-free advantages of deep reinforcement
learning with the stability guarantees provided by using the Youla-Kucera
parameterization to define the search domain. Recent advances in behavioral
systems allow us to construct a data-driven internal model; this enables an
alternative realization of the Youla-Kucera parameterization based entirely on
input-output exploration data. Perhaps of independent interest, we formulate
and analyze the stability of such data-driven models in the presence of noise.
The Youla-Kucera approach requires a stable "parameter" for controller design.
For the training of reinforcement learning agents, the set of all stable linear
operators is given explicitly through a matrix factorization approach.
Moreover, a nonlinear extension is given using a neural network to express a
parameterized set of stable operators, which enables seamless integration with
standard deep learning libraries. Finally, we show how these ideas can also be
applied to tune fixed-structure controllers.
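The idea of restricting the search domain to provably stable operators can be sketched with a much simpler construction than the matrix factorization used in the paper: map an unconstrained parameter matrix W to A = W / (1 + ||W||_F). Since the spectral radius is bounded by the Frobenius norm, every matrix in the image is Schur-stable, so an agent may optimize over W freely. All names below are illustrative; this covers only a contractive subset of the stable operators, unlike the paper's parameterization of the full set:

```python
# Map an unconstrained parameter matrix W to a Schur-stable operator
# A = W / (1 + ||W||_F): since rho(A) <= ||A||_2 <= ||A||_F < 1, every
# image matrix is stable. Illustrative sketch only; it hits only a subset
# of the stable matrices, whereas the paper parameterizes all of them.
import math

def frobenius(W):
    return math.sqrt(sum(x * x for row in W for x in row))

def to_stable(W):
    s = 1.0 + frobenius(W)
    return [[x / s for x in row] for row in W]

def apply(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def final_norm(A, x, steps=200):
    """Norm of x after iterating x <- A x; small iff the dynamics decay."""
    for _ in range(steps):
        x = apply(A, x)
    return math.sqrt(sum(v * v for v in x))
```

Iterating the closed-loop map from any initial state then decays to zero, which is the stability guarantee the reinforcement learning agent inherits for free from the parameterization.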