Efficient and Adaptive Parameterized Algorithms on Modular Decompositions
We study the influence of a graph parameter called modular-width on the time complexity for optimally solving well-known polynomial problems such as Maximum Matching, Triangle Counting, and Maximum s-t Vertex-Capacitated Flow. The modular-width of a graph depends on its (unique) modular decomposition tree, and can be computed in linear time O(n+m) for graphs with n vertices and m edges. Modular decompositions are an important tool for graph algorithms, e.g., for linear-time recognition of certain graph classes.
Throughout, we obtain efficient parameterized algorithms with running times O(f(mw) n + m), O(n + f(mw) m), or O(f(mw) + n + m) for low polynomial functions f and graphs of modular-width mw. Our algorithm for Maximum Matching, running in time O(mw^2 log(mw) n + m), is both faster and simpler than the recent O(mw^4 n + m)-time algorithm of Coudert et al. (SODA 2018). For several other problems, e.g., Triangle Counting and Maximum b-Matching, we give adaptive algorithms, meaning that their running times match the best unparameterized algorithms for worst-case modular-width mw = Theta(n), and they outperform them already for mw = o(n), until reaching linear time for mw = O(1).
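For readers unfamiliar with the parameter, the following minimal Python sketch illustrates the basic notion underlying modular decompositions: a vertex set M is a module if every vertex outside M is adjacent either to all of M or to none of it. The graph and the candidate sets are made-up toy data, not examples from the paper; the check itself is just the textbook definition.

    def is_module(adj, M):
        """Return True if M is a module of the graph given as adjacency sets."""
        M = set(M)
        for v in set(adj) - M:
            neighbours_in_M = adj[v] & M
            if neighbours_in_M and neighbours_in_M != M:
                return False  # v distinguishes two vertices of M
        return True

    # Toy graph: vertices 1 and 2 have the same neighbourhood outside {1, 2},
    # so {1, 2} is a module; {1, 3} is not.
    adj = {
        1: {2, 3},
        2: {1, 3},
        3: {1, 2, 4},
        4: {3},
    }
    print(is_module(adj, {1, 2}))  # True
    print(is_module(adj, {1, 3}))  # False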
A fault-tolerant variational quantum algorithm with limited T-depth
We propose a variational quantum eigensolver (VQE) algorithm that uses a fault-tolerant (FT) gate-set and is hence suitable for implementation on a future error-corrected quantum computer. VQE quantum circuits are typically designed for near-term, noisy quantum devices and have continuously parameterized rotation gates as their central building block. On the other hand, an FT quantum computer (FTQC) can only implement a discrete set of logical gates, such as the so-called Clifford+T gates. We show that the energy minimization of VQE can be performed with such an FT discrete gate-set, where we use the Ross-Selinger algorithm to transpile the continuous rotation gates to the error-correctable Clifford+T gate-set. We find that there is no loss of convergence compared to that of continuously parameterized circuits if an adaptive accuracy of the transpilation is used in the VQE optimization. State preparation with VQE requires only a moderate number of T-gates, depending on the system size and transpilation accuracy. We demonstrate these properties on emulators for two prototypical spin models with up to 16 qubits. This is a promising result for the integration of VQE, and more generally variational algorithms, in the emerging FT setting, where they can form building blocks of the general quantum algorithms that will become accessible on an FTQC.
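The following toy Python sketch (not the authors' code) mimics the idea of adaptive transpilation accuracy on a purely classical stand-in: a one-parameter "energy" is minimized while candidate angles are snapped to a discrete grid, and the grid is refined only once the search stops improving. In the actual method, Ross-Selinger Clifford+T synthesis plays the role of the snapping step and the energy is a quantum expectation value; all functions and numbers below are illustrative assumptions.

    import math

    def energy(theta):
        return -math.cos(theta)          # toy one-parameter "Hamiltonian expectation"

    def snap(theta, eps):
        return round(theta / eps) * eps  # stand-in for finite-accuracy gate synthesis

    theta, step, eps = 2.5, 0.5, 0.25    # made-up start angle, search step, initial accuracy
    for _ in range(60):
        candidates = [snap(t, eps) for t in (theta - step, theta, theta + step)]
        best = min(candidates, key=energy)
        if energy(best) >= energy(snap(theta, eps)) - 1e-12:
            step /= 2                    # no improvement: shrink the search step ...
            eps = max(eps / 2, 1e-4)     # ... and only now tighten the synthesis accuracy
        theta = best
    print(theta, energy(theta))          # approaches theta = 0 with energy -1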
The b-Matching Problem in Distance-Hereditary Graphs and Beyond
We make progress on the fine-grained complexity of Maximum-Cardinality Matching on graphs of bounded clique-width. Quasi-linear-time algorithms for this problem have recently been proposed for the important subclasses of bounded-treewidth graphs (Fomin et al., SODA'17) and graphs of bounded modular-width (Coudert et al., SODA'18). We present such an algorithm for bounded split-width graphs, a broad generalization of graphs of bounded modular-width, of which an interesting subclass are the distance-hereditary graphs. Specifically, we solve Maximum-Cardinality Matching in O((k log^2 k)(m+n) log n) time on graphs with split-width at most k. We stress that the existence of such an algorithm was not even known for distance-hereditary graphs until our work. Doing so, we improve the state of the art (Dragan, WG'97) and we answer an open question of Coudert et al. (SODA'18). Our work brings more insights into the relationships between matchings and splits, a.k.a. join operations between two vertex-subsets in different connected components. Furthermore, our analysis can be extended to the more general (unit-cost) b-Matching problem. On the way, we introduce new tools for b-Matching and dynamic programming over split decompositions that can be of independent interest.
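As a point of reference for the split operations mentioned above, the short Python sketch below checks whether a given bipartition of a graph is a split, i.e., whether the crossing edges form a complete bipartite join. The graph and the partitions are made-up toy data; the check follows the standard definition rather than anything specific to the paper's algorithm.

    def is_split(adj, A):
        V = set(adj)
        A, B = set(A), V - set(A)
        if len(A) < 2 or len(B) < 2:
            return False
        # every vertex of A with a neighbour in B must see the same set in B
        cross = {frozenset(adj[a] & B) for a in A if adj[a] & B}
        return len(cross) <= 1

    # Toy graph: a path 1-2-3-4 plus the edge 2-5.  ({1, 2, 5}, {3, 4}) is a split
    # (only vertex 2 reaches the other side), while ({1, 3}, {2, 4, 5}) is not.
    adj = {1: {2}, 2: {1, 3, 5}, 3: {2, 4}, 4: {3}, 5: {2}}
    print(is_split(adj, {1, 2, 5}))  # True
    print(is_split(adj, {1, 3}))     # False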
On Adaptive Algorithms for Maximum Matching
In the fundamental Maximum Matching problem the task is to find a maximum-cardinality set of pairwise disjoint edges in a given undirected graph. The fastest algorithm for this problem, due to Micali and Vazirani, runs in time O(sqrt{n}m) and has stood unbeaten since 1980. It is complemented by faster, often linear-time, algorithms for various special graph classes. Moreover, there are fast parameterized algorithms, e.g., time O(km log n) relative to tree-width k, which outperform O(sqrt{n}m) when the parameter is sufficiently small.
We show that the Micali-Vazirani algorithm, and in fact any algorithm following the phase framework of Hopcroft and Karp, is adaptive to beneficial input structure. We exhibit several graph classes for which such algorithms run in linear time O(n+m). More strongly, we show that they run in time O(sqrt{k}m) for graphs that are k vertex deletions away from any of several such classes, without explicitly computing an optimal or approximate deletion set; before, most such bounds were at least Omega(km). Thus, any phase-based matching algorithm with linear-time phases obliviously interpolates between linear time for k=O(1) and the worst case of O(sqrt{n}m) when k=Theta(n). We complement our findings by proving that the phase framework by itself still allows Omega(sqrt{n}) phases, and hence time Omega(sqrt{n}m), even on paths, cographs, and bipartite chain graphs.
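To make the phase framework concrete, here is a compact Python sketch of Hopcroft-Karp for the bipartite special case, with a phase counter: each phase runs one BFS to layer the graph and one DFS pass that augments along shortest augmenting paths. The paper's results concern general graphs (Micali-Vazirani follows the same phase structure); the toy input at the end is made up.

    from collections import deque

    def hopcroft_karp(left, adj):
        """left: left-side vertices; adj[u]: iterable of right-side neighbours of u."""
        INF = float("inf")
        match_l = {u: None for u in left}
        match_r = {}
        phases = 0

        def bfs():
            dist = {u: 0 for u in left if match_l[u] is None}
            queue = deque(dist)
            free_dist = INF                   # layer at which a free right vertex is first reached
            while queue:
                u = queue.popleft()
                if dist[u] >= free_dist:
                    continue
                for v in adj[u]:
                    w = match_r.get(v)        # left endpoint currently matched to v, if any
                    if w is None:
                        free_dist = min(free_dist, dist[u] + 1)
                    elif w not in dist:
                        dist[w] = dist[u] + 1
                        queue.append(w)
            return dist, free_dist

        def dfs(u, dist, free_dist):
            for v in adj[u]:
                w = match_r.get(v)
                if (w is None and dist[u] + 1 == free_dist) or \
                   (w is not None and dist.get(w) == dist[u] + 1 and dfs(w, dist, free_dist)):
                    match_l[u], match_r[v] = v, u
                    return True
            dist[u] = INF                     # dead end: prune u for the rest of this phase
            return False

        while True:                           # one iteration of this loop = one phase
            dist, free_dist = bfs()
            if free_dist == INF:              # no augmenting path left: matching is maximum
                return match_l, phases
            phases += 1
            for u in left:
                if match_l[u] is None:
                    dfs(u, dist, free_dist)

    matching, num_phases = hopcroft_karp([0, 1, 2], {0: ["a", "b"], 1: ["a"], 2: ["b", "c"]})
    print(matching, num_phases)               # a perfect matching, found here in 2 phases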
Efficient parameterized algorithms on structured graphs
In classical complexity theory, the worst-case running times of algorithms depend solely on the size of the input. In parameterized complexity the goal is to refine the analysis of the running time of an algorithm by additionally considering a parameter that measures some kind of structure in the input. A parameterized algorithm then utilizes the structure described by the parameter and achieves a running time that is faster than the best general (unparameterized) algorithm for instances of low parameter value.
In the first part of this thesis, we carry this line of research forward and investigate the influence of several parameters on the running times of well-known tractable problems.
Several presented algorithms are adaptive algorithms, meaning that they match the running time of a best unparameterized algorithm for worst-case parameter values. Thus, an adaptive parameterized algorithm is asymptotically never worse than the best unparameterized algorithm, while it outperforms the best general algorithm already for slightly non-trivial parameter values.
As illustrated in the first part of this thesis, for many problems there exist efficient parameterized algorithms regarding multiple parameters, each describing a different kind of structure.
In the second part of this thesis, we explore how to combine such homogeneous structures to more general and heterogeneous structures.
Using algebraic expressions, we define new combined graph classes of heterogeneous structure in a clean and robust way, and we showcase this for the heterogeneous merge of the parameters tree-depth and modular-width by presenting parameterized algorithms on such heterogeneous graph classes, obtaining running times that match the homogeneous cases throughout.
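As a minimal illustration of describing graphs by algebraic expressions, the Python sketch below evaluates expressions built from a single vertex, disjoint union, and full join; such expressions generate cographs, a simplest case of bounded modular-width. The thesis's heterogeneous expressions, which also mix in tree-depth-style operations, are richer than this toy, and all names below are our own.

    import itertools

    _fresh = itertools.count()                 # fresh vertex names

    def vertex():
        return {next(_fresh): set()}           # a single isolated vertex (adjacency dict)

    def union(g, h):
        return {v: set(n) for v, n in {**g, **h}.items()}   # disjoint union: no new edges

    def join(g, h):
        result = {v: set(n) for v, n in {**g, **h}.items()}
        for u in g:                            # full join: all edges between the two parts
            for v in h:
                result[u].add(v)
                result[v].add(u)
        return result

    # The expression join(union(v, v), union(v, v)) evaluates to the complete
    # bipartite graph K_{2,2}, i.e. a 4-cycle.
    g = join(union(vertex(), vertex()), union(vertex(), vertex()))
    print({u: sorted(nbrs) for u, nbrs in g.items()})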
Algorithms for Approximate Minimization of the Difference Between Submodular Functions, with Applications
We extend the work of Narasimhan and Bilmes [30] for minimizing set functions
representable as a difference between submodular functions. Similar to [30],
our new algorithms are guaranteed to monotonically reduce the objective
function at every step. We empirically and theoretically show that the
per-iteration cost of our algorithms is much less than that of [30], and our algorithms
can be used to efficiently minimize a difference between submodular functions
under various combinatorial constraints, a problem not previously addressed. We
provide computational bounds and a hardness result on the multiplicative
inapproximability of minimizing the difference between submodular functions. We
show, however, that it is possible to give worst-case additive bounds by
providing a polynomial time computable lower-bound on the minima. Finally we
show how a number of machine learning problems can be modeled as minimizing the
difference between submodular functions. We experimentally show the validity of
our algorithms by testing them on the problem of feature selection with
submodular cost features.
Comment: 17 pages, 8 figures. A shorter version of this appeared in Proc. Uncertainty in Artificial Intelligence (UAI), Catalina Islands, 201
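A toy Python sketch of a submodular-supermodular style iteration in the spirit of [30]: write f = g - h with g, h submodular, replace h by a modular lower bound that is tight at the current set, and minimize the resulting submodular surrogate, which can only decrease the objective. To keep it self-contained, the inner minimization is brute force over a tiny made-up ground set instead of a polynomial-time submodular minimizer, and g, h are illustrative choices, not functions from the paper.

    from itertools import chain, combinations

    V = [0, 1, 2, 3]                                          # tiny made-up ground set
    subsets = [frozenset(s) for s in
               chain.from_iterable(combinations(V, r) for r in range(len(V) + 1))]

    def g(S):                                                 # submodular: concave of |S| plus a modular cost
        return 2.0 * len(S) ** 0.5 + 0.3 * len(S)

    def h(S):                                                 # submodular: concave of a modular weight
        return (8 * sum(S) + 1) ** 0.5 - 1

    def modular_lower_bound(h, X):
        """Greedy (Edmonds-style) bound: modular, tight at X, and a lower bound everywhere."""
        order = list(X) + [v for v in V if v not in X]        # list X first in the permutation
        m, prefix, prev = {}, set(), 0.0
        for v in order:
            prefix.add(v)
            m[v] = h(frozenset(prefix)) - prev
            prev += m[v]
        return lambda S: sum(m[v] for v in S)

    X = frozenset(V)                                          # arbitrary starting set
    for _ in range(20):
        mh = modular_lower_bound(h, X)
        X_new = min(subsets, key=lambda S: g(S) - mh(S))      # surrogate g - mh is submodular
        if g(X_new) - h(X_new) >= g(X) - h(X) - 1e-12:
            break                                             # objective can no longer decrease
        X = X_new
    print(sorted(X), g(X) - h(X))                             # a local minimizer of g - h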
Reliable Linear, Sesquilinear and Bijective Operations On Integer Data Streams Via Numerical Entanglement
A new technique is proposed for fault-tolerant linear, sesquilinear and
bijective (LSB) operations on integer data streams, such as
scaling, additions/subtractions, inner or outer vector products, permutations
and convolutions. In the proposed method, the input integer data streams
are linearly superimposed to form numerically-entangled integer data
streams that are stored in-place of the original inputs. A series of LSB
operations can then be performed directly using these entangled data streams.
The results are extracted from the entangled output streams by additions
and arithmetic shifts. Any soft errors affecting any single disentangled output
stream are guaranteed to be detectable via a specific post-computation
reliability check. In addition, when utilizing a separate processor core for
each of the streams, the proposed approach can recover all outputs after
any single fail-stop failure. Importantly, unlike algorithm-based fault
tolerance (ABFT) methods, the number of operations required for the
entanglement, extraction and validation of the results is linearly related to
the number of the inputs and does not depend on the complexity of the performed
LSB operations. We have validated our proposal on an Intel processor (Haswell
architecture with AVX2 support) via fast Fourier transforms, circular
convolutions, and matrix multiplication operations. Our analysis and
experiments reveal that the proposed approach incurs only a limited reduction in
processing throughput for a wide variety of LSB operations. This
overhead is 5 to 1000 times smaller than that of the equivalent ABFT method
that uses a checksum stream. Thus, our proposal can be used in fault-generating
processor hardware or safety-critical applications, where high reliability is
required without the cost of ABFT or modular redundancy.
Comment: to appear in IEEE Trans. on Signal Processing, 201
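The following toy Python sketch conveys only the linearity property that the proposal builds on: two bounded integer streams superimposed into one wider integer stream can be pushed through a linear operation together and separated again with shifts and masks. It is a generic packing illustration, not the paper's entanglement scheme, which keeps one entangled stream per input and additionally provides the fault detection and recovery described above.

    K = 32                      # bit offset separating the two superimposed streams
    MASK = (1 << K) - 1

    def circular_convolution(stream, kernel):
        n = len(stream)
        return [sum(stream[(i - j) % n] * kernel[j] for j in range(len(kernel)))
                for i in range(n)]

    x = [3, 1, 4, 1, 5, 9, 2, 6]             # two made-up integer input streams,
    y = [2, 7, 1, 8, 2, 8, 1, 8]             # small enough that all results stay below 2**K
    kernel = [1, 2, 3]

    packed = [a + (b << K) for a, b in zip(x, y)]         # superimpose ("entangle") the inputs
    packed_out = circular_convolution(packed, kernel)     # one pass of the linear operation

    out_x = [c & MASK for c in packed_out]                # extract with mask ...
    out_y = [c >> K for c in packed_out]                  # ... and arithmetic shift

    assert out_x == circular_convolution(x, kernel)       # consistency check against the
    assert out_y == circular_convolution(y, kernel)       # separately computed results
    print(out_x, out_y)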