28 research outputs found
Efficient Parameterized Algorithms for Computing All-Pairs Shortest Paths
Computing all-pairs shortest paths is a fundamental and much-studied problem
with many applications. Unfortunately, despite intense study, there are still
no significantly faster algorithms for it than the O(n^3)-time
algorithm due to Floyd and Warshall (1962). Somewhat faster algorithms exist
for the vertex-weighted version if fast matrix multiplication may be used.
Yuster (SODA 2009) gave an algorithm running in time O(n^2.842),
but no combinatorial, truly subcubic algorithm is known.
Motivated by the recent framework of efficient parameterized algorithms (or
"FPT in P"), we investigate the influence of the graph parameters clique-width
(cw) and modular-width (mw) on the running times of algorithms for solving
All-Pairs Shortest Paths. We obtain efficient (and combinatorial) parameterized
algorithms on non-negative vertex-weighted graphs of times O(cw^2 n^2),
resp. O(mw^2 n + n^2). If fast matrix multiplication is allowed then the
latter can be improved to O(mw^1.842 n + n^2) using the algorithm of
Yuster as a black box.
The algorithm relative to modular-width is adaptive, meaning that the running
time matches the best unparameterized algorithm for parameter value mw = Theta(n),
and it outperforms it already for mw = O(n^(1-eps)) for any eps > 0.
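For reference, the cubic baseline this abstract compares against is the classical Floyd-Warshall dynamic program; a minimal Python sketch (not the paper's parameterized algorithm):

```python
from math import inf

def floyd_warshall(n, edges):
    """All-pairs shortest paths for a directed graph with non-negative
    edge weights in O(n^3) time. edges: iterable of (u, v, w) triples."""
    dist = [[inf] * n for _ in range(n)]
    for i in range(n):
        dist[i][i] = 0
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)
    # Allow intermediate vertices 0..k, one new vertex k per round.
    for k in range(n):
        for i in range(n):
            dik = dist[i][k]
            for j in range(n):
                if dik + dist[k][j] < dist[i][j]:
                    dist[i][j] = dik + dist[k][j]
    return dist
```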
Abelian Primitive Words
We investigate Abelian primitive words, which are words that are not Abelian
powers. We show that unlike classical primitive words, the set of Abelian
primitive words is not context-free. We can determine whether a word is Abelian
primitive in linear time. Also unlike classical primitive words, we
find that a word may have more than one Abelian root. We also consider
enumeration problems and the relation to the theory of codes.
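An Abelian k-th power is a word that splits into k >= 2 equal-length blocks whose letter multisets (Parikh vectors) all coincide. A direct check of this definition can be sketched as follows; note this naive sketch runs in O(n * d(n)) time over the divisors of n, not the linear time achieved in the paper:

```python
from collections import Counter

def is_abelian_power(w):
    """True if w is an Abelian k-th power for some k >= 2, i.e. w splits
    into k >= 2 blocks of equal length with identical Parikh vectors."""
    n = len(w)
    if n == 0:
        return False
    # Try every block length d dividing n that yields at least 2 blocks.
    for d in range(1, n // 2 + 1):
        if n % d:
            continue
        first = Counter(w[:d])
        if all(Counter(w[i:i + d]) == first for i in range(d, n, d)):
            return True
    return False

def is_abelian_primitive(w):
    """An Abelian primitive word is a nonempty word that is not an
    Abelian power."""
    return len(w) > 0 and not is_abelian_power(w)
```

For example, "abba" is an Abelian square (blocks "ab" and "ba" have the same letter counts), hence not Abelian primitive.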
Efficient and Adaptive Parameterized Algorithms on Modular Decompositions
We study the influence of a graph parameter called modular-width on the time complexity for optimally solving well-known polynomial problems such as Maximum Matching, Triangle Counting, and Maximum s-t Vertex-Capacitated Flow. The modular-width of a graph depends on its (unique) modular decomposition tree, and can be computed in linear time O(n+m) for graphs with n vertices and m edges. Modular decompositions are an important tool for graph algorithms, e.g., for linear-time recognition of certain graph classes.
Throughout, we obtain efficient parameterized algorithms of running times O(f(mw) n + m), O(n + f(mw) m), or O(f(mw) + n + m) for low polynomial functions f and graphs of modular-width mw. Our algorithm for Maximum Matching, running in time O(mw^2 log(mw) n + m), is both faster and simpler than the recent O(mw^4 n + m) time algorithm of Coudert et al. (SODA 2018). For several other problems, e.g., Triangle Counting and Maximum b-Matching, we give adaptive algorithms, meaning that their running times match the best unparameterized algorithms for worst-case modular-width of mw = Theta(n) and they outperform them already for mw = o(n), until reaching linear time for mw = O(1).
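Modular decomposition builds on the notion of a module: a vertex set that no outside vertex can distinguish, i.e. every vertex outside the set is adjacent either to all of it or to none of it. A direct check of this definition, assuming an adjacency-set representation (this naive check is quadratic; full modular decomposition is computable in linear time, as the abstract notes):

```python
def is_module(adj, M):
    """Check whether vertex set M is a module of the graph given as a
    dict mapping each vertex to its set of neighbors: every vertex
    outside M must see either all of M or none of it."""
    M = set(M)
    for v in adj:
        if v in M:
            continue
        seen = sum(1 for u in M if u in adj[v])
        if seen not in (0, len(M)):
            return False
    return True
```

In a path 0-1-2-3, the set {1, 2} is not a module because vertex 0 sees 1 but not 2; every singleton is trivially a module.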
An Algorithm for the Exact Treedepth Problem
We present a novel algorithm for the minimum-depth elimination tree problem, which is equivalent to the optimal treedepth decomposition problem. Our algorithm makes use of two cheaply computed lower bound functions to prune the search tree, along with symmetry-breaking and domination rules. We present an empirical study showing that the algorithm outperforms the current state-of-the-art solver (which is based on a SAT encoding) by orders of magnitude on a range of graph classes.
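The search space pruned by such a solver comes from the recursive characterization of treedepth: for a disconnected graph it is the maximum over connected components, and for a connected graph it is 1 plus the minimum, over all choices of root vertex, of the treedepth of the remainder. An exponential-time baseline implementing just this recursion (not the paper's pruned algorithm) can be sketched as:

```python
from functools import lru_cache

def treedepth(n, edges):
    """Exact treedepth by exhaustive search over elimination roots.
    Exponential time; practical solvers prune this recursion with
    lower bounds, symmetry breaking and domination rules."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    @lru_cache(maxsize=None)
    def td(vs):  # vs: frozenset of remaining vertices
        if not vs:
            return 0
        # Split the remaining graph into connected components.
        comps, todo = [], set(vs)
        while todo:
            stack, comp = [todo.pop()], set()
            while stack:
                x = stack.pop()
                comp.add(x)
                for y in adj[x] & todo:
                    todo.discard(y)
                    stack.append(y)
            comps.append(frozenset(comp))
        if len(comps) > 1:
            return max(td(c) for c in comps)  # disconnected: take max
        c = comps[0]
        return 1 + min(td(c - {v}) for v in c)  # try every root
    return td(frozenset(range(n)))
```

For example, a path on 4 vertices has treedepth 3: deleting the second vertex leaves an isolated vertex and an edge, of treedepths 1 and 2.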
A Scalable Algorithm for Metric High-Quality Clustering in Information Retrieval Tasks
We consider the problem of efficiently finding a high-quality k-clustering of n points in a (possibly discrete) metric space. Many methods are known when the points are vectors in a real vector space and the distance function is a standard geometric distance such as L1, L2 (Euclidean), or L2^2 (squared Euclidean distance). In such cases efficiency is often sought via sophisticated multidimensional search structures for speeding up nearest-neighbor queries (e.g., variants of kd-trees). Such techniques usually work well in spaces of moderately high dimension (say, up to 6 or 8). Our target is a scenario in which either the metric space cannot be mapped into a vector space, or, if this mapping is possible, the dimension of such a space is so high as to rule out the use of the above-mentioned techniques. This setting is rather typical in Information Retrieval applications. We augment the well-known furthest-point-first algorithm for k-center clustering in metric spaces with a filtering step based on the triangle inequality, and we compare this algorithm with some recent fast variants of the classical k-means iterative algorithm augmented with analogous filtering schemes. We extensively tested the two solutions on synthetic geometric data and real data from Information Retrieval applications. The main conclusion we draw is that our modified furthest-point-first method attains solutions of better or comparable quality within a fraction of the time used by the fast k-means algorithm. Thus our algorithm is valuable when either real-time constraints or the large amount of data highlights the poor scalability of traditional clustering methods.
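The furthest-point-first strategy the abstract builds on (Gonzalez's 2-approximation for metric k-center) repeatedly promotes the point furthest from the current centers; a minimal sketch without the paper's triangle-inequality filtering step:

```python
def furthest_point_first(points, k, dist):
    """Gonzalez's furthest-point-first heuristic for metric k-center:
    start from an arbitrary point, then repeatedly add the point whose
    distance to the nearest chosen center is largest. Yields a
    2-approximation in any metric space."""
    centers = [points[0]]
    # d[i] = distance from points[i] to its closest center so far.
    d = [dist(p, centers[0]) for p in points]
    while len(centers) < k:
        i = max(range(len(points)), key=lambda j: d[j])
        centers.append(points[i])
        for j, p in enumerate(points):
            d[j] = min(d[j], dist(p, points[i]))
    return centers
```

The filtering step described in the abstract would skip distance computations that the triangle inequality proves cannot lower d[j]; the sketch above performs all of them.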
A Survey on Approximation in Parameterized Complexity: Hardness and Algorithms
Parameterization and approximation are two popular ways of coping with
NP-hard problems. More recently, the two have also been combined to derive many
interesting results. We survey developments in the area both from the
algorithmic and hardness perspectives, with emphasis on new techniques and
potential future research directions.