Fast Parallel Fixed-Parameter Algorithms via Color Coding
Fixed-parameter algorithms have been successfully applied to solve numerous
difficult problems within acceptable time bounds on large inputs. However, most
fixed-parameter algorithms are inherently \emph{sequential} and, thus, make no
use of the parallel hardware present in modern computers. We show not only
that parallel fixed-parameter algorithms exist for numerous parameterized
problems from the literature -- including vertex cover, packing problems,
cluster editing, cutting vertices, finding embeddings, and finding matchings --
but also that for these problems there are parallel algorithms working in
\emph{constant} time or at least in time \emph{depending only on the parameter}
(and not on the size of the input). Phrased in terms of complexity classes, we place
numerous natural parameterized problems in parameterized versions of AC. On
a more technical level, we show how the \emph{color coding} method can be
implemented in constant time and apply it to embedding problems for graphs of
bounded tree-width or tree-depth and to model checking first-order formulas in
graphs of bounded degree.
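For reference, the core of the color-coding technique (here sequential, applied to finding a simple path on k vertices) can be sketched as follows; the function names and adjacency-list representation are illustrative, not taken from the paper:

```python
import random

def colorful_path_exists(adj, k, colors):
    # Dynamic program over color sets: reachable[v] holds the color sets
    # realizable by a "colorful" path (all colors distinct) ending at v.
    n = len(adj)
    reachable = [{frozenset([colors[v]])} for v in range(n)]
    for _ in range(k - 1):
        new = [set() for _ in range(n)]
        for v in range(n):
            for S in reachable[v]:
                for u in adj[v]:
                    if colors[u] not in S:
                        new[u].add(S | {colors[u]})
        for v in range(n):
            reachable[v] |= new[v]
    return any(len(S) == k for sets in reachable for S in sets)

def has_k_path(adj, k, trials=100):
    # A fixed simple k-path is colorful with probability k!/k^k per trial,
    # so repeating over independent random colorings finds one that exists
    # with high probability.
    for _ in range(trials):
        colors = [random.randrange(k) for _ in range(len(adj))]
        if colorful_path_exists(adj, k, colors):
            return True
    return False
```

The paper's contribution is showing that such colorings and the subsequent check can be carried out in constant parallel time; the sketch above only illustrates the underlying randomized argument.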
Scalable Kernelization for Maximum Independent Sets
The most efficient algorithms for finding maximum independent sets in both
theory and practice use reduction rules to obtain a much smaller problem
instance called a kernel. The kernel can then be solved quickly using exact or
heuristic algorithms---or by repeatedly kernelizing recursively in the
branch-and-reduce paradigm. It is of critical importance for these algorithms
that kernelization is fast and returns a small kernel. Current algorithms are
either slow but produce a small kernel, or fast and give a large kernel. We
attempt to accomplish both of these goals simultaneously, by giving an
efficient parallel kernelization algorithm based on graph partitioning and
parallel bipartite maximum matching. We combine our parallelization techniques
with two techniques to accelerate kernelization further: dependency checking
that prunes reductions that cannot be applied, and reduction tracking that
allows us to stop kernelization when reductions become less fruitful. Our
algorithm produces kernels that are orders of magnitude smaller than the
fastest kernelization methods, while having a similar execution time.
Furthermore, our algorithm is able to compute kernels with size comparable to
the smallest known kernels, but up to two orders of magnitude faster than
previously possible. Finally, we show that our kernelization algorithm can be
used to accelerate existing state-of-the-art heuristic algorithms, allowing us
to find larger independent sets faster on large real-world networks and
synthetic instances.
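To make the notion of a reduction rule concrete, here is a sketch of two classic rules for maximum independent set (degree-zero and degree-one vertices); this is not the paper's partition- and matching-based parallel algorithm, and all identifiers are illustrative. It assumes a simple graph (no self-loops) given as a dict of neighbor sets:

```python
def kernelize(adj):
    # Exhaustively apply two safe reductions for maximum independent set:
    #   degree 0: an isolated vertex always belongs to some maximum IS;
    #   degree 1: a pendant vertex v can always be taken instead of its
    #             neighbor u, so take v and delete both.
    # Returns (vertices forced into a maximum IS, remaining kernel graph).
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    forced = []
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if v not in adj:
                continue  # deleted earlier in this pass
            if len(adj[v]) == 0:
                forced.append(v)
                del adj[v]
                changed = True
            elif len(adj[v]) == 1:
                u = next(iter(adj[v]))
                forced.append(v)
                for w in adj[u]:
                    if w != v:
                        adj[w].discard(u)
                del adj[u], adj[v]
                changed = True
    return forced, adj
```

After the forced vertices are peeled off, the remaining kernel can be handed to an exact or heuristic solver, as described above.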
Open Problems in (Hyper)Graph Decomposition
Large networks are useful in a wide range of applications. Sometimes problem
instances are composed of billions of entities. Decomposing and analyzing these
structures helps us gain new insights about our surroundings. Even if the final
application concerns a different problem (such as traversal, finding paths,
trees, and flows), decomposing large graphs is often an important subproblem
for complexity reduction or parallelization. This report is a summary of
discussions that happened at Dagstuhl seminar 23331 on "Recent Trends in Graph
Decomposition" and presents currently open problems and future directions in
the area of (hyper)graph decomposition.
Massive Parallelization of Branching Algorithms
Optimization and search problems are often NP-complete, and brute-force techniques
must typically be implemented to find exact solutions. Problems such as clustering
genes in bioinformatics or finding optimal routes in delivery networks can be
solved in exponential time using recursive branching strategies. Nevertheless, these
algorithms become impractical above certain instance sizes due to the large number of
scenarios that need to be explored, for which parallelization techniques are necessary
to improve the performance.
In previous work, centralized and decentralized techniques have been implemented
to scale up parallelism in branching algorithms while attempting to reduce
communication overhead, which plays a significant role in massively parallel
implementations due to the message passing between processes.
Thus, our work consists of the development of a fully generic C++ library,
named GemPBA, to speed up almost any branching algorithm with massive parallelization,
along with a novel and simple dynamic load balancing tool that reduces the
number of passed messages by sending high-priority tasks first. Our approach
uses a hybrid centralized-decentralized strategy, in which a center process
assigns worker roles through messages of only a few bits, so that tasks do not
need to pass through a central processor.
Also, a working processor spawns new tasks if and only if there are processors
available to receive them, thus guaranteeing their transfer and notably
decreasing the communication overhead.
We performed our experiments on the Minimum Vertex Cover problem, with
remarkable results: even the toughest DIMACS graphs could be solved with a
simple MVC algorithm.
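As background, the kind of recursive branching that a framework such as GemPBA parallelizes can be sketched sequentially for Minimum Vertex Cover; the code below is an illustrative skeleton, not GemPBA's actual API:

```python
def remove(adj, vs):
    # Return a copy of the graph with vertices vs and their edges removed.
    vs = set(vs)
    return {u: ns - vs for u, ns in adj.items() if u not in vs}

def mvc_size(adj):
    # Size of a minimum vertex cover via recursive branching (exponential
    # time). adj maps each vertex to its set of neighbors.
    v = max(adj, key=lambda u: len(adj[u]), default=None)
    if v is None or not adj[v]:
        return 0  # no edges left: the empty cover suffices
    # Branch 1: put v in the cover and recurse on the rest.
    take_v = 1 + mvc_size(remove(adj, [v]))
    # Branch 2: exclude v, so all of its neighbors must enter the cover.
    take_nbrs = len(adj[v]) + mvc_size(remove(adj, adj[v]))
    return min(take_v, take_nbrs)
```

Each of the two recursive calls is an independent subproblem, which is exactly what makes branching algorithms amenable to the task-distribution scheme described above.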
A Parameterisation of Algorithms for Distributed Constraint Optimisation via Potential Games
This paper introduces a parameterisation of learning algorithms for distributed constraint optimisation problems (DCOPs). This parameterisation encompasses many algorithms developed in both the computer science and game theory literatures. It is built on our insight that when formulated as noncooperative games, DCOPs form a subset of the class of potential games. This result allows us to prove convergence properties of algorithms developed in the computer science literature using game theoretic methods. Furthermore, our parameterisation can assist system designers by making clear the pros and cons of, and the synergies between, the various DCOP algorithm components.
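To illustrate the potential-game argument, consider a toy DCOP cast as a graph-coloring game: each agent minimizes conflicts on its own edges, and the total conflict count is an exact potential, so every improving move strictly decreases it and best-response dynamics must reach a Nash equilibrium. The sketch below illustrates this general idea and is not an algorithm from the paper:

```python
def conflicts(assign, edges):
    # Global potential: number of monochromatic edges (lower is better).
    return sum(assign[u] == assign[v] for u, v in edges)

def best_response_dynamics(n, edges, colors):
    # Agents take turns best-responding to their neighbors' current choices.
    # Because the game is an exact potential game, this terminates in a
    # Nash equilibrium (a local minimum of the conflict count).
    assign = {v: 0 for v in range(n)}
    nbrs = {v: set() for v in range(n)}
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    improved = True
    while improved:
        improved = False
        for v in range(n):
            # Agent v's private cost: conflicts on its own edges only.
            cost = lambda c: sum(assign[u] == c for u in nbrs[v])
            best = min(range(colors), key=cost)
            if cost(best) < cost(assign[v]):
                assign[v] = best
                improved = True
    return assign
```

On a 3-colorable graph this typically reaches a proper coloring, though in general it only guarantees a local optimum of the potential.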
A Reconfigurable Computing Solution to the Parameterized Vertex Cover Problem
Active research has been conducted over the past two decades in the field of computational intractability. This thesis explores parallel implementations on an RC (reconfigurable computing) platform of FPT (fixed-parameter tractable) algorithms.
Reconfigurable hardware implementations of algorithms for solving NP-complete problems have been of great interest in research over the past few years. However, most of this research targets exact algorithms for such problems. Although these implementations have generated good results, it should be kept in mind that the input sizes were small. Moreover, most of these implementations are instance-specific, making it mandatory to generate a different circuit for every new problem instance.
In this work, we present an efficient and scalable algorithm that breaks away from the conventional instance-specific approach toward a more general parameterized approach to such problems, based on the theory of fixed-parameter tractability. The prototype problem used as a case study is the classic vertex cover problem. The hardware implementation has demonstrated speedups on the order of 100x over the software version of the vertex cover algorithm.
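For context, the classic fixed-parameter algorithm for vertex cover is a bounded search tree that branches on an endpoint of an uncovered edge, running in O(2^k · m) time; the sketch below is the textbook sequential version, not the thesis's hardware implementation:

```python
def has_vc(edges, k):
    # Decide whether the graph given as an edge list has a vertex cover
    # of size at most k. Depth of the search tree is at most k, so the
    # running time is O(2^k * m).
    if not edges:
        return True   # every edge is covered
    if k == 0:
        return False  # an edge remains but the budget is spent
    u, v = edges[0]
    # Any cover must contain u or v; branch on both choices.
    without_u = [(a, b) for a, b in edges if a != u and b != u]
    without_v = [(a, b) for a, b in edges if a != v and b != v]
    return has_vc(without_u, k - 1) or has_vc(without_v, k - 1)
```

The two branches at every node are independent, which is what makes this family of algorithms a natural fit for the parallel hardware explored in the thesis.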