20 research outputs found

    Towards Work-Efficient Parallel Parameterized Algorithms

    Parallel parameterized complexity theory studies how fixed-parameter tractable (fpt) problems can be solved in parallel. Previous theoretical work focused on parallel algorithms that are very fast in principle, but did not take into account that when we only have a small number of processors (between 2 and, say, 1024), it is more important that the parallel algorithms are work-efficient. In the present paper we investigate how work-efficient fpt algorithms can be designed. We review standard methods from fpt theory, like kernelization, search trees, and interleaving, and prove trade-offs for them between work efficiency and runtime improvements. This results in a toolbox for developing work-efficient parallel fpt algorithms.
    Comment: Prior full version of the paper that will appear in Proceedings of the 13th International Conference and Workshops on Algorithms and Computation (WALCOM 2019), February 27 - March 02, 2019, Guwahati, India. The final authenticated version is available online at https://doi.org/10.1007/978-3-030-10564-8_2
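    As one illustration of the techniques the paper reviews, here is a minimal sequential sketch of interleaving for k-Vertex Cover: a reduction rule (only the degree-1 rule here) is re-applied at every node of the search tree before branching. The code and names are ours, assuming the textbook formulation of the technique, not taken from the paper.

    ```python
    # Interleaving sketch: kernelize (degree-1 rule only) at every search-tree
    # node of a bounded search tree for k-Vertex Cover, then branch on an edge.
    def vc_interleaved(edges, k):
        """True iff the graph, given as a set of frozenset edges,
        has a vertex cover of size at most k."""
        edges = set(edges)
        changed = True
        while changed and k > 0:
            changed = False
            degree = {}
            for e in edges:
                for v in e:
                    degree[v] = degree.get(v, 0) + 1
            for e in edges:
                u, v = tuple(e)
                if degree[u] == 1 or degree[v] == 1:
                    w = v if degree[u] == 1 else u   # neighbor of the pendant
                    edges = {f for f in edges if w not in f}
                    k -= 1
                    changed = True
                    break
        if not edges:
            return True
        if k == 0:
            return False
        u, v = tuple(next(iter(edges)))   # some endpoint must be in the cover
        return (vc_interleaved({f for f in edges if u not in f}, k - 1) or
                vc_interleaved({f for f in edges if v not in f}, k - 1))
    ```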

    Tight lower bounds for certain parameterized NP-hard problems

    Based on the framework of parameterized complexity theory, we derive tight lower bounds on the computational complexity for a number of well-known NP-hard problems. We start by proving a general result, namely that the parameterized weighted satisfiability problem on depth-t circuits cannot be solved in time n^{o(k)} poly(m), where n is the circuit input length, m is the circuit size, and k is the parameter, unless the (t − 1)-st level W[t − 1] of the W-hierarchy collapses to FPT. By refining this technique, we prove that a group of parameterized NP-hard problems, including weighted sat, dominating set, hitting set, set cover, and feature set, cannot be solved in time n^{o(k)} poly(m), where n is the size of the universal set from which the k elements are to be selected and m is the instance size, unless the first level W[1] of the W-hierarchy collapses to FPT. We also prove that another group of parameterized problems, which includes weighted q-sat (for any fixed q ≥ 2), clique, and independent set, cannot be solved in time n^{o(k)} unless all search problems in the syntactic class SNP, introduced by Papadimitriou and Yannakakis, are solvable in subexponential time. Note that all these parameterized problems have trivial algorithms of running time either n^k poly(m) or O(n^k).
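    The trivial baseline mentioned in the last sentence is worth making concrete: the lower bounds say that, up to the stated collapses, nothing substantially faster than exhaustive search over k-subsets exists. A hedged illustration for clique (names and graph encoding are ours):

    ```python
    # The trivial O(n^k)-style algorithm for k-clique: try all C(n, k) subsets.
    from itertools import combinations

    def has_k_clique(adj, k):
        """adj: dict mapping each vertex to its set of neighbors."""
        for subset in combinations(list(adj), k):        # C(n, k) subsets
            if all(v in adj[u] for u, v in combinations(subset, 2)):
                return True
        return False

    # has_k_clique({1: {2, 3}, 2: {1, 3}, 3: {1, 2}}, 3) -> True
    ```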

    Design and Implementation of High Performance Algorithms for the (n,k)-Universal Set Problem

    The k-path problem is to find a simple path of length k in a graph. This problem is NP-complete and has applications in bioinformatics for detecting signaling pathways in protein interaction networks and for biological subnetwork matching. There are algorithms implemented to solve the problem for k up to 13. The fastest implementation has running time O^*(4.32^k), which is slower than the best known algorithm of running time O^*(4^k). To implement the best known algorithm for the k-path problem, we need to construct an (n,k)-universal set. In this thesis, we study practical algorithms for constructing (n,k)-universal sets. We propose six algorithm variants to handle the increasing computational time and memory space needed for k = 3, 4, ..., 8, together with two major empirical techniques that cut the time and space tremendously, yet generate good results. For the case k = 7, the size of the universal set found by our algorithm is 1576, and it is 4611 for the case k = 8. We implement the proposed algorithms with the OpenMP parallel interface and construct universal sets for k = 3, 4, ..., 8. Our experiments show that our algorithms for the (n,k)-universal set problem exhibit very good parallelism and hence shed light on a future MPI implementation. Ours is the first implementation effort for the (n,k)-universal set problem. We share this effort by proposing an extensible universal set construction and retrieval system, which integrates the construction algorithms and the universal sets already constructed. The sets are stored in a centralized database, and an interface is provided to access the database easily. (n,k)-universal sets have been applied to many other NP-complete problems, such as the set splitting and the matching and packing problems. The small (n,k)-universal sets constructed by us will significantly reduce the time needed to solve those problems.
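    For reference, an (n,k)-universal set is a set T of length-n binary vectors such that, for every choice of k coordinates, the projections of T onto those coordinates realize all 2^k patterns. A brute-force checker (ours, exponential, for small n and k only) makes the definition concrete:

    ```python
    # Check (n, k)-universality by enumerating all C(n, k) coordinate sets.
    from itertools import combinations

    def is_universal(T, n, k):
        """T: iterable of length-n 0/1 tuples."""
        vectors = list(T)
        for positions in combinations(range(n), k):
            seen = {tuple(v[i] for i in positions) for v in vectors}
            if len(seen) < 2 ** k:          # some k-bit pattern is missing
                return False
        return True

    # is_universal([(0, 0), (0, 1), (1, 0), (1, 1)], 2, 2) -> True
    ```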

    Crown Reductions and Decompositions: Theoretical Results and Practical Methods

    Two kernelization schemes for the vertex cover problem, an NP-hard problem in graph theory, are compared. The first, crown reduction, is based on the identification of a graph structure called a crown and is relatively new, while the second, LP-kernelization, has been used for some time. A proof of the crown reduction algorithm is presented, the algorithm is implemented, and theorems are proven concerning its performance. Experiments are conducted comparing the performance of crown reduction and LP-kernelization on real-world biological graphs. Next, theorems are presented that provide a logical connection between the crown structure and LP-kernelization. Finally, an algorithm is developed for decomposing a graph into two subgraphs: one that is a crown and one that is crown-free.
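    For context, a crown is an independent set I together with its neighborhood H = N(I) such that some matching using only edges between H and I saturates H; both I and H can then be deleted from the instance. Below is a hedged sketch of the standard crown-finding attempt (a maximal matching followed by a bipartite maximum matching), using the networkx package; the names are ours, and a single attempt may find no crown even when one exists:

    ```python
    # One crown-finding attempt for an undirected networkx graph G.
    import networkx as nx

    def find_crown(G):
        """Return a crown (I, H) with H = N(I) matched into I, or None."""
        M1 = nx.maximal_matching(G)                 # step 1: maximal matching
        O = set(G) - {v for e in M1 for v in e}     # unmatched => independent
        if not O:
            return None
        isolated = {v for v in O if G.degree(v) == 0}
        if isolated:
            return isolated, set()                  # trivial crown, empty head
        B = nx.Graph((v, u) for v in O for u in G[v])       # bipartite O-N(O)
        M2 = nx.bipartite.maximum_matching(B, top_nodes=O)  # step 2
        I = {v for v in O if v not in M2}           # O-vertices M2 leaves free
        if not I:
            return None                             # this attempt found none
        while True:                                 # step 3: grow to a fixpoint
            H = {u for v in I for u in G[v]}
            # Every u in H is matched by M2 (otherwise an augmenting path
            # would contradict maximality), so M2[u] is defined.
            I_next = I | {M2[u] for u in H}
            if I_next == I:
                return I, H
            I = I_next
    ```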

    Minimum Vertex Cover in Power-Law Graphs (Cobertura por vértices mínima em grafos lei de potência)

    Advisor: Prof. Dr. Renato José da Silva Carmo. Co-advisor: Prof. Dr. André Luís Vignatti. Master's thesis, Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defended: Curitiba, 26/02/2016. Includes references: f. 55-57.
    Abstract: Graph theory is a branch of mathematics used to model and represent sets of elements and their relationships, and it is widely used to solve computational problems. A graph can represent many natural and digital systems, such as protein interactions, social networks, and digital connections. These networks have several characteristics, one of which is the degree distribution of their vertices. Many real-world graphs have a degree distribution that follows a power law, which informally means that there are few vertices of high degree while many vertices have low degree. Among the classic algorithmic problems, we are interested in computationally hard problems belonging to the NP-hard class, specifically the minimum vertex cover problem on graphs whose degree distribution follows a power law. This work presents a method inspired by reduction rules that shrink the problem to its core (kernelization). The results obtained suggest that it is a good heuristic for approximating the optimal solution, and that it significantly reduces the computational time needed to solve the vertex cover problem. Keywords: Graphs, Vertex Cover, Reduction Rules, Power Law.
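    The abstract does not spell out its reduction rules; as an illustration of the kernelization-inspired style it refers to, here is a sketch (ours, not necessarily the thesis's method) combining the classic degree-1 rule with a max-degree greedy step, the latter being natural on power-law graphs where a few hubs cover many edges:

    ```python
    # Vertex cover heuristic: degree-1 reduction rule + max-degree greedy.
    def vc_heuristic(adj):
        """adj: dict vertex -> set of neighbors (mutated). Returns a cover."""
        cover = set()
        def remove(v):                       # delete v and its incident edges
            for u in adj.pop(v, ()):
                adj[u].discard(v)
        while any(adj.values()):
            # Degree-1 rule: the neighbor of a pendant vertex joins the cover.
            pendant = next((v for v, nb in adj.items() if len(nb) == 1), None)
            if pendant is not None:
                (w,) = adj[pendant]
                cover.add(w)
                remove(w)
                continue
            # Greedy step: take a remaining vertex of maximum degree (a hub).
            hub = max(adj, key=lambda v: len(adj[v]))
            cover.add(hub)
            remove(hub)
        return cover
    ```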

    Solving large FPT problems on coarse-grained parallel machines

    Fixed-parameter tractability (FPT) techniques have recently been successful in solving NP-complete problem instances of practical importance which were too large to be solved with previous methods. In this paper, we show how to enhance this approach through the addition of parallelism, thereby allowing even larger problem instances to be solved in practice. More precisely, we demonstrate the potential of parallelism when applied to the bounded tree search phase of FPT algorithms. We apply our methodology to the k-Vertex Cover problem, which has important applications in, for example, the analysis of multiple sequence alignments for computational biochemistry. We have implemented our parallel FPT method using C and the MPI communication library, and tested it on a 32-node Beowulf cluster. This is the first experimental examination of parallel FPT techniques. As part of our experiments, we solved larger instances of k-Vertex Cover than in any previously reported implementations. For example, our code can solve problem instances with k = 400 in less than 1.5 h.
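    The coarse-grained scheme described can be sketched compactly: expand the bounded search tree a few levels to generate independent subproblems, then solve the subtrees in parallel. This is our hedged reconstruction of the general idea, with Python's multiprocessing standing in for MPI and all names ours:

    ```python
    # Parallel bounded search tree for k-Vertex Cover (edges as frozensets).
    from multiprocessing import Pool

    def vc_branch(edges, k):
        """True iff the edge set has a vertex cover of size at most k."""
        if not edges:
            return True
        if k == 0:
            return False
        u, v = tuple(next(iter(edges)))     # branch: u or v is in the cover
        return (vc_branch({e for e in edges if u not in e}, k - 1) or
                vc_branch({e for e in edges if v not in e}, k - 1))

    def expand(edges, k, depth):
        """Subproblems at the given depth of the search tree."""
        if depth == 0 or not edges or k == 0:
            return [(edges, k)]
        u, v = tuple(next(iter(edges)))
        return (expand({e for e in edges if u not in e}, k - 1, depth - 1) +
                expand({e for e in edges if v not in e}, k - 1, depth - 1))

    def parallel_vc(edges, k, depth=4, procs=4):
        # Guard calls with `if __name__ == "__main__":` on OSes that spawn.
        with Pool(procs) as pool:
            return any(pool.starmap(vc_branch, expand(set(edges), k, depth)))
    ```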

    Massive Parallelization of Branching Algorithms (Parallélisation massive des algorithmes de branchement)

    Optimization and search problems are often NP-complete, and brute-force techniques must typically be implemented to find exact solutions. Problems such as clustering genes in bioinformatics or finding optimal routes in delivery networks can be solved in exponential time using recursive branching strategies. Nevertheless, these algorithms become impractical above certain instance sizes due to the large number of scenarios that need to be explored, for which parallelization techniques are necessary to improve performance. In previous works, centralized and decentralized techniques have been implemented aiming to scale up parallelism in branching algorithms whilst attempting to reduce communication overhead, which plays a significant role in massively parallel implementations due to the messages passed across processes. Thus, our work consists of the development of a fully generic C++ library, named GemPBA, to speed up almost any branching algorithm with massive parallelization, along with a novel and simple dynamic load balancing tool that reduces the number of passed messages by sending high-priority tasks first. Our approach uses a hybrid centralized-decentralized strategy, which makes use of a center process in charge of assigning worker roles via messages of a few bits, such that tasks do not need to pass through a central processor. Also, a working processor spawns new tasks if and only if there are processors available to receive them, thus guaranteeing their transfer and thereby notably decreasing the communication overhead. We performed our experiments on the Minimum Vertex Cover problem, which showed remarkable results: we were able to solve even the toughest DIMACS graphs with a simple MVC algorithm.
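    The "spawn if and only if a processor is available" rule can be mimicked in a toy form. This sketch uses Python threads and a semaphore in place of GemPBA's C++/MPI machinery (none of these names come from the library); a branch is offloaded only when an idle worker is guaranteed to take it, and otherwise explored locally:

    ```python
    # Toy model of availability-gated task spawning for branching algorithms.
    from concurrent.futures import ThreadPoolExecutor
    import threading

    WORKERS = 4
    free_workers = threading.Semaphore(WORKERS)   # one permit per idle worker
    pool = ThreadPoolExecutor(max_workers=WORKERS)

    def maybe_offload(task, run_task, futures):
        # Offload iff an idle worker can take the task right now; otherwise
        # keep it local, so no task ever sits in a queue (guaranteed transfer).
        if free_workers.acquire(blocking=False):
            def wrapped():
                try:
                    run_task(task)
                finally:
                    free_workers.release()        # worker becomes idle again
            futures.append(pool.submit(wrapped))
        else:
            run_task(task)
    ```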

    Parametric Duality and Kernelization: Lower Bounds and Upper Bounds on Kernel Size
