
    Paradigms for Parameterized Enumeration

    The aim of this paper is to examine the computational complexity and algorithmics of enumeration, the task of outputting all solutions of a given problem, from the point of view of parameterized complexity. First, we formally define different notions of efficient enumeration in the context of parameterized complexity. Second, we show in a number of examples how different algorithmic paradigms can be used to obtain parameter-efficient enumeration algorithms. These paradigms use well-known principles from the design of parameterized decision algorithms as well as enumeration techniques, such as kernelization and self-reducibility. The concept of kernelization, in particular, leads to a characterization of fixed-parameter tractable enumeration problems. Comment: Accepted for MFCS 2013; long version of the paper
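
    As a concrete, simplified illustration of the self-reducibility/branching paradigm (this is an illustrative sketch, not an algorithm taken from the paper), the following Python function enumerates all minimal vertex covers of size at most k with a bounded search tree of at most 2^k leaves, so both the total time and the delay are FPT in the parameter k. The problem choice, function name, and graph encoding are assumptions made for the example.

        def minimal_vertex_covers(edges, k):
            """Yield every minimal vertex cover of size at most k, each exactly once.

            Bounded search tree: pick a vertex u with an uncovered incident edge and
            branch on "u is in the cover" versus "u is not in the cover" (in which
            case all neighbours of u must be).  At most 2^k leaves.
            """
            def neighbours(u):
                result = set()
                for a, b in edges:
                    if a == u:
                        result.add(b)
                    elif b == u:
                        result.add(a)
                return result

            def covers(cset):
                return all(u in cset or v in cset for u, v in edges)

            def is_minimal_cover(cset):
                return covers(cset) and all(not covers(cset - {x}) for x in cset)

            def branch(chosen, forbidden, budget):
                uncovered = next(((u, v) for u, v in edges
                                  if u not in chosen and v not in chosen), None)
                if uncovered is None:
                    if is_minimal_cover(chosen):
                        yield frozenset(chosen)
                    return
                if budget == 0:
                    return
                u = uncovered[0]
                # Branch 1: u joins the cover.
                yield from branch(chosen | {u}, forbidden, budget - 1)
                # Branch 2: u stays out, so all of its uncovered neighbours must join.
                nu = neighbours(u) - chosen
                if not (nu & forbidden) and len(nu) <= budget:
                    yield from branch(chosen | nu, forbidden | {u}, budget - len(nu))

            yield from branch(frozenset(), frozenset(), k)

        # All minimal vertex covers of the path 1-2-3-4 with at most 2 vertices:
        edges = [(1, 2), (2, 3), (3, 4)]
        print(sorted(map(sorted, minimal_vertex_covers(edges, 2))))   # [[1, 3], [2, 3], [2, 4]]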

    From the Quantum Approximate Optimization Algorithm to a Quantum Alternating Operator Ansatz

    The next few years will be exciting as prototype universal quantum processors emerge, enabling implementation of a wider variety of algorithms. Of particular interest are quantum heuristics, which require experimentation on quantum hardware for their evaluation, and which have the potential to significantly expand the breadth of quantum computing applications. A leading candidate is Farhi et al.'s Quantum Approximate Optimization Algorithm, which alternates between applying a cost-function-based Hamiltonian and a mixing Hamiltonian. Here, we extend this framework to allow alternation between more general families of operators. The essence of this extension, the Quantum Alternating Operator Ansatz, is the consideration of general parametrized families of unitaries rather than only those corresponding to the time-evolution under a fixed local Hamiltonian for a time specified by the parameter. This ansatz supports the representation of a larger, and potentially more useful, set of states than the original formulation, with potential long-term impact on a broad array of application areas. For cases that call for mixing only within a desired subspace, refocusing on unitaries rather than Hamiltonians enables more efficiently implementable mixers than was possible in the original framework. Such mixers are particularly useful for optimization problems with hard constraints that must always be satisfied, defining a feasible subspace, and soft constraints whose violation we wish to minimize. More efficient implementation enables earlier experimental exploration of an alternating operator approach to a wide variety of approximate optimization, exact optimization, and sampling problems. Here, we introduce the Quantum Alternating Operator Ansatz, lay out design criteria for mixing operators, detail mappings for eight problems, and provide brief descriptions of mappings for diverse problems. Comment: 51 pages, 2 figures. Revised to match journal paper
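
    For orientation, the alternating structure described above can be written schematically (in notation that may differ from the paper's) as

        |\psi(\boldsymbol{\gamma}, \boldsymbol{\beta})\rangle
            = U_M(\beta_p)\, U_P(\gamma_p) \cdots U_M(\beta_1)\, U_P(\gamma_1)\, |s\rangle ,

    where |s\rangle is a fixed starting state. In the original QAOA, U_P(\gamma) = e^{-i\gamma H_C} and U_M(\beta) = e^{-i\beta H_M} for a cost Hamiltonian H_C and a mixing Hamiltonian H_M; the Quantum Alternating Operator Ansatz instead allows U_P and U_M to be drawn from more general parametrized families of unitaries, for example mixers chosen so that they preserve a feasible subspace defined by hard constraints.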

    Complexity and Approximability of Parameterized MAX-CSPs

    We study the optimization version of constraint satisfaction problems (Max-CSPs) in the framework of parameterized complexity; the goal is to compute the maximum fraction of constraints that can be satisfied simultaneously. In standard CSPs, we want to decide whether this fraction equals one. The parameters we investigate are structural measures, such as the treewidth or the clique-width of the variable-constraint incidence graph of the CSP instance. We consider Max-CSPs with the constraint types AND, OR, PARITY, and MAJORITY, and with various parameters k, and we attempt to fully classify them into the following three cases: 1. The exact optimum can be computed in FPT time. 2. It is W[1]-hard to compute the exact optimum, but there is a randomized FPT approximation scheme (FPT-AS), which computes a (1−ϵ)-approximation in time f(k,ϵ)⋅poly(n). 3. There is no FPT-AS unless FPT = W[1]. For the corresponding standard CSPs, we establish FPT vs. W[1]-hardness results.
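
    To make the objective concrete, here is a small brute-force Python sketch (not from the paper, and deliberately exponential in the number of variables) that evaluates the four constraint types above and computes the maximum fraction of constraints any single assignment satisfies. The strict-majority and odd-parity semantics are common conventions assumed here for illustration.

        from itertools import product

        def evaluate(ctype, values):
            # Boolean semantics of a single constraint over its variable values.
            if ctype == "AND":
                return all(values)
            if ctype == "OR":
                return any(values)
            if ctype == "PARITY":
                return sum(values) % 2 == 1       # odd parity (one common convention)
            if ctype == "MAJORITY":
                return 2 * sum(values) > len(values)   # strict majority (assumption)
            raise ValueError(ctype)

        def max_fraction_satisfied(variables, constraints):
            """Maximum fraction of constraints satisfiable by one assignment,
            found by exhaustive search over all 2^n assignments."""
            best = 0
            for bits in product([False, True], repeat=len(variables)):
                assignment = dict(zip(variables, bits))
                satisfied = sum(evaluate(t, [assignment[v] for v in vs])
                                for t, vs in constraints)
                best = max(best, satisfied)
            return best / len(constraints)

        constraints = [("OR", ["x", "y"]), ("AND", ["x", "y"]), ("PARITY", ["x", "y"])]
        print(max_fraction_satisfied(["x", "y"], constraints))   # 2/3: no assignment satisfies all three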

    16th Scandinavian Symposium and Workshops on Algorithm Theory: SWAT 2018, June 18-20, 2018, Malmö University, Malmö, Sweden


    Fixed-parameter algorithms for some combinatorial problems in bioinformatics

    Fixed-parameter algorithmics was developed in the 1990s as an approach to solving NP-hard problems optimally within a guaranteed running time. It offers a new opportunity to solve NP-hard problems exactly, even on large problem instances. In this thesis, we apply fixed-parameter algorithms to three NP-hard problems in bioinformatics.

    The Flip Consensus Tree Problem is a combinatorial problem arising in computational phylogenetics. Using its formulation as a graph-modification problem, we present a set of data reduction rules and two fixed-parameter algorithms with respect to the number of modifications. Additionally, we discuss several heuristic improvements that accelerate the running time of our algorithms in practice, and we report computational results on phylogenetic data.

    The Weighted Cluster Editing Problem is a graph-modification problem that arises in computational biology when clustering objects with respect to a given similarity or distance measure. We present one of our fixed-parameter algorithms with respect to the minimum modification cost and describe the idea behind our fastest algorithm for this problem and its unweighted counterpart.

    The Bond Order Assignment Problem asks for a bond order assignment of a molecule graph that minimizes a penalty function. We prove several complexity results for this problem and give two exact fixed-parameter algorithms, both based on dynamic programming over a tree decomposition of the molecule graph; they are fixed-parameter tractable with respect to the treewidth of the molecule graph and the maximum atom valence. We implemented one of these algorithms with several heuristic improvements and evaluated it on a set of real molecule graphs. It turns out that our algorithm is very fast on this dataset and even outperforms a heuristic algorithm that is usually used in practice.
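
    As an illustration of the fixed-parameter approach to editing problems, the following Python sketch implements the classic conflict-triple branching for unweighted Cluster Editing (a textbook O(3^k)-leaves search tree, not the thesis's faster algorithm): as long as an induced path u-v-w exists, any solution must delete uv, delete vw, or insert uw, so we branch on these three choices and decrement the budget.

        def cluster_editing(n, edges, k):
            """Return a sequence of at most k edits (('add'|'del', u, v) tuples) that
            turns the graph on vertices 0..n-1 into a disjoint union of cliques,
            or None if no such sequence exists."""
            adj = {frozenset(e) for e in edges}

            def find_conflict():
                # An induced path u-v-w: uv and vw are edges, uw is not.
                for v in range(n):
                    nbrs = [u for u in range(n) if u != v and frozenset((u, v)) in adj]
                    for i in range(len(nbrs)):
                        for j in range(i + 1, len(nbrs)):
                            u, w = nbrs[i], nbrs[j]
                            if frozenset((u, w)) not in adj:
                                return u, v, w
                return None

            def solve(budget, edits):
                conflict = find_conflict()
                if conflict is None:
                    return edits                 # already a disjoint union of cliques
                if budget == 0:
                    return None
                u, v, w = conflict
                for op, a, b in (("del", u, v), ("del", v, w), ("add", u, w)):
                    key = frozenset((a, b))
                    if op == "del":
                        adj.remove(key)
                    else:
                        adj.add(key)
                    result = solve(budget - 1, edits + [(op, a, b)])
                    if op == "del":              # undo the edit before the next branch
                        adj.add(key)
                    else:
                        adj.remove(key)
                    if result is not None:
                        return result
                return None

            return solve(k, [])

        # A path 0-1-2-3 becomes a cluster graph with one edit:
        print(cluster_editing(4, [(0, 1), (1, 2), (2, 3)], 1))   # [('del', 1, 2)]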

    Parametrised enumeration

    In this thesis, we develop a framework of parametrised enumeration complexity. First, we provide the necessary preliminaries, such as machine models and complexity classes, and prove that these notions are well chosen. We then study the interplay and the landscape of these classes and present connections to classical enumeration classes. Afterwards, we translate the fundamental methods of kernelisation and self-reducibility into equivalent techniques in the setting of parametrised enumeration. Subsequently, we illustrate the introduced classes by investigating the parametrised enumeration complexity of Max-Ones-SAT and of strong backdoor sets, and we sharpen the first result by presenting a dichotomy theorem for Max-Ones-SAT. After this, we extend the definition of parametrised enumeration algorithms by allowing orders on the solution space. In this context, we study the relations "order by size" and "lexicographic order" for graph modification problems and observe a trade-off between the enumeration delay and the space requirements of enumeration algorithms. These results then yield an enumeration technique for generalised modification problems, which we illustrate by applying it to the problems Closest String, weak and strong backdoor sets, and weighted satisfiability. Finally, we consider the enumeration of satisfying teams of formulas of poor man's propositional dependence logic. We present an enumeration algorithm with FPT delay and exponential space, one of the first enumeration-complexity results for a problem in a team logic, and show how this algorithm can be modified so that only polynomial space is required, at the cost of increasing the delay to incremental FPT (IncFPT) time.
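
    The delay-versus-space trade-off mentioned above can already be seen in a toy setting (this Python sketch is purely illustrative and not taken from the thesis): emitting solutions in whatever order an enumeration procedure produces them needs no buffering, whereas reordering them by size may force the enumerator to store essentially all solutions before the first (smallest) one can be emitted.

        import heapq

        def solutions():
            # Stand-in for any enumeration procedure; yields solutions (encoded as
            # tuples) in whatever order its search tree happens to visit them.
            yield (2, 5, 7)
            yield (1,)
            yield (3, 4)

        def in_native_order():
            # Constant extra space: each solution is passed on as soon as it appears.
            yield from solutions()

        def in_order_of_size():
            # Ordering by size forces buffering: in the worst case every solution is
            # stored before the smallest can be emitted, trading space for the order.
            heap = [(len(s), s) for s in solutions()]
            heapq.heapify(heap)
            while heap:
                yield heapq.heappop(heap)[1]

        print(list(in_native_order()))    # [(2, 5, 7), (1,), (3, 4)]
        print(list(in_order_of_size()))   # [(1,), (3, 4), (2, 5, 7)]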

    Complexity of Token Swapping and its Variants

    In the Token Swapping problem we are given a graph with a token placed on each vertex. Each token has exactly one destination vertex, and we try to move all the tokens to their destinations using the minimum number of swaps, i.e., operations of exchanging the tokens on two adjacent vertices. As the main result of this paper, we show that Token Swapping is W[1]-hard parameterized by the length k of a shortest sequence of swaps. In fact, we prove that, for any computable function f, it cannot be solved in time f(k) n^{o(k / log k)}, where n is the number of vertices of the input graph, unless the ETH fails. This lower bound almost matches the trivial n^{O(k)}-time algorithm. We also consider two generalizations of Token Swapping, namely Colored Token Swapping (where the tokens have different colors and tokens of the same color are indistinguishable) and Subset Token Swapping (where each token has a set of possible destinations). To complement the hardness result, we prove that even the most general variant, Subset Token Swapping, is FPT in nowhere-dense graph classes. Finally, we consider the complexities of all three problems in very restricted classes of graphs: graphs of bounded treewidth and diameter, stars, cliques, and paths, trying to identify the borderlines between polynomial and NP-hard cases. Comment: 23 pages, 7 figures
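
    To make the problem definition concrete, here is a small brute-force Python sketch (not from the paper, and exponential in general, since the state space has n! token placements) that computes the minimum number of swaps by breadth-first search over placements.

        from collections import deque

        def min_token_swaps(n, edges, destination):
            """Minimum number of swaps of tokens on adjacent vertices needed to bring
            every token to its destination.  destination[v] is the destination of the
            token initially placed on vertex v."""
            start = tuple(destination)   # state[i] = destination of the token on vertex i
            goal = tuple(range(n))       # every token sits on its own destination
            queue, dist = deque([start]), {start: 0}
            while queue:
                state = queue.popleft()
                if state == goal:
                    return dist[state]
                for u, v in edges:       # one swap along an edge
                    nxt = list(state)
                    nxt[u], nxt[v] = nxt[v], nxt[u]
                    nxt = tuple(nxt)
                    if nxt not in dist:
                        dist[nxt] = dist[state] + 1
                        queue.append(nxt)
            return None                  # only possible if the graph is disconnected

        # Path 0-1-2, tokens on 0 and 2 must trade places while 1 is already home:
        print(min_token_swaps(3, [(0, 1), (1, 2)], [2, 1, 0]))   # 3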

    Don't Be Strict in Local Search!

    Local Search is one of the fundamental approaches to combinatorial optimization and it is used throughout AI. Several local search algorithms are based on searching the k-exchange neighborhood. This is the set of solutions that can be obtained from the current solution by exchanging at most k elements. As a rule of thumb, the larger k is, the better are the chances of finding an improved solution. However, for inputs of size n, a naïve brute-force search of the k-exchange neighborhood requires n^{O(k)} time, which is not practical even for very small values of k. Fellows et al. (IJCAI 2009) studied whether this brute-force search is avoidable and gave positive and negative answers for several combinatorial problems. They used the notion of local search in a strict sense. That is, an improved solution needs to be found in the k-exchange neighborhood even if a global optimum can be found efficiently. In this paper we consider a natural relaxation of local search, called permissive local search (Marx and Schlotter, IWPEC 2009), and investigate whether it enhances the domain of tractable inputs. We exemplify this approach on a fundamental combinatorial problem, Vertex Cover. More precisely, we show that for a class of inputs, finding an optimum is hard, strict local search is hard, but permissive local search is tractable. We carry out this investigation in the framework of parameterized complexity. Comment: (author's self-archived copy)
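
    The following Python sketch (illustrative only, not the authors' algorithm) makes the k-exchange neighborhood concrete for Vertex Cover: it brute-forces the neighborhood in n^{O(k)} time, i.e. exactly the strict local-search step whose cost the paper asks whether one can avoid. The permissive variant would be allowed to return any smaller cover whenever this search would succeed.

        from itertools import combinations

        def is_vertex_cover(edges, cover):
            return all(u in cover or v in cover for u, v in edges)

        def strict_local_search_vc(vertices, edges, cover, k):
            """Search the k-exchange neighborhood of `cover`: return a smaller vertex
            cover differing from `cover` in at most k vertices, or None if none exists.
            Runs in n^{O(k)} time by trying every symmetric difference of size <= k."""
            cover = frozenset(cover)
            for d in range(1, k + 1):                     # size of the exchange
                for diff in combinations(vertices, d):
                    candidate = cover ^ frozenset(diff)   # exchange these <= k vertices
                    if len(candidate) < len(cover) and is_vertex_cover(edges, candidate):
                        return set(candidate)
            return None

        edges = [(0, 1), (1, 2), (2, 3)]
        print(strict_local_search_vc(range(4), edges, {0, 1, 2, 3}, 2))   # {1, 2, 3}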