    Constant-degree graph expansions that preserve the treewidth

    Many hard algorithmic problems dealing with graphs, circuits, formulas and constraints admit polynomial-time upper bounds if the underlying graph has small treewidth. For the same problems, it is often desirable to reduce the maximum degree of the vertices, either to simplify theoretical arguments or to address practical concerns. Such degree reduction can be performed through a sequence of splittings of vertices, resulting in an _expansion_ of the original graph. We observe that the treewidth of a graph may increase dramatically if the splittings are not performed carefully. In this context we address the following natural question: is it possible to reduce the maximum degree to a constant without substantially increasing the treewidth? Our work answers the above question affirmatively. We prove that any simple undirected graph G=(V, E) admits an expansion G'=(V', E') with maximum degree <= 3 and treewidth(G') <= treewidth(G)+1. Furthermore, such an expansion has no more than 2|E|+|V| vertices and 3|E| edges; it can be computed efficiently from a tree-decomposition of G. We also construct a family of examples for which the increase by 1 in treewidth cannot be avoided. Comment: 12 pages, 6 figures; the main result is used by quant-ph/051107
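
    The splitting operation itself is easy to picture: a vertex is replaced by one copy per incident edge, the copies are joined in a path, and each copy inherits at most one of the original edges. The sketch below is a minimal illustration of that step, assuming the networkx library; it is not the paper's construction, which additionally orders the splits along a tree decomposition of G to keep the treewidth within +1.

```python
# Naive degree reduction by vertex splitting (illustration only).
# Each vertex becomes one copy per incident edge, joined in a path, so the
# result has maximum degree 3.  This does NOT control treewidth by itself.
import networkx as nx

def split_to_degree_3(G: nx.Graph) -> nx.Graph:
    H = nx.Graph()
    copies = {}  # original vertex -> list of its copies in H
    for v in G.nodes:
        k = max(1, G.degree(v))            # one copy per incident edge
        copies[v] = [(v, i) for i in range(k)]
        H.add_nodes_from(copies[v])
        H.add_edges_from(zip(copies[v], copies[v][1:]))  # path through the copies
    # route each original edge between dedicated copies of its endpoints
    next_free = {v: 0 for v in G.nodes}
    for u, v in G.edges:
        cu = copies[u][next_free[u]]; next_free[u] += 1
        cv = copies[v][next_free[v]]; next_free[v] += 1
        H.add_edge(cu, cv)
    return H

star = nx.star_graph(5)                            # centre has degree 5
expanded = split_to_degree_3(star)
print(max(d for _, d in expanded.degree()))        # 3
```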

    Finding and Counting Permutations via CSPs

    Permutation patterns and pattern avoidance have been intensively studied in combinatorics and computer science, going back at least to the seminal work of Knuth on stack-sorting (1968). Perhaps the most natural algorithmic question in this area is deciding whether a given permutation of length n contains a given pattern of length k. In this work we give two new algorithms for this well-studied problem, one whose running time is n^{k/4 + o(k)}, and a polynomial-space algorithm whose running time is the better of O(1.6181^n) and O(n^{k/2 + 1}). These results improve the earlier best bounds of n^{0.47k + o(k)} and O(1.79^n), due to Ahal and Rabinovich (2000) and to Bruner and Lackner (2012) respectively, and are the fastest algorithms for the problem when k in Omega(log n). We show that both our new algorithms and the previous exponential-time algorithms in the literature can be viewed through the unifying lens of constraint satisfaction. Our algorithms can also count, within the same running time, the number of occurrences of a pattern. We show that this result is close to optimal: solving the counting problem in time f(k) * n^{o(k/log k)} would contradict the exponential-time hypothesis (ETH). For some special classes of patterns we obtain improved running times. We further prove that 3-increasing and 3-decreasing permutations can, in some sense, embed arbitrary permutations of almost linear length, which indicates that an algorithm with sub-exponential running time is unlikely, even for patterns from these restricted classes.
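
    As a reference point for the problem the abstract describes, the following brute-force baseline (an illustrative sketch, not one of the paper's algorithms) decides and counts pattern containment by testing every length-k index set, which already costs roughly n^k time and shows why the improved exponents matter.

```python
# Brute-force permutation pattern containment and counting (baseline only).
from itertools import combinations

def pattern_of(values):
    """Relative-order fingerprint of a sequence: positions sorted by value."""
    return sorted(range(len(values)), key=values.__getitem__)

def count_occurrences(sigma, pi):
    """Count index sets of sigma whose values are order-isomorphic to pi."""
    k, target = len(pi), pattern_of(pi)
    return sum(pattern_of([sigma[i] for i in idx]) == target
               for idx in combinations(range(len(sigma)), k))

def contains_pattern(sigma, pi):
    return count_occurrences(sigma, pi) > 0

print(contains_pattern([3, 1, 4, 5, 2], [2, 3, 1]))   # True
print(count_occurrences([3, 1, 4, 5, 2], [2, 3, 1]))  # 3 (subsequences 3-4-2, 3-5-2, 4-5-2)
```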

    Dynamic Backtracking

    Because of their occasional need to return to shallow points in a search tree, existing backtracking methods can sometimes erase meaningful progress toward solving a search problem. In this paper, we present a method by which backtrack points can be moved deeper in the search space, thereby avoiding this difficulty. The technique developed is a variant of dependency-directed backtracking that uses only polynomial space while still providing useful control information and retaining the completeness guarantees provided by earlier approaches. Comment: See http://www.jair.org/ for an online appendix and other files accompanying this article.
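
    The following is a minimal sketch of the idea in Python, written from the abstract's description rather than the paper's own pseudocode: every value that gets ruled out keeps an explanation (the set of assigned variables responsible), and when a variable runs out of values only the most recently assigned culprit in the combined explanation is unassigned, so the rest of the partial solution survives. The conflicts interface, static variable ordering, and the 4-queens example are illustrative assumptions.

```python
# Simplified dynamic-backtracking-style search for a CSP (illustrative sketch).
# conflicts(var, val, assignment) must return the set of currently assigned
# variables whose values clash with var=val (an "explanation").
def dynamic_backtracking(variables, domains, conflicts):
    assignment = {}                          # var -> chosen value
    order = []                               # assignment order, for picking culprits
    elim = {v: {} for v in variables}        # var -> {eliminated value: explanation}
    while len(assignment) < len(variables):
        var = next(v for v in variables if v not in assignment)
        # record an explanation for each value the current assignment rules out
        for val in domains[var]:
            if val not in elim[var]:
                bad = conflicts(var, val, assignment)
                if bad:
                    elim[var][val] = set(bad)
        live = [val for val in domains[var] if val not in elim[var]]
        if live:
            assignment[var] = live[0]
            order.append(var)
            continue
        # every value is ruled out: the union of the explanations is a nogood
        nogood = set().union(*elim[var].values())
        if not nogood:
            return None                      # eliminated by nothing: unsatisfiable
        culprit = max(nogood, key=order.index)       # most recently assigned culprit
        elim[var].clear()
        elim[culprit][assignment.pop(culprit)] = nogood - {culprit}
        order.remove(culprit)
        # keep the other assignments; drop only eliminations that mention the culprit
        for v in variables:
            for val, expl in list(elim[v].items()):
                if culprit in expl:
                    del elim[v][val]
    return assignment

# illustrative use: 4-queens, variables are columns, values are rows
cols = [0, 1, 2, 3]
doms = {c: [0, 1, 2, 3] for c in cols}
def queen_conflicts(col, row, assignment):
    return {c for c, r in assignment.items()
            if r == row or abs(r - row) == abs(c - col)}
print(dynamic_backtracking(cols, doms, queen_conflicts))   # e.g. {0: 1, 1: 3, 2: 0, 3: 2}
```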

    Constraint Based System-Level Diagnosis of Multiprocessors

    Massively parallel multiprocessors induce new requirements for system-level fault diagnosis, such as handling a huge number of processing elements in an inhomogeneous system. Traditional diagnostic models (like PMC, BGM, etc.) are insufficient to fulfill all of these requirements. This paper presents a novel modelling technique based on a special area of artificial intelligence (AI) methods: constraint satisfaction (CS). The constraint-based approach is able to handle functional faults in a similar way to the Russell-Kime model. Moreover, it can use multiple-valued logic to deal with system components having multiple fault modes. The resolution of the produced models can be adjusted to fit the actual diagnostic goal. Consequently, constraint-based methods are applicable to a much wider range of multiprocessor architectures than earlier models. The basic problem of system-level diagnosis, syndrome decoding, can be easily transformed into a constraint satisfaction problem (CSP). Thus, the diagnosis algorithm can be derived from the related constraint-solving algorithm. Different abstraction levels can be used for the various diagnosis resolutions, employing the same methodology. As examples, two algorithms are described in the paper; both are intended for the Parsytec GCel massively parallel system. The centralized method uses a more elaborate system model and provides detailed diagnostic information suitable for off-line evaluation. The distributed method makes fast decisions for reconfiguration control, using a simplified model.
    Keywords: system-level self-diagnosis, massively parallel computing systems, constraint satisfaction, diagnostic models, centralized and distributed diagnostic algorithms
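
    To make the modelling idea concrete, here is a toy sketch (not the paper's algorithms) of PMC-style syndrome decoding phrased as constraint satisfaction: one 0/1 fault variable per unit, one constraint per test outcome, solved here by plain enumeration; the unit names and syndrome are made up for illustration.

```python
# Syndrome decoding as a tiny constraint satisfaction problem (toy example).
from itertools import product

units = ["p0", "p1", "p2"]
# syndrome entries: (tester, tested, outcome); outcome 1 means "looks faulty"
syndrome = [("p0", "p1", 0), ("p1", "p2", 1), ("p2", "p0", 0)]

def consistent(state):
    # a fault-free tester (state 0) must report the tested unit's true state;
    # a faulty tester (state 1) may report anything
    return all(state[u] == 1 or state[v] == o for u, v, o in syndrome)

diagnoses = []
for bits in product([0, 1], repeat=len(units)):
    state = dict(zip(units, bits))
    if consistent(state):
        diagnoses.append(state)

diagnoses.sort(key=lambda d: sum(d.values()))   # prefer fewest faulty units
print(diagnoses[0])                             # {'p0': 0, 'p1': 0, 'p2': 1}
```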

    Probability Propagation

    In this paper we give a simple account of local computation of marginal probabilities when the joint probability distribution is given in factored form and the sets of variables involved in the factors form a hypertree. Previous expositions of such local computation have emphasized conditional probability. We believe this emphasis is misplaced. What is essential to local computation is a factorization. It is not essential that this factorization be interpreted in terms of conditional probabilities. The account given here avoids the divisions required by conditional probabilities and generalizes readily to alternative measures of subjective probability, such as Dempster-Shafer or Spohnian belief functions.
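
    A tiny worked example of that point, assuming numpy: the marginal of the middle variable in a two-factor chain is obtained purely by local sums and products of the factors, with no division and no conditional-probability reading of the factors.

```python
# Marginal of b from the factorisation alone: the joint p(a, b, c) is
# proportional to f1(a, b) * f2(b, c); only local sums and products are used.
import numpy as np

f1 = np.array([[0.9, 0.1],    # factor over (a, b)
               [0.4, 0.6]])
f2 = np.array([[0.7, 0.3],    # factor over (b, c)
               [0.2, 0.8]])

m1 = f1.sum(axis=0)           # message from f1 into b: sum out a locally
m2 = f2.sum(axis=1)           # message from f2 into b: sum out c locally
p_b = m1 * m2                 # unnormalised marginal of b
p_b /= p_b.sum()

# check against brute-force marginalisation of the full joint
joint = np.einsum("ab,bc->abc", f1, f2)
print(p_b, joint.sum(axis=(0, 2)) / joint.sum())   # both give [0.65 0.35]
```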

    High performance constraint satisfaction problem solving: State-recomputation versus state-copying.

    Constraint Satisfaction Problems (CSPs) in Artificial Intelligence have been an important focus of research and a useful model for various applications such as scheduling, image processing and machine vision. CSPs are mathematical problems in which values must be found for variables subject to constraints. There are many approaches for searching for solutions of non-binary CSPs. Traditionally, most CSP methods rely on a single processor. With the increasing availability of multiple processors, parallel search methods are becoming an alternative way to speed up the search. Parallel search is a subfield of artificial intelligence in which the constraint satisfaction problem is kept centralized while the search processes are distributed among different processors. In this thesis we present a forward-checking algorithm that solves non-binary CSPs by distributing different branches to different processors via the Message Passing Interface (MPI), and we execute it on SHARCNET, a high-performance distributed system. A key problem, however, is how to efficiently communicate the state of the search among processors. Two communication models, namely state-recomputation and state-copying via message passing, are implemented and evaluated. This thesis investigates the behaviour of communication from one process to another. The experimental results demonstrate that the state-recomputation model obtains better performance than the state-copying model when constraints are tighter, but when constraints become looser, the state-copying model is the better choice.
    Dept. of Computer Science. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2004 .Y364. Source: Masters Abstracts International, Volume: 44-01, page: 0417. Thesis (M.Sc.)--University of Windsor (Canada), 2005.
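
    The sketch below contrasts the two communication models in miniature, on a single machine and without MPI: under state-copying the pruned domains are shipped (here, deep-copied), while under state-recomputation only the assignment path is shipped and forward checking is replayed to rebuild the domains. All names and the toy CSP are illustrative assumptions, not the thesis's code.

```python
# State-copying versus state-recomputation for handing a branch to a worker.
import copy

def forward_check(domains, var, val, constraints):
    """Prune values inconsistent with var=val; return the reduced domains."""
    new = {v: list(vals) for v, vals in domains.items()}
    new[var] = [val]
    for other in new:
        if other != var:
            new[other] = [w for w in new[other] if constraints(var, val, other, w)]
    return new

def state_by_copying(pruned_domains):
    # sender ships the whole pruned state; receiver just copies it
    return copy.deepcopy(pruned_domains)

def state_by_recomputation(initial_domains, path, constraints):
    # sender ships only the assignment path; receiver replays forward checking
    domains = initial_domains
    for var, val in path:
        domains = forward_check(domains, var, val, constraints)
    return domains

# toy CSP: three variables that must all take different values
doms = {v: [1, 2, 3] for v in "xyz"}
all_diff = lambda v1, a, v2, b: a != b
path = [("x", 1), ("y", 2)]
pruned = state_by_recomputation(doms, path, all_diff)
assert pruned == state_by_copying(pruned)
print(pruned)   # {'x': [1], 'y': [2], 'z': [3]}
```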

    Higher-Level Consistencies: Where, When, and How Much

    Determining whether or not a Constraint Satisfaction Problem (CSP) has a solution is NP-complete. CSPs are solved by inference (i.e., enforcing consistency), conditioning (i.e., doing search), or, more commonly, by interleaving the two mechanisms. The most common consistency property enforced during search is Generalized Arc Consistency (GAC). In recent years, new algorithms that enforce consistency properties stronger than GAC have been proposed and shown to be necessary to solve difficult problem instances. We frame the question of balancing the cost and the pruning effectiveness of consistency algorithms as the question of determining where, when, and how much of a higher-level consistency to enforce during search. To answer the 'where' question, we exploit the topological structure of a problem instance and target high-level consistency where cycle structures appear. To answer the 'when' question, we propose a simple, reactive, and effective strategy that monitors the performance of backtrack search and triggers a higher-level consistency as search thrashes. Lastly, for the question of 'how much,' we monitor the amount of updates caused by propagation and interrupt the process before it reaches a fixpoint. Empirical evaluations on benchmark problems demonstrate the effectiveness of our strategies. Advisers: B.Y. Choueiry and C. Bessier
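
    For orientation, the sketch below shows a plain arc-consistency propagation loop (GAC restricted to binary constraints, written from standard descriptions rather than the dissertation), with an optional update budget that mimics the 'how much' idea of interrupting propagation before its fixpoint.

```python
# AC-3-style propagation with an optional budget on the number of updates.
def revise(domains, x, y, allowed):
    """Remove values of x that have no support on y; return True if any pruned."""
    pruned = False
    for a in list(domains[x]):
        if not any((a, b) in allowed for b in domains[y]):
            domains[x].remove(a)
            pruned = True
    return pruned

def propagate(domains, constraints, max_updates=None):
    """constraints: dict (x, y) -> set of allowed (x_value, y_value) pairs."""
    queue = list(constraints)
    updates = 0
    while queue:
        x, y = queue.pop(0)
        if revise(domains, x, y, constraints[(x, y)]):
            updates += 1
            if not domains[x]:
                return None               # domain wipe-out: no solution
            if max_updates is not None and updates >= max_updates:
                return domains            # interrupted before the fixpoint
            queue.extend(arc for arc in constraints if arc[1] == x)
    return domains

doms = {"x": [1, 2, 3], "y": [1, 2, 3]}
cons = {("x", "y"): {(1, 2), (2, 3)}, ("y", "x"): {(2, 1), (3, 2)}}
print(propagate(doms, cons))   # {'x': [1, 2], 'y': [2, 3]}
```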