652 research outputs found

    Maintaining Arc Consistency with Multiple Residues

    Exploiting residual supports (or residues) has proved to be one of the most cost-effective approaches for Maintaining Arc Consistency during search (MAC). While MAC based on an optimal AC algorithm may have better theoretical time complexity in some cases, in practice the overhead of maintaining the required data structures during search outweighs the benefit, not to mention the more complicated implementation. Implementing MAC with residues, on the other hand, is trivial. In this paper we extend previous work on residues and investigate the use of multiple residues during search. We first give a theoretical analysis of residue-based algorithms that explains their good practical performance. We then propose several heuristics on how to deal with multiple residues. Finally, our empirical study shows that with a proper and limited number of residues, many constraint checks can be saved. When constraint checks are expensive or a problem is hard, the multiple-residues approach is competitive in both the number of constraint checks and CPU time.
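    As a rough illustration of the residue idea described above, the following Python sketch caches a small, bounded list of residual supports per (side, value) pair and consults it before scanning the other domain; the class, the MAX_RESIDUES bound, and the data layout are illustrative assumptions, not the authors' implementation.

        # Hypothetical sketch of residue-based support checking for MAC.
        # Each (side, value) keeps a short list of previously found supports
        # ("residues"); these are tried first, avoiding a full domain scan.

        MAX_RESIDUES = 2  # illustrative bound on residues kept per value

        class BinaryConstraint:
            def __init__(self, allowed_pairs):
                self.allowed = set(allowed_pairs)   # extensional definition of the constraint
                self.residues = {}                  # (side, value) -> list of cached supports
                self.checks = 0                     # constraint-check counter

            def has_support(self, side, value, other_domain):
                """True if `value` on `side` (0 or 1) still has a support in other_domain."""
                key = (side, value)
                # 1. Try the cached residues first: a valid residue means no scan at all.
                for r in self.residues.get(key, []):
                    if r in other_domain:
                        return True
                # 2. Fall back to scanning the other domain for a fresh support.
                for w in other_domain:
                    self.checks += 1
                    pair = (value, w) if side == 0 else (w, value)
                    if pair in self.allowed:
                        cached = self.residues.setdefault(key, [])
                        cached.insert(0, w)            # most recently found residue first
                        del cached[MAX_RESIDUES:]      # keep only a limited number of residues
                        return True
                return False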

    Designing and Optimizing Representations for Non-Binary Constraints

    Ph.D. thesis (Doctor of Philosophy)

    Consistency techniques in constraint networks

    Ph.D. thesis (Doctor of Philosophy)

    Domain value mutation and other techniques for constraint satisfaction problems

    The term Constraint Satisfaction Problem (CSP) refers to a class of NP-complete problems, a collection of difficult problems for which no fast solution is known. The standard definition of a CSP involves variables, values, and constraints: each variable must be assigned a value from a designated set of possible values (the variable's domain), while a constraint on a set of variables indicates the permissible combinations of values for those variables. Given a CSP, an important objective is to query whether it has a solution: an assignment of a value to each variable such that all constraints are satisfied. Solving a CSP usually requires chronological backtracking search that interleaves variable assignments with various kinds of inference in order to reduce the search space. This dissertation comprises two parts. The first part deals with a modification of the classical CSP model that allows a value to be broken up and multiple values to be combined. The second part deals with generalized arc consistency algorithms. Both parts share a common theme in that extensional constraints, the most basic possible representation of constraints, play the central role. Despite being an important class, extensional constraints have received much less attention recently, as most efforts have been channelled toward identifying new types of specialized constraints and devising corresponding algorithms. Nonetheless, improvements to algorithms for extensional constraints are more fundamental. This dissertation attempts to improve existing techniques and algorithms for extensional constraints by examining them critically from the bottom up and approaching them from a novel direction.
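    To make the standard model above concrete, here is a minimal Python sketch of a CSP with extensional (table) constraints solved by plain chronological backtracking; the toy instance and all names are illustrative, and no inference is interleaved with the search.

        # Illustrative CSP with extensional constraints and chronological backtracking.

        def satisfied(allowed_tuples, scope, assignment):
            """A constraint is only checked once every variable in its scope is assigned."""
            if any(v not in assignment for v in scope):
                return True
            return tuple(assignment[v] for v in scope) in allowed_tuples

        def backtrack(assignment, variables, domains, constraints):
            if len(assignment) == len(variables):
                return dict(assignment)                   # every variable has a value
            var = next(v for v in variables if v not in assignment)
            for value in domains[var]:
                assignment[var] = value
                if all(satisfied(allowed, scope, assignment) for scope, allowed in constraints):
                    result = backtrack(assignment, variables, domains, constraints)
                    if result is not None:
                        return result
                del assignment[var]                       # undo the assignment, try the next value
            return None                                   # dead end: chronological backtrack

        # Toy instance: X != Y and Y != Z over the domain {1, 2}.
        variables = ["X", "Y", "Z"]
        domains = {v: [1, 2] for v in variables}
        neq = {(1, 2), (2, 1)}
        constraints = [(("X", "Y"), neq), (("Y", "Z"), neq)]
        print(backtrack({}, variables, domains, constraints))  # e.g. {'X': 1, 'Y': 2, 'Z': 1}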

    On Path Consistency for Binary Constraint Satisfaction Problems

    Constraint satisfaction problems (CSPs) provide a flexible and powerful framework for modeling and solving many decision problems of practical importance. Consistency properties and the algorithms for enforcing them on a problem instance are at the heart of Constraint Processing and best distinguish this area from other areas concerned with the same combinatorial problems. In this thesis, we study path consistency (PC) and investigate several algorithms for enforcing it on binary finite CSPs. We also study algorithms for enforcing consistency properties that are related to PC but are stronger or weaker than PC. We identify and correct errors in the literature and settle an open question. We propose two improvements that we apply to the well-known algorithms PC-8 and PC-2001, yielding PC-8+ and PC-2001+. Further, we propose a new algorithm for enforcing partial path consistency, σ-∆-PPC, which generalizes features of the well-known algorithms DPC and PPC. We evaluate over fifteen different algorithms on both benchmark and randomly generated binary problems to empirically demonstrate the effectiveness of our approach. Adviser: Berthe Y. Choueiry.
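    For readers unfamiliar with the property, a deliberately naive Python sketch of path consistency enforcement in the spirit of PC-1 is shown below; it is not one of the algorithms studied in the thesis (PC-8+, PC-2001+, σ-∆-PPC), and the explicit relation tables rel[i][j] are an assumption made for illustration.

        # Naive PC-1-style path consistency sketch (illustrative only).
        # rel[i][j] is the set of allowed (value_i, value_j) pairs; unconstrained
        # pairs of variables are assumed to carry the universal relation.

        def enforce_path_consistency(variables, domains, rel):
            changed = True
            while changed:
                changed = False
                for i in variables:
                    for j in variables:
                        if i == j:
                            continue
                        for k in variables:
                            if k in (i, j):
                                continue
                            for (a, b) in list(rel[i][j]):
                                # (a, b) needs a witness c in D(k) compatible with both sides.
                                if not any((a, c) in rel[i][k] and (c, b) in rel[k][j]
                                           for c in domains[k]):
                                    rel[i][j].discard((a, b))
                                    rel[j][i].discard((b, a))
                                    changed = True
            return rel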

    Implementation and Applications of Ad Hoc Constraints

    Ph.D. thesis (Doctor of Philosophy)

    Higher-Level Consistencies: Where, When, and How Much

    Determining whether or not a Constraint Satisfaction Problem (CSP) has a solution is NP-complete. CSPs are solved by inference (i.e., enforcing consistency), conditioning (i.e., doing search), or, more commonly, by interleaving the two mechanisms. The most common consistency property enforced during search is Generalized Arc Consistency (GAC). In recent years, new algorithms that enforce consistency properties stronger than GAC have been proposed and shown to be necessary to solve difficult problem instances. We frame the question of balancing the cost and the pruning effectiveness of consistency algorithms as the question of determining where, when, and how much of a higher-level consistency to enforce during search. To answer the 'where' question, we exploit the topological structure of a problem instance and target higher-level consistency where cycle structures appear. To answer the 'when' question, we propose a simple, reactive, and effective strategy that monitors the performance of backtrack search and triggers a higher-level consistency as search thrashes. Lastly, for the question of 'how much', we monitor the number of updates caused by propagation and interrupt the process before it reaches a fixpoint. Empirical evaluations on benchmark problems demonstrate the effectiveness of our strategies. Advisers: B.Y. Choueiry and C. Bessiere.
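    A hedged sketch of the kind of reactive 'when' strategy described above: a cheap propagator is used by default, and a stronger one is switched on when the recent ratio of backtracks suggests thrashing. The window size, threshold, and names are illustrative assumptions, not the dissertation's actual parameters.

        # Illustrative reactive trigger: escalate to a higher-level consistency
        # when backtracking dominates a sliding window of recent search events.

        from collections import deque

        class ConsistencyTrigger:
            def __init__(self, window=100, thrash_ratio=0.5):
                self.events = deque(maxlen=window)   # 1 = backtrack, 0 = successful extension
                self.thrash_ratio = thrash_ratio
                self.use_higher_level = False

            def record(self, backtracked):
                self.events.append(1 if backtracked else 0)
                if len(self.events) == self.events.maxlen:
                    ratio = sum(self.events) / len(self.events)
                    # Escalate while search thrashes; relax again once it calms down.
                    self.use_higher_level = ratio >= self.thrash_ratio

            def propagate(self, enforce_gac, enforce_hlc):
                """Run the stronger propagator only when thrashing has been detected."""
                return enforce_hlc() if self.use_higher_level else enforce_gac()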

    CIRSS vertical data integration, San Bernardino study

    The creation and use of a vertically integrated database, including LANDSAT data, for local planning purposes in a portion of San Bernardino County, California, are described. The project illustrates that a vertically integrated approach can benefit local users and can be used to identify and rectify discrepancies in various data sources, and that the LANDSAT component can be effectively used to identify change, perform initial capability/suitability modeling, update existing data, and refine existing data in a geographic information system. Local analyses were developed which produced data of value to planners in the San Bernardino County Planning Department and the San Bernardino National Forest staff.

    Parallel and Flow-Based High Quality Hypergraph Partitioning

    Balanced hypergraph partitioning is a classic NP-hard optimization problem that is a fundamental tool in such diverse disciplines as VLSI circuit design, route planning, sharding distributed databases, optimizing communication volume in parallel computing, and accelerating the simulation of quantum circuits. Given a hypergraph and an integer k, the task is to divide the vertices into k disjoint blocks with bounded size, while minimizing an objective function on the hyperedges that span multiple blocks. In this dissertation we consider the most commonly used objective, the connectivity metric, where we aim to minimize the number of different blocks connected by each hyperedge. The most successful heuristic for balanced partitioning is the multilevel approach, which consists of three phases. In the coarsening phase, vertex clusters are contracted to obtain a sequence of structurally similar but successively smaller hypergraphs. Once sufficiently small, an initial partition is computed. Lastly, the contractions are successively undone in reverse order, and an iterative improvement algorithm is employed to refine the projected partition on each level. An important aspect in designing practical heuristics for optimization problems is the trade-off between solution quality and running time. The appropriate trade-off depends on the specific application, the size of the data sets, and the computational resources available to solve the problem. Existing algorithms either are slow, sequential, and offer high solution quality, or are simple, fast, easy to parallelize, and offer low quality. While this trade-off cannot be avoided entirely, our goal is to close the gaps as much as possible. We achieve this by improving the state of the art in all non-trivial areas of the trade-off landscape with only a few techniques, employed in two different ways. Furthermore, most research on parallelization has focused on distributed memory, which neglects the greater flexibility of shared-memory algorithms and the wide availability of commodity multi-core machines. In this thesis, we therefore design and revisit fundamental techniques for each phase of the multilevel approach, and develop highly efficient shared-memory parallel implementations thereof. We consider two iterative improvement algorithms, one based on the Fiduccia-Mattheyses (FM) heuristic and one based on label propagation. For these, we propose a variety of techniques to improve the accuracy of gains when moving vertices in parallel, as well as low-level algorithmic improvements. For coarsening, we present a parallel variant of greedy agglomerative clustering with a novel method to resolve cluster join conflicts on the fly. Combined with a preprocessing phase for coarsening based on community detection, a portfolio of from-scratch partitioning algorithms, as well as recursive partitioning with work-stealing, we obtain our first parallel multilevel framework. It is the fastest partitioner known and achieves medium-high quality: it beats all parallel partitioners and comes close to the highest-quality sequential partitioner. Our second contribution is a parallelization of an n-level approach, where only one vertex is contracted and uncontracted on each level. This extreme approach aims at high solution quality via very fine-grained, localized refinement, but seems inherently sequential.
    We devise an asynchronous n-level coarsening scheme based on a hierarchical decomposition of the contractions, as well as a batch-synchronous uncoarsening, and later a fully asynchronous uncoarsening. In addition, we adapt our refinement algorithms and also use the preprocessing and the portfolio. This scheme is highly scalable and achieves the same quality as the highest-quality sequential partitioner (which is based on the same components), but is of course slower than our first framework due to the fine-grained uncoarsening. The last ingredient for high quality is an iterative improvement algorithm based on maximum flows. In the sequential setting, we first improve an existing idea by solving incremental maximum flow problems, which leads to smaller cuts and is faster due to engineering efforts. Subsequently, we parallelize the maximum flow algorithm and schedule refinements in parallel. Beyond the pursuit of the highest quality, we present a deterministically parallel partitioning framework. We develop deterministic versions of the preprocessing, coarsening, and label propagation refinement. Experimentally, we demonstrate that the penalties for determinism in terms of partition quality and running time are very small. All of our claims are validated through extensive experiments, comparing our algorithms with state-of-the-art solvers on large and diverse benchmark sets. To foster further research, we make our contributions available in our open-source framework Mt-KaHyPar. While it seems inevitable that, with ever increasing problem sizes, we must transition to distributed-memory algorithms, the study of shared-memory techniques is not in vain. With the multilevel approach, even the inherently slow techniques have a role to play in fast systems, as they can be employed to boost quality on coarse levels at little expense. Similarly, techniques for shared-memory parallelism are important, both as soon as a coarse graph fits into memory and as local building blocks in the distributed algorithm.
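    To make the objective concrete, the short Python sketch below computes the connectivity metric, where each hyperedge spanning lambda blocks contributes lambda - 1, together with a simple balance check; the data layout, unit vertex weights, and names are illustrative and unrelated to Mt-KaHyPar's actual implementation.

        # Illustrative connectivity objective and balance check for a k-way partition.
        from math import ceil

        def connectivity_objective(hyperedges, block_of, weights=None):
            total = 0
            for idx, pins in enumerate(hyperedges):
                blocks = {block_of[v] for v in pins}      # blocks connected by this hyperedge
                w = 1 if weights is None else weights[idx]
                total += w * (len(blocks) - 1)            # lambda(e) - 1, optionally weighted
            return total

        def is_balanced(block_of, k, epsilon):
            """No block may exceed (1 + epsilon) * ceil(n / k) vertices (unit weights assumed)."""
            n = len(block_of)
            limit = (1 + epsilon) * ceil(n / k)
            sizes = [0] * k
            for b in block_of.values():
                sizes[b] += 1
            return all(s <= limit for s in sizes)

        # Toy example: 6 vertices split into 2 blocks, two hyperedges crossing the cut.
        block_of = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
        hyperedges = [[0, 1, 3], [2, 4, 5]]
        print(connectivity_objective(hyperedges, block_of))   # 2
        print(is_balanced(block_of, k=2, epsilon=0.03))       # True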
