1,056 research outputs found

    Some Enhancement Methods For Backtracking-Search In Solving Multiple Permutation Problems

    Full text link
    In this dissertation, we present several enhancement methods for backtracking search in solving multiple permutation problems. Well-known NP-complete multiple permutation problems include the Quasigroup Completion Problem and Sudoku. Multiple permutation problems have received considerable attention in the literature in recent years because they are highly structured yet challenging combinatorial search problems. Furthermore, many real-world problems in scheduling and experimental design have been shown to take the form of multiple permutation problems. They have therefore been suggested as benchmark problems for testing enhancement methods for solving constraint satisfaction problems, in the hope that the insight gained from studying them can be applied to other hard structured, as well as unstructured, problems. Our supplementary and novel enhancement methods for backtracking search on these problems can be summarized as follows. We propose a novel way to encode multiple permutation problems, and we design and develop an arc-consistency algorithm tailored to this modeling; we implemented five versions of this algorithm, the last of which eliminates almost all possible propagation redundancy. We then introduce the novel notion of interlinking dynamic variable ordering with dynamic value ordering, where the dynamic value ordering also serves as a second tie-breaker for the dynamic variable ordering. We further propose integrating dynamic variable ordering and dynamic value ordering into an arc-consistency algorithm by using greedy counting assertions, and we develop the concept of enforcing local consistency between variables from different redundant models of the problem. Finally, we introduce an embarrassingly parallel task-distribution process at the beginning of the search. We prove that a limited form of Hall's theorem is enforced by our modeling of multiple permutation problems. Our empirical results show that the "fail-first" principle is confirmed in terms of minimizing the total number of explored nodes, but refuted in terms of minimizing the depth of the search tree when finding a single solution, which correlates with previously published results. We further show that the performance of a given search heuristic (the total number of instances solved at the phase-transition point within a given time limit) is closely related to the underlying pruning algorithm employed to maintain some level of local consistency during backtracking search. We also extend the previously established hypothesis that the second peak of hardness for NP-complete problems is algorithm-dependent, showing that it is also search-heuristic dependent. Finally, our empirical results show that several of our enhancement methods for backtracking search perform better than the constraint solvers MAC-LAH and Minion, as well as the SAT solvers Satz and MiniSat, on instances of multiple permutation problems previously tested on these solvers.
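
    To make the interlinking idea concrete, here is a minimal Python sketch on a tiny Latin-square instance (every row and column a permutation): minimum-remaining-values variable ordering whose ties are broken by the best value-ordering score, under forward checking. The instance and all names are invented for illustration; this is not the dissertation's algorithm or encoding.

        from copy import deepcopy

        N = 4  # 4x4 Latin square: each row and column is a permutation of 0..3

        def units(cell):
            r, c = cell
            return [(r, j) for j in range(N) if j != c] + \
                   [(i, c) for i in range(N) if i != r]

        def support(v, cell, domains):
            # Value-ordering score: neighbour-domain values that survive
            # assigning v to cell (higher = less constraining).
            return sum(len(domains[n] - {v}) for n in units(cell))

        def solve(domains):
            unassigned = [c for c, d in domains.items() if len(d) > 1]
            if not unassigned:
                vals = {c: next(iter(d)) for c, d in domains.items()}
                ok = all(vals[c] != vals[n] for c in vals for n in units(c))
                return domains if ok else None
            # Dynamic variable ordering: fewest remaining values (MRV),
            # ties broken by the candidate values' best ordering score.
            cell = min(unassigned,
                       key=lambda c: (len(domains[c]),
                                      -max(support(v, c, domains)
                                           for v in domains[c])))
            # Dynamic value ordering: least-constraining value first.
            for v in sorted(domains[cell],
                            key=lambda v: -support(v, cell, domains)):
                child = deepcopy(domains)
                child[cell] = {v}
                ok = True
                for n in units(cell):      # forward checking: prune v
                    child[n].discard(v)
                    if not child[n]:
                        ok = False         # wipeout: abandon this value
                        break
                if ok:
                    result = solve(child)
                    if result:
                        return result
            return None

        solution = solve({(i, j): set(range(N))
                          for i in range(N) for j in range(N)})
        for i in range(N):
            print([next(iter(solution[(i, j)])) for j in range(N)])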

    Symmetry Breaking for Answer Set Programming

    Full text link
    In the context of answer set programming, this work investigates symmetry detection and symmetry breaking to eliminate symmetric parts of the search space and thereby simplify the solution process. We contribute a reduction of symmetry detection to a graph automorphism problem, which allows the symmetries of a logic program to be extracted from the symmetries of a constructed coloured graph. We also propose an encoding of symmetry-breaking constraints in terms of permutation cycles, using only generators in this process, which represent symmetries implicitly and always with exponential compression. These ideas are formulated as preprocessing and implemented in a completely automated flow that first detects the symmetries of a given answer set program and then adds symmetry-breaking constraints; it can be applied to any existing answer set solver. We demonstrate the computational impact on benchmarks against direct application of the solver. Furthermore, we explore symmetry breaking for answer set programming in two domains: first, constraint answer set programming as a novel approach to representing and solving constraint satisfaction problems, and second, distributed nonmonotonic multi-context systems. In particular, we formulate a translation-based approach to constraint answer set solving that allows the application of our symmetry detection and symmetry breaking methods. To compare their performance with a priori symmetry-breaking techniques, we also contribute a decomposition of the global value-precedence constraint that enforces domain consistency on the original constraint via the unit propagation of an answer set solver, and we evaluate both options in an empirical analysis. In the context of distributed nonmonotonic multi-context systems, we develop an algorithm for distributed symmetry detection and also carry over symmetry-breaking constraints to distributed answer set programming. (Diploma thesis, Vienna University of Technology, August 2010.)
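
    To make the permutation-cycle encoding concrete, the hypothetical Python sketch below expands a single generator given in cycle notation and emits the chained lex-leader constraint x <=_lex pi(x). The textual constraint format is invented for illustration and is not the thesis's actual ASP encoding; note that only generators are needed, never the full symmetry group.

        def cycles_to_map(cycles, n):
            """Expand cycle notation, e.g. [(1, 2), (3, 4, 5)], into a point map."""
            perm = {i: i for i in range(1, n + 1)}
            for cyc in cycles:
                for a, b in zip(cyc, cyc[1:] + cyc[:1]):
                    perm[a] = b
            return perm

        def lex_leader(cycles, n):
            """Chained lex-leader decomposition of  x <=_lex pi(x):
            at each moved position i, if all earlier moved positions
            are tied, require x_i <= x_pi(i)."""
            perm = cycles_to_map(cycles, n)
            moved = [i for i in range(1, n + 1) if perm[i] != i]
            out = []
            for k, i in enumerate(moved):
                ties = [f"x{j} == x{perm[j]}" for j in moved[:k]]
                guard = " and ".join(ties) if ties else "True"
                out.append(f"({guard}) -> x{i} <= x{perm[i]}")
            return out

        for c in lex_leader([(1, 2), (3, 4, 5)], 5):
            print(c)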

    A review of literature on parallel constraint solving

    Get PDF
    As multicore computing is now standard, it seems irresponsible for constraints researchers to ignore its implications. Researchers need to address a number of issues to exploit parallelism, such as: investigating which constraint algorithms are amenable to parallelisation; whether to use shared memory or distributed computation; whether to use static or dynamic decomposition; and how best to exploit portfolios and cooperating search. We review the literature, and see that we can sometimes do quite well, some of the time, on some instances, but we are far from a general solution. Yet there seems to be little overall guidance that can be given on how best to exploit multicore computers to speed up constraint solving. We hope at least that this survey will provide useful pointers to future researchers wishing to correct this situation.
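
    As a toy illustration of the portfolio approach mentioned above, the sketch below races several solver configurations on one instance and keeps the first answer. solve_with is a stand-in whose sleep merely simulates differing search effort; a real portfolio would invoke an actual solver there.

        import multiprocessing as mp

        def solve_with(args):
            heuristic, instance = args
            # Placeholder: a real portfolio would call a solver here.
            import random, time
            time.sleep(random.uniform(0.1, 1.0))   # simulated search effort
            return heuristic, f"solved {instance}"

        def portfolio(instance, heuristics):
            with mp.Pool(len(heuristics)) as pool:
                jobs = [(h, instance) for h in heuristics]
                # imap_unordered yields results as they complete; take the
                # first, and let the pool terminate the losers on exit.
                return next(pool.imap_unordered(solve_with, jobs))

        if __name__ == "__main__":
            print(portfolio("queens-20",
                            ["dom/wdeg", "impact", "activity", "random"]))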

    Exploiting replication in distributed systems

    Get PDF
    Techniques are examined for replicating data and execution in directly distributed systems: systems in which multiple processes interact directly with one another while continuously respecting constraints on their joint behavior. Directly distributed systems are often required to solve difficult problems, ranging from management of replicated data to dynamic reconfiguration in response to failures. It is shown that these problems reduce to more primitive, order-based consistency problems, which can be solved using primitives such as reliable broadcast protocols. Moreover, given a system that implements reliable broadcast primitives, a flexible set of high-level tools can be provided for building a wide variety of directly distributed application programs.
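
    A minimal sketch of one such primitive, assuming the classic eager reliable-broadcast semantics (relay on first receipt, deliver exactly once), so that if any correct process delivers a message, all of them do. The class and method names are invented for illustration.

        class Process:
            def __init__(self, pid, network):
                self.pid = pid
                self.network = network      # shared membership list
                self.delivered = set()      # message ids delivered once

            def broadcast(self, msg_id, payload):
                self.receive(msg_id, payload)

            def receive(self, msg_id, payload):
                if msg_id in self.delivered:
                    return                  # duplicate: ignore
                self.delivered.add(msg_id)
                self.deliver(msg_id, payload)
                for peer in self.network:   # relay on first receipt
                    if peer is not self:
                        peer.receive(msg_id, payload)

            def deliver(self, msg_id, payload):
                print(f"process {self.pid} delivers {msg_id}: {payload}")

        procs = []
        for pid in range(3):
            procs.append(Process(pid, procs))  # list shared by reference
        procs[0].broadcast("m1", "reconfigure")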

    Techniques for Bundling the Solution Space of Finite Constraint Satisfaction Problems

    Get PDF
    We study the backtrack-search procedure with forward checking (FC-BT) for finding all solutions to a finite Constraint Satisfaction Problem (CSP). We describe how to use dynamic interchangeability to enhance the performance of search and to represent the solution space in a compact manner. We evaluate this strategy (FC-DNPI) in terms of the numbers of nodes visited, constraints checked, and solution bundles generated by comparing it, theoretically and empirically, to other search strategies. We show that FC-DNPI is equivalent to search with the Cross Product Representation (FC-CPR) of [Hubbe and Freuder 1992] in terms of the numbers of solution bundles and constraint checks, while it reduces the number of nodes visited. We establish that both strategies are always superior to FC-BT in terms of all three criteria and that dynamic bundling is always beneficial. Further, we compare FC-DNPI to the search procedure of [Haselböck 1993], which exploits static, pre-computed interchangeability relations. We show that the former never generates more solution bundles nor expands more nodes than the latter, and often reduces the number of constraint checks. We also propose, without evaluating them, amendments to the strategy of [Haselböck 1993] to improve its performance and reduce the number of constraint checks.
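
    For concreteness, the sketch below computes neighbourhood-interchangeable value bundles for one variable, the notion underlying bundling: two values are bundled when they are consistent with exactly the same values of every neighbouring variable. This is illustrative only, not the FC-DNPI procedure itself.

        def ni_bundles(var, domains, neighbors, consistent):
            """Group values of var by their supports in every neighbour.
            consistent(x, a, y, b) tests the binary constraint x=a, y=b."""
            signature = {}
            for a in domains[var]:
                sig = tuple(
                    frozenset(b for b in domains[n] if consistent(var, a, n, b))
                    for n in neighbors[var]
                )
                signature.setdefault(sig, []).append(a)
            return list(signature.values())

        # Tiny example: x, y in small domains under the constraint x < y.
        domains = {"x": [1, 2, 3, 4], "y": [3, 4]}
        neighbors = {"x": ["y"], "y": ["x"]}
        consistent = lambda v, a, w, b: a < b if (v, w) == ("x", "y") else b < a
        print(ni_bundles("x", domains, neighbors, consistent))
        # -> [[1, 2], [3], [4]]: x=1 and x=2 share supports {3, 4} in y.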

    Rehearsal Scheduling Problem

    Get PDF
    Scheduling is a common task that plays a crucial role in many industries, such as manufacturing and services. In a competitive environment, effective scheduling is one of the key factors in reducing cost and increasing productivity. Scheduling problems have therefore been studied by many researchers over the past thirty years. The rehearsal scheduling problem (RSP) is similar to the popular resource-constrained project scheduling problem (RCPSP); however, it has no activity precedence constraints, and resource availabilities are not fixed over the processing time. The RSP can be used to schedule rehearsals in the theatre industry, or to schedule group meetings when each member has a different set of available times. In this report, three approaches are proposed to solve the RSP: Constraint Programming, Integer Programming, and Schedule Generation Schemes.
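
    The sketch below shows one plausible reading of a serial schedule generation scheme for a rehearsal-style instance: each scene needs a cast of members, each member has personal slot availability, and no member may be double-booked. The data model and the priority heuristic are assumptions made for illustration, not the report's formulation.

        def serial_sgs(scenes, availability, horizon):
            """scenes: {name: (duration, members)};
            availability: {member: set of free slots}."""
            busy = {m: set() for m in availability}
            schedule = {}
            # Priority rule: hardest to place (largest cast, longest) first.
            order = sorted(scenes,
                           key=lambda s: (-len(scenes[s][1]), -scenes[s][0]))
            for s in order:
                duration, members = scenes[s]
                for start in range(horizon - duration + 1):
                    slots = set(range(start, start + duration))
                    if all(slots <= availability[m] and not slots & busy[m]
                           for m in members):
                        schedule[s] = start
                        for m in members:
                            busy[m] |= slots
                        break
                else:
                    schedule[s] = None   # unplaced; a solver would backtrack
            return schedule

        scenes = {"duet": (2, {"ann", "bo"}), "solo": (1, {"ann"}),
                  "trio": (2, {"ann", "bo", "cy"})}
        availability = {"ann": set(range(8)), "bo": {0, 1, 2, 3},
                        "cy": {2, 3, 4, 5}}
        print(serial_sgs(scenes, availability, horizon=8))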

    Rigorous solution techniques for numerical constraint satisfaction problems

    Get PDF
    A constraint satisfaction problem (e.g., a system of equations and inequalities) consists of a finite set of constraints specifying which value combinations from given variable domains are admitted. It is called numerical if its variable domains are continuous. Such problems arise in many applications, but form a difficult problem class since they are NP-hard. Solving a constraint satisfaction problem means finding one or more value combinations satisfying all its constraints. Numerical computations on floating-point numbers in computers often suffer from rounding errors. Rigorous control of rounding errors during numerical computations is highly desirable in many applications because it benefits the quality and reliability of the decisions based on the solutions found. This thesis addresses various aspects of rigorous numerical computation in solving constraint satisfaction problems: search, constraint propagation, combination of inclusion techniques, and post-processing. The solution of a constraint satisfaction problem is essentially performed by search. We propose a new complete search technique (i.e., one that can find all solutions within a predetermined tolerance) for numerical constraint satisfaction problems. This technique is general and can be used in place of the branching steps of most branch-and-prune methods. Moreover, it speeds up the most recent general search strategy (often by an order of magnitude) and provides a concise representation of solutions. To make a constraint satisfaction problem easier to solve, constraint propagation, a major approach in the constraint programming [1] field, is often used to reduce the variable domains (by discarding redundant value combinations from the domains). Based on directed acyclic graphs, we propose a new constraint propagation technique and a method for coordinating constraint propagation and search. More importantly, we propose a novel generic scheme for combining multiple inclusion techniques [2] in numerical constraint propagation. This scheme allows the strengths of various techniques from different fields to be brought into the constraint propagation framework. To illustrate the flexibility and efficiency of the generic scheme, we build on it to devise several specific combination strategies for rigorous numerical constraint propagation using interval constraint propagation, interval arithmetic, affine arithmetic, and linear programming. Our experiments show that the new propagation techniques outperform previously available methods by one to four orders of magnitude or more in speed. We also propose several post-processing techniques for the representation of continuums of solutions. Based on connectedness, they group each cluster of connected solution subsets into a larger subset, thus providing additional grouping information. Potentially, these techniques enable interval-based solution techniques to serve as alternatives to bounding-volume techniques in applications such as collision detection and interactive graphics.
    [1] Constraint programming is an approach to programming that relies on both reasoning and computing.
    [2] An inclusion technique encloses a set of interest within computable enclosures; it is also called an enclosure technique.
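
    The discard/bisect core of branch-and-prune can be illustrated in a few lines on a single equation, here x^2 + x - 2 = 0 over [-10, 10]. A rigorous implementation would additionally round interval bounds outward; this toy sketch omits that and shows only how interval enclosures let whole boxes be discarded.

        def isqr(lo, hi):
            """Interval square of [lo, hi]."""
            cands = [lo * lo, hi * hi]
            return (0.0 if lo <= 0.0 <= hi else min(cands)), max(cands)

        def f_range(lo, hi):
            """Interval evaluation of f(x) = x^2 + x - 2 on [lo, hi]."""
            slo, shi = isqr(lo, hi)
            return slo + lo - 2.0, shi + hi - 2.0

        def branch_and_prune(lo, hi, tol=1e-9):
            boxes, enclosures = [(lo, hi)], []
            while boxes:
                lo, hi = boxes.pop()
                flo, fhi = f_range(lo, hi)
                if flo > 0.0 or fhi < 0.0:
                    continue                     # provably contains no root
                if hi - lo <= tol:
                    enclosures.append((lo, hi))  # box within tolerance
                else:
                    mid = 0.5 * (lo + hi)
                    boxes += [(lo, mid), (mid, hi)]
            return enclosures

        for box in branch_and_prune(-10.0, 10.0):
            print(box)   # tiny boxes enclosing the roots x = -2 and x = 1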

    Parallel and Flow-Based High Quality Hypergraph Partitioning

    Get PDF
    Balanced hypergraph partitioning is a classic NP-hard optimization problem that is a fundamental tool in such diverse disciplines as VLSI circuit design, route planning, sharding distributed databases, optimizing communication volume in parallel computing, and accelerating the simulation of quantum circuits. Given a hypergraph and an integer k, the task is to divide the vertices into k disjoint blocks of bounded size while minimizing an objective function on the hyperedges that span multiple blocks. In this dissertation we consider the most commonly used objective, the connectivity metric, where we aim to minimize the number of different blocks connected by each hyperedge. The most successful heuristic for balanced partitioning is the multilevel approach, which consists of three phases. In the coarsening phase, vertex clusters are contracted to obtain a sequence of structurally similar but successively smaller hypergraphs. Once sufficiently small, an initial partition is computed. Lastly, the contractions are successively undone in reverse order, and an iterative improvement algorithm is employed to refine the projected partition on each level. An important aspect of designing practical heuristics for optimization problems is the trade-off between solution quality and running time. The appropriate trade-off depends on the specific application, the size of the data sets, and the computational resources available. Existing algorithms are either slow, sequential, and of high solution quality, or simple, fast, easy to parallelize, and of low quality. While this trade-off cannot be avoided entirely, our goal is to close the gaps as much as possible. We achieve this by improving the state of the art in all non-trivial areas of the trade-off landscape with only a few techniques, employed in two different ways. Furthermore, most research on parallelization has focused on distributed memory, which neglects the greater flexibility of shared-memory algorithms and the wide availability of commodity multi-core machines. In this thesis, we therefore design and revisit fundamental techniques for each phase of the multilevel approach, and develop highly efficient shared-memory parallel implementations thereof. We consider two iterative improvement algorithms, one based on the Fiduccia-Mattheyses (FM) heuristic and one based on label propagation. For these, we propose a variety of techniques to improve the accuracy of gains when moving vertices in parallel, as well as low-level algorithmic improvements. For coarsening, we present a parallel variant of greedy agglomerative clustering with a novel method to resolve cluster-join conflicts on the fly. Combined with a preprocessing phase for coarsening based on community detection, a portfolio of from-scratch partitioning algorithms, and recursive partitioning with work-stealing, we obtain our first parallel multilevel framework. It is the fastest partitioner known and achieves medium-high quality, beating all parallel partitioners and coming close to the highest-quality sequential partitioner. Our second contribution is a parallelization of an n-level approach, where only one vertex is contracted and uncontracted on each level. This extreme approach aims at high solution quality via very fine-grained, localized refinement, but seems inherently sequential. We devise an asynchronous n-level coarsening scheme based on a hierarchical decomposition of the contractions, as well as a batch-synchronous and, later, fully asynchronous uncoarsening. In addition, we adapt our refinement algorithms, and also use the preprocessing and the portfolio. This scheme is highly scalable and achieves the same quality as the highest-quality sequential partitioner (which is based on the same components), but is of course slower than our first framework due to the fine-grained uncoarsening. The last ingredient for high quality is an iterative improvement algorithm based on maximum flows. In the sequential setting, we first improve an existing idea by solving incremental maximum flow problems, which leads to smaller cuts and, thanks to engineering effort, is faster. Subsequently, we parallelize the maximum flow algorithm and schedule refinements in parallel. Beyond the quest for the highest quality, we present a deterministically parallel partitioning framework, developing deterministic versions of the preprocessing, coarsening, and label propagation refinement. Experimentally, we demonstrate that the penalties for determinism in terms of partition quality and running time are very small. All of our claims are validated through extensive experiments comparing our algorithms with state-of-the-art solvers on large and diverse benchmark sets. To foster further research, we make our contributions available in our open-source framework Mt-KaHyPar. While it seems inevitable that, with ever-increasing problem sizes, we must transition to distributed-memory algorithms, the study of shared-memory techniques is not in vain. With the multilevel approach, even inherently slow techniques have a role to play in fast systems, as they can be employed to boost quality on coarse levels at little expense. Similarly, techniques for shared-memory parallelism remain important, both as soon as a coarse graph fits into memory and as local building blocks in a distributed algorithm.
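
    As a small illustration of the objective being optimized, the following sketch computes the connectivity metric of a partition on a toy hypergraph: each hyperedge e spanning lambda(e) blocks contributes lambda(e) - 1. The data is invented; Mt-KaHyPar itself is of course far more involved.

        def connectivity_metric(hyperedges, block_of):
            total = 0
            for pins in hyperedges:
                spanned = {block_of[v] for v in pins}   # lambda(e)
                total += len(spanned) - 1
            return total

        hyperedges = [("a", "b", "c"), ("c", "d"), ("a", "d", "e"), ("e", "f")]
        block_of = {"a": 0, "b": 0, "c": 0, "d": 1, "e": 1, "f": 1}
        print(connectivity_metric(hyperedges, block_of))   # -> 2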

    Proceedings of Monterey Workshop 2001 Engineering Automation for Software Intensive System Integration

    Get PDF
    The 2001 Monterey Workshop on Engineering Automation for Software Intensive System Integration was sponsored by the Office of Naval Research, the Air Force Office of Scientific Research, the Army Research Office, and the Defense Advanced Research Projects Agency. It is our pleasure to thank the workshop advisory board and sponsors for their vision of a principled engineering solution for software and for their tireless, multi-year effort in supporting a series of workshops to bring everyone together. This workshop is the 8th in a series of international workshops. It was held at the Monterey Beach Hotel, Monterey, California, during June 18-22, 2001. The general theme of the series has been to present and discuss research that aims at increasing the practical impact of formal methods for software and systems engineering. The particular focus of this workshop was "Engineering Automation for Software Intensive System Integration". Previous workshops have focused on issues including "Real-time & Concurrent Systems", "Software Merging and Slicing", "Software Evolution", "Software Architecture", "Requirements Targeting Software", and "Modeling Software System Structures in a fastly moving scenario". Sponsors: Office of Naval Research; Air Force Office of Scientific Research; Army Research Office; Defense Advanced Research Projects Agency. Approved for public release; distribution unlimited.