41 research outputs found
Higher-Level Consistencies: Where, When, and How Much
Determining whether or not a Constraint Satisfaction Problem (CSP) has a solution is NP-complete. CSPs are solved by inference (i.e., enforcing consistency), conditioning (i.e., doing search), or, more commonly, by interleaving the two mechanisms. The most common consistency property enforced during search is Generalized Arc Consistency (GAC). In recent years, new algorithms that enforce consistency properties stronger than GAC have been proposed and shown to be necessary to solve difficult problem instances.
We frame the question of balancing the cost and the pruning effectiveness of consistency algorithms as the question of determining where, when, and how much of a higher-level consistency to enforce during search. To answer the 'where' question, we exploit the topological structure of a problem instance and target higher-level consistency where cycle structures appear. To answer the 'when' question, we propose a simple, reactive, and effective strategy that monitors the performance of backtrack search and triggers a higher-level consistency as search thrashes. Lastly, for the question of 'how much,' we monitor the amount of updates caused by propagation and interrupt the process before it reaches a fixpoint. Empirical evaluations on benchmark problems demonstrate the effectiveness of our strategies.
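The reactive "when" strategy can be pictured as a small monitor that tracks the recent backtrack rate and switches from GAC to a higher-level consistency (HLC) once search starts thrashing. The sketch below is illustrative only; the window size, threshold, and class names are assumptions, not the values or interfaces used in the thesis.

```python
class ThrashMonitor:
    """Reactive trigger for higher-level consistency (illustrative sketch).

    Records whether each search step backtracked, and recommends switching
    to HLC once the fraction of backtracks in a sliding window exceeds a
    threshold. Window size and threshold are assumed values, not the
    thesis's actual parameters.
    """

    def __init__(self, window=100, threshold=0.8):
        self.window = window
        self.threshold = threshold
        self.events = []  # 1 = backtrack, 0 = successful extension

    def record(self, backtracked):
        self.events.append(1 if backtracked else 0)
        if len(self.events) > self.window:
            self.events.pop(0)  # keep only the most recent steps

    def should_use_hlc(self):
        # Trigger HLC only once the window is full and mostly backtracks.
        if len(self.events) < self.window:
            return False
        return sum(self.events) / self.window >= self.threshold
```

A backtrack-search loop would call `record` after each step and consult `should_use_hlc` before choosing which propagation to run.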
Advisers: B.Y. Choueiry and C. Bessiere
Generalized Hypertree Decomposition for solving non binary CSP with compressed table constraints
Exploiting structure to cope with NP-hard graph problems: Polynomial and exponential time exact algorithms
An ideal algorithm for solving a particular problem always finds an optimal solution, finds such a solution for every possible instance, and finds it in polynomial time. When dealing with NP-hard problems, algorithms can only be expected to possess at most two out of these three desirable properties. All algorithms presented in this thesis are exact algorithms, which means that they always find an optimal solution. Demanding the solution to be optimal means that other concessions have to be made when designing an exact algorithm for an NP-hard problem: we either have to impose restrictions on the instances of the problem in order to achieve a polynomial time complexity, or we have to abandon the requirement that the worst-case running time has to be polynomial. In some cases, when the problem under consideration remains NP-hard on restricted input, we are even forced to do both.
Most of the problems studied in this thesis deal with partitioning the vertex set of a given graph. In the other problems, the task is to find certain types of paths and cycles in graphs. The problems all have in common that they are NP-hard on general graphs. We present several polynomial time algorithms for solving restrictions of these problems to specific graph classes, in particular graphs without long induced paths, chordal graphs, and claw-free graphs. For problems that remain NP-hard even on restricted input, we present exact exponential time algorithms. In the design of each of our algorithms, structural graph properties have been heavily exploited. Apart from using existing structural results, we prove new structural properties of certain types of graphs in order to obtain our algorithmic results.
High-Quality Hypergraph Partitioning
This dissertation focuses on computing high-quality solutions for the NP-hard balanced hypergraph partitioning problem: given a hypergraph and an integer k, partition its vertex set into k disjoint blocks of bounded size while minimizing an objective function over the hyperedges. Here, we consider the two most commonly used objectives: the cut-net metric and the connectivity metric.
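The two objectives can be illustrated on a toy instance: a hyperedge counts toward the cut-net metric if it spans more than one block, while the connectivity metric charges each hyperedge lambda(e) - 1, where lambda(e) is the number of blocks it touches. The data below is made up for illustration; only the metric definitions follow the standard ones named above.

```python
def cut_net(hyperedges, block):
    # A hyperedge is "cut" if its vertices lie in more than one block.
    return sum(1 for e in hyperedges
               if len({block[v] for v in e}) > 1)

def connectivity(hyperedges, block):
    # Each hyperedge contributes lambda(e) - 1, where lambda(e) is the
    # number of distinct blocks its vertices touch.
    return sum(len({block[v] for v in e}) - 1 for e in hyperedges)

# Toy hypergraph on 5 vertices partitioned into 3 blocks.
block = {0: 0, 1: 0, 2: 1, 3: 1, 4: 2}
hyperedges = [(0, 1), (1, 2), (2, 3, 4), (0, 2, 4)]
```

On this instance the cut-net metric is 3 (every hyperedge except `(0, 1)` is cut), while the connectivity metric is 4, since `(0, 2, 4)` spans three blocks and pays 2.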
Since the problem is computationally intractable, heuristics are used in practice, the most prominent being the three-phase multi-level paradigm: during coarsening, the hypergraph is successively contracted to obtain a hierarchy of smaller instances. After applying an initial partitioning algorithm to the smallest hypergraph, the contractions are undone and, at each level, refinement algorithms try to improve the current solution.
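The three phases can be sketched as a toy two-way partitioner. The contraction rule (merge consecutive vertex pairs), the initial split, and the omission of a real refinement step are simplifications for brevity; they are not KaHyPar's actual algorithms.

```python
def multilevel_bipartition(vertices, min_size=4):
    """Toy multi-level two-way partitioner (illustrative sketch only).

    Coarsening: repeatedly merge consecutive vertex pairs until the level
    is small. Initial partitioning: split the coarsest level in half.
    Uncoarsening: each merged pair passes its block down to both members.
    A real implementation would run a refinement algorithm (e.g. FM) after
    each projection step; that is omitted here.
    """
    # Coarsening phase: build the hierarchy of successively smaller levels.
    hierarchy = [list(vertices)]
    while len(hierarchy[-1]) > min_size:
        level = hierarchy[-1]
        hierarchy.append([tuple(level[i:i + 2])
                          for i in range(0, len(level), 2)])

    # Initial partitioning on the coarsest level: first half vs. second half.
    coarsest = hierarchy[-1]
    block = {v: (0 if i < len(coarsest) // 2 else 1)
             for i, v in enumerate(coarsest)}

    # Uncoarsening phase: undo one contraction level per step.
    for _ in hierarchy[:-1]:
        block = {member: b for group, b in block.items() for member in group}
    return block
```

Calling `multilevel_bipartition(range(16))` builds three levels (16, 8, 4 vertices) and projects the coarse split back to all 16 original vertices.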
With this work, we give a brief overview of the field and present several algorithmic improvements to the multi-level paradigm. Instead of using a logarithmic number of levels like traditional algorithms, we present two coarsening algorithms that create a hierarchy of (nearly) n levels, where n is the number of vertices. This makes consecutive levels as similar as possible and provides many opportunities for refinement algorithms to improve the partition. This approach is made feasible in practice by tailoring all algorithms and data structures to the n-level paradigm, and by developing lazy-evaluation techniques, caching mechanisms, and early stopping criteria to speed up the partitioning process. Furthermore, we propose a sparsification algorithm based on locality-sensitive hashing that improves the running time for hypergraphs with large hyperedges, and show that incorporating global information about the community structure into the coarsening process improves quality. Moreover, we present a portfolio-based initial partitioning approach and propose three refinement algorithms. Two are based on the Fiduccia-Mattheyses (FM) heuristic, but perform a highly localized search at each level. While one is designed for two-way partitioning, the other is the first FM-style algorithm that can be efficiently employed in the multi-level setting to directly improve k-way partitions. The third algorithm uses max-flow computations on pairs of blocks to refine k-way partitions. Finally, we present the first memetic multi-level hypergraph partitioning algorithm for an extensive exploration of the global solution space.
All contributions are made available through our open-source framework KaHyPar. In a comprehensive experimental study, we compare KaHyPar with hMETIS, PaToH, Mondriaan, Zoltan-AlgD, and HYPE on a wide range of hypergraphs from several application areas. Our results indicate that KaHyPar, already without the memetic component, computes better solutions than all competing algorithms for both the cut-net and the connectivity metric, while being faster than Zoltan-AlgD and as fast as hMETIS. Moreover, KaHyPar compares favorably with the current best graph partitioning system, KaFFPa, both in terms of solution quality and running time.
Approximate model composition for explanation generation
This thesis presents a framework for the formulation of knowledge models to support the generation of explanations for engineering systems that are represented by the resulting models. Such models are automatically assembled from instantiated generic component descriptions, known as model fragments. The model fragments are of sufficient detail to satisfy, in general, the requirements of information content identified by the user asking for explanations.
Through a combination of fuzzy-logic-based evidence preparation, which exploits the history of prior user preferences, and an approximate reasoning inference engine with a Bayesian evidence propagation mechanism, different sources of uncertainty can be handled. Model fragments, each representing structural or behavioural aspects of a component of the domain system of interest, are organised in a library. Fragments that represent the same domain system component, albeit at different levels of detail, form parts of the same assumption class in the library. Selected fragments are assembled to form an overall system model, prior to the extraction of any textual information upon which to base the explanations. The thesis proposes and examines the techniques that support the fragment selection mechanism and the assembly of these fragments into models.
In particular, a Bayesian network-based model fragment selection mechanism is described that forms the core of the work. The network structure is determined manually, prior to any inference, based on schematic information about the connectivity of the components in the domain system under consideration. The elicitation of network probabilities, on the other hand, is completely automated using probability elicitation heuristics. These heuristics aim to provide the information required to select fragments that are maximally compatible with the given evidence of the fragments preferred by the user. Given such initial evidence, an existing evidence propagation algorithm is employed. The preparation of the evidence for the selection of particular fragments, based on user preference, is performed by a fuzzy reasoning evidence fabrication engine. This engine uses a set of fuzzy rules and standard fuzzy reasoning mechanisms to estimate the information needs of the user and suggest the selection of fragments of sufficient detail to satisfy those needs. Once the evidence is propagated, a single fragment is selected for each of the domain system components and, hence, the final model of the entire system is constructed. Finally, a highly configurable XML-based mechanism is employed to extract explanation content from the newly formulated model and to structure the explanatory sentences for the final explanation that is communicated to the user.
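The final selection step described here, choosing one fragment per domain component after evidence propagation, can be sketched as an argmax over each assumption class. The data shapes, names, and posterior values below are hypothetical illustrations, not the thesis's actual API or results.

```python
def select_fragments(posteriors):
    """For each component, pick the fragment with the highest posterior.

    posteriors maps each component to a dict of
    {fragment_name: P(fragment is selected)} produced by evidence
    propagation. Structure and names are illustrative assumptions.
    """
    return {component: max(options, key=options.get)
            for component, options in posteriors.items()}

# Hypothetical posteriors for two components of an engineering system.
posteriors = {
    "valve": {"valve_basic": 0.2, "valve_detailed": 0.7},
    "pump":  {"pump_basic": 0.6, "pump_detailed": 0.4},
}
```

With these (made-up) posteriors, the detailed valve fragment and the basic pump fragment would be assembled into the final system model.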
The framework is applied illustratively to a number of domain systems and compared qualitatively to existing compositional modelling methodologies. A further empirical assessment of the evidence propagation algorithm is carried out to determine its performance limits. Performance is measured against the number of fragments that represent each component of a large domain system, and against the amount of connectivity permitted in the Bayesian network between the nodes that stand for the selection or rejection of these fragments. Based on this assessment, recommendations are made as to how the framework may be optimised to cope with real-world applications.