6 research outputs found
Computing Treewidth on the GPU
We present a parallel algorithm for computing the treewidth of a graph on a GPU. We implement this algorithm in OpenCL and evaluate its performance experimentally. Our algorithm is based on an O*(2^n)-time algorithm that explores the elimination orderings of the graph using a Held-Karp-like dynamic programming approach. We use Bloom filters to detect duplicate solutions.
GPU programming presents unique challenges and constraints, such as restrictions on memory use and the need to limit branch divergence. We experiment with various optimizations to work around these issues, and achieve a very large speedup (up to 77x) compared to running the same algorithm on the CPU.
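The subset dynamic program underlying such O*(2^n) treewidth algorithms can be sketched in plain Python. This is an illustrative sequential sketch, not the paper's OpenCL implementation: the GPU version processes each layer of subsets in parallel and deduplicates states with Bloom filters, whereas here an exact hash map plays that role.

```python
from math import inf

def eliminate_cost(adj, S, v):
    # |Q(S, v)|: neighbours of v reachable through the already-eliminated
    # set S; this is the size of the clique formed when v is eliminated
    # after every vertex of S.
    seen, stack, q = {v}, [v], set()
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w in seen:
                continue
            seen.add(w)
            if w in S:
                stack.append(w)   # pass through eliminated vertices
            else:
                q.add(w)          # remaining vertex joins the clique
    return len(q)

def treewidth(adj):
    """Held-Karp-style DP over subsets S of eliminated vertices:
    TW(S | {v}) = min over v not in S of max(TW(S), |Q(S, v)|)."""
    verts = frozenset(adj)
    layer = {frozenset(): 0}          # layer k: all subsets of size k
    for _ in range(len(adj)):
        nxt = {}
        for S, width in layer.items():
            for v in verts - S:
                w = max(width, eliminate_cost(adj, S, v))
                T = S | {v}
                if w < nxt.get(T, inf):   # keep the best ordering into T
                    nxt[T] = w
        layer = nxt
    return layer[verts]
```

On a 4-cycle this returns 2, and on the complete graph K4 it returns 3, matching the known treewidths.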
A SAT Approach to Clique-Width
Clique-width is a graph invariant that has been widely studied in
combinatorics and computer science. However, computing the clique-width of a
graph is an intricate problem; the exact clique-width is not known even for
very small graphs. We present a new method for computing the clique-width of
graphs based on an encoding to propositional satisfiability (SAT) which is then
evaluated by a SAT solver. Our encoding is based on a reformulation of
clique-width in terms of partitions that utilizes an efficient encoding of
cardinality constraints. Our SAT-based method is the first to discover the
exact clique-width of various small graphs, including famous graphs from the
literature as well as random graphs of various densities. With our method we
determined the smallest graphs that require a small prescribed clique-width.
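Cardinality constraints of the kind the encoding relies on can themselves be encoded compactly in CNF. The sketch below uses Sinz's sequential-counter encoding of an at-most-k constraint as an illustration; the paper's actual partition-based encoding is more involved, and the variable-numbering scheme here (DIMACS-style integer literals, auxiliary counter variables allocated above `top`) is an assumption for the example.

```python
def at_most_k(xs, k, top):
    """Sequential-counter CNF for sum(xs) <= k, with 1 <= k < len(xs).
    xs: DIMACS variable ids; top: largest id already in use.
    Returns (clauses, new_top); auxiliaries occupy top+1 .. new_top."""
    n = len(xs)
    clauses = []
    if n <= k:                     # trivially satisfied
        return clauses, top
    # s[i][j] is true when at least j+1 of xs[0..i] are set
    s = [[top + i * k + j + 1 for j in range(k)] for i in range(n - 1)]
    clauses.append([-xs[0], s[0][0]])
    for j in range(1, k):
        clauses.append([-s[0][j]])
    for i in range(1, n - 1):
        clauses.append([-xs[i], s[i][0]])
        clauses.append([-s[i - 1][0], s[i][0]])
        for j in range(1, k):
            clauses.append([-xs[i], -s[i - 1][j - 1], s[i][j]])
            clauses.append([-s[i - 1][j], s[i][j]])
        clauses.append([-xs[i], -s[i - 1][k - 1]])   # overflow forbidden
    clauses.append([-xs[n - 1], -s[n - 2][k - 1]])
    return clauses, top + (n - 1) * k
```

The encoding adds O(n·k) clauses and auxiliaries, which is what makes cardinality constraints affordable inside a larger SAT encoding.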
An extended depth-first search algorithm for optimal triangulation of Bayesian networks
The junction tree algorithm is currently the most popular algorithm for exact inference on Bayesian networks. To improve the time complexity of the junction tree algorithm, we need to find a triangulation with the optimal total table size. For this purpose, Ottosen and Vomlel have proposed a depth-first search (DFS) algorithm. They also introduced several techniques to improve the DFS algorithm, including dynamic clique maintenance and coalescing map pruning. Nevertheless, the efficiency and scalability of that algorithm leave much room for improvement. First, the dynamic clique maintenance may recompute some cliques. Second, in the worst case, the DFS algorithm explores the search space of all elimination orders, which has size n!, where n is the number of variables in the Bayesian network. To mitigate these problems, we propose an extended depth-first search (EDFS) algorithm. The new EDFS algorithm introduces the following two techniques as improvements to the DFS algorithm: (1) a new dynamic clique maintenance algorithm that computes only those cliques that contain a new edge, and (2) a new pruning rule, called pivot clique pruning. The new dynamic clique maintenance algorithm explores a smaller search space and runs faster than the Ottosen and Vomlel approach. This improvement decreases the overhead cost of the DFS algorithm, and the pivot clique pruning reduces the size of the search space by a factor of O(n^2). Our empirical results show that our proposed algorithm finds an optimal triangulation markedly faster than the state-of-the-art algorithm does.
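The key observation behind restricting clique maintenance to a new edge can be sketched as follows: every maximal clique created by adding edge (u, v) must contain both endpoints, so it suffices to enumerate maximal cliques of the subgraph induced by the common neighbourhood N(u) ∩ N(v). This is an illustrative sketch of that observation (using a plain Bron-Kerbosch enumeration), not the EDFS algorithm itself.

```python
def bron_kerbosch(adj, R, P, X, out):
    # Classic Bron-Kerbosch enumeration of maximal cliques of `adj`.
    if not P and not X:
        out.append(R)
        return
    for v in list(P):
        bron_kerbosch(adj, R | {v}, P & adj[v], X & adj[v], out)
        P = P - {v}
        X = X | {v}

def cliques_with_new_edge(adj, u, v):
    """Maximal cliques created by adding edge (u, v) to the graph `adj`
    (adjacency dict of sets, with the edge already inserted). Every such
    clique contains both endpoints, so only the common neighbourhood of
    u and v needs to be searched."""
    common = adj[u] & adj[v]
    if not common:
        return [{u, v}]
    sub = {w: adj[w] & common for w in common}
    out = []
    bron_kerbosch(sub, set(), set(common), set(), out)
    return [c | {u, v} for c in out]
```

For example, adding the diagonal (0, 2) to a 4-cycle 0-1-2-3 yields exactly the two new triangles {0, 1, 2} and {0, 2, 3}, without re-enumerating cliques elsewhere in the graph.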
An Optimal Triangulation Algorithm for Accelerating Probabilistic Inference in Bayesian Networks
Bayesian networks are widely used probabilistic graphical models that provide a compact representation of joint probability distributions over a set of variables. A common inference task in Bayesian networks is to compute the posterior marginal distributions for the unobserved variables given some evidence variables that we have already observed. However, the inference problem is known to be NP-hard and this complexity of inference limits the usage of Bayesian networks. Many attempts to improve the inference algorithm have been made in the past two decades. Currently, the junction tree algorithm is among the most prominent exact inference algorithms. To perform efficient inference on a Bayesian network using the junction tree algorithm, it is necessary to find a triangulation of the moral graph of the Bayesian network such that the total table size is small. In this context, the total table size is used to measure the computational complexity of the junction tree inference algorithm. This thesis focuses on exact algorithms for finding a triangulation that minimizes the total table size for a given Bayesian network. For optimal triangulation, Ottosen and Vomlel have proposed a depth-first search (DFS) algorithm. They also introduced several techniques to improve the DFS algorithm, including dynamic clique maintenance and coalescing map pruning. Nevertheless, the efficiency and scalability of their algorithm leave much room for improvement. First, the dynamic clique maintenance allows the recomputation of some cliques. Second, for a Bayesian network with n variables, the DFS algorithm runs in O*(n!) time because it explores a search space of all elimination orders. To mitigate these problems, an extended depth-first search (EDFS) algorithm is proposed in this thesis. 
The new EDFS algorithm introduces two techniques: (1) a new dynamic clique maintenance algorithm that computes only those cliques that contain a new edge, and (2) a new pruning rule, called pivot clique pruning. The new dynamic clique maintenance algorithm explores a smaller search space and runs faster than the Ottosen and Vomlel approach. This improvement decreases the overhead cost of the DFS algorithm, and the pivot clique pruning reduces the size of the search space by a factor of O(n^2). Our empirical results show that our proposed algorithm finds an optimal triangulation markedly faster than the state-of-the-art algorithm does. (The University of Electro-Communications)
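The objective these search algorithms minimize is compact enough to state directly: each maximal clique of the triangulated moral graph induces a probability table with one entry per joint configuration of its variables, and the total table size is the sum over cliques. A minimal sketch, assuming a `states` map from each variable to its number of states:

```python
from math import prod

def total_table_size(cliques, states):
    """Total table size of a triangulation: sum over the maximal cliques
    of the triangulated graph of the product of the state counts of the
    variables in each clique."""
    return sum(prod(states[v] for v in c) for c in cliques)
```

For two binary cliques {A, B} and {B, C} this gives 4 + 4 = 8 table entries, which is the quantity the junction tree algorithm's running time is measured by.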
Exploiting the probability of observation for efficient Bayesian network inference
It is well-known that the observation of a variable in a Bayesian network can affect the
effective connectivity of the network, which in turn affects the efficiency of inference.
Unfortunately, the observed variables may not be known until runtime, which limits the
amount of compile-time optimization that can be done in this regard. This thesis considers
how to improve inference when users know the likelihood of a variable being observed. It
demonstrates how these probabilities of observation can be exploited to improve existing
heuristics for choosing elimination orderings for inference. Empirical tests over a set of
benchmark networks using the Variable Elimination algorithm show reductions of up to
50% and 70% in multiplications and summations, as well as runtime reductions of up to
55%. Similarly, tests using the Elimination Tree algorithm show reductions by as much as
64%, 55%, and 50% in recursive calls, total cache size, and runtime, respectively.
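One way probabilities of observation can enter an elimination-ordering heuristic is by replacing each variable's state count with its expected value: an observed variable is instantiated to a single state, so a variable observed with probability p contributes p·1 + (1 − p)·|states| in expectation. The sketch below folds that expectation into a greedy min-weight ordering; it is a hypothetical illustration of the idea, not the thesis's actual heuristics, and the names `expected_weight` and `greedy_order` are ours.

```python
from math import prod

def expected_weight(states, p_obs, v):
    # Observed variables collapse to one state, so the expected
    # contribution of v to a factor's size is p*1 + (1-p)*|states(v)|.
    p = p_obs.get(v, 0.0)
    return p * 1 + (1 - p) * states[v]

def greedy_order(adj, states, p_obs):
    """Greedy min-weight elimination ordering over the moral graph `adj`
    (adjacency dict of sets), scoring each elimination by the expected
    size of the factor over the variable and its current neighbours."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    order = []
    while adj:
        v = min(adj, key=lambda x: expected_weight(states, p_obs, x)
                * prod(expected_weight(states, p_obs, u) for u in adj[x]))
        order.append(v)
        ns = adj.pop(v)
        for a in ns:                 # connect v's neighbours (fill-in)
            adj[a] |= ns - {a}
            adj[a].discard(v)
    return order
```

With a high probability of observing a variable, the factors touching it become cheap in expectation, which is exactly the effect the compile-time heuristic is meant to anticipate.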