Look-ahead with mini-bucket heuristics for MPE
The paper investigates the potential of look-ahead in the context of AND/OR search in graphical models using the Mini-Bucket heuristic for combinatorial optimization tasks (e.g., MAP/MPE or weighted CSPs). We present and analyze the complexity of computing the residual (a.k.a. Bellman update) of the Mini-Bucket heuristic, and show how this can be used to identify which parts of the search space are more likely to benefit from look-ahead and how to bound its overhead. We also rephrase the look-ahead computation as a graphical model, to facilitate structure-exploiting inference schemes. We demonstrate empirically that augmenting Mini-Bucket heuristics with look-ahead is a cost-effective way of increasing the power of Branch-and-Bound search.
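As a rough illustration of the mini-bucket idea underlying these heuristics, the sketch below computes an MPE (max-product) upper bound by greedily partitioning each bucket under an i-bound. The factor representation and the first-fit partitioning rule are our own simplifications for exposition, not the paper's implementation.

```python
import itertools

def mbe_upper_bound(factors, domains, order, i_bound):
    """Mini-Bucket Elimination: an upper bound on the MPE (max-product) value.

    factors: list of (scope, table) pairs, where scope is a tuple of variable
    names and table maps full scope assignments (tuples) to non-negative reals.
    domains: dict var -> list of values.  order: elimination order (all vars).
    """
    pool = list(factors)
    for var in order:
        bucket = [f for f in pool if var in f[0]]
        pool = [f for f in pool if var not in f[0]]
        # Greedily partition the bucket into mini-buckets whose joint
        # scope has at most i_bound variables (first-fit placement).
        minibuckets = []
        for f in bucket:
            for mb in minibuckets:
                joint = set(f[0]).union(*(g[0] for g in mb))
                if len(joint) <= i_bound:
                    mb.append(f)
                    break
            else:
                minibuckets.append([f])
        # Eliminating var from each mini-bucket separately (instead of the
        # whole bucket at once) is what turns the exact max into a bound.
        for mb in minibuckets:
            scope = sorted(set().union(*(g[0] for g in mb)) - {var})
            table = {}
            for assign in itertools.product(*(domains[v] for v in scope)):
                env = dict(zip(scope, assign))
                best = 0.0
                for x in domains[var]:
                    env[var] = x
                    val = 1.0
                    for g_scope, g_table in mb:
                        val *= g_table[tuple(env[v] for v in g_scope)]
                    best = max(best, val)
                table[assign] = best
            pool.append((tuple(scope), table))
    # After eliminating every variable, only constant factors remain.
    result = 1.0
    for _, table in pool:
        result *= table[()]
    return result
```

With an i-bound at least as large as the induced width the bound is exact; smaller i-bounds trade accuracy for memory, which is what makes the heuristic tunable.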
Partition strategies for incremental Mini-Bucket
Probabilistic graphical models such as Markov random fields and Bayesian networks
provide powerful frameworks for knowledge representation and reasoning
over models with large numbers of variables. Unfortunately, exact inference
problems on graphical models are generally NP-hard, which has led to significant
interest in approximate inference algorithms.
Incremental mini-bucket is a framework for approximate inference that provides
upper and lower bounds on the exact partition function: starting from a model
with completely relaxed constraints, i.e., with the smallest possible regions,
it incrementally adds larger regions to the approximation. Current
approximate inference algorithms provide tight upper bounds on the exact partition
function but loose or trivial lower bounds.
This project focuses on researching partitioning strategies that improve the
lower bounds obtained with mini-bucket elimination, working within the framework
of incremental mini-bucket.
We start from the idea that variables that are highly correlated should be
reasoned about together, and we develop a strategy for region selection based
on that idea. We implement the strategy and explore ways to improve it, and
finally we measure the results obtained using the strategy and compare them to
several baselines.
We find that our strategy performs better than both of our baselines. We
also rule out several possible explanations for the improvement.
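The region-selection idea above — reason jointly about variables that are highly correlated — can be illustrated with a toy scorer that proposes the most correlated uncovered pair as the next region to add. The use of empirical mutual information estimated from samples is purely our assumption for illustration; it is not the project's actual selection criterion.

```python
import itertools
import math
from collections import Counter

def mutual_information(samples, x, y):
    """Empirical mutual information between variables x and y.

    samples: list of dicts mapping variable name -> observed value.
    """
    n = len(samples)
    pxy = Counter((s[x], s[y]) for s in samples)
    px = Counter(s[x] for s in samples)
    py = Counter(s[y] for s in samples)
    mi = 0.0
    for (a, b), c in pxy.items():
        p = c / n
        # p(x,y) * log( p(x,y) / (p(x) p(y)) ), with counts folded in.
        mi += p * math.log(p * n * n / (px[a] * py[b]))
    return mi

def next_region(samples, variables, covered):
    """Return the most correlated pair not yet covered by a region."""
    pairs = [p for p in itertools.combinations(variables, 2)
             if frozenset(p) not in covered]
    return max(pairs, key=lambda p: mutual_information(samples, *p))
```

In a real incremental mini-bucket setting the candidate regions would come from the model structure rather than from raw samples, but the scoring pattern is the same: rank candidates, add the highest-scoring one, repeat.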
On the Practical use of Variable Elimination in Constraint Optimization Problems: 'Still-life' as a Case Study
Variable elimination is a general technique for constraint processing. It is
often discarded because of its high space complexity. However, it can be
extremely useful when combined with other techniques. In this paper we study
the applicability of variable elimination to the challenging problem of finding
still-lifes. We illustrate several alternatives: variable elimination as a
stand-alone algorithm, interleaved with search, and as a source of good quality
lower bounds. We show that these techniques are the best known option both
theoretically and empirically. In our experiments we have been able to solve
the n=20 instance, which is far beyond reach with alternative approaches.
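For intuition, here is plain min-sum variable elimination as a stand-alone exact solver for a weighted CSP. The table built when eliminating each variable ranges over all other variables in its bucket, so memory grows exponentially with the induced width — the high space complexity the authors work around. The encoding is a generic sketch, not the paper's still-life model.

```python
import itertools

def min_sum_ve(costs, domains, order):
    """Exact min-sum variable elimination: returns the minimum total cost.

    costs: list of (scope, table) pairs; table maps assignments to costs.
    The intermediate table for each eliminated variable is indexed by all
    remaining variables in its bucket (space exponential in induced width).
    """
    pool = list(costs)
    for var in order:
        bucket = [f for f in pool if var in f[0]]
        pool = [f for f in pool if var not in f[0]]
        scope = sorted(set().union(*(f[0] for f in bucket)) - {var})
        table = {}
        for assign in itertools.product(*(domains[v] for v in scope)):
            env = dict(zip(scope, assign))
            best = float("inf")
            for x in domains[var]:
                env[var] = x
                best = min(best, sum(t[tuple(env[v] for v in s)]
                                     for s, t in bucket))
            table[assign] = best
        pool.append((tuple(scope), table))
    # Only constant tables remain once every variable is eliminated.
    return sum(table[()] for _, table in pool)
```

Interleaving this with search, as the paper does, means eliminating only the cheap (low-degree) variables exactly and branching on the rest.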
Residual-guided look-ahead in AND/OR search for graphical models
We introduce the concept of local bucket error for the mini-bucket heuristics and show how it can be used to improve the power of AND/OR search for combinatorial optimization tasks in graphical models (e.g., MAP/MPE or weighted CSPs). The local bucket error illuminates how the errors of the mini-bucket heuristic are distributed in the search space. We present and analyze methods for compiling the local bucket errors (exactly and approximately) and show that they yield an effective tool for balancing look-ahead overhead during search. This can be especially instrumental when memory is restricted, accommodating the generation of only weak compiled heuristics. We illustrate the impact of the proposed schemes in an extensive empirical evaluation, for both finding exact solutions and anytime suboptimal solutions.
Computational Protein Design Using AND/OR Branch-and-Bound Search
The computation of the global minimum energy conformation (GMEC) is an
important and challenging topic in structure-based computational protein
design. In this paper, we propose a new protein design algorithm based on the
AND/OR branch-and-bound (AOBB) search, which is a variant of the traditional
branch-and-bound search algorithm, to solve this combinatorial optimization
problem. By integrating with a powerful heuristic function, AOBB is able to
fully exploit the graph structure of the underlying residue interaction network
of a backbone template to significantly accelerate the design process. Tests on
real protein data show that our new protein design algorithm is able to solve
many problems that were previously unsolvable by traditional exact search
algorithms, and for the problems that can be solved with traditional provable
algorithms, our new method can provide a speedup of several orders of
magnitude while still guaranteeing to find the GMEC solution.
Comment: RECOMB 201
Limited discrepancy AND/OR search and its application to optimization tasks in graphical models
Many combinatorial problems are solved with a depth-first search (DFS) guided by a heuristic, and it is well known that this method is very fragile with respect to heuristic mistakes. One standard way to make DFS more robust is to search by increasing number of discrepancies. This approach has been found useful in several domains where the search structure is a height-bounded OR tree. In this paper we investigate the generalization of discrepancy-based search to AND/OR search trees and propose an extension of the Limited Discrepancy Search (LDS) algorithm. We demonstrate the relevance of our proposal in the context of graphical models. In these problems, which can be solved with either a standard OR search tree or an AND/OR tree, we show the superiority of our approach. For a fixed number of discrepancies, the search space visited by the AND/OR algorithm strictly contains the search space visited by standard LDS, and many more nodes can be visited due to the multiplicative effect of the AND/OR decomposition. Moreover, if the AND/OR tree achieves a significant size reduction with respect to the standard OR tree, the cost of each iteration of the AND/OR algorithm is asymptotically lower than in standard LDS. We report experiments on the min-sum problem on different domains and show that the AND/OR version of LDS usually obtains better solutions given the same CPU time.
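For readers unfamiliar with LDS, a minimal OR-tree version is sketched below: probe the tree depth-first, charging one discrepancy for every non-preferred child taken, and restart with a larger budget until a goal is found. The AND/OR generalization the paper proposes is more involved; the node and child representations here are placeholders.

```python
def lds_probe(node, children, is_goal, k):
    """Depth-first probe allowing at most k discrepancies on the path.

    children(node) is assumed to return children ordered best-first by the
    heuristic; taking any non-preferred child costs one discrepancy (one
    common convention; on binary trees all conventions coincide).
    """
    if is_goal(node):
        return node
    for i, child in enumerate(children(node)):
        if i > 0 and k == 0:
            break  # no discrepancies left to spend on non-preferred children
        found = lds_probe(child, children, is_goal, k - (1 if i > 0 else 0))
        if found is not None:
            return found
    return None

def lds(root, children, is_goal, max_disc):
    """Iterate probes with budgets k = 0, 1, ... until a goal is found."""
    for k in range(max_disc + 1):
        found = lds_probe(root, children, is_goal, k)
        if found is not None:
            return found, k
    return None, None
```

Note that each iteration re-explores paths visited by earlier ones; improved variants (ILDS) avoid this, and the paper's contribution is the analogous scheme on AND/OR trees, where decomposition multiplies the space covered per budget.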
New mini-bucket partitioning heuristics for bounding the probability of evidence
Mini-Bucket Elimination (MBE) is a well-known approximation algorithm deriving lower and upper bounds on quantities of interest over graphical models. It relies on a procedure that partitions a set of functions, called a bucket, into smaller subsets, called mini-buckets. The method has been used with a single partitioning heuristic throughout, so the impact of the partitioning algorithm on the quality of the generated bound has never been investigated. This paper addresses this issue by presenting a framework within which partitioning strategies can be described, analyzed and compared. We derive a new class of partitioning heuristics from first principles geared for likelihood queries, demonstrate their impact on a number of benchmarks for probabilistic reasoning, and show that the results are competitive with (often superior to) state-of-the-art bounding schemes.
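A partitioning heuristic in such a framework can be as simple as a scoring rule for choosing which feasible mini-bucket receives the next function. The sketch below uses maximal scope overlap as the score; this particular rule is our illustration of the design space, not one of the paper's proposed heuristics.

```python
def greedy_partition(scopes, i_bound):
    """Partition function scopes into mini-buckets of at most i_bound
    variables, placing each scope into the feasible mini-bucket whose
    variable set it overlaps most (an illustrative scoring rule).
    """
    minibuckets = []  # each entry: [variable_set, member_scopes]
    for s in scopes:
        s = set(s)
        feasible = [mb for mb in minibuckets if len(mb[0] | s) <= i_bound]
        if feasible:
            best = max(feasible, key=lambda mb: len(mb[0] & s))
            best[0] |= s
            best[1].append(tuple(sorted(s)))
        else:
            # No feasible mini-bucket: start a new one for this scope.
            minibuckets.append([set(s), [tuple(sorted(s))]])
    return [mb[1] for mb in minibuckets]
```

Swapping the `key` function changes the partition and hence the tightness of the resulting bound, which is exactly the degree of freedom the paper studies systematically.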