
    Partition strategies for incremental Mini-Bucket

    Probabilistic graphical models such as Markov random fields and Bayesian networks provide powerful frameworks for knowledge representation and reasoning over models with large numbers of variables. Unfortunately, exact inference problems on graphical models are generally NP-hard, which has led to significant interest in approximate inference algorithms. Incremental mini-bucket is a framework for approximate inference that provides upper and lower bounds on the exact partition function by starting from a model with completely relaxed constraints, i.e. with the smallest possible regions, and incrementally adding larger regions to the approximation. Current approximate inference algorithms provide tight upper bounds on the exact partition function but loose or trivial lower bounds. This project focuses on researching partitioning strategies that improve the lower bounds obtained with mini-bucket elimination, working within the framework of incremental mini-bucket. We start from the idea that variables that are highly correlated should be reasoned about together, and we develop a strategy for region selection based on that idea. We implement the strategy and explore ways to improve it, and finally we measure the results obtained using the strategy and compare them to several baselines. We find that our strategy performs better than both of our baselines. We also rule out several possible explanations for the improvement.
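
    The bucket-splitting step at the heart of any mini-bucket scheme can be illustrated with a short sketch. The Python fragment below is illustrative only: function and variable names are invented, and the correlation-guided region selection developed in the project is not reproduced; it shows the classic greedy scope-based partitioning that keeps every mini-bucket within an i-bound.

```python
def partition_bucket(functions, ibound):
    """Greedy scope-based partitioning of a bucket into mini-buckets.

    functions: list of (name, scope) pairs, scope being a set of variables.
    ibound:    maximum number of distinct variables per mini-bucket.
    """
    mini_buckets = []  # each entry: {"scope": set, "funcs": [names]}
    for name, scope in functions:
        for mb in mini_buckets:
            # First-fit: join the first mini-bucket we still fit into.
            if len(mb["scope"] | scope) <= ibound:
                mb["scope"] |= scope
                mb["funcs"].append(name)
                break
        else:
            mini_buckets.append({"scope": set(scope), "funcs": [name]})
    return mini_buckets

# Bucket of variable 3 with i-bound 3: f4 starts a new mini-bucket.
bucket = [("f1", {1, 3}), ("f2", {2, 3}), ("f3", {1, 2, 3}), ("f4", {3, 4})]
print(partition_bucket(bucket, ibound=3))
```

    A correlation-aware strategy of the kind the project investigates would replace the first-fit rule above with a score that prefers grouping functions over strongly correlated variables.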

    Multi-objective optimization in graphical models

    Many real-life optimization problems are combinatorial, i.e. they concern a choice of the best solution from a finite but exponentially large set of alternatives. Besides, the solution quality of many of these problems can often be evaluated from several points of view (a.k.a. criteria). In that case, each criterion may be described by a different objective function. Some important and well-known multicriteria scenarios are:
    · In investment optimization one wants to minimize risk and maximize benefits.
    · In travel scheduling one wants to minimize time and cost.
    · In circuit design one wants to minimize circuit area and energy consumption and maximize speed.
    · In knapsack problems one wants to minimize load weight and/or volume and maximize its economic value.
    The previous examples illustrate that, in many cases, these multiple criteria are incommensurate (i.e., it is difficult or impossible to combine them into a single criterion) and conflicting (i.e., solutions that are good with respect to one criterion are likely to be bad with respect to another). Taking the different criteria into account simultaneously is not trivial, and several notions of optimality have been proposed. Independently of the chosen notion of optimality, computing optimal solutions represents an important current research challenge. Graphical models are a knowledge representation tool widely used in the Artificial Intelligence field. They seem to be especially suitable for combinatorial problems. Roughly, graphical models are graphs in which nodes represent variables and the (lack of) arcs represents conditional independence assumptions. In addition to the graph structure, it is necessary to specify its micro-structure, which tells how particular combinations of instantiations of interdependent variables interact. The graphical model framework provides a unifying way to model a broad spectrum of systems and a collection of general algorithms to solve them efficiently. In this thesis we integrate multi-objective optimization problems into the graphical model paradigm and study how algorithmic techniques developed in the graphical model context can be extended to multi-objective optimization problems. As we show, multi-objective optimization problems can be formalized as a particular case of graphical models using the semiring-based framework. It is, to the best of our knowledge, the first time that graphical models in general, and semiring-based problems in particular, are used to model an optimization problem in which the objective function is partially ordered. Moreover, we show that most of the solving techniques for mono-objective optimization problems can be naturally extended to the multi-objective context. The result of our work is the mathematical formalization of multi-objective optimization problems and the development of a set of multi-objective solving algorithms that have proved to be efficient on a number of benchmarks.
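
    Since the thesis builds on notions of optimality for incommensurate criteria, a minimal sketch of the most common one, Pareto optimality, may be helpful. The code below is illustrative and assumes minimization of both objectives; it is not the thesis' semiring-based formalization.

```python
def dominates(u, v):
    """u Pareto-dominates v (minimization): u is no worse on every
    objective and strictly better on at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_frontier(points):
    """Keep only the non-dominated cost vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Travel-scheduling example: (time, cost) pairs for candidate itineraries.
options = [(3, 120), (5, 80), (4, 100), (6, 90), (3, 150)]
print(pareto_frontier(options))  # (6, 90) and (3, 150) are dominated
```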

    Graph Sketching Against Adaptive Adversaries Applied to the Minimum Degree Algorithm

    Motivated by the study of matrix elimination orderings in combinatorial scientific computing, we utilize graph sketching and local sampling to give a data structure that provides access to approximate fill degrees of a matrix undergoing elimination in O(polylog(n)) time per elimination and query. We then study the problem of using this data structure in the minimum degree algorithm, which is a widely used heuristic for producing elimination orderings for sparse matrices by repeatedly eliminating the vertex with (approximate) minimum fill degree. This leads to a nearly-linear time algorithm for generating approximate greedy minimum degree orderings. Despite extensive studies of algorithms for elimination orderings in combinatorial scientific computing, our result is the first rigorous incorporation of randomized tools in this setting, as well as the first nearly-linear time algorithm for producing elimination orderings with provable approximation guarantees. While our sketching data structure readily works in the oblivious adversary model, by repeatedly querying and greedily updating itself, it enters the adaptive adversarial model where the underlying sketches become prone to failure due to dependency issues with their internal randomness. We show how to use an additional sampling procedure to circumvent this problem and to create an independent access sequence. Our technique for decorrelating the interleaved queries and updates to this randomized data structure may be of independent interest.
    Comment: 58 pages, 3 figures. This is a substantially revised version of arXiv:1711.08446 with an emphasis on the underlying theoretical problem.
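
    For readers unfamiliar with the baseline the paper accelerates, here is a minimal sketch of the exact greedy minimum degree algorithm. The linear scan it performs at each step is the bottleneck that the paper's sketching data structure replaces with approximate O(polylog(n))-time queries; names are illustrative.

```python
def min_degree_ordering(adj):
    """Exact greedy minimum-degree elimination ordering.

    adj: dict mapping each vertex to the set of its neighbours.
    Repeatedly eliminates a vertex of minimum degree, connecting its
    remaining neighbours into a clique, which mirrors the fill pattern
    produced by Gaussian elimination on a sparse matrix.
    """
    adj = {v: set(ns) for v, ns in adj.items()}
    order = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))  # linear scan: the bottleneck
        nbrs = adj.pop(v)
        for u in nbrs:
            adj[u].discard(v)
            adj[u] |= nbrs - {u}  # add fill edges among v's neighbours
        order.append(v)
    return order

# 4-cycle: every vertex has degree 2; eliminating one adds a fill edge.
g = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(min_degree_ordering(g))
```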

    New mini-bucket partitioning heuristics for bounding the probability of evidence

    Mini-Bucket Elimination (MBE) is a well-known approximation algorithm deriving lower and upper bounds on quantities of interest over graphical models. It relies on a procedure that partitions a set of functions, called a bucket, into smaller subsets, called mini-buckets. The method has been used with a single partitioning heuristic throughout, so the impact of the partitioning algorithm on the quality of the generated bound has never been investigated. This paper addresses this issue by presenting a framework within which partitioning strategies can be described, analyzed and compared. We derive a new class of partitioning heuristics from first principles geared for likelihood queries, demonstrate their impact on a number of benchmarks for probabilistic reasoning and show that the results are competitive (often superior) to state-of-the-art bounding schemes.
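
    The bounding principle behind MBE can be demonstrated numerically: after splitting a bucket, replacing the exact summation in all but one mini-bucket with a max (or min) yields an upper (or lower) bound whenever the functions are nonnegative. The NumPy sketch below is a toy demonstration under that assumption, not the paper's heuristics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two nonnegative functions sharing elimination variable x (axis 0).
f = rng.random((2, 3))   # f(x, a)
g = rng.random((2, 4))   # g(x, b)

# Exact elimination of x: h(a, b) = sum_x f(x, a) * g(x, b).
exact = np.einsum('xa,xb->ab', f, g)

# Mini-bucket split: keep the sum over x in f's mini-bucket, bound g's.
upper = f.sum(axis=0)[:, None] * g.max(axis=0)[None, :]
lower = f.sum(axis=0)[:, None] * g.min(axis=0)[None, :]

assert (lower <= exact + 1e-12).all() and (exact <= upper + 1e-12).all()
print(exact.max(), upper.max())
```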

    A Concise Function Representation for Faster Exact MPE and Constrained Optimisation in Graphical Models

    We propose a novel concise function representation for graphical models, a central theoretical framework that provides the basis for many reasoning tasks. We then show how we exploit our concise representation, based on deterministic finite state automata, within Bucket Elimination (BE), a general approach based on the concept of variable elimination that can be used to solve many inference and optimisation tasks, such as most probable explanation and constrained optimisation. We denote our version of BE as FABE. By using our concise representation within FABE, we dramatically improve the performance of BE in terms of runtime and memory requirements. Results achieved by comparing FABE with state-of-the-art approaches for most probable explanation (i.e., recursive best-first and structured message passing) and constrained optimisation (i.e., CPLEX, GUROBI, and toulbar2) following an established methodology confirm the efficacy of our concise function representation, showing runtime improvements of up to 5 orders of magnitude in our tests.
    Comment: Submitted to IEEE Transactions on Cybernetics
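
    As context for FABE, the following sketch shows plain table-based Bucket Elimination solving MPE on a chain via max-product elimination. FABE's automaton-based function representation is not reproduced here, and all names and numbers are illustrative.

```python
import numpy as np

def mpe_chain(factors):
    """Max-product variable elimination (Bucket Elimination) on a chain.

    factors: list of 2-D arrays; factors[i][x_i, x_{i+1}] scores the pair
    (x_i, x_{i+1}). Eliminates variables right-to-left, keeping argmax
    tables so the most probable assignment can be decoded afterwards.
    """
    argmaxes = []
    msg = np.ones(factors[-1].shape[1])
    for f in reversed(factors):
        scores = f * msg[None, :]          # combine factor with message
        argmaxes.append(scores.argmax(axis=1))
        msg = scores.max(axis=1)           # eliminate the right variable
    x = int(msg.argmax())                  # best value of the first variable
    assignment = [x]
    for amax in reversed(argmaxes):        # decode left-to-right
        x = int(amax[x])
        assignment.append(x)
    return assignment

# Three binary variables, two pairwise factors.
f01 = np.array([[0.9, 0.1], [0.2, 0.8]])
f12 = np.array([[0.7, 0.3], [0.4, 0.6]])
print(mpe_chain([f01, f12]))  # [0, 0, 0], score 0.9 * 0.7
```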

    Complexity Results and Approximation Strategies for MAP Explanations

    MAP is the problem of finding a most probable instantiation of a set of variables given evidence. MAP has always been perceived to be significantly harder than the related problems of computing the probability of a variable instantiation (Pr) and computing the most probable explanation (MPE). This paper investigates the complexity of MAP in Bayesian networks. Specifically, we show that MAP is complete for NP^PP and provide further negative complexity results for algorithms based on variable elimination. We also show that MAP remains hard even when MPE and Pr become easy. For example, we show that MAP is NP-complete when the networks are restricted to polytrees, and even then cannot be effectively approximated. Given the difficulty of computing MAP exactly, and the difficulty of approximating MAP while providing useful guarantees on the resulting approximation, we investigate best-effort approximations. We introduce a generic MAP approximation framework. We provide two instantiations of the framework: one for networks which are amenable to exact inference (Pr), and one for networks for which even exact inference is too hard. This allows MAP approximation on networks that are too complex to even exactly solve the easier problems, Pr and MPE. Experimental results indicate that using these approximation algorithms provides much better solutions than standard techniques, and yields accurate MAP estimates in many cases.
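
    The distinction driving the paper's complexity results is the interleaving of maximization and summation: MPE maximizes over all variables jointly, while MAP must sum out some variables before maximizing over the rest. The brute-force sketch below, on a tiny chain network with illustrative numbers, makes the two queries concrete.

```python
import itertools
import numpy as np

# Tiny chain Bayesian network A -> B -> C over binary variables.
pA = np.array([0.6, 0.4])
pB_A = np.array([[0.7, 0.3], [0.2, 0.8]])   # P(B | A)
pC_B = np.array([[0.9, 0.1], [0.5, 0.5]])   # P(C | B)

def joint(a, b, c):
    return pA[a] * pB_A[a, b] * pC_B[b, c]

# MPE: maximize over ALL variables jointly.
mpe = max(itertools.product([0, 1], repeat=3), key=lambda x: joint(*x))

# MAP over {A}: sum out B and C first, then maximize. This max-after-sum
# interleaving is what makes MAP harder than MPE and Pr in general.
map_a = max([0, 1], key=lambda a: sum(joint(a, b, c)
                                      for b in (0, 1) for c in (0, 1)))
print("MPE:", mpe, " MAP over A:", map_a)
```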

    Reasoning with imprecise trade-offs in decision making under certainty and uncertainty

    In many real-world situations, we make decisions in the presence of multiple, often conflicting and non-commensurate objectives. The process of optimizing systematically and simultaneously over a set of objective functions is known as multi-objective optimization. In multi-objective optimization, we have a (possibly exponentially large) set of decisions and each decision has a set of alternatives. Each alternative depends on the state of the world and is evaluated with respect to a number of criteria. In this thesis, we consider decision-making problems in two scenarios. In the first scenario, the current state of the world, under which the decisions are to be made, is known in advance. In the second scenario, the current state of the world is unknown at the time of making decisions. For decision making under certainty, we consider the framework of multi-objective constraint optimization and focus on extending the algorithms that solve these models to the case where there are additional trade-offs. We focus especially on branch-and-bound algorithms that use a mini-buckets algorithm for generating the upper bound at each node of the search tree (in the context of maximizing values of objectives). Since the size of the guiding upper-bound sets can become very large during the search, we introduce efficient methods for reducing these sets while still maintaining the upper-bound property. We define a formalism for imprecise trade-offs, which allows the decision maker, during the elicitation stage, to specify a preference for one multi-objective utility vector over another, and we use such preferences to infer other preferences. The induced preference relation is then used to eliminate dominated utility vectors during the computation. For testing dominance between multi-objective utility vectors, we present three different approaches. The first is based on linear programming; the second uses a distance-based algorithm (which uses a measure of the distance between a point and a convex cone); the third makes use of a matrix multiplication, which results in much faster dominance checks with respect to the preference relation induced by the trade-offs. Furthermore, we show that our trade-offs approach, which is based on a preference inference technique, can also be given an alternative semantics based on the well-known Multi-Attribute Utility Theory. Our comprehensive experimental results on common multi-objective constraint optimization benchmarks demonstrate that the proposed enhancements allow the algorithms to scale up to much larger problems than before. For decision-making problems under uncertainty, we describe multi-objective influence diagrams, based on a set of p objectives, where utility values are vectors in R^p and are typically only partially ordered. These can be solved by a variable elimination algorithm, leading to a set of maximal values of expected utility. If the Pareto ordering is used, this set can often be prohibitively large. We consider approximate representations of the Pareto set based on ϵ-coverings, allowing much larger problems to be solved. In addition, we define a method for incorporating user trade-offs, which also greatly improves the efficiency.
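
    The matrix-multiplication dominance test mentioned above can be sketched as follows. This is an assumed formulation for illustration only: the thesis' construction of the trade-off matrix from elicited preferences is not reproduced, the matrix below is hypothetical, and only weak dominance under maximization is checked.

```python
import numpy as np

def tradeoff_dominates(u, v, T):
    """Check whether u weakly dominates v under a trade-off cone.

    T is a matrix whose rows encode trade-off directions (the Pareto
    case, for maximization, is T = identity). u dominates v when
    T @ (u - v) >= 0 componentwise, reducing the dominance test to a
    single matrix-vector product.
    """
    return bool((T @ (np.asarray(u) - np.asarray(v)) >= 0).all())

# Pareto dominance as the special case T = I (maximization).
I = np.eye(2)
print(tradeoff_dominates([3, 5], [2, 5], I))   # True

# A hypothetical trade-off "1 unit of objective 1 is worth 2 of
# objective 2" enlarges the cone and decides a Pareto-incomparable pair.
T = np.array([[1.0, 0.0], [2.0, 1.0]])
print(tradeoff_dominates([3, 1], [2, 2], T))   # True under T, not Pareto
```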

    Practical Tractability of CSPs by Higher Level Consistency and Tree Decomposition

    Constraint Satisfaction is a flexible paradigm for modeling many decision problems in Engineering, Computer Science, and Management. Constraint Satisfaction Problems (CSPs) are in general NP-complete and are usually solved with search. Research has identified various islands of tractability, which enable solving certain CSPs with backtrack-free search. For example, one sufficient condition for tractability relates the consistency level of a CSP to the treewidth of the CSP's constraint network. However, enforcing higher levels of consistency on a CSP may require the addition of constraints, thus altering the topology of the constraint network and increasing its treewidth. This thesis addresses the following question: How close can we approach in practice the tractability guaranteed by the relationship between the level of consistency in a CSP and the treewidth of its constraint network? To achieve practical tractability, this thesis proposes: (1) new local consistency properties and algorithms for enforcing them without adding constraints or altering the network's topology; (2) methods to enforce these consistency properties on the clusters of a tree decomposition of the CSP; and (3) schemes to bolster the propagation between the clusters of the tree decomposition. Our empirical evaluation shows that our techniques allow us to achieve practical tractability for a wide range of problems, and that they are both applicable (i.e., require acceptable time and space) and useful (i.e., outperform other consistency properties). We theoretically characterize the proposed consistency properties and empirically evaluate our techniques on benchmark problems. Our techniques for higher-level consistency exhibit their best performance on difficult benchmark problems. They solve a larger number of difficult problem instances than algorithms enforcing weaker consistency properties, and moreover they solve them in an almost backtrack-free manner.
    Adviser: Berthe Y. Choueiry
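
    As a reference point for the consistency properties studied in the thesis, here is a minimal sketch of AC-3 arc consistency. The higher-level consistencies the thesis proposes prune more deeply but, like AC-3, do so without adding constraints to the network; all names here are illustrative.

```python
from collections import deque

def ac3(domains, constraints):
    """AC-3 arc consistency: prune values with no support.

    domains:     dict var -> set of values.
    constraints: dict (x, y) -> predicate on (vx, vy); both directions given.
    Returns False when a domain wipes out (the CSP is unsatisfiable).
    """
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        pruned = {vx for vx in domains[x]
                  if not any(pred(vx, vy) for vy in domains[y])}
        if pruned:
            domains[x] -= pruned
            if not domains[x]:
                return False
            # Revisit arcs pointing at x, except the one just processed.
            queue.extend((z, x) for (z, w) in constraints if w == x and z != y)
    return True

# X < Y < Z over {1, 2, 3}.
dom = {v: {1, 2, 3} for v in "XYZ"}
lt, gt = (lambda a, b: a < b), (lambda a, b: a > b)
cons = {("X", "Y"): lt, ("Y", "X"): gt, ("Y", "Z"): lt, ("Z", "Y"): gt}
print(ac3(dom, cons), dom)  # True, with X:{1}, Y:{2}, Z:{3}
```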

    A Fast and High Quality Multilevel Scheme for Partitioning Irregular Graphs
