    Exploiting Structure in Backtracking Algorithms for Propositional and Probabilistic Reasoning

    Boolean propositional satisfiability (SAT) and probabilistic reasoning represent two core problems in AI. Backtracking-based algorithms have been applied to both. In this thesis, I investigate structure-based techniques for solving real-world SAT and Bayesian network instances, such as those arising in software testing and medical diagnosis. When solving a SAT instance using backtracking search, a sequence of decisions must be made as to which variable to branch on or instantiate next. Real-world problems are often amenable to a divide-and-conquer strategy in which the original instance is decomposed into independent sub-problems. Existing decomposition techniques are based on pre-processing the static structure of the original problem. I propose a dynamic decomposition method based on hypergraph separators. Integrating this dynamic separator decomposition into the variable ordering of a modern SAT solver leads to speedups on large real-world SAT problems. Encoding a Bayesian network into a CNF formula and then performing weighted model counting is an effective method for exact probabilistic inference. I present two encodings that improve this approach for noisy-OR and noisy-MAX relations. In our experiments, the new encodings are more space-efficient and speed up the previous best approaches by over two orders of magnitude. The ability to solve similar problems incrementally is critical for many probabilistic reasoning tasks. My aim is to exploit the similarity of these instances by forwarding structural knowledge learned during the analysis of one instance to the next instance in the sequence. I propose dynamic model counting and extend the dynamic decomposition and caching technique to multiple runs on a series of problems with similar structure. This allows us to perform Bayesian inference incrementally as the evidence, parameters, and structure of the network change. Experimental results show that my approach yields significant improvements over previous model counting approaches on multiple challenging Bayesian network instances.
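
    As a concrete illustration of the weighted-model-counting step, the following is a minimal sketch of exact WMC by recursive conditioning on a CNF with literal weights; it omits the component caching, dynamic decomposition, and noisy-OR/noisy-MAX encodings contributed by the thesis, and the DIMACS-style clause encoding and toy query are assumptions made for the example.

```python
def weighted_model_count(clauses, weights, assignment=None):
    """Exact weighted model counting by recursive conditioning.

    clauses    -- list of clauses, each a list of signed ints
                  (DIMACS style: 3 means x3 true, -3 means x3 false)
    weights    -- weights[v] = (w_false, w_true) for variable v
    assignment -- partial assignment built up during recursion
    """
    assignment = assignment or {}
    simplified = []
    for clause in clauses:
        # Drop satisfied clauses; shrink the rest to unassigned literals.
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:                      # clause falsified
            return 0.0
        simplified.append(rest)
    if not simplified:
        # All clauses satisfied: every remaining variable is free.
        total = 1.0
        for v, (w0, w1) in weights.items():
            if v not in assignment:
                total *= w0 + w1
        return total
    v = abs(simplified[0][0])             # branch on an unassigned variable
    w0, w1 = weights[v]
    return (w0 * weighted_model_count(simplified, weights, {**assignment, v: False})
          + w1 * weighted_model_count(simplified, weights, {**assignment, v: True}))

# P(A = true) for a biased "coin" A, as the WMC of the unit clause {A}:
print(weighted_model_count([[1]], {1: (0.4, 0.6)}))  # 0.6
```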

    Tree traversals with task-memory affinities

    We study the complexity of traversing tree-shaped workflows whose tasks require large I/O files. We target a heterogeneous architecture with two resource types, each with a different memory, such as a multicore node equipped with a dedicated accelerator (FPGA or GPU). The tasks in the workflow are colored according to their type and can be processed only if all their input and output files can be stored in the corresponding memory. The amount of memory of each type used at a given execution step depends strongly on the order in which the tasks are executed and on when communications between the two memories are scheduled. The objective is to determine an efficient traversal that minimizes the maximum amount of memory of each type needed to traverse the whole tree. In this paper, we establish the complexity of this two-memory scheduling problem and provide inapproximability results. In addition, we design several heuristics, based on both post-order and general traversals, and we evaluate them on a comprehensive set of tree graphs, including random trees as well as assembly trees arising in the context of sparse matrix factorizations.
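
    The single-memory special case gives a feel for the scheduling problem. Below is a minimal sketch that computes the minimum peak memory over post-order traversals of a task tree, visiting children in decreasing order of (subtree peak minus output size), the classic optimal rule for post-order traversals; the two memory types, task colors, and inter-memory communications studied in the paper are deliberately not modeled, and the tree encoding is illustrative.

```python
def min_postorder_peak(tree, out, root):
    """Minimum peak memory over post-order traversals of a task tree.

    tree[v] -- list of children of task v
    out[v]  -- size of the output file produced by task v
    Model: to run v, the outputs of all its children plus v's own
    output must reside in memory simultaneously.
    """
    children = tree.get(root, [])
    peaks = [(min_postorder_peak(tree, out, c), c) for c in children]
    # Visit subtrees in decreasing (peak - output) order.
    peaks.sort(key=lambda pc: pc[0] - out[pc[1]], reverse=True)
    held, worst = 0, 0
    for peak, c in peaks:
        worst = max(worst, held + peak)   # child c's subtree runs now
        held += out[c]                    # afterwards only its output remains
    return max(worst, held + out[root])   # finally run the root itself

# A root whose two subtrees have different memory profiles:
tree = {1: [2, 3], 2: [4, 5]}
out = {1: 2, 2: 3, 3: 4, 4: 5, 5: 1}
print(min_postorder_peak(tree, out, 1))   # 9
```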

    Reasoning with imprecise trade-offs in decision making under certainty and uncertainty

    In many real-world situations, we make decisions in the presence of multiple, often conflicting and non-commensurate objectives. The process of optimizing systematically and simultaneously over a set of objective functions is known as multi-objective optimization. In multi-objective optimization, we have a (possibly exponentially large) set of decisions, and each decision has a set of alternatives. Each alternative depends on the state of the world and is evaluated with respect to a number of criteria. In this thesis, we consider decision-making problems in two scenarios. In the first scenario, the current state of the world, under which the decisions are to be made, is known in advance. In the second scenario, the current state of the world is unknown at the time of making decisions. For decision making under certainty, we consider the framework of multi-objective constraint optimization and focus on extending the algorithms that solve these models to the case where there are additional trade-offs. We focus especially on branch-and-bound algorithms that use a mini-buckets algorithm for generating the upper bound at each node of the search tree (in the context of maximizing values of objectives). Since the size of the guiding upper-bound sets can become very large during the search, we introduce efficient methods for reducing these sets while still maintaining the upper-bound property. We define a formalism for imprecise trade-offs, which allows the decision maker, during the elicitation stage, to specify a preference for one multi-objective utility vector over another, and we use such preferences to infer other preferences. The induced preference relation is then used to eliminate dominated utility vectors during the computation. For testing dominance between multi-objective utility vectors, we present three different approaches. The first is based on linear programming; the second uses a distance-based algorithm (which relies on a measure of the distance between a point and a convex cone); the third uses matrix multiplication, which results in much faster dominance checks with respect to the preference relation induced by the trade-offs. Furthermore, we show that our trade-offs approach, which is based on a preference inference technique, can also be given an alternative semantics based on the well-known Multi-Attribute Utility Theory. Our comprehensive experimental results on common multi-objective constraint optimization benchmarks demonstrate that the proposed enhancements allow the algorithms to scale up to much larger problems than before. For decision-making problems under uncertainty, we describe multi-objective influence diagrams, based on a set of p objectives, where utility values are vectors in R^p and are typically only partially ordered. These can be solved by a variable elimination algorithm, leading to a set of maximal values of expected utility. If the Pareto ordering is used, this set can often be prohibitively large. We consider approximate representations of the Pareto set based on ϵ-coverings, allowing much larger problems to be solved. In addition, we define a method for incorporating user trade-offs, which also greatly improves efficiency.
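
    As a minimal sketch of the dominance filtering that such bound-set reductions build on, the following keeps only the Pareto-maximal utility vectors (maximization convention); the trade-off-induced relation from the thesis, checked by linear programming, a distance-to-cone computation, or matrix multiplication, strictly strengthens this plain Pareto check and is not reproduced here.

```python
def dominates(u, v):
    """Pareto dominance for maximization: u >= v componentwise and u != v."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def pareto_filter(vectors):
    """Keep only the non-dominated (maximal) utility vectors."""
    return [u for i, u in enumerate(vectors)
            if not any(dominates(v, u) for j, v in enumerate(vectors) if j != i)]

# (2, 1) is dominated by (3, 1); the other three vectors are incomparable.
print(pareto_filter([(3, 1), (2, 2), (1, 3), (2, 1)]))  # [(3, 1), (2, 2), (1, 3)]
```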

    Learning and Decision Making in Groups

    Many important real-world decision-making problems involve group interactions among individuals with purely informational interactions. Such situations arise, for example, in jury deliberations, expert committees, and medical diagnoses. We model the purely informational interactions of group members, where they receive private information and act based on that information while also observing other people's beliefs or actions. In the first part of the thesis, we address the computations that a rational (Bayesian) decision-maker should undertake to realize her optimal actions, maximizing her expected utility given all available information at every decision epoch. We use an approach called iterated elimination of infeasible signals (IEIS) to model the thinking process as well as the calculations of a Bayesian agent in a group decision scenario. Accordingly, as the Bayesian agent attempts to infer the true state of the world from her sequence of observations, she recursively refines her belief about the signals that the other players could have observed and the beliefs they would have held, under the assumption that the other players are also rational. We show that the IEIS algorithm runs in exponential time; however, when the group structure is a partially ordered set, the Bayesian calculations simplify and polynomial-time computation of the Bayesian recommendations is possible. We also analyze the computational complexity of Bayesian belief formation in groups and show that it is NP-hard. We investigate the factors underlying this computational complexity and show how belief calculations simplify in special network structures or in cases with strong inherent symmetries. We finally give insights about the statistical efficiency (optimality) of the beliefs and its relation to computational efficiency. In the second part, we propose the no-recall model of inference for heuristic decision-making, which is rooted in the Bayes rule but avoids the complexities of rational inference in group interactions. According to this model, the group members behave rationally at the initiation of their interactions with each other; however, in the ensuing decision epochs, they rely on heuristics that replicate their experiences from the first stage and can be justified as optimal responses to simplified versions of their complex environments. We study the implications of the information structure, together with the properties of the probability distributions, which determine the structure of the so-called "Bayesian heuristics" that the agents follow in this model. We also analyze the group decision outcomes in two classes of updates, linear action updates and log-linear belief updates, and show that many inefficiencies arise in group decisions as a result of repeated interactions between individuals, leading to overconfident beliefs as well as choice shifts toward extreme actions. Nevertheless, balanced regular structures demonstrate a measure of efficiency in terms of aggregating the initial information of individuals. Finally, we extend this model to a case where agents are exposed to a stream of private data in addition to observing each other's actions, and we analyze the properties of learning and convergence under the no-recall framework.
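
    As a toy instance of the log-linear update class analyzed in the second part, the sketch below pools an agent's belief geometrically with those of her neighbors; the two-state world, the neighbor beliefs, and the weights are illustrative assumptions, not the thesis's calibrated heuristics.

```python
import numpy as np

def log_linear_update(own, neighbors, weights):
    """Geometric (log-linear) opinion pooling over a finite state space:
    the log of the new belief is a weighted average of the logs of the
    agent's own belief and her neighbors' reported beliefs."""
    pooled = weights[0] * np.log(own)
    for w, b in zip(weights[1:], neighbors):
        pooled += w * np.log(b)
    post = np.exp(pooled)
    return post / post.sum()              # renormalize to a distribution

# Two states of the world; the agent pools with two neighbors.
mine = np.array([0.7, 0.3])
others = [np.array([0.6, 0.4]), np.array([0.2, 0.8])]
print(log_linear_update(mine, others, weights=[0.5, 0.25, 0.25]))
```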

    New Formal Methods for Automotive Configuration

    The complexity of automotive configuration has grown enormously over the past decades. A typical German premium manufacturer can build up to 10^80 variants of a single vehicle model. This wealth of variants must be managed and controlled along the entire process chain, from product development through to manufacturing in the plant. To this end, experts must document, on the one hand, the vehicles that customers can order (the high-level rule set) and, on the other hand, assign physical parts, electronic control units, and software configurations to these vehicles (the low-level rule set). This thesis introduces a new generic formalism for configuration data in the automotive industry and presents a detailed overview of the verification tasks encountered in industry. Industrial collaborations with, among others, Audi, BMW, Daimler, Opel, and VW verified that this formalism carries over to these manufacturers. Many of the existing verification algorithms are decisively optimized in this dissertation and are formulated within the new generic framework. New verification and analysis capabilities for configuration data are presented, among them counting the buildable vehicles, computing minimal and maximal customer orders, and computing direct constraints in the configuration base. A main contribution of this work is the introduction of Boolean quantifier elimination to automotive configuration. While quantifier elimination has so far been found mainly in symbolic model checking, two applications in the automotive industry are identified here that attracted great interest in the industrial collaborations. Several approaches to Boolean quantifier elimination are presented and evaluated with respect to these applications. This work produced the software library AutoLib, which implements the presented algorithms and in particular includes a new SAT solver supporting both incrementality and decrementality as well as so-called proof tracing, i.e. the recording of proofs of unsatisfiability. To our knowledge, this is the only SAT solver that supports both features in combination. AutoLib is currently deployed in a production system at BMW as well as in prototypes at Audi/VW and at Daimler. All algorithms presented in this work were implemented in a feasibility study at BMW in 2012 and 2013 and verified for industrial applicability. A production system that comprises parts of these algorithms and is based on AutoLib went live at BMW in May 2014.
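
    To make Boolean quantifier elimination concrete in this setting, here is a minimal sketch based on Shannon expansion, representing formulas as Python predicates over variable assignments; the toy configuration rule and variable names are invented for the example, and this is not AutoLib's implementation.

```python
from itertools import product

def exists(var, f):
    """Existential Boolean quantifier elimination by Shannon expansion:
    (exists x. f)  =  f[x := false]  or  f[x := true]."""
    return lambda env: f({**env, var: False}) or f({**env, var: True})

# Toy configuration rule: exactly one of engine/hybrid drive, and a
# hybrid drive requires a battery.
rule = lambda e: (e["engine"] != e["hybrid"]) and (not e["hybrid"] or e["battery"])

# Project away the internal 'battery' variable: which (engine, hybrid)
# choices can still be completed to a valid configuration?
projected = exists("battery", rule)
for eng, hyb in product([False, True], repeat=2):
    print(eng, hyb, projected({"engine": eng, "hybrid": hyb}))
```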

    Higher-Level Consistencies: Where, When, and How Much

    Determining whether or not a Constraint Satisfaction Problem (CSP) has a solution is NP-complete. CSPs are solved by inference (i.e., enforcing consistency), conditioning (i.e., doing search), or, more commonly, by interleaving the two mechanisms. The most common consistency property enforced during search is Generalized Arc Consistency (GAC). In recent years, new algorithms that enforce consistency properties stronger than GAC have been proposed and shown to be necessary to solve difficult problem instances. We frame the question of balancing the cost and the pruning effectiveness of consistency algorithms as the question of determining where, when, and how much of a higher-level consistency to enforce during search. To answer the 'where' question, we exploit the topological structure of a problem instance and target higher-level consistency where cycle structures appear. To answer the 'when' question, we propose a simple, reactive, and effective strategy that monitors the performance of backtrack search and triggers a higher-level consistency as search thrashes. Lastly, for the question of 'how much', we monitor the amount of updates caused by propagation and interrupt the process before it reaches a fixpoint. Empirical evaluations on benchmark problems demonstrate the effectiveness of our strategies. Advisers: B.Y. Choueiry and C. Bessiere
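
    For contrast with the higher-level consistencies discussed above, the following is a minimal sketch of plain arc consistency on binary constraints in the AC-3 style (GAC generalizes the same support test to non-binary constraints); the CSP encoding and queue policy are illustrative.

```python
from collections import deque

def ac3(domains, constraints):
    """AC-3 on a binary CSP: prune values that have no support.

    domains[x]          -- set of remaining values of variable x
    constraints[(x, y)] -- predicate ok(vx, vy); both arc directions
                           are expected as separate keys
    Returns False iff some domain is wiped out.
    """
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        ok = constraints[(x, y)]
        pruned = {vx for vx in domains[x]
                  if not any(ok(vx, vy) for vy in domains[y])}
        if pruned:
            domains[x] -= pruned
            if not domains[x]:
                return False
            # Re-examine every arc pointing at x.
            queue.extend(arc for arc in constraints if arc[1] == x)
    return True

# x < y < z over {1, 2, 3} prunes to x = 1, y = 2, z = 3.
doms = {v: {1, 2, 3} for v in "xyz"}
cons = {("x", "y"): lambda a, b: a < b, ("y", "x"): lambda a, b: a > b,
        ("y", "z"): lambda a, b: a < b, ("z", "y"): lambda a, b: a > b}
ac3(doms, cons)
print(doms)  # {'x': {1}, 'y': {2}, 'z': {3}}
```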

    Multi-objective optimization in graphical models

    Many real-life optimization problems are combinatorial, i.e. they concern a choice of the best solution from a finite but exponentially large set of alternatives. Besides, the solution quality of many of these problems can often be evaluated from several points of view (a.k.a. criteria). In that case, each criterion may be described by a different objective function. Some important and well-known multicriteria scenarios are:
    · In investment optimization, one wants to minimize risk and maximize benefits.
    · In travel scheduling, one wants to minimize time and cost.
    · In circuit design, one wants to minimize circuit area and energy consumption and maximize speed.
    · In knapsack problems, one wants to minimize load weight and/or volume and maximize economic value.
    The previous examples illustrate that, in many cases, these multiple criteria are incommensurate (i.e., it is difficult or impossible to combine them into a single criterion) and conflicting (i.e., solutions that are good with respect to one criterion are likely to be bad with respect to another). Taking the different criteria into account simultaneously is not trivial, and several notions of optimality have been proposed. Independently of the chosen notion of optimality, computing optimal solutions represents an important current research challenge. Graphical models are a knowledge representation tool widely used in the field of Artificial Intelligence, and they seem especially well suited to combinatorial problems. Roughly, graphical models are graphs in which nodes represent variables and the (lack of) arcs represent conditional independence assumptions. In addition to the graph structure, it is necessary to specify its micro-structure, which tells how particular combinations of instantiations of interdependent variables interact. The graphical model framework provides a unifying way to model a broad spectrum of systems and a collection of general algorithms to solve them efficiently. In this thesis we integrate multi-objective optimization problems into the graphical model paradigm and study how algorithmic techniques developed in the graphical model context can be extended to multi-objective optimization problems. As we show, multi-objective optimization problems can be formalized as a particular case of graphical models using the semiring-based framework. To the best of our knowledge, this is the first time that graphical models in general, and semiring-based problems in particular, have been used to model an optimization problem in which the objective function is partially ordered. Moreover, we show that most solving techniques for mono-objective optimization problems can be naturally extended to the multi-objective context. The result of our work is the mathematical formalization of multi-objective optimization problems and the development of a set of multi-objective solving algorithms that have proved efficient on a number of benchmarks.
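
    Taking the knapsack scenario above as a toy case, here is a minimal sketch of vector-valued dynamic programming with Pareto dominance pruning (minimize weight, maximize value); the items and capacity are invented, and the sketch illustrates partially ordered objective values rather than the thesis's semiring-based algorithms.

```python
def pareto_knapsack(items, capacity):
    """Bi-objective 0/1 knapsack: enumerate Pareto-optimal (weight, value)
    pairs, pruning dominated partial solutions after each item."""
    frontier = {(0, 0)}
    for w, v in items:
        extended = frontier | {(fw + w, fv + v) for fw, fv in frontier
                               if fw + w <= capacity}
        # Keep p unless some other q is no heavier and no less valuable.
        frontier = {p for p in extended
                    if not any(q[0] <= p[0] and q[1] >= p[1] and q != p
                               for q in extended)}
    return sorted(frontier)

# Items are (weight, value) pairs.
print(pareto_knapsack([(2, 3), (3, 4), (4, 5)], capacity=5))
# [(0, 0), (2, 3), (3, 4), (4, 5), (5, 7)]
```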