
    Symbolic Exact Inference for Discrete Probabilistic Programs

    The computational burden of probabilistic inference remains a hurdle for applying probabilistic programming languages to practical problems of interest. In this work, we provide a semantic and algorithmic foundation for efficient exact inference on discrete-valued finite-domain imperative probabilistic programs. We leverage and generalize efficient inference procedures for Bayesian networks, which exploit the structure of the network to decompose the inference task, thereby avoiding full path enumeration. To do this, we first compile probabilistic programs to a symbolic representation. Then we adapt techniques from the probabilistic logic programming and artificial intelligence communities in order to perform inference on the symbolic representation. We formalize our approach, prove it sound, and experimentally validate it against existing exact and approximate inference techniques. We show that our inference approach is competitive with inference procedures specialized for Bayesian networks, thereby expanding the class of probabilistic programs that can be practically analyzed.
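
    As a rough illustration of the kind of computation involved (not the paper's compiler or symbolic representation), the sketch below encodes a tiny discrete program as a weighted Boolean formula and answers a query as a ratio of weighted model counts; the program, variable names, and weights are invented for the example, and the enumeration loop stands in for the decision-diagram evaluation an actual implementation would use to avoid full path enumeration.

```python
# Hypothetical program: x ~ flip(0.6); y ~ flip(0.3); observe(x or y); query x.
from itertools import product

# Each Boolean variable carries (weight if true, weight if false).
weights = {"x": (0.6, 0.4), "y": (0.3, 0.7)}

def wmc(formula):
    """Weighted model count of `formula` over the declared variables.
    Enumeration keeps the sketch self-contained; a symbolic back end would
    evaluate the same count on a decision diagram without listing paths."""
    total = 0.0
    names = list(weights)
    for values in product([True, False], repeat=len(names)):
        assignment = dict(zip(names, values))
        if formula(assignment):
            weight = 1.0
            for name, value in assignment.items():
                weight *= weights[name][0] if value else weights[name][1]
            total += weight
    return total

evidence = lambda a: a["x"] or a["y"]        # observe(x or y)
query = lambda a: a["x"] and evidence(a)     # x together with the evidence

# Posterior P(x | x or y) = WMC(query) / WMC(evidence) = 0.6 / 0.72
print(wmc(query) / wmc(evidence))
```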

    A Fast Compiler for NetKAT

    High-level programming languages play a key role in a growing number of networking platforms, streamlining application development and enabling precise formal reasoning about network behavior. Unfortunately, current compilers only handle "local" programs that specify behavior in terms of hop-by-hop forwarding, or modest extensions such as simple paths. To encode richer "global" behaviors, programmers must add extra state -- something that is tricky to get right and makes programs harder to write and maintain. Making matters worse, existing compilers can take tens of minutes to generate the forwarding state for the network, even on relatively small inputs. This forces programmers to waste time working around performance issues or even revert to using hardware-level APIs. This paper presents a new compiler for the NetKAT language that handles rich features including regular paths and virtual networks, and yet is several orders of magnitude faster than previous compilers. The compiler uses symbolic automata to calculate the extra state needed to implement "global" programs, and an intermediate representation based on binary decision diagrams to dramatically improve performance. We describe the design and implementation of three essential compiler stages: from virtual programs (which specify behavior in terms of virtual topologies) to global programs (which specify network-wide behavior in terms of physical topologies), from global programs to local programs (which specify behavior in terms of single-switch behavior), and from local programs to hardware-level forwarding tables. We present results from experiments on real-world benchmarks that quantify performance in terms of compilation time and forwarding table size.
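
    As a loose illustration of the final compiler stage (local programs to forwarding tables), and not the actual NetKAT compiler, the sketch below flattens a small decision tree over packet-header fields into prioritized match/action rules; the field names, addresses, and actions are made up, and a real implementation would use reduced, ordered decision diagrams with node sharing rather than a plain tree.

```python
# Leaves are actions; internal nodes test one header field against one value.
def leaf(action):
    return ("leaf", action)

def test(field, value, if_true, if_false):
    return ("test", field, value, if_true, if_false)

def to_rules(node, match=None, rules=None):
    """Flatten a decision tree into (match, action) rules, most specific first.
    Rules are meant to be matched top to bottom, so the fallthrough branch
    needs no explicit negation of earlier tests."""
    match = dict(match or {})
    rules = [] if rules is None else rules
    if node[0] == "leaf":
        rules.append((match, node[1]))
    else:
        _, field, value, if_true, if_false = node
        to_rules(if_true, {**match, field: value}, rules)  # field == value branch
        to_rules(if_false, match, rules)                   # fallthrough branch
    return rules

# Hypothetical single-switch policy: route two destinations, drop the rest.
policy = test("dst", "10.0.0.1", leaf("fwd(1)"),
              test("dst", "10.0.0.2", leaf("fwd(2)"), leaf("drop")))

for match, action in to_rules(policy):
    print(match, "->", action)
# {'dst': '10.0.0.1'} -> fwd(1)
# {'dst': '10.0.0.2'} -> fwd(2)
# {} -> drop
```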

    Probabilistic abductive logic programming using Dirichlet priors

    Probabilistic programming is an area of research that aims to develop general inference algorithms for probabilistic models expressed as probabilistic programs whose execution corresponds to inferring the parameters of those models. In this paper, we introduce a probabilistic programming language (PPL) based on abductive logic programming for performing inference in probabilistic models involving categorical distributions with Dirichlet priors. We encode these models as abductive logic programs enriched with probabilistic definitions and queries, and show how to execute and compile them to Boolean formulas. Using the latter, we perform generalized inference using one of two proposed Markov Chain Monte Carlo (MCMC) sampling algorithms: an adaptation of uncollapsed Gibbs sampling from related work and a novel collapsed Gibbs sampling algorithm (CGS). We show that CGS converges faster than the uncollapsed version on a latent Dirichlet allocation (LDA) task using synthetic data. On similar data, we compare our PPL with LDA-specific algorithms and other PPLs. We find that all methods, except one, perform similarly and that the more expressive the PPL, the slower it is. We illustrate applications of our PPL on real data in two variants of LDA models (Seed and Cluster LDA), and in the repeated insertion model (RIM). In the latter, our PPL yields similar conclusions to inference with EM for Mallows models.
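
    For context on the collapsed sampler the abstract refers to, the sketch below is a generic collapsed Gibbs sampler for plain LDA (the textbook algorithm, not the paper's PPL or its generalized CGS): the topic-mixture and topic-word parameters are integrated out, and each token's topic is resampled from counts over the remaining assignments. The toy corpus and hyperparameter values are invented.

```python
import random

def collapsed_gibbs_lda(docs, n_topics, n_vocab, alpha=0.1, beta=0.01, iters=200):
    # Count tables: document-topic, topic-word, and per-topic totals.
    ndk = [[0] * n_topics for _ in docs]
    nkw = [[0] * n_vocab for _ in range(n_topics)]
    nk = [0] * n_topics
    z = []  # topic assignment for every token, initialized at random
    for d, doc in enumerate(docs):
        z.append([])
        for w in doc:
            k = random.randrange(n_topics)
            z[d].append(k)
            ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1

    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                # Remove the token's current assignment from the counts.
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                # Collapsed conditional: p(k) ∝ (ndk+alpha) * (nkw+beta) / (nk+V*beta)
                probs = [(ndk[d][t] + alpha) * (nkw[t][w] + beta) / (nk[t] + n_vocab * beta)
                         for t in range(n_topics)]
                r = random.random() * sum(probs)
                acc = 0.0
                for t, p in enumerate(probs):
                    acc += p
                    if r <= acc:
                        k = t
                        break
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return z, ndk, nkw

# Tiny synthetic corpus: word ids drawn from a vocabulary of size 4.
docs = [[0, 0, 1, 1], [2, 3, 2, 3], [0, 1, 2, 3]]
assignments, doc_topic, topic_word = collapsed_gibbs_lda(docs, n_topics=2, n_vocab=4)
print(doc_topic)
```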

    Taming Numbers and Durations in the Model Checking Integrated Planning System

    The Model Checking Integrated Planning System (MIPS) is a temporal least commitment heuristic search planner based on a flexible object-oriented workbench architecture. Its design clearly separates explicit and symbolic directed exploration algorithms from the set of on-line and off-line computed estimates and associated data structures. MIPS has shown distinguished performance in the last two international planning competitions. In the last event, the description language was extended from pure propositional planning to include numerical state variables, action durations, and plan quality objective functions. Plans were no longer sequences of actions but time-stamped schedules. As a participant of the fully automated track of the competition, MIPS has proven to be a general system; in each track and every benchmark domain it efficiently computed plans of remarkable quality. This article introduces and analyzes the most important algorithmic novelties that were necessary to tackle the new layers of expressiveness in the benchmark problems and to achieve a high level of performance. The extensions include critical path analysis of sequentially generated plans to generate corresponding optimal parallel plans. The linear-time algorithm to compute the parallel plan bypasses known NP-hardness results for partial ordering by scheduling plans with respect to the set of actions and the imposed precedence relations. The efficiency of this algorithm also allows us to improve the exploration guidance: for each encountered planning state the corresponding approximate sequential plan is scheduled. One major strength of MIPS is its static analysis phase that grounds and simplifies parameterized predicates, functions and operators, that infers knowledge to minimize the state description length, and that detects domain object symmetries. The latter aspect is analyzed in detail. MIPS has been developed to serve as a complete and optimal state space planner, with admissible estimates, exploration engines and branching cuts. In the competition version, however, certain performance compromises had to be made, including floating point arithmetic, weighted heuristic search exploration according to an inadmissible estimate and parameterized optimization.
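
    The critical-path step mentioned above can be pictured with the following sketch (assumed action names, durations, and precedence relation, not MIPS code): each action of a sequential plan is given the earliest start time allowed by its predecessors in a single pass over the plan, and the makespan of the resulting parallel schedule falls out directly.

```python
def schedule(plan, durations, preds):
    """plan: actions in sequential-plan order; preds[a]: actions that must
    finish before a starts (each predecessor occurs earlier in the sequence).
    Processing actions in plan order keeps the pass linear in the number of
    actions plus precedence edges."""
    start = {}
    for action in plan:
        start[action] = max((start[p] + durations[p] for p in preds.get(action, [])),
                            default=0.0)
    makespan = max(start[a] + durations[a] for a in plan)
    return start, makespan

# Hypothetical sequential plan with durations and precedence constraints.
plan = ["load", "fuel", "fly", "unload"]
durations = {"load": 2.0, "fuel": 3.0, "fly": 5.0, "unload": 2.0}
preds = {"fly": ["load", "fuel"], "unload": ["fly"]}  # load and fuel may overlap

print(schedule(plan, durations, preds))
# ({'load': 0.0, 'fuel': 0.0, 'fly': 3.0, 'unload': 8.0}, 10.0)
```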

    Construction of Decision Diagrams for Product Configuration

    Knowledge compilation is a well-researched field focused on translating propositional logic formulas into efficient data structures that allow polynomial-time online queries related to the SAT problem. Knowledge compilation techniques can be used to partition product configuration tasks into two distinct phases: fast online processing and slow offline preprocessing. Binary Decision Diagrams (BDDs) are widely studied in this area and provide a graph representation of Boolean formulas. However, BDD construction can be time-consuming, particularly for large instances, as their size can grow exponentially with the number of variables. This paper explores methods to improve BDD construction time, including optimizing variable ordering. The evaluation involves applying these techniques to formulas in Rich Conjunctive Normal Form, comparing the results with Sentential Decision Diagrams. The experiments use CAS Software AG benchmarks.
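
    To make the ordering point concrete, here is a hand-rolled sketch (no BDD package assumed, and not the paper's toolchain) that builds a reduced ordered BDD from a small CNF formula by Shannon expansion with hash-consing and reports the node count; the classic formula (x1 or y1) and (x2 or y2) and (x3 or y3) stays small under an interleaved order but grows noticeably when all x variables precede all y variables.

```python
def build_bdd(clauses, order):
    """clauses: CNF as a list of frozensets of (variable, polarity) literals.
    order: the fixed BDD variable order. Returns the number of internal nodes."""
    nodes = {}                                   # (var, low, high) -> node id

    def mk(var, low, high):
        if low == high:                          # redundant test: skip the node
            return low
        return nodes.setdefault((var, low, high), len(nodes) + 2)  # 0/1 = terminals

    def restrict(cls, var, value):
        out = []
        for clause in cls:
            if (var, value) in clause:
                continue                         # clause satisfied: drop it
            reduced = frozenset(l for l in clause if l[0] != var)
            if not reduced:
                return None                      # empty clause: branch is false
            out.append(reduced)
        return out

    def build(cls, depth):
        if cls is None:
            return 0                             # terminal FALSE
        if not cls:
            return 1                             # all clauses satisfied: TRUE
        var = order[depth]
        return mk(var,
                  build(restrict(cls, var, False), depth + 1),
                  build(restrict(cls, var, True), depth + 1))

    build(clauses, 0)
    return len(nodes)

# (x1 or y1) and (x2 or y2) and (x3 or y3), all literals positive.
cnf = [frozenset({(f"x{i}", True), (f"y{i}", True)}) for i in range(1, 4)]
interleaved = ["x1", "y1", "x2", "y2", "x3", "y3"]
separated = ["x1", "x2", "x3", "y1", "y2", "y3"]
print(build_bdd(cnf, interleaved), "nodes vs", build_bdd(cnf, separated), "nodes")
# expected: 6 nodes vs 14 nodes for this formula
```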

    Knowledge compilation for online decision-making: application to the control of autonomous systems

    Controlling autonomous systems requires making decisions based on current observations and objectives. This involves tasks that must be executed online, with only the embedded computational power. However, these tasks are generally combinatorial: their computation is long and requires a lot of memory space. Executing them entirely online degrades the system's reactivity; executing them entirely offline, by anticipating every possible situation, can lead to a result too large to be embedded. Knowledge compilation techniques can provide a tradeoff by shifting as much of the computational effort as possible to before the system is put into operation. These techniques consist in translating a problem into a target language, obtaining a compiled form of the problem that is both easy to solve and as compact as possible. The translation step can be very long, but it is executed only once, offline. There are numerous target compilation languages, among them the language of binary decision diagrams (BDDs), which has been successfully used in various domains of artificial intelligence, such as model checking, configuration, and planning. The objective of this thesis was to study how knowledge compilation can be applied to the control of autonomous systems. We focused on realistic planning problems, which often involve variables with continuous domains or large enumerated domains (such as time or memory space). We oriented our work towards the search for target compilation languages expressive enough to represent such problems.
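
    As a toy picture of the offline/online split described above (illustrative only; the variable names, domains, and constraint are invented, and the thesis targets far more expressive compilation languages than an explicit solution list), the sketch below enumerates the consistent assignments of a small model once offline and then answers online questions about which values remain possible with cheap lookups instead of fresh combinatorial search.

```python
from itertools import product

def compile_offline(domains, constraints):
    """Enumerate all consistent assignments once, offline. A real compiler
    would emit a compact structure (e.g. a decision diagram) rather than an
    explicit list of solutions."""
    names = list(domains)
    models = []
    for values in product(*(domains[n] for n in names)):
        assignment = dict(zip(names, values))
        if all(c(assignment) for c in constraints):
            models.append(assignment)
    return models

def possible_values(models, var, partial):
    """Online query: values of `var` consistent with the decisions made so far."""
    return {m[var] for m in models
            if all(m[k] == v for k, v in partial.items())}

# Hypothetical two-variable model of a small autonomous platform.
domains = {"mode": ["idle", "acquire", "downlink"], "power": ["low", "high"]}
constraints = [lambda a: not (a["mode"] == "downlink" and a["power"] == "low")]

compiled = compile_offline(domains, constraints)                  # offline, once
print(possible_values(compiled, "power", {"mode": "downlink"}))   # {'high'}
```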