80 research outputs found

    Incremental Compilation of Bayesian Networks Based on Maximal Prime Subgraphs

    Learning inference friendly Bayesian networks: using incremental compilation

    IBIA: An Incremental Build-Infer-Approximate Framework for Approximate Inference of Partition Function

    Exact computation of the partition function is known to be intractable, necessitating approximate inference techniques. Existing methods for approximate inference are slow to converge on many benchmarks, and controlling the accuracy-complexity trade-off is non-trivial in many of them. We propose a novel incremental build-infer-approximate (IBIA) framework for approximate inference that addresses these issues. In this framework, the probabilistic graphical model is converted into a sequence of clique tree forests (SCTF) with bounded clique sizes. We show that the SCTF can be used to compute the partition function efficiently. We propose two new algorithms used to construct the SCTF and prove the correctness of both. The first incrementally constructs clique tree forests (CTFs) and is guaranteed to give a valid CTF with bounded clique sizes; the second is an approximation algorithm that takes a calibrated CTF as input and yields a valid, calibrated CTF with reduced clique sizes as output. We have evaluated our method on several benchmark sets from recent UAI competitions, and our results show good accuracies with competitive runtimes.
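
    The control flow of such a framework can be made concrete with a toy stand-in. The sketch below is not the IBIA algorithm itself: factors are plain dicts over binary variables, and the approximation step is replaced by exact early elimination of any variable that no later factor mentions, whereas the real framework operates on calibrated clique tree forests and clips clique sizes.

```python
# Toy sketch of a build-infer-approximate style loop for computing Z.
# Factors map assignments (tuples over binary variables) to values.
from itertools import product

def multiply(f, g, scope_f, scope_g):
    """Pointwise product of two factors; returns (table, joint scope)."""
    scope = sorted(set(scope_f) | set(scope_g))
    idx_f = [scope.index(v) for v in scope_f]
    idx_g = [scope.index(v) for v in scope_g]
    table = {}
    for a in product((0, 1), repeat=len(scope)):
        table[a] = (f[tuple(a[i] for i in idx_f)]
                    * g[tuple(a[i] for i in idx_g)])
    return table, scope

def sum_out(f, scope, var):
    """Marginalize one variable out of a factor."""
    pos = scope.index(var)
    table = {}
    for a, val in f.items():
        key = a[:pos] + a[pos + 1:]
        table[key] = table.get(key, 0.0) + val
    return table, [v for v in scope if v != var]

def partition_function(factors):
    """Absorb factors one at a time ("build"), then shrink the active
    scope by summing out variables no remaining factor mentions."""
    acc, scope = {(): 1.0}, []
    for i, (f, s) in enumerate(factors):
        acc, scope = multiply(acc, f, scope, s)
        future = set().union(set(), *(set(s2) for _, s2 in factors[i + 1:]))
        for v in [v for v in scope if v not in future]:
            acc, scope = sum_out(acc, scope, v)
    return acc[()]   # scope is empty once all factors are absorbed

# Toy chain a - b - c with two pairwise factors; Z should be 24.
f_ab = {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 2.0, (1, 1): 1.0}
f_bc = {(0, 0): 3.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0}
print(partition_function([(f_ab, ['a', 'b']), (f_bc, ['b', 'c'])]))
```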

    Heuristic assignment of CPDs for probabilistic inference in junction trees

    Much research has been done on the efficient computation of probabilistic queries posed to Bayesian networks (BNs). One of the popular architectures for exact inference on BNs is the junction tree (JT) based architecture; among the different architectures developed, HUGIN is the most efficient JT-based one. The Global Propagation (GP) method used in the HUGIN architecture is arguably one of the best methods for probabilistic inference in BNs. Before propagation, initialization is performed to obtain a potential for each cluster in the JT. Then, with the GP method, each cluster potential becomes the cluster marginal through message passing with its neighboring clusters. Many researchers have proposed improvements to make this message propagation more efficient, yet the GP method can still be very slow for dense networks. As BNs are applied to larger, more complex, and more realistic applications, developing more efficient inference algorithms has become increasingly important. Towards this goal, in this paper we present heuristics for initialization that avoid unnecessary message passing among the clusters of the JT, thereby improving the performance of the architecture by passing fewer messages.
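
    To make the initialization step concrete, here is a minimal sketch in which each CPD is assigned to a clique that covers its family (the child variable plus its parents). The tie-breaking heuristic shown, preferring the smallest covering clique, is only an illustrative stand-in, not the specific heuristics proposed in the paper.

```python
def assign_cpds(cliques, families):
    """cliques: list of frozensets (one per JT cluster); families: list of
    frozensets (child plus parents, one per CPD). Returns CPD -> clique."""
    assignment = {}
    for i, fam in enumerate(families):
        candidates = [j for j, c in enumerate(cliques) if fam <= c]
        if not candidates:
            raise ValueError(f"no clique covers family {set(fam)}")
        # Illustrative heuristic: pick the smallest covering clique, so
        # large cliques keep near-unity potentials and lighter messages.
        assignment[i] = min(candidates, key=lambda j: len(cliques[j]))
    return assignment

# Toy junction tree with clusters AB - ABC - BCD and four CPD families.
cliques = [frozenset("AB"), frozenset("ABC"), frozenset("BCD")]
families = [frozenset("A"), frozenset("AB"),
            frozenset("AC"), frozenset("BCD")]
print(assign_cpds(cliques, families))  # {0: 0, 1: 0, 2: 1, 3: 2}
```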

    On the relationship between satisfiability and partially observable Markov decision processes

    Stochastic satisfiability (SSAT), quantified Boolean formula (QBF) satisfiability, and decision-theoretic planning in finite-horizon partially observable Markov decision processes (POMDPs) are all PSPACE-complete problems. Since they are complete for the same complexity class, I show how to convert them into one another in polynomial time and space. I discuss various properties of each encoding and how they translate into equivalent constructs in the other encodings. An important lesson of these reductions is that the states in SSAT and flat POMDPs do not match; therefore, comparing the scalability of satisfiability and flat POMDP solvers based on the size of the state spaces they can tackle is misleading. A new SSAT solver called SSAT-Prime is proposed and implemented. It includes improvements to watched literals, component caching, and the detection of symmetries with upper and lower bounds under certain conditions. SSAT-Prime is compared against a state-of-the-art solver for probabilistic inference and a native POMDP solver on challenging benchmarks.
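
    For readers unfamiliar with SSAT, the naive evaluator below shows the semantics that any such reduction must preserve: existential variables are maximized over, randomized variables are averaged with their probabilities, and a complete assignment scores 1 exactly when the CNF is satisfied. This exponential sketch has none of SSAT-Prime's machinery (watched literals, component caching, bounds).

```python
def ssat(prefix, clauses, assign=None):
    """prefix: list of (kind, var, p) with kind 'E' (existential) or 'R'
    (randomized, true with probability p); clauses: CNF as lists of
    signed integers. Returns the value of the SSAT formula."""
    assign = assign or {}
    if not prefix:
        # Fully assigned: 1.0 if every clause has a satisfied literal.
        return float(all(any(assign[abs(l)] == (l > 0) for l in c)
                         for c in clauses))
    (kind, var, p), rest = prefix[0], prefix[1:]
    vals = [ssat(rest, clauses, {**assign, var: b}) for b in (False, True)]
    if kind == 'E':                              # existential: best choice
        return max(vals)
    return (1 - p) * vals[0] + p * vals[1]       # randomized: expectation

# E x1 . R(0.5) x2 . (x1 or x2) and (not x1 or not x2)  ->  value 0.5
prefix = [('E', 1, None), ('R', 2, 0.5)]
clauses = [[1, 2], [-1, -2]]
print(ssat(prefix, clauses))
```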

    A lightweight, graph-theoretic model of class-based similarity to support object-oriented code reuse.

    The work presented in this thesis is principally concerned with the development of a method and set of tools designed to support the identification of class-based similarity in collections of object-oriented code. Attention is focused on enhancing the potential for software reuse in situations where a reuse process is either absent or informal, and the characteristics of the organisation are unsuitable, or resources unavailable, to promote and sustain a systematic approach to reuse. The approach builds on the definition of a formal, attributed, relational model that captures the inherent structure of class-based, object-oriented code. Based on code-level analysis, it relies solely on the structural characteristics of the code and the peculiarly object-oriented features of the class as an organising principle: classes, the entities comprising a class, and the intra- and inter-class relationships existing between them are significant factors in defining a two-phase similarity measure as a basis for the comparison process. Established graph-theoretic techniques are adapted and applied via this model to the problem of determining similarity between classes. This thesis illustrates a successful transfer of techniques from the domains of molecular chemistry and computer vision; both domains provide an existing template for the analysis and comparison of structures as graphs. The inspiration for representing classes as attributed relational graphs, and for applying graph-theoretic techniques and algorithms to their comparison, arose out of a well-founded intuition that a common basis in graph theory was sufficient to enable a reasonable transfer of these techniques to the problem of determining similarity in object-oriented code. The practical application of this work relates to the identification and indexing of instances of recurring, class-based, common structure present in established and evolving collections of object-oriented code. A classification so generated additionally provides a framework for class-based matching over an existing code base, both from the perspective of newly introduced classes and for search "templates" provided by the incomplete, iteratively constructed and refined classes associated with current and ongoing development. The tools and techniques developed here support enabling and improving shared awareness of reuse opportunity, based on analysing structural similarity in past and ongoing development; they can in turn be seen as part of a process of domain analysis capable of stimulating the evolution of a systematic reuse ethic.
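
    As a toy illustration of the representation, the sketch below encodes a class as an attributed relational graph (members tagged by kind, with labelled intra-class relations) and scores a pair of classes with a simple Jaccard overlap of node and edge attributes. This is only a cheap stand-in for the thesis's two-phase, graph-matching-based measure; the class names and relation labels are invented.

```python
from collections import Counter

def class_graph(members, relations):
    """members: {name: kind}; relations: [(src, label, dst), ...].
    Returns multisets of node kinds and of attributed edges."""
    nodes = Counter(members.values())
    edges = Counter((members[s], lbl, members[d]) for s, lbl, d in relations)
    return nodes, edges

def similarity(g1, g2):
    """Average Jaccard overlap of the node and edge attribute multisets."""
    def jaccard(a, b):
        inter, union = sum((a & b).values()), sum((a | b).values())
        return inter / union if union else 1.0
    return 0.5 * (jaccard(g1[0], g2[0]) + jaccard(g1[1], g2[1]))

stack = class_graph({'items': 'field', 'push': 'method', 'pop': 'method'},
                    [('push', 'uses', 'items'), ('pop', 'uses', 'items')])
queue = class_graph({'buf': 'field', 'enqueue': 'method',
                     'dequeue': 'method', 'size': 'method'},
                    [('enqueue', 'uses', 'buf'), ('dequeue', 'uses', 'buf')])
print(similarity(stack, queue))  # structurally close, so near 1.0
```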

    Biomedical applications of belief networks

    Biomedicine is an area in which computers have long been expected to play a significant role. Although many of the early claims have proved unrealistic, computers are gradually becoming accepted in the biomedical, clinical and research environment. Within these application areas, expert systems appear to have met with the most resistance, especially when applied to image interpretation. In order to improve the acceptance of computerised decision support systems it is necessary to provide the information needed to make rational judgements concerning the inferences the system has made. This entails an explanation of what inferences were made, how the inferences were made and how the results of the inference are to be interpreted. Furthermore, there must be a consistent approach to the combining of information from low-level computational processes through to high-level expert analyses. Until recently, ad hoc formalisms were seen as the only tractable approach to reasoning under uncertainty. A review of some of these formalisms suggests that they are less than ideal for the purposes of decision making. Belief networks provide a tractable way of utilising probability theory as an inference formalism, combining the theoretical consistency of probability for inference and decision making with the ability to use the knowledge of domain experts. The potential of belief networks in biomedical applications has already been recognised, and there has been substantial research into the use of belief networks for medical diagnosis and methods for handling large, interconnected networks. In this thesis the use of belief networks is extended to include detailed image model matching, showing how, in principle, feature measurement can be undertaken in a fully probabilistic way. The belief networks employed are usually cyclic and have strong influences between adjacent nodes, so new techniques for probabilistic updating based on a model of the matching process have been developed. An object-orientated inference shell called FLAPNet has been implemented and used to apply the belief network formalism to two application domains. The first application is model-based matching in fetal ultrasound images. The imaging modality and biological variation in the subject make model matching a highly uncertain process. A dynamic, deformable model, similar to active contour models, is used; a belief network combines constraints derived from local evidence in the image with global constraints derived from trained models to control the iterative refinement of an initial model cue. In the second application, a belief network is used for the incremental aggregation of evidence occurring during the classification of objects on a cervical smear slide, as part of an automated pre-screening system. A belief network provides both an explicit domain model and a mechanism for the incremental aggregation of evidence, two attributes important in pre-screening systems. Overall, it is argued that belief networks combine the necessary quantitative features required of a decision support system with desirable qualitative features that will lead to improved acceptability of expert systems in the biomedical domain.
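
    The incremental aggregation of evidence mentioned for the cervical smear application can be illustrated with the simplest belief-network fragment: one class node with conditionally independent findings, updated by Bayes' rule as each finding is observed. The classes, findings, and likelihood numbers below are invented for illustration and the independence assumption is a deliberate simplification.

```python
def update(prior, likelihoods, finding):
    """One aggregation step: posterior is proportional to
    prior(class) * P(finding | class), then renormalized."""
    post = {c: prior[c] * likelihoods[c][finding] for c in prior}
    z = sum(post.values())
    return {c: p / z for c, p in post.items()}

prior = {'normal': 0.95, 'abnormal': 0.05}
lik = {'normal':   {'dark_nucleus': 0.1, 'irregular_shape': 0.2},
       'abnormal': {'dark_nucleus': 0.7, 'irregular_shape': 0.6}}
belief = prior
for finding in ['dark_nucleus', 'irregular_shape']:
    belief = update(belief, lik, finding)   # evidence arrives incrementally
    print(finding, belief)
```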

    Knowledge compilation for online decision-making: application to the control of autonomous systems

    Controlling autonomous systems requires making decisions based on current observations and objectives. This involves tasks that must be executed online, with the embedded computational power only. However, these tasks are generally combinatorial: their computation is long and requires a lot of memory space. Executing them entirely online compromises the system's reactivity, while executing them entirely offline, by anticipating every possible situation, can lead to a result too large to be embedded. Knowledge compilation techniques can provide a tradeoff by shifting as much of the computational effort as possible to before the system's launch. These techniques translate a problem into some target language, yielding a compiled form of the problem that is both easy to solve and as compact as possible. The translation step can be very long, but it is executed only once, offline. There are numerous target compilation languages, among them the language of binary decision diagrams (BDDs), which have been used successfully in various domains of artificial intelligence, such as model checking, configuration, and planning. The objective of the thesis was to study how knowledge compilation can be applied to the control of autonomous systems. We focused on realistic planning problems, which often involve variables with continuous domains or large enumerated domains (such as time or memory space), and oriented our work towards the search for target compilation languages expressive enough to represent such problems.
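
    The offline/online split at the heart of knowledge compilation can be sketched in a few lines: compile a Boolean function once into a reduced ordered BDD (here via memoized Shannon expansion over a fixed variable order), then answer online queries such as model counting in time linear in the size of the compiled form. This is a minimal illustration, not the more expressive target languages studied in the thesis.

```python
from functools import lru_cache

def compile_bdd(f, n):
    """Offline step: memoized Shannon expansion of f over x0..x_{n-1}.
    Nodes are (var, low, high) tuples; equal sub-functions collapse by
    structural equality, and redundant tests are dropped."""
    @lru_cache(maxsize=None)
    def build(i, fixed):
        if i == n:
            return f(fixed)                      # terminal: True or False
        lo = build(i + 1, fixed + (0,))
        hi = build(i + 1, fixed + (1,))
        return lo if lo == hi else (i, lo, hi)   # reduction rule
    return build(0, ())

def count_models(bdd, n, i=0, memo=None):
    """Online query: number of satisfying assignments, linear in BDD size."""
    memo = {} if memo is None else memo
    if isinstance(bdd, bool):
        return 2 ** (n - i) if bdd else 0
    if (bdd, i) not in memo:
        var, lo, hi = bdd
        skipped = 2 ** (var - i)                 # variables skipped over
        memo[(bdd, i)] = skipped * (count_models(lo, n, var + 1, memo)
                                    + count_models(hi, n, var + 1, memo))
    return memo[(bdd, i)]

# Compile the majority-of-three function once, then query it online.
maj = compile_bdd(lambda x: x[0] + x[1] + x[2] >= 2, 3)
print(count_models(maj, 3))  # -> 4 satisfying assignments
```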