
    Probabilistic abductive logic programming using Dirichlet priors

    Probabilistic programming is an area of research that aims to develop general inference algorithms for probabilistic models expressed as probabilistic programs, where executing a program corresponds to inferring the parameters of its model. In this paper, we introduce a probabilistic programming language (PPL) based on abductive logic programming for performing inference in probabilistic models involving categorical distributions with Dirichlet priors. We encode these models as abductive logic programs enriched with probabilistic definitions and queries, and show how to execute and compile them to Boolean formulas. Using the latter, we perform generalized inference using one of two proposed Markov Chain Monte Carlo (MCMC) sampling algorithms: an adaptation of uncollapsed Gibbs sampling from related work and a novel collapsed Gibbs sampling (CGS) algorithm. We show that CGS converges faster than the uncollapsed version on a latent Dirichlet allocation (LDA) task using synthetic data. On similar data, we compare our PPL with LDA-specific algorithms and other PPLs. We find that all methods but one perform similarly, and that the more expressive the PPL, the slower it is. We illustrate applications of our PPL on real data in two variants of LDA (Seed and Cluster LDA) and in the repeated insertion model (RIM). In the latter, our PPL yields conclusions similar to inference with EM for Mallows models.
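
    The paper's collapsed Gibbs sampler operates on Boolean formulas compiled from abductive logic programs, which is not reproduced here. As a point of reference for the LDA benchmark it is evaluated on, below is a minimal sketch of standard collapsed Gibbs sampling for vanilla LDA; the function name and hyperparameter defaults are illustrative choices, not the paper's.

        import numpy as np

        def lda_collapsed_gibbs(docs, V, K, alpha=0.1, beta=0.1, iters=200, seed=0):
            """Toy collapsed Gibbs sampler for vanilla LDA.

            docs: list of lists of word ids in [0, V); K: number of topics.
            Topic proportions and topic-word distributions are integrated out
            (collapsed); only the topic assignment z of each token is sampled.
            """
            rng = np.random.default_rng(seed)
            ndk = np.zeros((len(docs), K))      # doc-topic counts
            nkw = np.zeros((K, V))              # topic-word counts
            nk = np.zeros(K)                    # tokens per topic
            z = [[rng.integers(K) for _ in doc] for doc in docs]
            for d, doc in enumerate(docs):      # initialise counts
                for i, w in enumerate(doc):
                    k = z[d][i]
                    ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
            for _ in range(iters):
                for d, doc in enumerate(docs):
                    for i, w in enumerate(doc):
                        k = z[d][i]             # remove token from counts
                        ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                        # collapsed conditional: p(z = k | all other assignments)
                        p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                        k = rng.choice(K, p=p / p.sum())
                        z[d][i] = k             # add token back under new topic
                        ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
            # posterior mean estimate of the topic-word distributions
            return (nkw + beta) / (nkw.sum(axis=1, keepdims=True) + V * beta)

        # e.g. phi = lda_collapsed_gibbs([[0, 1, 2, 0], [2, 3, 3, 2]], V=4, K=2)

    Collapsing the topic proportions and topic-word distributions leaves only the discrete assignments z to sample, which is the usual reason collapsed samplers mix faster and is consistent with the convergence comparison reported above.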

    An abductive-inductive algorithm for probabilistic inductive logic programming

    The integration of abduction and induction has led to a variety of non-monotonic ILP systems. XHAIL is one of these systems, in which abduction is used to compute hypotheses that subsume Kernel Sets. Peircebayes, on the other hand, is a recently proposed logic-based probabilistic programming approach that combines abduction with parameter learning to learn distributions over most likely explanations. In this paper, we propose an approach for integrating probabilistic inference with ILP. The basic idea is to redefine the inductive task of XHAIL as statistical abduction, and to use Peircebayes to learn a probability distribution over hypotheses. An initial evaluation of the proposed algorithm is given using synthetic data.
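
    The abstract leaves the mechanics of statistical abduction implicit. The toy sketch below, with invented abducible facts and explanations, illustrates the core idea that approaches like Peircebayes build on: scoring alternative explanations of an observation by the probability of the abducibles they require, under an independence assumption.

        # Hypothetical probabilities of abducible facts (not from the paper).
        prob = {"gene_a": 0.3, "gene_b": 0.6, "exposure": 0.2}

        # Each explanation of the observation is a set of abducibles that,
        # together with the background theory, would entail it.
        explanations = [
            {"gene_a"},
            {"gene_b", "exposure"},
        ]

        def explanation_prob(expl):
            # Independence assumption: multiply the probabilities of the
            # abducibles the explanation requires.
            p = 1.0
            for a in expl:
                p *= prob[a]
            return p

        best = max(explanations, key=explanation_prob)
        print(best, explanation_prob(best))  # most likely explanation

    Parameter learning then amounts to adjusting the abducible probabilities so that observed data receives high probability under its explanations.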

    Active Inference and Intentional Behaviour

    Recent advances in theoretical biology suggest that basal cognition and sentient behaviour are emergent properties of in vitro cell cultures and neuronal networks, respectively. Such neuronal networks spontaneously learn structured behaviours in the absence of reward or reinforcement. In this paper, we characterise this kind of self-organisation through the lens of the free energy principle, i.e., as self-evidencing. We do this by first discussing the definitions of reactive and sentient behaviour in the setting of active inference, which describes the behaviour of agents that model the consequences of their actions. We then introduce a formal account of intentional behaviour, which describes agents as driven by a preferred endpoint or goal in latent state-spaces. We investigate these forms of (reactive, sentient, and intentional) behaviour using simulations. First, we simulate the aforementioned in vitro experiments, in which neuronal cultures spontaneously learn to play Pong, by implementing nested, free energy minimising processes. The simulations are then used to deconstruct the ensuing predictive behaviour, leading to the distinction between merely reactive, sentient, and intentional behaviour, with the latter formalised in terms of inductive planning. This distinction is further studied using simple machine learning benchmarks (navigation in a grid world and the Tower of Hanoi problem), which show how quickly and efficiently adaptive behaviour emerges under an inductive form of active inference. (33 pages, 9 figures)
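
    As a loose illustration of the formal account of intentional behaviour, the sketch below scores discrete policies by the risk term of expected free energy, i.e., the divergence between predicted and preferred outcomes, so that a preferred endpoint drives action selection. The two-policy world, its transition matrices, and the omission of the ambiguity term are all simplifying assumptions for illustration.

        import numpy as np

        def kl(p, q):
            """KL divergence between discrete distributions (small floor for zeros)."""
            p, q = np.asarray(p) + 1e-12, np.asarray(q) + 1e-12
            return float(np.sum(p * np.log(p / q)))

        # Hypothetical 3-state world; B[pi] is the transition matrix under
        # each policy, C is the preferred (goal) outcome distribution.
        B = {
            "left":  np.array([[0.9, 0.1, 0.0], [0.8, 0.2, 0.0], [0.0, 0.9, 0.1]]),
            "right": np.array([[0.0, 0.1, 0.9], [0.0, 0.1, 0.9], [0.0, 0.0, 1.0]]),
        }
        C = np.array([0.05, 0.05, 0.9])     # prior preference: end in state 3
        q_s = np.array([1.0, 0.0, 0.0])     # current belief over states

        # Risk term of expected free energy: divergence between predicted and
        # preferred outcomes (ambiguity omitted; observations = states here).
        G = {pi: kl(q_s @ B[pi], C) for pi in B}
        probs = np.exp(-np.array(list(G.values())))
        probs /= probs.sum()                # softmax over negative risk
        print(dict(zip(G, probs)))          # "right" should dominate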

    Learning to Find Proofs and Theorems by Learning to Refine Search Strategies: The Case of Loop Invariant Synthesis

    We propose a new approach to automated theorem proving in which an AlphaZero-style agent self-trains to refine a generic high-level expert strategy expressed as a nondeterministic program. An analogous teacher agent self-trains to generate tasks of suitable relevance and difficulty for the learner. This allows minimal amounts of domain knowledge to be leveraged to tackle problems for which training data is unavailable or hard to synthesize. As a specific illustration, we consider loop invariant synthesis for imperative programs and use neural networks to refine both the teacher and solver strategies.
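
    The refined strategies themselves are learned and not shown here. Independent of that machinery, a common building block in loop invariant synthesis is cheap falsification of candidate invariants against execution traces before any proof attempt; the sketch below illustrates this with an invented loop and candidate templates.

        # Toy falsification step for loop invariant synthesis: test candidate
        # invariants against execution traces of a simple loop.
        # The loop and the templates are illustrative, not from the paper.

        def traces(n):
            """States (i, s) at each loop head of: s = 0; for i in range(n): s += i."""
            out, s = [], 0
            for i in range(n + 1):
                out.append((i, s))
                s += i
            return out

        # Candidate invariants over (i, s), expressed as predicates.
        candidates = {
            "s == i*(i-1)//2": lambda i, s: s == i * (i - 1) // 2,
            "s == i*i":        lambda i, s: s == i * i,
            "s >= 0":          lambda i, s: s >= 0,
        }

        surviving = {
            name for name, pred in candidates.items()
            if all(pred(i, s) for n in range(6) for (i, s) in traces(n))
        }
        print(surviving)  # {'s == i*(i-1)//2', 's >= 0'}

    Only candidates that survive falsification need to be handed to an expensive prover, which is the kind of pruning a learned search strategy can exploit.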

    A Subjective Logic Library Constructed Using Monadic Higher Order Functions

    Subjective Logic is a recently emergent probabilistic logic system that allows for reasoning under uncertainty. Though the formalism is algebraically expressive, it lacks software tooling to support computation, such as code libraries, calculators, and software for the development of decision support systems. With this motivation, we present a complete design for a library of opinion data structures and operators, constructed from higher order functions, that is capable of representing and evaluating well-formed expressions of Subjective Logic. By leveraging monads, mathematical objects from Category Theory, we have enabled our operators to detect and propagate run-time errors without sacrificing compositionality. Furthermore, we have conducted a termination analysis on the expression evaluator and a complexity analysis on a representative subset of the operators. We have also proposed and implemented extensions to the set of Subjective Logic operators. Lastly, we provide examples of how to compute the values of Subjective Logic expressions.
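
    The abstract does not fix an implementation language, so the following is a Python transliteration of the stated idea: opinions as data structures, a fusion operator from the subjective logic literature, and a result type whose bind plays the role of the error-propagating monad. The simplified base-rate handling in fusion is an assumption made for brevity.

        from dataclasses import dataclass

        @dataclass
        class Opinion:
            """Binomial subjective opinion: belief, disbelief, uncertainty, base rate."""
            b: float; d: float; u: float; a: float

        def ok(x): return ("ok", x)
        def err(msg): return ("err", msg)

        def bind(m, f):
            """Monadic bind: apply f to a successful value, propagate errors."""
            return f(m[1]) if m[0] == "ok" else m

        def valid(op):
            """Check the additivity constraint b + d + u = 1."""
            if abs(op.b + op.d + op.u - 1.0) > 1e-9:
                return err(f"invalid opinion: {op}")
            return ok(op)

        def fuse(x, y):
            """Cumulative fusion of two opinions (Josang); undefined when both
            uncertainties are zero, which the monad turns into an error value.
            Base rates are averaged here for brevity; the full definition
            weights them by uncertainty."""
            k = x.u + y.u - x.u * y.u
            if k == 0.0:
                return err("cumulative fusion undefined for two dogmatic opinions")
            b = (x.b * y.u + y.b * x.u) / k
            u = (x.u * y.u) / k
            return ok(Opinion(b, 1.0 - b - u, u, (x.a + y.a) / 2))

        # Compose with bind: validation and fusion errors short-circuit.
        x, y = Opinion(0.6, 0.2, 0.2, 0.5), Opinion(0.3, 0.3, 0.4, 0.5)
        result = bind(valid(x), lambda vx: bind(valid(y), lambda vy: fuse(vx, vy)))
        print(result)

    Because every operator returns the same result type, larger Subjective Logic expressions compose freely while any run-time error surfaces at the end, which is the compositionality property the paper attributes to its monadic design.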

    A Novel Neural-symbolic System under Statistical Relational Learning

    A key objective in the field of artificial intelligence is to develop cognitive models that can exhibit human-like intellectual capabilities. One promising approach to achieving this is through neural-symbolic systems, which combine the strengths of deep learning and symbolic reasoning. However, current approaches in this area have been limited in how they combine the two paradigms, as well as in their generalization and interpretability. To address these limitations, we propose a general bi-level probabilistic graphical reasoning framework called GBPGR. This framework leverages statistical relational learning to integrate deep learning models and symbolic reasoning in a mutually beneficial manner. In GBPGR, the results of symbolic reasoning are used to refine and correct the predictions made by the deep learning models, while the deep learning models help improve the efficiency of the symbolic reasoning process. Through extensive experiments, we demonstrate that our approach achieves high performance and generalizes effectively in both transductive and inductive tasks.
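
    GBPGR's bi-level inference is only summarized above. As a generic, much simplified illustration of the "symbolic reasoning corrects neural predictions" direction, the sketch below re-weights a classifier's class probabilities by a penalty for violated rules; the labels, rule, and penalty value are invented and do not come from the paper.

        import numpy as np

        # Hypothetical setup: a classifier scores three labels for an image,
        # and a symbolic rule says a 'bird' label requires 'has_wings'.
        labels = ["bird", "cat", "plane"]
        neural_probs = np.array([0.5, 0.1, 0.4])
        detected = {"has_wings": False}

        def satisfies(label):
            # Rule: bird -> has_wings. Other labels are unconstrained here.
            if label == "bird":
                return detected["has_wings"]
            return True

        # Soft correction: multiply by exp(-penalty) for violated rules,
        # then renormalise (a crude stand-in for bi-level inference).
        penalty = 5.0
        weights = np.array([1.0 if satisfies(l) else np.exp(-penalty) for l in labels])
        corrected = neural_probs * weights
        corrected /= corrected.sum()
        print(dict(zip(labels, corrected.round(3))))  # mass shifts to 'plane'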

    Using the language of thought

    Thesis: Ph.D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2018. Cataloged from the PDF version of the thesis. Includes bibliographical references (pages 125-129). In this thesis, I develop and explore two novel models of how humans might be able to acquire high-level conceptual knowledge by performing probabilistic inference over a language of thought (Fodor 1975), a space of symbolic and compositional mental representations sufficiently expressive to capture the meanings of human thoughts and utterances. These models and their associated learning algorithms are motivated by an attempt to understand the algorithmic principles that might underlie a child's ability to search the haystack of sentences in her language of thought to find the needle that corresponds to any specific concept. The first model takes advantage of the compositionality inherent to LOT representations, framing concept acquisition as program induction in a functional programming language; the Exploration-Compression algorithm this model motivates iteratively builds a library of useful program fragments that, when composed, restructures the search space, making more useful programs shorter and easier to find. The second model, the Infinite Knowledge Base Model (IKM), frames concept learning as probabilistic inference over the space of relational knowledge bases; the algorithm I develop for learning in this model treats this inference problem as a state-space search over abductive proofs of the learner's observed data. This framing allows us to take advantage of powerful techniques from the heuristic search and classical planning literature to guide the learner. In the final part of this thesis, I explore the behavior of the IKM on several case studies of intuitive theories from the concept learning literature, and I discuss evidence for and against it with respect to other approaches to LOT models. By Eyal Dechter.
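
    The compression half of the Exploration-Compression loop can be illustrated concretely: find a frequently reused program fragment across a corpus and promote it to a library primitive, so the programs that use it become shorter to express. The corpus, fragment representation, and primitive name below are invented for illustration.

        from collections import Counter

        # Toy compression step in the spirit of Exploration-Compression:
        # programs are S-expressions encoded as nested tuples.
        programs = [
            ("add", ("mul", "x", "x"), "one"),
            ("sub", ("mul", "x", "x"), ("mul", "x", "x")),
            ("add", ("mul", "x", "x"), "x"),
        ]

        def fragments(expr):
            """Yield every compound sub-expression of a program."""
            if isinstance(expr, tuple):
                yield expr
                for child in expr:
                    yield from fragments(child)

        counts = Counter(f for p in programs for f in fragments(p))
        fragment, _ = counts.most_common(1)[0]   # most reused fragment

        def rewrite(expr, frag, name):
            """Replace occurrences of frag with the new primitive's name."""
            if expr == frag:
                return name
            if isinstance(expr, tuple):
                return tuple(rewrite(c, frag, name) for c in expr)
            return expr

        library = {"square_x": fragment}          # promote to the library
        compressed = [rewrite(p, fragment, "square_x") for p in programs]
        print(compressed)  # programs are now shorter to express

    Iterating this step restructures the search space exactly as described above: once a fragment is in the library, every program built from it is shorter and therefore easier to find.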