
    Assessment techniques, database design and software facilities for thermodynamics and diffusion

    The purpose of this article is to give a set of recommendations to producers of assessed thermodynamic data, who may be involved either in the critical evaluation of limited chemical systems or in the creation and dissemination of larger thermodynamic databases. It is also hoped that reviewers and editors of scientific publications in this field will find some of the information useful. Good practice in the assessment process is essential, particularly as datasets from many different sources may be combined into a single database. With this in mind, we highlight some problems that can arise during the assessment process and propose a quality assurance procedure. It is worth mentioning that the provision of reliable assessed thermodynamic data relies heavily on the availability of high-quality experimental information. The different software packages for thermodynamics and diffusion are described here only briefly.

    Probabilistic abductive logic programming using Dirichlet priors

    Probabilistic programming is an area of research that aims to develop general inference algorithms for probabilistic models expressed as probabilistic programs whose execution corresponds to inferring the parameters of those models. In this paper, we introduce a probabilistic programming language (PPL) based on abductive logic programming for performing inference in probabilistic models involving categorical distributions with Dirichlet priors. We encode these models as abductive logic programs enriched with probabilistic definitions and queries, and show how to execute and compile them to Boolean formulas. Using the latter, we perform generalized inference using one of two proposed Markov chain Monte Carlo (MCMC) sampling algorithms: an adaptation of uncollapsed Gibbs sampling from related work and a novel collapsed Gibbs sampling (CGS). We show that CGS converges faster than the uncollapsed version on a latent Dirichlet allocation (LDA) task using synthetic data. On similar data, we compare our PPL with LDA-specific algorithms and other PPLs. We find that all methods, except one, perform similarly and that the more expressive the PPL, the slower it is. We illustrate applications of our PPL on real data in two variants of LDA models (Seed and Cluster LDA), and in the repeated insertion model (RIM). In the latter, our PPL yields similar conclusions to inference with EM for Mallows models.
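    The PPL itself is not reproduced here, but the following minimal, generic collapsed Gibbs sampler for LDA (the standard update with the Dirichlet-distributed theta and phi integrated out) sketches the kind of inference the CGS algorithm performs. The hyperparameters alpha and beta, the toy corpus, and all function names are illustrative assumptions, not taken from the paper.

        # Minimal collapsed Gibbs sampler for LDA (illustrative sketch only).
        import numpy as np

        def collapsed_gibbs_lda(docs, K, V, alpha=0.1, beta=0.01, iters=200, seed=0):
            rng = np.random.default_rng(seed)
            ndk = np.zeros((len(docs), K))   # doc-topic counts
            nkw = np.zeros((K, V))           # topic-word counts
            nk = np.zeros(K)                 # topic totals
            z = []                           # topic assignment per token
            for d, doc in enumerate(docs):
                zd = rng.integers(K, size=len(doc))
                z.append(zd)
                for w, k in zip(doc, zd):
                    ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
            for _ in range(iters):
                for d, doc in enumerate(docs):
                    for i, w in enumerate(doc):
                        k = z[d][i]
                        # Remove the token's current assignment from the counts.
                        ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                        # Conditional p(z = k | rest), Dirichlet parameters integrated out.
                        p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                        k = rng.choice(K, p=p / p.sum())
                        z[d][i] = k
                        ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
            return ndk, nkw

        # Toy corpus: documents as lists of word ids over a vocabulary of size V.
        docs = [[0, 1, 2, 0], [3, 4, 3, 4], [0, 2, 4, 1]]
        ndk, nkw = collapsed_gibbs_lda(docs, K=2, V=5)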

    Higher-order Linear Logic Programming of Categorial Deduction

    We show how categorial deduction can be implemented in higher-order (linear) logic programming, thereby realising parsing as deduction for the associative and non-associative Lambek calculi. This provides a method of solution to the parsing problem of Lambek categorial grammar applicable to a variety of its extensions. (Comment: 8 pages LaTeX, uses eaclap.sty, to appear EACL9)
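    As a rough illustration of what "parsing as deduction" means in the categorial setting, the sketch below runs a CKY-style chart over the purely applicative (AB) fragment, with forward and backward application as the only inference rules. It does not reproduce the paper's higher-order linear logic encoding of the full (non-)associative Lambek calculi, and the lexicon is invented.

        # Toy parsing-as-deduction for the applicative (AB) categorial fragment.
        # Categories: atoms are strings; ("/", X, Y) is X/Y; ("\\", Y, X) is Y\X.
        LEX = {
            "john": ["np"],
            "mary": ["np"],
            "loves": [("/", ("\\", "np", "s"), "np")],  # (np\s)/np, a transitive verb
        }

        def apply_rules(x, y):
            """Forward application X/Y, Y => X and backward application Y, Y\\X => X."""
            out = []
            if isinstance(x, tuple) and x[0] == "/" and x[2] == y:
                out.append(x[1])
            if isinstance(y, tuple) and y[0] == "\\" and y[1] == x:
                out.append(y[2])
            return out

        def parses(words, goal="s"):
            n = len(words)
            chart = {(i, i + 1): list(LEX[w]) for i, w in enumerate(words)}
            for span in range(2, n + 1):
                for i in range(n - span + 1):
                    j = i + span
                    cell = []
                    for k in range(i + 1, j):
                        for x in chart.get((i, k), []):
                            for y in chart.get((k, j), []):
                                cell.extend(apply_rules(x, y))
                    chart[(i, j)] = cell
            return goal in chart.get((0, n), [])

        print(parses(["john", "loves", "mary"]))  # True under this toy lexicon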

    Extending Stan for Deep Probabilistic Programming

    Stan is a popular declarative probabilistic programming language with a high-level syntax for expressing graphical models and beyond. Stan differs by nature from generative probabilistic programming languages like Church, Anglican, or Pyro. This paper presents a comprehensive compilation scheme to compile any Stan model to a generative language and proves its correctness. This sheds clearer light on the relative expressiveness of different kinds of probabilistic languages and opens the door to combining their mutual strengths. Specifically, we use our compilation scheme to build a compiler from Stan to Pyro and extend Stan with support for explicit variational inference guides and deep probabilistic models. That way, users familiar with Stan get access to new features without having to learn a fundamentally new language. Overall, our paper clarifies the relationship between declarative and generative probabilistic programming languages and is a step towards making deep probabilistic programming easier.
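    As a hand-written illustration of the two styles being bridged (not the actual output of the paper's compiler), the sketch below pairs a Stan-style declarative description of a beta-Bernoulli model with an equivalent generative Pyro program; the model and all names are invented for the example.

        # Stan (declarative), roughly:
        #   parameters { real<lower=0, upper=1> theta; }
        #   model      { theta ~ beta(1, 1);  y ~ bernoulli(theta); }
        #
        # Pyro (generative): a Python function that samples the same joint distribution.
        import torch
        import pyro
        import pyro.distributions as dist
        from pyro.infer import MCMC, NUTS

        def model(y):
            theta = pyro.sample("theta", dist.Beta(1.0, 1.0))
            with pyro.plate("data", len(y)):
                pyro.sample("y", dist.Bernoulli(theta), obs=y)

        # Any generic Pyro inference engine can now run on the model, e.g. NUTS:
        y = torch.tensor([1.0, 0.0, 1.0, 1.0, 0.0, 1.0])
        mcmc = MCMC(NUTS(model), num_samples=500, warmup_steps=200)
        mcmc.run(y)
        print(mcmc.get_samples()["theta"].mean())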

    Maximum stellar mass versus cluster membership number revisited

    We have made a new compilation of observations of maximum stellar mass versus cluster membership number from the literature, which we analyse for consistency with the predictions of a simple random drawing hypothesis for stellar mass selection in clusters. Previously, Weidner and Kroupa have suggested that the maximum stellar mass is lower in low-mass clusters than would be expected on the basis of random drawing, and have pointed out that this could have important implications for steepening the integrated initial mass function of the Galaxy (the IGIMF) at high masses. Our compilation demonstrates how the observed distribution in the plane of maximum stellar mass versus membership number is affected by the method of target selection; in particular, rather low-n clusters with large maximum stellar masses are abundant in observational datasets that specifically seek clusters in the environs of high-mass stars. Although we do not consider our compilation to be either complete or unbiased, we discuss the method by which such data should be statistically analysed. Our very provisional conclusion is that the data do not indicate any striking deviation from the expectations of random drawing. (Comment: 7 pages, 3 Figures; accepted by MNRAS; Reference added)
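    The random drawing hypothesis itself is easy to simulate: draw n stellar masses independently from an assumed IMF and record the most massive one. The sketch below does this with a single power-law IMF; the Salpeter slope (2.35) and the mass limits are standard illustrative assumptions rather than values taken from the paper.

        # Schematic Monte Carlo for the random-drawing expectation of the maximum stellar mass.
        import numpy as np

        def sample_imf(n, alpha=2.35, m_lo=0.1, m_hi=150.0, rng=None):
            """Inverse-transform sampling from dN/dm proportional to m^-alpha on [m_lo, m_hi]."""
            if rng is None:
                rng = np.random.default_rng()
            u = rng.random(n)
            a = 1.0 - alpha
            return (m_lo**a + u * (m_hi**a - m_lo**a)) ** (1.0 / a)

        def max_mass_quantiles(n_members, n_trials=2000, q=(0.16, 0.5, 0.84)):
            """Quantiles of the most massive star drawn in a cluster of n_members stars."""
            rng = np.random.default_rng(0)
            maxima = [sample_imf(n_members, rng=rng).max() for _ in range(n_trials)]
            return np.quantile(maxima, q)

        # Expected m_max band (16th/50th/84th percentiles) versus membership number,
        # against which observed (n, m_max) points can be compared.
        for n in (10, 100, 1000, 10000):
            print(n, max_mass_quantiles(n))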