
    Practical probabilistic programming with monads

    The machine learning community has recently shown a lot of interest in practical probabilistic programming systems that target the problem of Bayesian inference. Such systems come in different forms, but they all express probabilistic models as computational processes using syntax resembling programming languages. In the functional programming community monads are known to offer a convenient and elegant abstraction for programming with probability distributions, but their use is often limited to very simple inference problems. We show that it is possible to use the monad abstraction to construct probabilistic models for machine learning, while still offering good performance of inference in challenging models. We use a GADT as an underlying representation of a probability distribution and apply Sequential Monte Carlo-based methods to achieve efficient inference. We define a formal semantics via measure theory. We demonstrate a clean and elegant implementation that achieves performance comparable with Anglican, a state-of-the-art probabilistic programming system. The first author is supported by EPSRC and the Cambridge Trust. This is the author accepted manuscript. The final version is available from ACM via http://dx.doi.org/10.1145/2804302.280431
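
    The abstract's core move is to represent a probability distribution as a data structure and keep inference as a separate interpreter. Below is a minimal sketch of that shape, assuming a single uniform primitive and a plain forward sampler; the paper's actual GADT and its Sequential Monte Carlo interpreters are considerably richer.

        {-# LANGUAGE GADTs #-}

        import System.Random (randomRIO)

        -- Hypothetical names; not the paper's library, just the shape of the idea.
        data Dist a where
          Pure    :: a -> Dist a
          Bind    :: Dist b -> (b -> Dist a) -> Dist a
          Uniform :: Dist Double                      -- one primitive distribution

        instance Functor Dist where
          fmap f d = Bind d (Pure . f)

        instance Applicative Dist where
          pure = Pure
          df <*> dx = Bind df (\f -> fmap f dx)

        instance Monad Dist where
          (>>=) = Bind

        -- The simplest interpreter: draw one forward sample.
        sample :: Dist a -> IO a
        sample (Pure x)   = pure x
        sample (Bind d k) = sample d >>= sample . k
        sample Uniform    = randomRIO (0, 1)

        -- Example model: a biased coin built on top of the primitive.
        bernoulli :: Double -> Dist Bool
        bernoulli p = fmap (< p) Uniform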

    Layer by layer - Combining Monads

    We develop a method to incrementally construct programming languages. Our approach is categorical: each layer of the language is described as a monad. Our method either (i) concretely builds a distributive law between two monads, i.e. layers of the language, which then provides a monad structure to the composition of layers, or (ii) identifies precisely the algebraic obstacles to the existence of a distributive law and gives a best approximant language. The running example will involve three layers: a basic imperative language enriched first by adding non-determinism and then probabilistic choice. The first extension works seamlessly, but the second encounters an obstacle, which results in a best approximant language structurally very similar to the probabilistic network specification language ProbNetKAT.
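
    For readers who want the concrete shape of case (i): a distributive law dist :: n (m a) -> m (n a) is exactly what is needed to give the composite of two layers a monadic join. A small Haskell sketch of my own with the Maybe (exception) layer, which does distribute over any monad; the abstract's point is that the second extension runs into the absence of such a law.

        import Control.Monad (join)

        -- Maybe (an exception layer) distributes over any monad m.
        distMaybe :: Monad m => Maybe (m a) -> m (Maybe a)
        distMaybe Nothing   = pure Nothing
        distMaybe (Just ma) = fmap Just ma

        -- The join of the composite  m (Maybe _)  induced by the distributive law;
        -- the inner join is Maybe's, the outer join is m's.
        joinComp :: Monad m => m (Maybe (m (Maybe a))) -> m (Maybe a)
        joinComp = join . fmap (fmap join . distMaybe)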

    The Functional Perspective on Advanced Logic Programming

    The basics of logic programming, as embodied by Prolog, are generally well-known in the programming language community. However, more advanced techniques, such as tabling, answer subsumption and probabilistic logic programming, fail to attract the attention of a larger audience. The cause for the community's seemingly limited interest lies with the presentation of these features: the literature frequently focuses on implementations and examples that do little to aid the understanding of non-experts in the field. The key point is that many of these advanced logic programming features can be characterised in more generally known, more accessible terms. In my research I try to reconcile these advanced concepts from logic programming (tabling, answer subsumption and probabilistic programming) with concepts from functional programming (effects, monads and applicative functors).
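
    As a taste of the reframing the abstract advocates, basic Prolog-style search is just the list monad; features such as tabling or answer subsumption then become alternative interpretations of the same monadic structure. A toy example of that first step (my own illustration, not code from this research):

        -- Prolog:  edge(a,b).  edge(b,c).
        --          path(X,Y) :- edge(X,Y).
        --          path(X,Z) :- edge(X,Y), path(Y,Z).
        edge :: String -> [String]
        edge "a" = ["b"]
        edge "b" = ["c"]
        edge _   = []

        -- Non-deterministic search in the list monad.
        path :: String -> [String]
        path x = do
          y <- edge x
          y : path y

        -- ghci> path "a"
        -- ["b","c"]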

    Abstract Hidden Markov Models: a monadic account of quantitative information flow

    Hidden Markov Models, HMM's, are mathematical models of Markov processes with state that is hidden, but from which information can leak. They are typically represented as 3-way joint-probability distributions. We use HMM's as denotations of probabilistic hidden-state sequential programs: for that, we recast them as `abstract' HMM's, computations in the Giry monad $\mathbb{D}$, and we equip them with a partial order of increasing security. However, to encode the monadic type with hiding over some state $\mathcal{X}$ we use $\mathbb{D}\mathcal{X}\to\mathbb{D}^2\mathcal{X}$ rather than the conventional $\mathcal{X}\to\mathbb{D}\mathcal{X}$ that suffices for Markov models whose state is not hidden. We illustrate the $\mathbb{D}\mathcal{X}\to\mathbb{D}^2\mathcal{X}$ construction with a small Haskell prototype. We then present uncertainty measures as a generalisation of the extant diversity of probabilistic entropies, with characteristic analytic properties for them, and show how the new entropies interact with the order of increasing security. Furthermore, we give a `backwards' uncertainty-transformer semantics for HMM's that is dual to the `forwards' abstract HMM's: it is an analogue of the duality between forwards, relational semantics and backwards, predicate-transformer semantics for imperative programs with demonic choice. Finally, we argue that, from this new denotational-semantic viewpoint, one can see that the Dalenius desideratum for statistical databases is actually an issue in compositionality. We propose a means for taking it into account.
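
    The typing discipline described in the abstract can be written down directly; here is a minimal sketch using a finite discrete distribution in place of the full Giry monad (hypothetical names, not necessarily the paper's prototype):

        -- Finite discrete distributions standing in for the Giry monad D.
        newtype Dist x = Dist [(x, Double)]        -- weights assumed to sum to 1

        -- Markov model over visible state:    X -> D X
        type Markov x = x -> Dist x

        -- Abstract HMM over hidden state:     D X -> D (D X)
        -- The outer D is what an observer can learn; the inner D is what remains hidden.
        type AbstractHMM x = Dist x -> Dist (Dist x)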

    Polymonadic Programming

    Monads are a popular tool for the working functional programmer to structure effectful computations. This paper presents polymonads, a generalization of monads. Polymonads give the familiar monadic bind the more general type forall a,b. L a -> (a -> M b) -> N b, to compose computations with three different kinds of effects, rather than just one. Polymonads subsume monads and parameterized monads, and can express other constructions, including precise type-and-effect systems and information flow tracking; more generally, polymonads correspond to Tate's productoid semantic model. We show how to equip a core language (called lambda-PM) with syntactic support for programming with polymonads. Type inference and elaboration in lambda-PM allows programmers to write polymonadic code directly in an ML-like syntax; our algorithms compute principal types and produce elaborated programs wherein the binds appear explicitly. Furthermore, we prove that the elaboration is coherent: no matter which (type-correct) binds are chosen, the elaborated program's semantics will be the same. Pleasingly, the inferred types are easy to read: the polymonad laws justify (sometimes dramatic) simplifications, but with no effect on a type's generality. Comment: In Proceedings MSFP 2014, arXiv:1406.153
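
    The generalised bind from the abstract can be transcribed as a multi-parameter type class; the sketch below shows the interface only, and does not reproduce the paper's lambda-PM calculus, its laws, or its inference algorithm.

        {-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

        -- One bind relating three effect carriers L, M and N.
        class Polymonad l m n where
          pbind :: l a -> (a -> m b) -> n b

        -- Ordinary monads are the degenerate case L = M = N.
        instance Polymonad Maybe Maybe Maybe where
          pbind = (>>=)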

    A semantical approach to equilibria and rationality

    Game theoretic equilibria are mathematical expressions of rationality. Rational agents are used to model not only humans and their software representatives, but also organisms, populations, species and genes, interacting with each other and with the environment. Rational behaviors are achieved not only through conscious reasoning, but also through spontaneous stabilization at equilibrium points. Formal theories of rationality are usually guided by informal intuitions, which are acquired by observing some concrete economic, biological, or network processes. Treating such processes as instances of computation, we reconstruct and refine some basic notions of equilibrium and rationality from some basic structures of computation. It is, of course, well known that equilibria arise as fixed points; the point is that the semantics of fixed-point computation seems to provide novel methods, algebraic and coalgebraic, for reasoning about them. Comment: 18 pages; Proceedings of CALCO 200
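
    The abstract's starting observation, that equilibria arise as fixed points, is easy to see on a toy game: iterate a sequential best-response map until a profile maps to itself; that fixed point is a pure Nash equilibrium. A small illustration of my own, not taken from the paper:

        type Profile = (Int, Int)            -- pure strategies 0 or 1 for two players

        -- A 2x2 coordination game: players are paid only when they match.
        payoff :: Profile -> (Int, Int)
        payoff (a, b) = if a == b then (1, 1) else (0, 0)

        -- Sequential best response: player 1 moves, then player 2 replies.
        step :: Profile -> Profile
        step (a, b) = (a', b')
          where
            a' = br (\x -> fst (payoff (x, b)))
            b' = br (\y -> snd (payoff (a', y)))
            br u = if u 1 >= u 0 then 1 else 0

        -- A fixed point of the best-response map is an equilibrium.
        equilibrium :: Profile -> Profile
        equilibrium p = let p' = step p in if p' == p then p else equilibrium p'

        -- ghci> equilibrium (0, 1)
        -- (1,1)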

    Linear Time Logics - A Coalgebraic Perspective

    We describe a general approach to deriving linear time logics for a wide variety of state-based, quantitative systems, by modelling the latter as coalgebras whose type incorporates both branching behaviour and linear behaviour. Concretely, we define logics whose syntax is determined by the choice of linear behaviour and whose domain of truth values is determined by the choice of branching, and we provide two equivalent semantics for them: a step-wise semantics amenable to automata-based verification, and a path-based semantics akin to those of standard linear time logics. We also provide a semantic characterisation of the associated notion of logical equivalence, and relate it to previously-defined maximal trace semantics for such systems. Instances of our logics support reasoning about the possibility, likelihood or minimal cost of exhibiting a given linear time property. We conclude with a generalisation of the logics, dual in spirit to logics with discounting, which increases their practical appeal in the context of resource-aware computation by incorporating a notion of offsetting. Comment: Major revision of previous version: Sections 4 and 5 generalise the results in the previous version, with new proofs; Section 6 contains new result
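
    Concretely, the modelling recipe is to type a system as a coalgebra x -> b (f x), where b captures branching and f captures linear, step-wise behaviour. A sketch with probabilistic branching and labelled transitions (my own illustration, with hypothetical names):

        -- Branching layer: finitely-supported probabilistic choice.
        newtype Prob x = Prob [(x, Double)]

        -- Linear-behaviour layer: emit a label, move to a next state.
        data Step label x = Step label x

        type Coalgebra b f x = x -> b (f x)

        -- A two-state generative process over the labels "heads"/"tails".
        system :: Coalgebra Prob (Step String) Int
        system 0 = Prob [(Step "heads" 0, 0.5), (Step "tails" 1, 0.5)]
        system _ = Prob [(Step "tails" 1, 1.0)]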

    Selective applicative functors & probabilistic programming

    Master's dissertation in Informatics Engineering. In functional programming, selective applicative functors (SAF) are an abstraction between applicative functors and monads. This abstraction requires all effects to be declared statically, but provides a way to select dynamically which effects to execute. SAF have been shown to be a useful abstraction in several examples, including two industrial case studies, and have been used chiefly for their static analysis capabilities. Because a selective computation exposes information about all of its possible effects and still permits speculative execution, it can be used to describe probabilistic computations instead of monads; in particular, selective functors appear to provide a way to obtain a more efficient implementation of probability distributions than monads. This dissertation addresses a probabilistic interpretation of the Arrow and Selective abstractions in the light of the linear algebra of programming discipline, and explores ways of offering SAF capabilities to probabilistic programming by exposing sampling as a concurrency problem. As a result, it provides a type-safe Haskell matrix library capable of expressing probability distributions and probabilistic computations as typed matrices, and a probabilistic programming eDSL that explores various techniques in order to offer a novel, performant solution to probabilistic functional programming.
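
    For reference, the selective interface the dissertation builds on is the one introduced by Mokhov et al.: both branches of a computation are declared statically, but `select` only runs the function effect when the first computation produces a Left. Below it is paired with a hedged sketch of a finite-distribution instance, derived in the standard way from the underlying monad, to suggest the probabilistic reading; this is an illustration, not the dissertation's library.

        class Applicative f => Selective f where
          select :: f (Either a b) -> f (a -> b) -> f b

        -- A finite discrete distribution.
        newtype Dist a = Dist [(a, Double)]

        instance Functor Dist where
          fmap f (Dist xs) = Dist [(f x, p) | (x, p) <- xs]

        instance Applicative Dist where
          pure x = Dist [(x, 1)]
          Dist fs <*> Dist xs = Dist [(f x, p * q) | (f, p) <- fs, (x, q) <- xs]

        instance Selective Dist where
          select (Dist xs) (Dist fs) =
            Dist ([(b, p)       | (Right b, p) <- xs] ++
                  [(f a, p * q) | (Left a, p) <- xs, (f, q) <- fs])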

    Logical Relations for Monadic Types

    Logical relations and their generalizations are a fundamental tool in proving properties of lambda-calculi, e.g., yielding sound principles for observational equivalence. We propose a natural notion of logical relations able to deal with the monadic types of Moggi's computational lambda-calculus. The treatment is categorical, and is based on notions of subsconing, mono factorization systems, and monad morphisms. Our approach has a number of interesting applications, including cases for lambda-calculi with non-determinism (where being in logical relation means being bisimilar), dynamic name creation, and probabilistic systems. Comment: 83 page
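
    To make the non-determinism remark concrete: a logical relation on base values is lifted through each type constructor, and the lifting for the non-determinism monad is a bisimulation-style condition, which is why being in logical relation there means being bisimilar. A small decidable sketch for finite cases (my own illustration, not the paper's categorical subscone construction):

        type Rel a b = a -> b -> Bool

        -- Lifting through the Maybe (partiality) monad.
        liftMaybe :: Rel a b -> Rel (Maybe a) (Maybe b)
        liftMaybe _ Nothing  Nothing  = True
        liftMaybe r (Just x) (Just y) = r x y
        liftMaybe _ _        _        = False

        -- Lifting through the list (non-determinism) monad: every result on one
        -- side must be matched by a related result on the other, and vice versa.
        liftList :: Rel a b -> Rel [a] [b]
        liftList r xs ys = all (\x -> any (r x) ys) xs
                        && all (\y -> any (\x -> r x y) xs) ys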