
    Hashing-Based Approximate Probabilistic Inference in Hybrid Domains

    In recent years, there has been considerable progress on fast randomized algorithms that approximate probabilistic inference with tight tolerance and confidence guarantees. The idea here is to formulate inference as a counting task over an annotated propositional theory, called weighted model counting (WMC), which can be partitioned into smaller tasks using universal hashing. An inherent limitation of this approach, however, is that it only admits the inference of discrete probability distributions. In this work, we consider the problem of approximating inference tasks for a probability distribution defined over discrete and continuous random variables. Building on a notion called weighted model integration, which is a strict generalization of WMC and is based on annotating Boolean and arithmetic constraints, we show how probabilistic inference in hybrid domains can be put within reach of hashing-based WMC solvers. Empirical evaluations demonstrate the applicability and promise of the proposal.
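
    A minimal, illustrative sketch of the two ingredients the abstract mentions: weighted model counting over a tiny propositional theory, and a random XOR (parity) hash constraint that carves the count into smaller cells. The theory, literal weights, and variable names below are invented for illustration and are not the paper's benchmarks or solver.

```python
# Toy sketch (not the paper's solver): weighted model counting (WMC) over a small
# propositional theory, and a random XOR (parity) hash constraint that restricts
# the count to one hash cell. All formulas, weights, and names are illustrative.
import itertools
import random

VARS = ["a", "b", "c"]

def theory(m):
    # Example theory: (a or b) and (not c or a)
    return (m["a"] or m["b"]) and ((not m["c"]) or m["a"])

# Literal weights: the weight of a model is the product of its literal weights.
W = {"a": (0.7, 0.3), "b": (0.4, 0.6), "c": (0.5, 0.5)}  # (w(x=True), w(x=False))

def weight(m):
    p = 1.0
    for v in VARS:
        p *= W[v][0] if m[v] else W[v][1]
    return p

def wmc(constraint=lambda m: True):
    # Brute-force WMC: sum the weights of all models of the theory that also
    # satisfy the extra constraint (e.g. a hash cell).
    total = 0.0
    for bits in itertools.product([True, False], repeat=len(VARS)):
        m = dict(zip(VARS, bits))
        if theory(m) and constraint(m):
            total += weight(m)
    return total

# A universal hash from {0,1}^n to {0,1}: a random XOR over a subset of variables.
random.seed(0)
subset = [v for v in VARS if random.random() < 0.5]
parity = random.randint(0, 1)

def xor_cell(m):
    return (sum(m[v] for v in subset) % 2) == parity

print("full WMC:                      ", wmc())
print("WMC restricted to one hash cell:", wmc(xor_cell))
```

    Hashing-based solvers repeat this partitioning with enough random XOR constraints that each cell becomes small, then scale a cell's count back up to estimate the total with tolerance and confidence guarantees.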

    Causal Abstraction with Soft Interventions

    Causal abstraction provides a theory describing how several causal models can represent the same system at different levels of detail. Existing theoretical proposals limit the analysis of abstract models to "hard" interventions fixing causal variables to constant values. In this work, we extend causal abstraction to "soft" interventions, which assign possibly non-constant functions to variables without adding new causal connections. Specifically, (i) we generalize τ-abstraction from Beckers and Halpern (2019) to soft interventions, (ii) we propose a further definition of soft abstraction to ensure a unique map ω between soft interventions, and (iii) we prove that our constructive definition of soft abstraction guarantees the intervention map ω has a specific and necessary explicit form.
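
    A toy sketch, under assumed names and equations, of the hard-versus-soft distinction: in a three-variable structural causal model, a hard intervention pins a variable to a constant, while a soft intervention swaps in a new, non-constant mechanism that still uses only the variable's existing parents. This is illustrative only, not the τ-abstraction or ω-map construction from the paper.

```python
# Toy structural causal model Z -> X -> Y (with Z -> Y). Mechanisms and exogenous
# values are made up for illustration.
def simulate(mechanisms, exogenous):
    """Evaluate the variables in topological order Z -> X -> Y."""
    values = {}
    for var in ["Z", "X", "Y"]:
        values[var] = mechanisms[var](values, exogenous)
    return values

base = {
    "Z": lambda v, u: u["uZ"],
    "X": lambda v, u: 2.0 * v["Z"] + u["uX"],
    "Y": lambda v, u: v["X"] + v["Z"] + u["uY"],
}

u = {"uZ": 1.0, "uX": 0.5, "uY": -0.2}

# Hard intervention do(X := 3): X's mechanism becomes a constant.
hard = dict(base, X=lambda v, u: 3.0)

# Soft intervention: X keeps depending only on its existing parent Z,
# but through a different, non-constant function (no new causal connections).
soft = dict(base, X=lambda v, u: 0.5 * v["Z"] + 1.0)

print("observational:", simulate(base, u))
print("hard do(X=3): ", simulate(hard, u))
print("soft on X:    ", simulate(soft, u))
```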

    Learning Likelihoods with Conditional Normalizing Flows

    Normalizing Flows (NFs) are able to model complicated distributions p(y) with strong inter-dimensional correlations and high multimodality by transforming a simple base density p(z) through an invertible neural network under the change-of-variables formula. Such behavior is desirable in multivariate structured prediction tasks, where handcrafted per-pixel loss-based methods inadequately capture strong correlations between output dimensions. We present a study of conditional normalizing flows (CNFs), a class of NFs in which the mapping from base density to output space is conditioned on an input x, to model conditional densities p(y|x). CNFs are efficient in sampling and inference, can be trained with a likelihood-based objective, and, being generative flows, do not suffer from mode collapse or training instabilities. We provide an effective method to train continuous CNFs for binary problems and, in particular, apply these CNFs to super-resolution and vessel segmentation tasks, demonstrating competitive performance on standard benchmark datasets in terms of likelihood and conventional metrics. Comment: 18 pages, 8 tables, 9 figures, preprint.
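
    A minimal sketch of the conditioning idea and the change-of-variables likelihood, using a one-dimensional affine flow with a hand-written conditioner standing in for a neural network; the functions and constants are assumptions, not the paper's super-resolution or segmentation models.

```python
# Minimal sketch (assumed, not the paper's architecture): a 1-D conditional affine
# flow. The shift and log-scale depend on the conditioning input x, and the exact
# conditional log-likelihood follows from the change-of-variables formula:
#   log p(y|x) = log p_Z(z) + log |dz/dy|.
import numpy as np

def conditioner(x):
    # Stand-in for a neural network mapping x to flow parameters (shift, log_scale).
    shift = 0.5 * x
    log_scale = 0.1 * x
    return shift, log_scale

def log_prob(y, x):
    shift, log_scale = conditioner(x)
    z = (y - shift) * np.exp(-log_scale)          # inverse flow: y -> z
    log_pz = -0.5 * (z ** 2 + np.log(2 * np.pi))  # standard normal base density
    return log_pz - log_scale                     # + log|dz/dy| = -log_scale

def sample(x, rng):
    shift, log_scale = conditioner(x)
    z = rng.standard_normal()                     # sample the base density
    return z * np.exp(log_scale) + shift          # forward flow: z -> y

rng = np.random.default_rng(0)
x = 2.0
y = sample(x, rng)
print("sampled y:", y, "log p(y|x):", log_prob(y, x))
```

    Because both sampling and exact likelihood evaluation only require one pass through the (invertible) transformation, such flows can be trained directly by maximizing log p(y|x) on paired data, which is the likelihood-based objective the abstract refers to.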