Extending Stan for Deep Probabilistic Programming
Stan is a popular declarative probabilistic programming language with a
high-level syntax for expressing graphical models and beyond. Stan differs by
nature from generative probabilistic programming languages like Church,
Anglican, or Pyro. This paper presents a comprehensive compilation scheme to
compile any Stan model to a generative language and proves its correctness.
This sheds a clearer light on the relative expressiveness of different kinds of
probabilistic languages and opens the door to combining their mutual strengths.
Specifically, we use our compilation scheme to build a compiler from Stan to
Pyro and extend Stan with support for explicit variational inference guides and
deep probabilistic models. That way, users familiar with Stan get access to new
features without having to learn a fundamentally new language. Overall, our
paper clarifies the relationship between declarative and generative
probabilistic programming languages and is a step towards making deep
probabilistic programming easier.
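The declarative/generative distinction this abstract draws can be illustrated with a plain-Python sketch (hedged: this is neither Stan nor Pyro code, and not the paper's compiler output) on a one-parameter coin model:

```python
import math
import random

# Toy model: theta ~ Beta(1, 1); y_i ~ Bernoulli(theta).
# All names are illustrative.

# Declarative, Stan-like view: the model is a log joint density
# over parameters, to be handed to an inference engine.
def log_joint(theta, y):
    if not 0.0 < theta < 1.0:
        return float("-inf")
    # Beta(1, 1) is uniform on (0, 1), so the log prior contributes 0.
    return sum(math.log(theta) if yi else math.log(1.0 - theta) for yi in y)

# Generative, Pyro-like view: the model is a sampler run forward,
# which inference can intercept and condition.
def generate(n, rng=None):
    rng = rng or random.Random(0)
    theta = rng.random()  # Beta(1, 1) equals Uniform(0, 1)
    y = [1 if rng.random() < theta else 0 for _ in range(n)]
    return theta, y
```

A compilation scheme of the kind the paper describes maps the first representation onto the second, so that generative-language machinery (such as explicit variational guides) becomes available to the declarative model.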
Ultrasound Nerve Segmentation Using Deep Probabilistic Programming
Deep probabilistic programming combines the strengths of deep learning with probabilistic modeling for efficient and flexible computation in practice. As the field is still evolving, only a few expressive programming languages exist for uncertainty management. This paper discusses an application to the analysis of ultrasound nerve segmentation in biomedical images. Our method uses the probabilistic programming language Edward with the U-Net model and generative adversarial networks under different optimizers. The segmentation process achieved the lowest Dice loss (≈0.54) and the highest accuracy (0.99) with the Adam optimizer in the U-Net model, with the least time consumption compared to other optimizers. The lowest generative network loss obtained in the generative adversarial network model was 0.69, also with the Adam optimizer. The Dice loss, accuracy, time consumption, and output image quality of the results demonstrate the applicability of deep probabilistic programming in the long run. We therefore further propose a neuroscience decision support system based on this approach.
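For concreteness, the Dice loss the abstract reports can be sketched as a minimal plain-Python function on flat binary masks (names are illustrative; this is not the paper's Edward/U-Net code):

```python
def dice_loss(pred, target, eps=1e-7):
    """Dice loss = 1 - 2|A ∩ B| / (|A| + |B|) on flat 0/1 mask lists.

    eps avoids division by zero when both masks are empty.
    """
    inter = sum(p * t for p, t in zip(pred, target))  # overlap |A ∩ B|
    denom = sum(pred) + sum(target)                   # |A| + |B|
    return 1.0 - (2.0 * inter + eps) / (denom + eps)
```

A perfect prediction gives a loss of 0, and disjoint masks give a loss near 1, which is why a lower Dice loss indicates better segmentation overlap.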
Evaluating probabilistic programming languages for simulating quantum correlations
This article explores how probabilistic programming can be used to simulate
quantum correlations in an EPR experimental setting. Probabilistic programs are
based on standard probability which cannot produce quantum correlations. In
order to address this limitation, a hypergraph formalism was programmed that
expresses both the measurement contexts of the EPR experimental design and the
associated constraints. Four contemporary open source probabilistic
programming frameworks were used to simulate an EPR experiment in order to shed
light on their relative effectiveness from both qualitative and quantitative
dimensions. We found that all four probabilistic languages successfully
simulated quantum correlations. Detailed analysis revealed that no language was
clearly superior across all dimensions; however, the comparison does highlight
aspects that can be considered when using probabilistic programs to simulate
experiments in quantum physics.
Comment: 24 pages, 8 figures; code is available at https://github.com/askoj/bell-ppl
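The classical limitation mentioned above can be checked directly with a short Python sketch (not taken from the paper's repository): enumerating all local deterministic strategies shows that the CHSH expression never exceeds the classical bound of 2, whereas quantum mechanics allows values up to 2√2 ≈ 2.83.

```python
from itertools import product

def classical_chsh_max():
    """Maximum of the CHSH expression over local deterministic strategies.

    a0, a1 are Alice's ±1 outputs for her two measurement settings;
    b0, b1 are Bob's. Any local hidden-variable model is a mixture of
    these strategies, so its CHSH value cannot exceed this maximum.
    """
    best = float("-inf")
    for a0, a1, b0, b1 in product((-1, 1), repeat=4):
        s = a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1
        best = max(best, s)
    return best
```

Since standard probabilistic programs sample from exactly such mixtures, extra structure (here, the hypergraph formalism) is needed to represent the quantum measurement contexts.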
Tea: A High-level Language and Runtime System for Automating Statistical Analysis
Though statistical analyses are centered on research questions and
hypotheses, current statistical analysis tools are not. Users must first
translate their hypotheses into specific statistical tests and then perform API
calls with functions and parameters. To do so accurately requires that users
have statistical expertise. To lower this barrier to valid, replicable
statistical analysis, we introduce Tea, a high-level declarative language and
runtime system. In Tea, users express their study design, any parametric
assumptions, and their hypotheses. Tea compiles these high-level specifications
into a constraint satisfaction problem that determines the set of valid
statistical tests, and then executes them to test the hypothesis. We evaluate
Tea using a suite of statistical analyses drawn from popular tutorials. We show
that Tea generally matches the choices of experts while automatically switching
to non-parametric tests when parametric assumptions are not met. We simulate
the effect of mistakes made by non-expert users and show that Tea automatically
avoids both false negatives and false positives that could be produced by the
application of incorrect statistical tests.
Comment: 11 pages
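A toy sketch of the idea, with hypothetical names that do not reflect Tea's actual specification language: the user declares properties of the study, and a small rule check selects a valid test, falling back to a non-parametric one when a parametric assumption fails.

```python
# Hypothetical illustration only; Tea compiles richer specifications
# into a full constraint satisfaction problem.
def select_test(study):
    outcome = study["outcome_type"]              # e.g. "continuous" or "ordinal"
    normal = study.get("normality_holds", False) # parametric assumption check
    if outcome == "continuous" and normal:
        return "students_t_test"                 # parametric assumptions met
    return "mann_whitney_u"                      # non-parametric fallback
```

The point of the design is that the user never names a test directly: the choice follows from the declared study properties, which is what lets the system avoid invalid test applications.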
Bayesian Optimization for Probabilistic Programs
We present the first general purpose framework for marginal maximum a
posteriori estimation of probabilistic program variables. By using a series of
code transformations, the evidence of any probabilistic program, and therefore
of any graphical model, can be optimized with respect to an arbitrary subset of
its sampled variables. To carry out this optimization, we develop the first
Bayesian optimization package to directly exploit the source code of its
target, leading to innovations in problem-independent hyperpriors, unbounded
optimization, and implicit constraint satisfaction; delivering significant
performance improvements over prominent existing packages. We present
applications of our method to a number of tasks including engineering design
and parameter optimization.
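As a toy sketch of marginal maximum a posteriori estimation (plain Python, with grid search standing in for the paper's Bayesian optimization), one can maximize a Monte Carlo estimate of the evidence p(y | θ) over θ while a nuisance variable z is marginalized out:

```python
import math
import random

# Toy model: z ~ Normal(0, 1) is a nuisance variable, y ~ Normal(theta + z, 1).
# Marginal MAP: pick theta maximizing p(y | theta) = E_z[ p(y | theta, z) ].
def evidence_estimate(theta, y, n=4000, seed=0):
    rng = random.Random(seed)  # common random numbers across theta values
    total = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        total += math.exp(-0.5 * (y - theta - z) ** 2) / math.sqrt(2 * math.pi)
    return total / n

def marginal_map(y, candidates):
    # Grid search over candidate values stands in here for the
    # Bayesian optimization the paper performs on the evidence.
    return max(candidates, key=lambda t: evidence_estimate(t, y))
```

The same structure underlies the paper's approach: the objective is itself an estimator produced by running the probabilistic program, which is why the optimizer can exploit the program's source code.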