Delayed Sampling and Automatic Rao-Blackwellization of Probabilistic Programs
We introduce a dynamic mechanism for the solution of analytically-tractable
substructure in probabilistic programs, using conjugate priors and affine
transformations to reduce variance in Monte Carlo estimators. For inference
with Sequential Monte Carlo, this automatically yields improvements such as
locally-optimal proposals and Rao-Blackwellization. The mechanism maintains a
directed graph alongside the running program that evolves dynamically as
operations are triggered upon it. Nodes of the graph represent random
variables, edges the analytically-tractable relationships between them. Random
variables remain in the graph for as long as possible, to be sampled only when
they are used by the program in a way that cannot be resolved analytically. In
the meantime, they are conditioned on as many observations as possible. We
demonstrate the mechanism with a few pedagogical examples, as well as a
linear-nonlinear state-space model with simulated data, and an epidemiological
model with real data of a dengue outbreak in Micronesia. In all cases one or
more variables are automatically marginalized out to significantly reduce
variance in estimates of the marginal likelihood, in the final case
facilitating a random-weight or pseudo-marginal-type importance sampler for
parameter estimation. We have implemented the approach in Anglican and a new
probabilistic programming language called Birch. (13 pages, 4 figures)
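The mechanism described above can be illustrated with a minimal sketch: a random variable is kept in symbolic (marginalized) form, observations of conjugate children are absorbed analytically, and a concrete value is sampled only when the program demands one. This is not the Birch or Anglican implementation; the `NormalNode` class, its method names, and the Normal-Normal conjugacy example are hypothetical simplifications for a single graph node.

```python
import random, math

class NormalNode:
    """A random variable kept symbolic (marginalized) as long as possible."""
    def __init__(self, mean, var):
        self.mean, self.var = mean, var   # current marginal N(mean, var)
        self.value = None                 # None while still marginalized

    def observe_child(self, y, noise_var):
        """Condition analytically on an observation y ~ N(self, noise_var)
        (conjugate update) and return the marginal log-likelihood of y,
        which is exactly the Rao-Blackwellized weight contribution."""
        total = self.var + noise_var
        loglik = -0.5 * (math.log(2 * math.pi * total)
                         + (y - self.mean) ** 2 / total)
        gain = self.var / total           # Kalman-style gain
        self.mean += gain * (y - self.mean)
        self.var *= (1 - gain)
        return loglik

    def realize(self):
        """Sample only when the program uses the variable in a way that
        cannot be resolved analytically."""
        if self.value is None:
            self.value = random.gauss(self.mean, math.sqrt(self.var))
        return self.value

# x stays symbolic while two observations are absorbed analytically;
# the returned log-weights use the exact marginal likelihood, reducing
# variance relative to sampling x up front and weighting afterwards.
x = NormalNode(mean=0.0, var=1.0)
logw = x.observe_child(0.8, noise_var=0.5) + x.observe_child(1.1, noise_var=0.5)
print(round(x.mean, 3), round(x.var, 3))  # posterior: 0.76 0.2
value = x.realize()  # drawn from the posterior, not the prior
```

In a full system the graph tracks many such nodes and the affine relationships between them; here the variance-reduction effect is visible in that `logw` is computed from the closed-form marginal of the observations rather than from a Monte Carlo draw of `x`.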
Verifying Performance Properties of Probabilistic Inference
In this extended abstract, we discuss the opportunity to formally verify that
inference systems for probabilistic programming guarantee good performance. In
particular, we focus on hybrid inference systems that combine exact and
approximate inference to try to exploit the advantages of each. Their
performance depends critically on a) the division between exact and approximate
inference, and b) the computational resources consumed by exact inference.
We describe several projects in this direction. Semi-symbolic Inference (SSI)
is a type of hybrid inference system that provides limited guarantees by
construction on the exact/approximate division. In addition to these limited
guarantees, we also describe ongoing work to extend guarantees to a more
complex class of programs, requiring a program analysis to ensure the
guarantees. Finally, we also describe work on verifying that inference systems
using delayed sampling -- another type of hybrid inference -- execute in
bounded memory. Together, these projects show that verification can deliver the
performance guarantees that probabilistic programming languages need.
Ultrasound Nerve Segmentation Using Deep Probabilistic Programming
Deep probabilistic programming combines the strengths of deep learning with probabilistic modeling for efficient and flexible computation in practice. Being an evolving field, it offers only a few expressive programming languages for uncertainty management. This paper discusses an application to ultrasound nerve segmentation in biomedical images. Our method uses the probabilistic programming language Edward with the U-Net model and generative adversarial networks under different optimizers. The segmentation process showed the lowest Dice loss (0.54) and the highest accuracy (0.99) with the Adam optimizer in the U-Net model, with the least time consumption compared to other optimizers. The smallest generator loss achieved in the generative adversarial network model was 0.69, also with the Adam optimizer. The Dice loss, accuracy, time consumption, and output image quality in the results show the applicability of deep probabilistic programming in the long run. Thus, we further propose a neuroscience decision support system based on the proposed approach.