Generating and Sampling Orbits for Lifted Probabilistic Inference
A key goal in the design of probabilistic inference algorithms is identifying
and exploiting properties of the distribution that make inference tractable.
Lifted inference algorithms identify symmetry as a property that enables
efficient inference and seek to scale with the degree of symmetry of a
probability model. A limitation of existing exact lifted inference techniques
is that they do not apply to non-relational representations like factor graphs.
In this work we provide the first example of an exact lifted inference
algorithm for arbitrary discrete factor graphs. In addition we describe a
lifted Markov chain Monte Carlo algorithm that provably mixes rapidly in the
degree of symmetry of the distribution.
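As a toy illustration of exploiting symmetry (a sketch under strong assumptions, not the paper's algorithm): if a factor graph's potentials depend only on the number of active binary variables, then every variable permutation is a symmetry, the orbits of assignments are exactly the Hamming-weight classes, and one can sample exactly by choosing an orbit in proportion to its total mass and then a uniform representative. The potential `weight` below is hypothetical.

```python
import itertools
import math
import random

# Hypothetical symmetric potential: the unnormalized weight of an assignment
# depends only on its Hamming weight, so the full symmetric group S_n acts on
# assignments and each orbit is a weight class.
def weight(assignment):
    k = sum(assignment)
    return math.exp(-0.5 * (k - 2) ** 2)  # illustrative choice of potential

n = 6
# Orbit k contains C(n, k) assignments, all with equal probability.
orbit_mass = {k: math.comb(n, k) * weight([1] * k + [0] * (n - k))
              for k in range(n + 1)}
Z = sum(orbit_mass.values())  # partition function, computed over n+1 orbits

def sample_assignment(rng):
    # First sample an orbit by its total mass, then a uniform representative.
    r = rng.random() * Z
    for k, m in orbit_mass.items():
        r -= m
        if r <= 0:
            break
    bits = [1] * k + [0] * (n - k)
    rng.shuffle(bits)  # uniform element of the orbit
    return bits
```

Enumerating n + 1 orbits replaces enumerating 2^n assignments, which is the kind of saving lifted inference aims to scale with.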
Probabilistic Program Abstractions
Abstraction is a fundamental tool for reasoning about complex systems.
Program abstraction has been utilized to great effect for analyzing
deterministic programs. At the heart of program abstraction is the relationship
between a concrete program, which is difficult to analyze, and an abstract
program, which is more tractable. Program abstractions, however, are typically
not probabilistic. We generalize non-deterministic program abstractions to
probabilistic program abstractions by explicitly quantifying the
non-deterministic choices. Our framework upgrades key definitions and
properties of abstractions to the probabilistic context. We also discuss
preliminary ideas for performing inference on probabilistic abstractions and
general probabilistic programs.
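As a minimal sketch of quantifying non-deterministic choices (hypothetical names, not the paper's formalism): a branch that a non-deterministic abstraction leaves open can instead be given an explicit Bernoulli parameter p, turning the abstract program into a probabilistic one.

```python
import random

# Concrete program: deterministic, but potentially hard to analyze.
def concrete(x):
    return 1 if x % 2 == 0 else 0

# Non-deterministic abstraction: the branch condition is abstracted away,
# so the abstract program may take either branch.
def nondet_abstract(choose):
    return 1 if choose() else 0

# Probabilistic abstraction: explicitly quantify the non-deterministic
# choice with a Bernoulli parameter p.
def prob_abstract(p, rng):
    return 1 if rng.random() < p else 0

# Helper to estimate the abstraction's output distribution empirically.
def empirical_mean(f, trials):
    return sum(f() for _ in range(trials)) / trials
```

With p = 0.5, matching the fraction of even inputs under a uniform input distribution, the probabilistic abstraction's output distribution matches the concrete program's.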
Symbolic Exact Inference for Discrete Probabilistic Programs
The computational burden of probabilistic inference remains a hurdle for
applying probabilistic programming languages to practical problems of interest.
In this work, we provide a semantic and algorithmic foundation for efficient
exact inference on discrete-valued finite-domain imperative probabilistic
programs. We leverage and generalize efficient inference procedures for
Bayesian networks, which exploit the structure of the network to decompose the
inference task, thereby avoiding full path enumeration. To do this, we first
compile probabilistic programs to a symbolic representation. Then we adapt
techniques from the probabilistic logic programming and artificial intelligence
communities in order to perform inference on the symbolic representation. We
formalize our approach, prove it sound, and experimentally validate it against
existing exact and approximate inference techniques. We show that our inference
approach is competitive with inference procedures specialized for Bayesian
networks, thereby expanding the class of probabilistic programs that can be
practically analyzed.
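A miniature version of the pipeline described above (illustrative only; the actual system compiles to decision diagrams rather than enumerating models): a two-flip program compiles to the Boolean formula x ∨ y with literal weights taken from the flip parameters, and exact inference becomes weighted model counting.

```python
import itertools

# A tiny discrete program:  x ~ flip(0.4); y ~ flip(0.7); return x or y
# Its symbolic compilation is the formula phi(x, y) = x ∨ y, with literal
# weights given by the flip parameters.
weights = {"x": 0.4, "y": 0.7}

def phi(model):
    return model["x"] or model["y"]

def wmc(formula, weights):
    """Weighted model count by explicit enumeration (for clarity only)."""
    total = 0.0
    names = sorted(weights)
    for bits in itertools.product([False, True], repeat=len(names)):
        model = dict(zip(names, bits))
        if formula(model):
            w = 1.0
            for v in names:
                w *= weights[v] if model[v] else 1.0 - weights[v]
            total += w
    return total
```

Here `wmc` enumerates all models for clarity; the point of the symbolic compilation is precisely to avoid this enumeration by exploiting the program's structure.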
Logical Abstractions for Noisy Variational Quantum Algorithm Simulation
Due to the unreliability and limited capacity of existing quantum computer
prototypes, quantum circuit simulation continues to be a vital tool for
validating next generation quantum computers and for studying variational
quantum algorithms, which are among the leading candidates for useful quantum
computation. Existing quantum circuit simulators do not address the common
traits of variational algorithms, namely: 1) their ability to work with noisy
qubits and operations, 2) their repeated execution of the same circuits but
with different parameters, and 3) the fact that they sample from circuit final
wavefunctions to drive a classical optimization routine. We present a quantum
circuit simulation toolchain based on logical abstractions targeted for
simulating variational algorithms. Our proposed toolchain encodes quantum
amplitudes and noise probabilities in a probabilistic graphical model, and it
compiles the circuits to logical formulas that support efficient repeated
simulation of and sampling from quantum circuits for different parameters.
Compared to state-of-the-art state vector and density matrix quantum circuit
simulators, our simulation approach offers greater performance when sampling
from noisy circuits with at least 8 to 20 qubits and with around 12
operations on each qubit, making the approach ideal for simulating near-term
variational quantum algorithms. And for simulating noise-free shallow quantum
circuits with 32 qubits, our simulation approach offers a reduction
in sampling cost versus quantum circuit simulation techniques based on tensor
network contraction.
Comment: ASPLOS '21, April 19-23, 2021, Virtual, US
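For scale, the repeated-sampling pattern that variational algorithms exhibit can be sketched with a one-qubit statevector simulation (this is the baseline style of simulator, not the paper's graphical-model encoding; all names are illustrative).

```python
import math
import random

# One-qubit variational circuit RY(theta)|0>, whose amplitudes are real:
# (cos(theta/2), sin(theta/2)).
def ry_state(theta):
    return (math.cos(theta / 2), math.sin(theta / 2))

# Sample a measurement outcome from the final wavefunction.
def sample_bit(theta, rng):
    a0, a1 = ry_state(theta)
    return 1 if rng.random() < a1 * a1 else 0

# Variational loop pattern: re-run the same circuit with different
# parameters, sampling each time to estimate an outcome probability.
def estimate_p1(theta, shots, rng):
    return sum(sample_bit(theta, rng) for _ in range(shots)) / shots
```

A state-vector simulator repeats all of this work per parameter setting; the toolchain described above instead compiles the circuit once to a logical formula that is cheap to re-evaluate and re-sample.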
Type Prediction With Program Decomposition and Fill-in-the-Type Training
TypeScript and Python are two programming languages that support optional
type annotations, which are useful but tedious to introduce and maintain. This
has motivated automated type prediction: given an untyped program, produce a
well-typed output program. Large language models (LLMs) are promising for type
prediction, but there are challenges: fill-in-the-middle performs poorly,
programs may not fit into the context window, generated types may not type
check, and it is difficult to measure how well-typed the output program is. We
address these challenges by building OpenTau, a search-based approach for type
prediction that leverages large language models. We propose a new metric for
type prediction quality, give a tree-based program decomposition that searches
a space of generated types, and present fill-in-the-type fine-tuning for LLMs.
We evaluate our work with a new dataset for TypeScript type prediction, and
show that 47.4% of files type check (14.5% absolute improvement) with an
overall rate of 3.3 type errors per file. All code, data, and models are
available at: https://github.com/GammaTauAI/opentau
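The two headline metrics above (the fraction of files that type check, and type errors per file) can be computed from per-file error counts as reported by a type checker; this is a hypothetical sketch, not OpenTau's actual evaluation harness.

```python
# Given a list of type-error counts, one per generated file (e.g. as a
# TypeScript type checker would report), compute the two metrics.
def type_check_rate(errors_per_file):
    """Fraction of files with zero type errors."""
    clean = sum(1 for e in errors_per_file if e == 0)
    return clean / len(errors_per_file)

def mean_errors(errors_per_file):
    """Average number of type errors per file."""
    return sum(errors_per_file) / len(errors_per_file)
```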
Model Checking Finite-Horizon Markov Chains with Probabilistic Inference
We revisit the symbolic verification of Markov chains with respect to finite
horizon reachability properties. The prevalent approach iteratively computes
step-bounded state reachability probabilities. By contrast, recent advances in
probabilistic inference suggest symbolically representing all horizon-length
paths through the Markov chain. We ask whether this perspective advances the
state-of-the-art in probabilistic model checking. First, we formally describe
both approaches in order to highlight their key differences. Then, using these
insights we develop Rubicon, a tool that transpiles Prism models to the
probabilistic inference tool Dice. Finally, we demonstrate better scalability
compared to probabilistic model checkers on selected benchmarks. All together,
our results suggest that probabilistic inference is a valuable addition to the
probabilistic model checking portfolio -- with Rubicon as a first step towards
integrating both perspectives.
Comment: Technical Report. Accepted at CAV 202
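The prevalent iterative approach described above can be sketched directly (a toy illustration, not Prism or Rubicon code): step-bounded reachability probabilities are computed by repeated backward matrix-vector products over the transition matrix.

```python
# Finite-horizon reachability for a small Markov chain given as a
# row-stochastic transition matrix P (plain nested lists).
def bounded_reachability(P, target, horizon):
    n = len(P)
    # prob[s] = P(reach target within k steps from s); base case k = 0.
    prob = [1.0 if s in target else 0.0 for s in range(n)]
    for _ in range(horizon):
        # Target states stay at 1; others take one expected step backward.
        prob = [1.0 if s in target else
                sum(P[s][t] * prob[t] for t in range(n))
                for s in range(n)]
    return prob
```

The inference-based alternative instead represents all horizon-length paths symbolically at once, rather than iterating this recurrence step by step.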
Exploiting Program Structure for Scaling Probabilistic Programming
Probabilistic modeling and reasoning are central tasks in artificial intelligence and machine learning. A probabilistic model is a rough description of the world: the model-builder attempts to capture as much detail about the world’s complexities as she can, and when no more detail can be given the rest is left as probabilistic uncertainty. Once constructed, the goal of a model is to perform automated inference: compute the probability that some particular fact is true about the world. It is natural for the model-builder to want a flexible, expressive language – the world is a complex thing to describe – and over time this has led to a trend of increasingly powerful modeling languages. This trend is taken to its apex by probabilistic programming languages (PPLs), which enable modelers to specify probabilistic models using the facilities of a full programming language. However, this expressivity comes at a cost: the computational cost of inference is in direct tension with the flexibility of the modeling language, and so it becomes increasingly difficult to design automated inference algorithms that scale to the kinds of systems that model builders want to create.

This thesis focuses on the central question: how can we design effective probabilistic programming languages that profitably trade off expressivity and tractability for inference? The approach taken here is first to identify and exploit important structure that a probabilistic program may possess. The kinds of structure considered here are discrete program structure and symmetry. Programs are heterogeneous objects, so different parts of a program may exhibit different kinds of structure; in the second part of the thesis I show how to decompose heterogeneous probabilistic program inference using a notion of program abstraction.

These contributions enable new applications of probabilistic programs in domains such as text analysis, verification of probabilistic systems, and classical simulation of quantum algorithms.
Probabilistic Program Abstractions
Abstraction is a fundamental tool for reasoning about a complex system. Program abstraction has been utilized to great effect for analyzing deterministic programs. At the heart of a program abstraction is a connection between the abstract program, which is simple to analyze, and the concrete program, which may be extremely complex. Program abstractions, however, are typically not probabilistic. In this thesis I generalize a particular class of non-deterministic program abstractions known as sound over-approximations to the probabilistic context. Sound over-approximations are a family of abstract programs which are guaranteed to contain the original program as a subset of their behavior. This thesis shows that when imbued with a probabilistic semantics, sound over-approximations define a family of probabilistic programs which capture key properties of the original program. It then introduces a mechanism for generating sound probabilistic over-approximations as a generalization of a well-known program abstraction technique known as predicate abstraction. Finally, the problems of inference and learning in the context of probabilistic program abstractions are briefly described.
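A minimal sketch of the generalization described above (a hypothetical example, not the thesis's formal construction): abstract an integer state by the single predicate b := (x > 0) under the concrete update x := x - 1. Where the abstract successor is unknown, a sound over-approximation takes a non-deterministic choice; the probabilistic abstraction quantifies that choice with a parameter p.

```python
import random

# Concrete update on the full integer state.
def concrete_step(x):
    return x - 1

# Abstract update on the predicate b := (x > 0), with the unknown branch
# quantified by a Bernoulli parameter p (hypothetical illustration).
def abstract_step(b, p, rng):
    if not b:
        return False          # x <= 0 implies x - 1 <= 0: determined
    return rng.random() < p   # x > 0: x - 1 may or may not remain > 0

# Soundness in the over-approximation sense: any 0 < p < 1 retains both
# branches, so every concrete behavior is a possible abstract behavior.
```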