Probabilistic Inference Modulo Theories
We present SGDPLL(T), an algorithm that solves (among many other problems)
probabilistic inference modulo theories, that is, inference problems over
probabilistic models defined via a logic theory provided as a parameter
(currently, propositional, equalities on discrete sorts, and inequalities, more
specifically difference arithmetic, on bounded integers). While many solutions
to probabilistic inference over logic representations have been proposed,
SGDPLL(T) is simultaneously (1) lifted, (2) exact and (3) modulo theories, that
is, parameterized by a background logic theory. This offers a foundation for
extending it to rich logic languages such as data structures and relational
data. By lifted, we mean algorithms whose complexity is constant in the domain size
(the number of values that variables can take). We also detail a solver for
summations with difference arithmetic and show experimental results from a
scenario in which SGDPLL(T) is much faster than a state-of-the-art
probabilistic solver.
Comment: Submitted to the StarAI-16 workshop as a closely revised version of
the IJCAI-16 paper.
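The lifted-summation idea behind the solver for difference arithmetic can be illustrated with a minimal, hypothetical sketch (this is not the SGDPLL(T) implementation; `lifted_sum` and its arguments are made up for illustration): a factor that is piecewise constant under a constraint such as x < y can be summed over a bounded integer interval by counting the satisfying values in closed form, so the cost does not grow with the domain size.

```python
def lifted_sum(lo, hi, y, val_if_true, val_if_false):
    """Sum over x in [lo, hi] of (val_if_true if x < y else val_if_false),
    computed in closed form: count satisfying assignments instead of
    enumerating them, so the cost is independent of hi - lo."""
    n_total = hi - lo + 1
    # number of x in [lo, hi] with x < y
    n_true = max(0, min(hi, y - 1) - lo + 1)
    return n_true * val_if_true + (n_total - n_true) * val_if_false

# sanity check against explicit (non-lifted) enumeration on a small domain
lo, hi, y = 0, 9, 4
brute = sum(0.8 if x < y else 0.2 for x in range(lo, hi + 1))
assert abs(lifted_sum(lo, hi, y, 0.8, 0.2) - brute) < 1e-9
```

The enumeration takes O(hi - lo) time, while the closed form is O(1); that gap is what "constant complexity in the domain size" buys.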
Exact Inference for Relational Graphical Models with Interpreted Functions: Lifted Probabilistic Inference Modulo Theories
Probabilistic Inference Modulo Theories (PIMT) is a recent framework that
expands exact inference on graphical models to use richer languages that
include arithmetic, equalities, and inequalities on both integers and real
numbers. In this paper, we expand PIMT to a lifted version that also processes
random functions and relations. This enhancement is achieved by adapting
Inversion, a method from Lifted First-Order Probabilistic Inference literature,
to also be modulo theories. This results in the first algorithm for exact
probabilistic inference that efficiently and simultaneously exploits random
relations and functions, arithmetic, equalities, and inequalities.
Comment: Appeared in the Uncertainty in Artificial Intelligence Conference,
August 2017.
Structured Learning Modulo Theories
Modelling problems containing a mixture of Boolean and numerical variables is
a long-standing interest in Artificial Intelligence. However, performing
inference and learning in hybrid domains is a particularly daunting task. The
ability to model such domains is crucial in "learning to design" tasks,
that is, learning applications where the goal is to learn from examples how to
perform automatic {\em de novo} design of novel objects. In this paper we
present Structured Learning Modulo Theories, a max-margin approach for learning
in hybrid domains based on Satisfiability Modulo Theories, which makes it
possible to combine Boolean reasoning and optimization over continuous linear
arithmetic
constraints. The main idea is to leverage a state-of-the-art generalized
Satisfiability Modulo Theory solver for implementing the inference and
separation oracles of Structured Output SVMs. We validate our method on
artificial and real-world scenarios.
Comment: 46 pages, 11 figures, submitted to the Artificial Intelligence
Journal Special Issue on Combining Constraint Solving with Mining and
Learning.
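The oracle-based learning loop described in this abstract can be sketched, under heavy simplification, as follows (a brute-force argmax over a toy two-label output space stands in for the generalized SMT solver, and a structured-perceptron update stands in for the full max-margin machinery; names like `separation_oracle` are illustrative, not the paper's API):

```python
def features(x, y):
    # joint feature map for a toy problem: x is a float, y a binary label
    return [x * y, y]

def dot(w, f):
    return sum(wi * fi for wi, fi in zip(w, f))

def separation_oracle(w, x, y_true, outputs):
    # most-violated constraint: argmax_y  loss(y_true, y) + w . f(x, y)
    loss = lambda y: 0.0 if y == y_true else 1.0
    return max(outputs, key=lambda y: loss(y) + dot(w, features(x, y)))

def train(data, outputs, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, y_true in data:
            y_hat = separation_oracle(w, x, y_true, outputs)
            if y_hat != y_true:  # perceptron-style weight update
                ft, fh = features(x, y_true), features(x, y_hat)
                w = [wi + lr * (a - b) for wi, a, b in zip(w, ft, fh)]
    return w

data = [(1.0, 1), (-1.0, 0), (2.0, 1), (-2.0, 0)]
w = train(data, outputs=[0, 1])
predict = lambda x: max([0, 1], key=lambda y: dot(w, features(x, y)))
```

In the paper's setting the output space is a mixed Boolean-numerical structure and the argmax is delegated to an optimization-capable SMT solver; the loop structure, however, is the same.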
Hybrid SRL with Optimization Modulo Theories
Generally speaking, the goal of constructive learning can be seen as: given
an example set of structured objects, generate novel objects with similar
properties. From a statistical-relational learning (SRL) viewpoint, the task
can be interpreted as a constraint satisfaction problem, i.e. the generated
objects must obey a set of soft constraints, whose weights are estimated from
the data. Traditional SRL approaches rely on (finite) First-Order Logic (FOL)
as a description language, and on MAX-SAT solvers to perform inference. Alas,
FOL is unsuited for constructive problems where the objects contain a mixture
of Boolean and numerical variables. It is in fact difficult to express, e.g.,
linear arithmetic constraints within the language of FOL. In this paper we
propose a novel class of hybrid SRL methods that rely on Satisfiability Modulo
Theories, an alternative class of formal languages that allow one to describe,
and reason over, mixed Boolean-numerical objects and constraints. The
resulting methods, which we call Learning Modulo Theories, are formulated
within the structured output SVM framework, and employ a weighted SMT solver
as an optimization oracle to perform efficient inference and discriminative
max-margin weight learning. We also present a few examples of constructive
learning applications enabled by our method.
Hybrid Probabilistic Inference with Logical Constraints: Tractability and Message Passing
Weighted model integration (WMI) is a very appealing framework for
probabilistic inference: it makes it possible to express the complex
dependencies of real-world hybrid scenarios, where variables are heterogeneous
in nature (both continuous and discrete), via the language of Satisfiability
Modulo Theories (SMT), and to compute probabilistic queries with arbitrarily
complex logical constraints. Recent work has shown WMI inference to be
reducible to a
model integration (MI) problem, under some assumptions, thus effectively
allowing hybrid probabilistic reasoning by volume computations. In this paper,
we introduce a novel formulation of MI via a message passing scheme that
efficiently computes the marginal densities and statistical moments of all
the variables in linear time. As such, we are able to amortize inference for
arbitrarily rich MI queries when they conform to the problem structure, here
represented as the primal graph associated to the SMT formula. Furthermore, we
theoretically trace the tractability boundaries of exact MI. Indeed, we prove
that the structural requirements on the primal graph that make our MI
algorithm tractable - bounded diameter and treewidth - are not only
sufficient, but also necessary for tractable inference via MI.
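The reduction of WMI to volume computation can be illustrated on a toy scale (this is a hypothetical two-variable example, not the paper's message-passing algorithm): with uniform weights, a query probability is a ratio of volumes of SMT(LRA)-definable regions. Here the support is the unit square 0 <= x, y <= 1 and the query is x + y <= 1, whose region is a triangle of volume 0.5.

```python
def query_volume(n=1000):
    # Slice over x: for each x, the feasible y-set {y in [0,1] : y <= 1 - x}
    # is an interval whose length max(0, 1 - x) we measure in closed form,
    # then integrate over x with the midpoint rule (exact for this linear
    # integrand, up to float rounding).
    dx = 1.0 / n
    return sum(max(0.0, 1.0 - (i + 0.5) * dx) * dx for i in range(n))

support_volume = 1.0  # volume of the unit square
prob = query_volume() / support_volume
```

Real WMI solvers replace this hand-rolled slicing with structured integration over the pieces induced by the SMT formula; the point is only that a probabilistic query becomes a (weighted) volume.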
Scaling up Hybrid Probabilistic Inference with Logical and Arithmetic Constraints via Message Passing
Weighted model integration (WMI) is a very appealing framework for
probabilistic inference: it makes it possible to express the complex
dependencies of
real-world problems where variables are both continuous and discrete, via the
language of Satisfiability Modulo Theories (SMT), as well as to compute
probabilistic queries with complex logical and arithmetic constraints. Yet,
existing WMI solvers are not ready to scale to these problems. They either
ignore the intrinsic dependency structure of the problem altogether, or they
are limited to overly restrictive structures. To narrow this gap, we derive a
factorized formalism of WMI enabling us to devise a scalable WMI solver based
on message passing, MP-WMI. Namely, MP-WMI is the first WMI solver that can:
1) perform exact inference on the full class of tree-structured WMI problems;
2) compute all marginal densities in linear time; 3) amortize inference
across queries. Experimental results show that our solver dramatically
outperforms existing WMI solvers on a large set of benchmarks.
Constrained Sampling and Counting: Universal Hashing Meets SAT Solving
Constrained sampling and counting are two fundamental problems in artificial
intelligence with a diverse range of applications, ranging from probabilistic
reasoning and planning to constrained-random verification. While the theory of
these problems was thoroughly investigated in the 1980s, prior work either did
not scale to industrial size instances or gave up correctness guarantees to
achieve scalability. Recently, we proposed a novel approach that combines
universal hashing and SAT solving and scales to formulas with hundreds of
thousands of variables without giving up correctness guarantees. This paper
provides an overview of the key ingredients of the approach and discusses
challenges that need to be overcome to handle larger real-world instances.Comment: Appears in proceedings of AAAI-16 Workshop on Beyond N
Anytime Exact Belief Propagation
Statistical Relational Models and, more recently, Probabilistic Programming,
have been making strides towards an integration of logic and probabilistic
reasoning. A natural expectation for this project is that a probabilistic logic
reasoning algorithm reduces to a logic reasoning algorithm when provided a
model that only involves 0-1 probabilities, exhibiting all the advantages of
logic reasoning such as short-circuiting, intelligibility, and the ability to
provide proof trees for a query answer. In fact, we can take this further and
require that these characteristics be present even for probabilistic models
with probabilities \emph{near} 0 and 1, with graceful degradation as the model
becomes more uncertain. We also seek inference that has amortized constant time
complexity in a model's size (even if still exponential in the induced width of
a more directly relevant portion of it) so that it can be applied to huge
knowledge bases of which only a relatively small portion is relevant to typical
queries. We believe that, among the probabilistic reasoning algorithms, Belief
Propagation is the most similar to logic reasoning: messages are propagated
among neighboring variables, and the paths of message-passing are similar to
proof trees. However, Belief Propagation is either restricted to tree models,
or approximate, with no guarantees on precision or convergence.
In this paper we present work in progress on an Anytime Exact Belief
Propagation algorithm that is very similar to Belief Propagation but is exact
even for graphical models with cycles, while exhibiting soft short-circuiting,
amortized constant time complexity in the model size, and which can provide
probabilistic proof trees.
Comment: Submission to the StaRAI-17 workshop at the UAI-17 conference.
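On tree-structured models, where Belief Propagation is already exact, the message-passing idea can be sketched in a few lines (a minimal illustrative sum-product pass on a binary chain A - B - C; this is not the Anytime Exact BP algorithm, which additionally handles cycles):

```python
import itertools

phi_A = [0.6, 0.4]                    # unary factor on A
psi_AB = [[0.9, 0.1], [0.2, 0.8]]     # pairwise factor on (A, B)
psi_BC = [[0.7, 0.3], [0.4, 0.6]]     # pairwise factor on (B, C)

def normalize(m):
    z = sum(m)
    return [x / z for x in m]

# forward sum-product messages: A -> B, then B -> C
m_AB = [sum(phi_A[a] * psi_AB[a][b] for a in range(2)) for b in range(2)]
m_BC = [sum(m_AB[b] * psi_BC[b][c] for b in range(2)) for c in range(2)]
marginal_C = normalize(m_BC)

# brute-force check over all joint assignments confirms exactness on a tree
joint = {(a, b, c): phi_A[a] * psi_AB[a][b] * psi_BC[b][c]
         for a, b, c in itertools.product(range(2), repeat=3)}
Z = sum(joint.values())
brute_C = [sum(p for (a, b, c), p in joint.items() if c == k) / Z
           for k in range(2)]
assert all(abs(x - y) < 1e-9 for x, y in zip(marginal_C, brute_C))
```

The path of messages A -> B -> C is what the abstract likens to a proof tree: each message summarizes everything upstream of it.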
Trace Abstraction Modulo Probability
We propose trace abstraction modulo probability, a proof technique for
verifying high-probability accuracy guarantees of probabilistic programs. Our
proofs overapproximate the set of program traces using failure automata,
finite-state automata that upper bound the probability of failing to satisfy a
target specification. We automate proof construction by reducing probabilistic
reasoning to logical reasoning: we use program synthesis methods to select
axioms for sampling instructions, and then apply Craig interpolation to prove
that traces fail the target specification with only a small probability. Our
method handles programs with unknown inputs, parameterized distributions,
infinite state spaces, and parameterized specifications. We evaluate our
technique on a range of randomized algorithms drawn from the differential
privacy literature and beyond. To our knowledge, our approach is the first to
automatically establish accuracy properties of these algorithms.
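The flavor of this style of reasoning can be conveyed with a hedged sketch (this is not the paper's failure-automata construction; it is the standard union-bound accuracy argument for Laplace noise from the differential-privacy literature): each sampling instruction gets a tail-probability "axiom", and the per-instruction failure probabilities combine by a union bound.

```python
import math

def laplace_tail(b, t):
    """P(|Laplace(scale=b)| > t) = exp(-t / b), the per-sample 'axiom'."""
    return math.exp(-t / b)

def union_bound_failure(k, b, t):
    """Upper bound on the probability that any of k independent Laplace
    draws with scale b exceeds t in absolute value (union bound)."""
    return min(1.0, k * laplace_tail(b, t))

# e.g. 10 noisy queries with scale b = 1: all stay within t = ln(10 / 0.05)
# of their true answers except with probability at most 0.05
t = math.log(10 / 0.05)
assert union_bound_failure(10, 1.0, t) <= 0.05 + 1e-12
```

The paper automates the selection of such axioms (via synthesis) and the combination step (via interpolation); the arithmetic above is only the end product of that pipeline for the simplest mechanism.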
Using Quantum Computers to Learn Physics
Since its inception at the beginning of the twentieth century, quantum
mechanics has challenged our conceptions of how the universe ought to work;
however, the equations of quantum mechanics can be too computationally
difficult to solve using existing computers for even modestly large systems.
Here I will show that quantum computers can sometimes be used to address such
problems and that quantum computer science can assign formal complexities to
learning facts about nature. Hence, computer science should not only be
regarded as an applied science; it is also of central importance to the
foundations of science.
Comment: This article is designed as a popular article aimed at a general
computer science audience and mostly reviews existing results, but it does
contain several new results involving Hamiltonian inference.