Reified Context Models
A classic tension exists between exact inference in a simple model and
approximate inference in a complex model. The latter offers expressivity and
thus accuracy, but the former provides coverage of the space, an important
property for confidence estimation and learning with indirect supervision. In
this work, we introduce a new approach, reified context models, to reconcile
this tension. Specifically, we let the amount of context (the arity of the
factors in a graphical model) be chosen "at run-time" by reifying it---that is,
letting this choice itself be a random variable inside the model. Empirically,
we show that our approach obtains both expressivity and coverage on three
natural language tasks.
Quantum Graphical Models and Belief Propagation
Belief Propagation algorithms acting on Graphical Models of classical
probability distributions, such as Markov Networks, Factor Graphs and Bayesian
Networks, are amongst the most powerful known methods for deriving
probabilistic inferences amongst large numbers of random variables. This paper
presents a generalization of these concepts and methods to the quantum case,
based on the idea that quantum theory can be thought of as a noncommutative,
operator-valued, generalization of classical probability theory. Some novel
characterizations of quantum conditional independence are derived, and
definitions of Quantum n-Bifactor Networks, Markov Networks, Factor Graphs and
Bayesian Networks are proposed. The structure of Quantum Markov Networks is
investigated and some partial characterization results are obtained, along the
lines of the Hammersley-Clifford theorem. A Quantum Belief Propagation
algorithm is presented and is shown to converge on 1-Bifactor Networks and
Markov Networks when the underlying graph is a tree. The use of Quantum Belief
Propagation as a heuristic algorithm in cases where it is not known to converge
is discussed. Applications to decoding quantum error correcting codes and to
the simulation of many-body quantum systems are described.

Comment: 58 pages, 9 figures
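The quantum algorithm above generalizes classical belief propagation, which the abstract's convergence result (exactness on trees) also mirrors. As background, here is a minimal sketch of classical sum-product message passing on a tree-structured pairwise Markov network; the function names, data layout, and potentials are illustrative, not the paper's notation:

```python
import numpy as np

# Sum-product belief propagation on a tree-structured pairwise Markov
# network. Nodes are 0..n-1; `edges` lists undirected edges; `unary[i]`
# is a length-k potential for node i, and `pair[(i, j)]` is a k x k
# pairwise potential indexed [x_i, x_j].

def tree_bp(n, edges, unary, pair):
    nbrs = {i: [] for i in range(n)}
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)

    def message(i, j):
        # Message from i to j: product of i's unary potential and all
        # incoming messages except the one from j, marginalized over x_i
        # through the pairwise potential. Recursion terminates on a tree.
        prod = unary[i].copy()
        for k in nbrs[i]:
            if k != j:
                prod *= message(k, i)
        psi = pair[(i, j)] if (i, j) in pair else pair[(j, i)].T
        m = psi.T @ prod
        return m / m.sum()  # normalize for numerical stability

    def marginal(i):
        b = unary[i].copy()
        for k in nbrs[i]:
            b *= message(k, i)
        return b / b.sum()

    return [marginal(i) for i in range(n)]
```

On a tree this returns the exact node marginals; running the same message updates on a loopy graph gives only the (often useful) approximation that the heuristic use discussed in the abstract refers to.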
Curvature and Concentration of Hamiltonian Monte Carlo in High Dimensions
In this article, we analyze Hamiltonian Monte Carlo (HMC) by placing it in
the setting of Riemannian geometry using the Jacobi metric, so that each step
corresponds to a geodesic on a suitable Riemannian manifold. We then combine
the notion of curvature of a Markov chain due to Joulin and Ollivier with the
classical sectional curvature from Riemannian geometry to derive error bounds
for HMC in important cases, where we have positive curvature. These cases
include several classical distributions such as multivariate Gaussians, and
also distributions arising in the study of Bayesian image registration. The
theoretical development suggests sectional curvature as a new convergence
diagnostic for certain Markov chains.

Comment: Comments welcome
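For readers unfamiliar with the algorithm being analyzed, here is a minimal sketch of standard HMC with a leapfrog integrator, targeting a standard multivariate Gaussian (one of the positive-curvature cases the abstract mentions). The step size, trajectory length, and function names are illustrative choices, not values from the paper:

```python
import numpy as np

# Hamiltonian Monte Carlo: propose by simulating Hamiltonian dynamics
# with a leapfrog integrator, then accept/reject on the energy error.
# `logp` is the log target density and `grad_logp` its gradient.

def hmc_sample(grad_logp, logp, x0, n_samples, eps=0.1, n_leapfrog=20, rng=None):
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)       # resample momentum
        x_new, p_new = x.copy(), p.copy()
        p_new += 0.5 * eps * grad_logp(x_new)  # initial half step (momentum)
        for _ in range(n_leapfrog - 1):
            x_new += eps * p_new               # full step (position)
            p_new += eps * grad_logp(x_new)    # full step (momentum)
        x_new += eps * p_new
        p_new += 0.5 * eps * grad_logp(x_new)  # final half step (momentum)
        # Metropolis correction on the Hamiltonian (energy) difference
        h_old = -logp(x) + 0.5 * p @ p
        h_new = -logp(x_new) + 0.5 * p_new @ p_new
        if np.log(rng.uniform()) < h_old - h_new:
            x = x_new
        samples.append(x.copy())
    return np.array(samples)
```

In the paper's geometric picture, each leapfrog trajectory approximates a geodesic of the Jacobi metric, which is what lets curvature bounds control the chain's behavior.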
Bayesian Reinforcement Learning via Deep, Sparse Sampling
We address the problem of Bayesian reinforcement learning using efficient
model-based online planning. We propose an optimism-free Bayes-adaptive
algorithm to induce deeper and sparser exploration with a theoretical bound on
its performance relative to the Bayes optimal policy, with a lower
computational complexity. The main novelty is the use of a candidate policy
generator, to generate long-term options in the planning tree (over beliefs),
which allows us to create much sparser and deeper trees. Experimental results
on different environments show that in comparison to the state-of-the-art, our
algorithm is both computationally more efficient, and obtains significantly
higher reward in discrete environments.

Comment: Published in AISTATS 202
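The planner described above builds on sparse-sampling lookahead, the classical building block it makes "deeper and sparser" via long-term options. As background, a minimal sketch of plain sparse sampling for an MDP with a generative model follows; the toy interface and all names are illustrative, and this is not the paper's Bayes-adaptive algorithm:

```python
# Sparse-sampling lookahead: estimate Q-values at a state by recursively
# drawing C sampled transitions per action from a generative model,
# giving a planning tree whose size is independent of the state space.

def sparse_sample_q(sim, state, actions, depth, c, gamma=0.95):
    """sim(state, action) -> (next_state, reward), a generative model."""
    if depth == 0:
        return {a: 0.0 for a in actions}
    q = {}
    for a in actions:
        total = 0.0
        for _ in range(c):
            s2, r = sim(state, a)
            q2 = sparse_sample_q(sim, s2, actions, depth - 1, c, gamma)
            total += r + gamma * max(q2.values())  # backup best next action
        q[a] = total / c
    return q
```

The tree still grows exponentially in the depth, which is the cost the abstract's option-based candidate policies are designed to reduce.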