Maximum a Posteriori Estimation by Search in Probabilistic Programs
We introduce an approximate search algorithm for fast maximum a posteriori
probability estimation in probabilistic programs, which we call Bayesian ascent
Monte Carlo (BaMC). Probabilistic programs represent probabilistic models with
a varying number of mutually dependent finite, countable, and continuous random
variables. BaMC is an anytime MAP search algorithm applicable to any
combination of random variables and dependencies. We compare BaMC to other MAP
estimation algorithms and show that BaMC is faster and more robust on a range
of probabilistic models.
Comment: To appear in proceedings of SOCS1
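The anytime flavor of MAP search can be illustrated with a minimal sketch. This is not the BaMC algorithm itself, just an assumed toy setup: a stochastic hill-climb over one continuous latent variable (Normal prior, Normal likelihood) that keeps the best-scoring value seen so far, so it returns a usable estimate whenever it is stopped.

```python
import math
import random

def log_joint(mu, data):
    # log p(mu) + sum_i log p(x_i | mu), up to additive constants:
    # prior mu ~ Normal(0, 10^2), likelihood x_i ~ Normal(mu, 1)
    lp = -0.5 * (mu / 10.0) ** 2
    for x in data:
        lp += -0.5 * (x - mu) ** 2
    return lp

def map_search(data, iters=5000, step=0.5, seed=0):
    """Anytime stochastic hill-climb: propose a Gaussian perturbation,
    keep it only if the log joint improves. Illustrative sketch only."""
    rng = random.Random(seed)
    mu, best = 0.0, log_joint(0.0, data)
    for _ in range(iters):
        cand = mu + rng.gauss(0.0, step)
        lp = log_joint(cand, data)
        if lp > best:
            mu, best = cand, lp
    return mu

data = [1.8, 2.2, 2.0, 1.9]
print(map_search(data))  # close to the analytic MAP (about 1.97 here)
```

Because the loop only ever keeps improvements, interrupting it early still yields the best estimate found so far, which is the defining property of an anytime search.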
Learning to Reason: Leveraging Neural Networks for Approximate DNF Counting
Weighted model counting (WMC) has emerged as a prevalent approach for
probabilistic inference. In its most general form, WMC is #P-hard. Weighted DNF
counting (weighted #DNF) is a special case, where approximations with
probabilistic guarantees are obtained in O(nm), where n denotes the number of
variables, and m the number of clauses of the input DNF, but this is not
scalable in practice. In this paper, we propose a neural model counting
approach for weighted #DNF that combines approximate model counting with deep
learning, and accurately approximates model counts in linear time when width is
bounded. We conduct experiments to validate our method, and show that our model
learns and generalizes very well to large-scale #DNF instances.
Comment: To appear in Proceedings of the Thirty-Fourth AAAI Conference on
Artificial Intelligence (AAAI-20). Code and data available at:
https://github.com/ralphabb/NeuralDNF
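For reference, weighted #DNF has a simple Monte Carlo baseline (naive sampling, not Karp-Luby and not the neural approach above): reading the weights as independent Bernoulli marginals, the weighted model count equals the probability that a random assignment satisfies the DNF. The clause encoding and names below are assumptions for illustration.

```python
import random

def wmc_dnf_mc(clauses, weights, samples=20000, seed=0):
    """Monte Carlo estimate of a weighted DNF model count.
    weights[i] is the probability that variable i+1 is True.
    Unbiased, but unlike Karp-Luby it carries no multiplicative
    guarantee when the true count is very small."""
    rng = random.Random(seed)
    n = len(weights)
    hits = 0
    for _ in range(samples):
        assign = [rng.random() < weights[i] for i in range(n)]
        # a DNF is satisfied if any clause (list of signed,
        # 1-indexed literals; negative = negated) holds
        if any(all(assign[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            hits += 1
    return hits / samples

# (x1 AND x2) OR (NOT x3); exact weighted count is 0.625
clauses = [[1, 2], [-3]]
weights = [0.5, 0.5, 0.5]
print(wmc_dnf_mc(clauses, weights))
```

Each sample costs O(nm) to check in the worst case, matching the per-sample cost the abstract cites for approximation schemes with probabilistic guarantees.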
An approximate empirical Bayesian method for large-scale linear-Gaussian inverse problems
We study Bayesian inference methods for solving linear inverse problems,
focusing on hierarchical formulations where the prior or the likelihood
function depends on unspecified hyperparameters. In practice, these
hyperparameters are often determined via an empirical Bayesian method that
maximizes the marginal likelihood function, i.e., the probability density of
the data conditional on the hyperparameters. Evaluating the marginal
likelihood, however, is computationally challenging for large-scale problems.
In this work, we present a method to approximately evaluate marginal likelihood
functions, based on a low-rank approximation of the update from the prior
covariance to the posterior covariance. We show that this approximation is
optimal in a minimax sense. Moreover, we provide an efficient algorithm to
implement the proposed method, based on a combination of the randomized SVD and
a spectral approximation method to compute square roots of the prior covariance
matrix. Several numerical examples demonstrate the good performance of the
proposed method.