7 research outputs found
Maximum a Posteriori Estimation by Search in Probabilistic Programs
We introduce an approximate search algorithm for fast maximum a posteriori
probability estimation in probabilistic programs, which we call Bayesian ascent
Monte Carlo (BaMC). Probabilistic programs represent probabilistic models with
a varying number of mutually dependent finite, countable, and continuous random
variables. BaMC is an anytime MAP search algorithm applicable to any
combination of random variables and dependencies. We compare BaMC to other MAP
estimation algorithms and show that BaMC is faster and more robust on a range
of probabilistic models. Comment: To appear in the proceedings of SOCS.
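The MAP-by-search idea behind BaMC can be illustrated with a minimal sketch. This is not the BaMC algorithm itself, only a stand-in: greedy stochastic hill climbing on the log-joint of a hypothetical one-variable model (a normal prior on `mu` with normal observations), with all names and parameters chosen here for illustration.

```python
import random

def log_joint(mu, data):
    """Toy model: mu ~ Normal(0, 10); each datum ~ Normal(mu, 1).

    Returns the log of prior * likelihood, up to an additive constant.
    """
    lp = -mu * mu / (2 * 10.0 ** 2)          # log prior (unnormalized)
    for x in data:
        lp += -(x - mu) ** 2 / 2.0           # log likelihood terms
    return lp

def map_search(data, iters=5000, seed=0):
    """Greedy stochastic search for the MAP value of mu."""
    rng = random.Random(seed)
    mu = 0.0
    best = log_joint(mu, data)
    for _ in range(iters):
        cand = mu + rng.gauss(0, 0.5)        # local random proposal
        lp = log_joint(cand, data)
        if lp > best:                        # keep only improvements
            mu, best = cand, lp
    return mu

data = [3.9, 4.1, 4.0]
mu_hat = map_search(data)
```

For this conjugate toy model the MAP value can be checked analytically (it is close to the data mean, slightly shrunk toward the prior mean of 0); a real probabilistic-program search must additionally handle variables whose number and type change between executions, which is what BaMC addresses.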
Thompson sampling guided stochastic searching on the line for deceptive environments with applications to root-finding problems
Divide, Conquer, and Combine: a New Inference Strategy for Probabilistic Programs with Stochastic Support
Universal probabilistic programming systems (PPSs) provide a powerful
framework for specifying rich probabilistic models. They further attempt to
automate the process of drawing inferences from these models, but doing this
successfully is severely hampered by the wide range of non-standard models
they can express. As a result, although one can specify complex models in a
universal PPS, the provided inference engines often fall far short of what is
required. In particular, we show that they produce surprisingly unsatisfactory
performance for models where the support varies between executions, often doing
no better than importance sampling from the prior. To address this, we
introduce a new inference framework: Divide, Conquer, and Combine, which
remains efficient for such models, and show how it can be implemented as an
automated and generic PPS inference engine. We empirically demonstrate
substantial performance improvements over existing approaches on three
examples. Comment: Published at the 37th International Conference on Machine Learning (ICML 2020).
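The divide-conquer-combine strategy can be sketched on a hypothetical model with stochastic support: a Bernoulli branch variable selects between two sub-models with different latent variables. This sketch (all distributions and values are illustrative assumptions, not the paper's implementation) divides the program into one straight-line sub-program per branch, estimates each branch's local evidence by plain Monte Carlo, and combines the branches weighted by those evidence estimates.

```python
import math
import random

def normal_pdf(x, mu, sigma):
    """Density of Normal(mu, sigma) at x."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def branch_evidence(mu, y, n, rng):
    """Conquer: estimate Z = integral of N(x; mu, 1) * N(y; x, 0.5) dx
    by Monte Carlo, sampling x from the branch's prior."""
    total = 0.0
    for _ in range(n):
        x = rng.gauss(mu, 1.0)
        total += normal_pdf(y, x, 0.5)
    return total / n

def dcc_posterior(y=4.8, n=20000, seed=0):
    """Toy model: z ~ Bernoulli(0.5); x ~ N(5, 1) if z else N(0, 1);
    observe y ~ N(x, 0.5). Returns the estimated posterior P(z = 1 | y)."""
    rng = random.Random(seed)
    # Divide: one fixed-support sub-program per value of the branch variable z.
    z0 = 0.5 * branch_evidence(0.0, y, n, rng)   # z = 0 branch
    z1 = 0.5 * branch_evidence(5.0, y, n, rng)   # z = 1 branch
    # Combine: weight each branch by its estimated local evidence.
    return z1 / (z0 + z1)

p_z1 = dcc_posterior()
```

Because each sub-program has fixed support, any standard inference method can be used within a branch; the delicate part in a real engine, which this sketch omits, is discovering the branches automatically and allocating computation between them.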
Towards Thompson Sampling for Complex Bayesian Reasoning
Papers III, IV, and VI are not available as part of the dissertation due to copyright. Thompson Sampling (TS) is a state-of-the-art algorithm for bandit problems set in a Bayesian framework. Both the theoretical foundation and the empirical efficiency of TS are well explored for plain bandit problems. However, the Bayesian underpinning of TS means that it could potentially be applied to other, more complex problems beyond the bandit problem, if suitable Bayesian structures can be found.
The objective of this thesis is the development and analysis of TS-based schemes for more complex optimization problems, founded on Bayesian reasoning. We address several complex optimization problems where the previous state of the art relies on a relatively myopic perspective, depending on carefully engineered rules rather than Bayesian reasoning. These include stochastic searching on the line, the Goore game, the knapsack problem, travel time estimation, and equipartitioning. In brief, we recast each of these optimization problems in a Bayesian framework and introduce dedicated TS-based solution schemes. For all of the addressed problems, the results show that besides being more effective, the TS-based approaches we introduce are also capable of solving more adverse versions of the problems, such as dealing with stochastic liars.