Learning the Structure of Deep Sparse Graphical Models
Deep belief networks are a powerful way to model complex probability
distributions. However, learning the structure of a belief network,
particularly one with hidden units, is difficult. The Indian buffet process has
been used as a nonparametric Bayesian prior on the directed structure of a
belief network with a single infinitely wide hidden layer. In this paper, we
introduce the cascading Indian buffet process (CIBP), which provides a
nonparametric prior on the structure of a layered, directed belief network that
is unbounded in both depth and width, yet allows tractable inference. We use
the CIBP prior with the nonlinear Gaussian belief network so each unit can
additionally vary its behavior between discrete and continuous representations.
We provide Markov chain Monte Carlo algorithms for inference in these belief
networks and explore the structures learned on several image data sets.
Comment: 20 pages, 6 figures, AISTATS 2010
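As a rough illustration of the building block behind the CIBP, the sketch below draws a binary structure matrix from the standard single-layer Indian buffet process. It is a minimal, assumption-laden example: the hyperparameter alpha, the unit count, and the helper name sample_ibp are all hypothetical, and it omits the cascading recursion over layers, the nonlinear Gaussian belief network, and the MCMC inference described in the paper.

```python
import numpy as np

def sample_ibp(num_units, alpha, rng=None):
    """Draw a binary connection matrix Z from an Indian buffet process.

    Row n gives the parent set chosen by unit n; columns are parents
    ("dishes"). Unit n reuses an existing parent k with probability
    m_k / n (m_k = number of earlier units using k), then adds
    Poisson(alpha / n) brand-new parents.
    """
    rng = np.random.default_rng(rng)
    columns = []                      # one list of 0/1 entries per parent
    for n in range(1, num_units + 1):
        # Revisit existing parents in proportion to their popularity.
        for col in columns:
            m_k = sum(col)
            col.append(1 if rng.random() < m_k / n else 0)
        # Sample brand-new parents for this unit.
        for _ in range(rng.poisson(alpha / n)):
            columns.append([0] * (n - 1) + [1])
    # Assemble the columns into a (num_units x K) binary matrix.
    K = len(columns)
    Z = np.zeros((num_units, K), dtype=int)
    for k, col in enumerate(columns):
        Z[:, k] = col
    return Z

# Example: a sampled connection structure for 5 units in one layer.
print(sample_ibp(5, alpha=1.5, rng=0))
```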
Social optimality in quantum Bayesian games
A significant aspect of the study of quantum strategies is the exploration of
the game-theoretic solution concept of the Nash equilibrium in relation to the
quantization of a game. Pareto optimality is a refinement on the set of Nash
equilibria. A refinement on the set of Pareto optimal outcomes is known as
social optimality in which the sum of players' payoffs are maximized. This
paper analyzes social optimality in a Bayesian game that uses the setting of
generalized Einstein-Podolsky-Rosen experiments for its physical
implementation. We show that, for the quantum Bayesian game, a direct connection
appears between the violation of Bell's inequality and the socially optimal
outcome of the game, and that the quantum game attains a superior socially optimal outcome.
Comment: 12 pages
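To make the solution concepts concrete, the following sketch works through a small classical bimatrix game, not the quantum EPR-based game analyzed in the paper. The payoff matrices are hypothetical, and the code simply enumerates pure-strategy Nash equilibria and the outcome that maximizes the sum of payoffs, i.e., the socially optimal outcome.

```python
import numpy as np

# Hypothetical 2x2 bimatrix game (rows: player 1's actions, cols: player 2's).
A = np.array([[3, 0],    # player 1's payoffs
              [5, 1]])
B = np.array([[3, 5],    # player 2's payoffs
              [0, 1]])

def pure_nash(A, B):
    """Pure-strategy Nash equilibria: neither player gains by deviating alone."""
    eqs = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max():
                eqs.append((i, j))
    return eqs

def socially_optimal(A, B):
    """Outcome(s) maximizing the sum of the two players' payoffs."""
    total = A + B
    return [(i, j) for i in range(A.shape[0]) for j in range(A.shape[1])
            if total[i, j] == total.max()]

print("Nash equilibria:", pure_nash(A, B))          # [(1, 1)] for this game
print("Socially optimal:", socially_optimal(A, B))  # [(0, 0)], payoff sum 6
```

In this toy game the unique Nash equilibrium (1, 1) yields a total payoff of 2, while the socially optimal outcome (0, 0) yields 6, which is the kind of gap the paper's quantum setting is concerned with closing.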
Bounds on the number of inference functions of a graphical model
Directed and undirected graphical models, also called Bayesian networks and
Markov random fields, respectively, are important statistical tools in a wide
variety of fields, ranging from computational biology to probabilistic
artificial intelligence. We give an upper bound on the number of inference
functions of any graphical model. This bound is polynomial in the size of the
model for a fixed number of parameters, improving the exponential upper
bound given by Pachter and Sturmfels. We also show that our bound is tight up
to a constant factor, by constructing a family of hidden Markov models whose
number of inference functions agrees asymptotically with the upper bound.
Finally, we apply this bound to a model for sequence alignment that is used in
computational biology.
Comment: 19 pages, 7 figures
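For intuition about what is being counted, an inference function maps each observation sequence to its MAP explanation. The sketch below evaluates such a function once for a toy two-state hidden Markov model via the Viterbi algorithm; the parameters and the viterbi helper are hypothetical and are only meant to illustrate the notion, not the paper's bound.

```python
import numpy as np

def viterbi(obs, pi, T, E):
    """MAP hidden-state path for an observation sequence under an HMM.

    Evaluating this map for every possible observation sequence defines
    the model's inference function; the number of distinct such functions
    arises as the parameters (pi, T, E) vary.
    """
    logp = np.log(pi) + np.log(E[:, obs[0]])
    back = []
    for o in obs[1:]:
        scores = logp[:, None] + np.log(T)   # scores[i, j]: best path ending in i -> j
        back.append(scores.argmax(axis=0))
        logp = scores.max(axis=0) + np.log(E[:, o])
    # Trace the best path backwards from the most probable final state.
    path = [int(logp.argmax())]
    for ptr in reversed(back):
        path.append(int(ptr[path[-1]]))
    return path[::-1]

# Hypothetical 2-state HMM (e.g., a toy match/mismatch model for alignment).
pi = np.array([0.6, 0.4])
T = np.array([[0.7, 0.3], [0.4, 0.6]])       # transition probabilities
E = np.array([[0.9, 0.1], [0.2, 0.8]])       # emission probabilities
print(viterbi([0, 0, 1, 1], pi, T, E))       # prints [0, 0, 1, 1]
```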