Randomized Algorithms for the Loop Cutset Problem
We show how to find a minimum weight loop cutset in a Bayesian network with
high probability. Finding such a loop cutset is the first step in the method of
conditioning for inference. Our randomized algorithm for finding a loop cutset
outputs a minimum loop cutset after O(c · 6^k · k · n) steps with probability at
least 1 - (1 - 1/6^k)^(c · 6^k), where c > 1 is a constant specified by the
user, k is the minimal size of a minimum weight loop cutset, and n is the
number of vertices. We also show empirically that a variant of this algorithm
often finds a loop cutset that is closer to the minimum weight loop cutset than
the ones found by the best deterministic algorithms known.
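The stated success bound 1 - (1 - 1/6^k)^(c · 6^k) can be evaluated directly; for large 6^k it approaches 1 - e^(-c), so the user-chosen constant c directly controls the failure probability. A minimal sketch (the function name and the walk-through values of c and k are illustrative, not from the paper):

```python
import math

def success_lower_bound(c: float, k: int) -> float:
    """Lower bound from the abstract on the probability that the
    randomized algorithm returns a minimum loop cutset after
    O(c * 6**k * k * n) steps: 1 - (1 - 1/6**k) ** (c * 6**k)."""
    p = 6.0 ** (-k)       # per-repetition success probability
    reps = c * 6 ** k     # number of repetitions implied by the bound
    return 1.0 - (1.0 - p) ** reps

# As c grows, the bound approaches 1 - exp(-c):
for c in (1, 2, 5):
    print(c, round(success_lower_bound(c, 4), 4), round(1 - math.exp(-c), 4))
```

For k = 4 the bound with c = 1 is already about 0.63, and raising c to 5 pushes it above 0.99, matching the 1 - e^(-c) limit.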
A Comparison of Algorithms for Learning Hidden Variables in Normal Graphs
A Bayesian factor graph reduced to normal form consists of the
interconnection of diverter units (or equal constraint units) and
Single-Input/Single-Output (SISO) blocks. In this framework localized
adaptation rules are explicitly derived from a constrained maximum likelihood
(ML) formulation and from a minimum KL-divergence criterion using KKT
conditions. The learning algorithms are compared with two other updating
equations based on a Viterbi-like and on a variational approximation
respectively. The performance of the various algorithms is verified on
synthetic data sets for various architectures. The objective of this paper is
to provide the programmer with explicit algorithms for rapid deployment of
Bayesian graphs in applications. Comment: Submitted for journal publication.
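The basic local computation in such a normal-form graph is the sum-product update at an equality ("diverter") node: the outgoing message on one edge is the normalized elementwise product of the messages arriving on the other edges. A minimal sketch of that single rule (the function name and toy messages are illustrative; the paper's learning rules sit on top of message passing like this):

```python
import numpy as np

def diverter_out(msgs_in, idx_out):
    """Sum-product update at an equality ('diverter') node: the message
    leaving on edge idx_out is the normalized elementwise product of the
    messages on all other edges."""
    out = np.ones_like(msgs_in[0], dtype=float)
    for i, m in enumerate(msgs_in):
        if i != idx_out:
            out *= m
    return out / out.sum()

# Three edges carrying discrete (binary) messages:
m1 = np.array([0.6, 0.4])
m2 = np.array([0.9, 0.1])
m3 = np.array([0.5, 0.5])
print(diverter_out([m1, m2, m3], 2))  # combines m1 and m2 only
```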
Technology Adoption in Poorly Specified Environments
This article extends the characteristics-based choice framework of technology adoption to account for decisions taken by boundedly rational individuals in environments where traits are not fully observed. It is applied to an agricultural setting and introduces the concept of ambiguity into the agricultural technology adoption literature by relaxing strict informational and cognitive assumptions implied by traditional Bayesian analysis. The main results confirm that ambiguity increases as local conditions become less homogeneous and as computational ability, own experience, and nearby adoption rates decrease. Measurement biases associated with full-rationality assumptions are found to increase when decision makers have low computational ability, low experience, and farming conditions that differ widely from those of the average adopter. A complementary empirical paper (Useche 2006) finds that models assuming low confidence in observed data, ambiguity, and pessimistic expectations about traits predict sample shares better than models which assume that farmers do not face ambiguity or are optimistic about the traits of new varieties.
Beliefs in Decision-Making Cascades
This work explores a social learning problem with agents having nonidentical
noise variances and mismatched beliefs. We consider an N-agent binary
hypothesis test in which each agent sequentially makes a decision based not
only on a private observation, but also on preceding agents' decisions. In
addition, the agents have their own beliefs instead of the true prior, and have
nonidentical noise variances in the private signal. We focus on the Bayes risk
of the last agent, where preceding agents are selfish.
We first derive the optimal decision rule by recursive belief update and
conclude, counterintuitively, that beliefs deviating from the true prior could
be optimal in this setting. The effect of nonidentical noise levels in the
two-agent case is also considered and analytical properties of the optimal
belief curves are given. Next, we consider a predecessor selection problem
wherein the subsequent agent of a certain belief chooses a predecessor from a
set of candidates with varying beliefs. We characterize the decision region for
choosing such a predecessor and argue that a subsequent agent with beliefs
varying from the true prior often ends up selecting a suboptimal predecessor,
indicating the need for a social planner. Lastly, we discuss an augmented
intelligence design problem that uses a model of human behavior from cumulative
prospect theory and investigate its near-optimality and suboptimality. Comment: final version, to appear in IEEE Transactions on Signal Processing.
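A toy version of the setup described above: two agents run a binary Gaussian hypothesis test, and the second agent folds the first agent's announced decision into its own (possibly mismatched) belief via Bayes' rule before thresholding its private observation. All means, noise levels, and function names below are illustrative assumptions, not the paper's exact model:

```python
import math

MU0, MU1 = -1.0, 1.0  # signal means under H0 and H1 (assumed)

def threshold(belief_p1, sigma):
    """Bayes decision threshold on y for prior belief_p1 on H1:
    decide H1 iff y > t."""
    log_prior = math.log(belief_p1 / (1.0 - belief_p1))
    return (MU0 + MU1) / 2.0 - sigma ** 2 * log_prior / (MU1 - MU0)

def q(x):
    """Gaussian tail probability P(Z > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def agent2_belief(belief2, d1, belief1, sigma1):
    """Agent 2's updated belief on H1 after seeing agent 1's decision d1,
    starting from agent 2's own (possibly mismatched) prior belief2."""
    t1 = threshold(belief1, sigma1)
    # Probability that agent 1 announces d1 under each hypothesis:
    p_d1_h1 = q((t1 - MU1) / sigma1) if d1 == 1 else 1.0 - q((t1 - MU1) / sigma1)
    p_d1_h0 = q((t1 - MU0) / sigma1) if d1 == 1 else 1.0 - q((t1 - MU0) / sigma1)
    num = belief2 * p_d1_h1
    return num / (num + (1.0 - belief2) * p_d1_h0)

def agent2_decision(y2, belief2, sigma2, d1, belief1, sigma1):
    """Agent 2's decision from its private signal y2 and agent 1's vote."""
    post = agent2_belief(belief2, d1, belief1, sigma1)
    return 1 if y2 > threshold(post, sigma2) else 0

# A "1" from agent 1 pulls agent 2's belief toward H1:
print(agent2_belief(0.5, 1, 0.5, 1.0))
```

With equal noise and a neutral prior, one predecessor vote for H1 moves the follower's belief from 0.5 to about 0.84, which is exactly the channel through which a predecessor's (possibly deviating) belief shapes the last agent's Bayes risk.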
Cutset Sampling for Bayesian Networks
The paper presents a new sampling methodology for Bayesian networks that
samples only a subset of variables and applies exact inference to the rest.
Cutset sampling is a network structure-exploiting application of the
Rao-Blackwellisation principle to sampling in Bayesian networks. It improves
convergence by exploiting memory-based inference algorithms. It can also be
viewed as an anytime approximation of the exact cutset-conditioning algorithm
developed by Pearl. Cutset sampling can be implemented efficiently when the
sampled variables constitute a loop-cutset of the Bayesian network and, more
generally, when the induced width of the network's graph conditioned on the
observed sampled variables is bounded by a constant w. We demonstrate
empirically the benefit of this scheme on a range of benchmarks.
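The Rao-Blackwellisation idea at the heart of cutset sampling can be seen on a two-node toy network A -> B: instead of drawing both variables and averaging 0/1 indicators for B, sample only the "cutset" variable A and average the exact conditional P(B = 1 | A). Both estimators target the same quantity, but the second has lower variance. The network and CPT numbers below are illustrative:

```python
import random

# Toy network A -> B, both binary (illustrative CPTs):
P_A1 = 0.3
P_B1_GIVEN_A = {0: 0.2, 1: 0.9}
EXACT_P_B1 = P_A1 * 0.9 + (1 - P_A1) * 0.2  # = 0.41

def full_sampling(n, rng):
    """Plain forward sampling: draw both A and B, average B's indicator."""
    hits = 0
    for _ in range(n):
        a = 1 if rng.random() < P_A1 else 0
        b = 1 if rng.random() < P_B1_GIVEN_A[a] else 0
        hits += b
    return hits / n

def cutset_sampling(n, rng):
    """Rao-Blackwellised version: sample only the cutset variable A,
    then average the exact conditional P(B=1 | A) computed by inference."""
    total = 0.0
    for _ in range(n):
        a = 1 if rng.random() < P_A1 else 0
        total += P_B1_GIVEN_A[a]
    return total / n

rng = random.Random(0)
print(full_sampling(20000, rng), cutset_sampling(20000, rng), EXACT_P_B1)
```

In a real network the exact conditional is supplied by an inference algorithm over the non-cutset variables, which is tractable precisely when the conditioned graph has bounded induced width.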