Bayesian Decision Theory and Stochastic Independence
Stochastic independence has a complex status in probability theory. It is not part of the definition of a probability measure, but it is nonetheless an essential property for the mathematical development of this theory. Bayesian decision theorists such as Savage can be criticized for being silent about stochastic independence. From their current preference axioms, they can derive no more than the definitional properties of a probability measure. In a new framework of twofold uncertainty, we introduce preference axioms that entail not only these definitional properties, but also the stochastic independence of the two sources of uncertainty. This goes some way towards filling a curious lacuna in Bayesian decision theory.
Bayesian Decision Theory and Stochastic Independence
As stochastic independence is essential to the mathematical development of probability theory, it seems that any foundational work on probability should be able to account for this property. Bayesian decision theory appears to be wanting in this respect. Savage's postulates on preferences under uncertainty entail a subjective expected utility representation, and this asserts only the existence and uniqueness of a subjective probability measure, regardless of its properties. What is missing is a preference condition corresponding to stochastic independence. To fill this significant gap, the article axiomatizes Bayesian decision theory afresh and proves several representation theorems in this novel framework.
Progress on Intelligent Guidance and Control for Wind Shear Encounter
Low-altitude wind shear poses a serious threat to air safety. Avoiding severe wind shear challenges the ability of flight crews, as it involves assessing risk from uncertain evidence. A computerized intelligent cockpit aid can increase flight crew awareness of wind shear, improving avoidance decisions. The primary functions of a cockpit advisory expert system for wind shear avoidance are discussed. Also introduced are computational techniques being implemented to enable these primary functions.
Adaptive Probability Theory: Human Biases as an Adaptation
Humans make mistakes in their decision-making and probability judgments. While the heuristics used for decision-making have been explained as adaptations that are both efficient and fast, the reasons why people deal with probabilities using the reported biases have not been clear. We will see that some of these biases can be understood as heuristics developed to explain a complex world when little information is available. That is, they approximate Bayesian inferences for situations more complex than the ones in laboratory experiments, and in this sense they might have appeared as an adaptation to those situations. When ideas such as uncertainty and limited sample sizes are included in the problem, the correct probabilities change to values close to the observed behavior. These ideas will be used to explain the observed weighting functions and the violations of coalescing and stochastic dominance reported in the literature.
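A small illustration (not from the paper) of the abstract's core idea: when sample sizes are limited, a Bayesian estimate of a probability is pulled toward the prior, which mimics the reported overweighting of small observed frequencies; with more data the effect washes out. The function names and the Beta(1, 1) prior are assumptions for the sketch.

```python
def naive_estimate(successes, trials):
    """Raw observed frequency."""
    return successes / trials

def bayesian_estimate(successes, trials, prior_a=1.0, prior_b=1.0):
    """Posterior mean under a Beta(prior_a, prior_b) prior
    (Laplace's rule of succession when both hyperparameters are 1)."""
    return (successes + prior_a) / (trials + prior_a + prior_b)

# A rare event seen once in 10 trials: the small probability is inflated.
print(naive_estimate(1, 10))      # 0.1
print(bayesian_estimate(1, 10))   # ~0.167

# The same frequency with far more data: the "bias" nearly disappears.
print(naive_estimate(100, 1000))     # 0.1
print(bayesian_estimate(100, 1000))  # ~0.1008
```

Read this way, the deviation from the raw frequency is not an error but the rational consequence of inference under uncertainty about the true rate.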
Labeled Directed Acyclic Graphs: a generalization of context-specific independence in directed graphical models
We introduce a novel class of labeled directed acyclic graph (LDAG) models for finite sets of discrete variables. LDAGs generalize earlier proposals for allowing local structures in the conditional probability distribution of a node, such that unrestricted label sets determine which edges can be deleted from the underlying directed acyclic graph (DAG) for a given context. Several properties of these models are derived, including a generalization of the concept of Markov equivalence classes. Efficient Bayesian learning of LDAGs is enabled by introducing an LDAG-based factorization of the Dirichlet prior for the model parameters, such that the marginal likelihood can be calculated analytically. In addition, we develop a novel prior distribution for the model structures that can appropriately penalize a model for its labeling complexity. A non-reversible Markov chain Monte Carlo algorithm combined with a greedy hill-climbing approach is used for illustrating the useful properties of LDAG models for both real and synthetic data sets.
Comment: 26 pages, 17 figures
Expected utility without utility
This paper advances an interpretation of von Neumann-Morgenstern's expected utility model for preferences over lotteries which does not require the notion of a cardinal utility over prizes and can be phrased entirely in the language of probability. According to it, the expected utility of a lottery can be read as the probability that this lottery outperforms another given independent lottery. The implications of this interpretation for some topics and models in decision theory are considered.
Keywords: expected utility, cardinal utility, benchmark, risk attitude, stochastic dominance
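A toy computation of the quantity the abstract describes: the probability that one discrete lottery outperforms an independent benchmark lottery. The representation of lotteries as prize-to-probability dicts and the half-credit treatment of ties are assumptions for illustration, not the paper's construction.

```python
def prob_outperforms(lottery, benchmark):
    """P(X > Y) for independent discrete lotteries given as
    {prize: probability} dicts; ties count as half a win
    (an illustrative convention)."""
    total = 0.0
    for x, px in lottery.items():
        for y, py in benchmark.items():
            if x > y:
                total += px * py
            elif x == y:
                total += 0.5 * px * py
    return total

# A 50/50 gamble over prizes 0 and 100 versus a sure 40:
print(prob_outperforms({0: 0.5, 100: 0.5}, {40: 1.0}))  # 0.5
```

Ranking lotteries by this probability against a fixed benchmark yields an ordinal criterion stated purely in probabilistic terms, with no cardinal utility over prizes.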
Herding and Social Pressure in Trading Tasks: A Behavioural Analysis
We extend the experimental literature on Bayesian herding using evidence from a financial decision-making experiment. We identify significant propensities to herd that increase with the degree of herd consensus. We test various herding models to capture the differential impacts of Bayesian-style thinking versus behavioural factors. We find statistically significant associations between herding and individual characteristics such as age and personality traits. Overall, our evidence is consistent with explanations of herding as the outcome of social and behavioural factors. Suggestions for further research are outlined and include verifying these findings and identifying the neurological correlates of propensities to herd.
Exponentially Fast Parameter Estimation in Networks Using Distributed Dual Averaging
In this paper we present an optimization-based view of distributed parameter
estimation and observational social learning in networks. Agents receive a
sequence of random, independent and identically distributed (i.i.d.) signals,
each of which individually may not be informative about the underlying true
state, but the signals together are globally informative enough to make the
true state identifiable. Using an optimization-based characterization of
Bayesian learning as proximal stochastic gradient descent (with
Kullback-Leibler divergence from a prior as a proximal function), we show how
to efficiently use a distributed, online variant of Nesterov's dual averaging
method to solve the estimation with purely local information. When the true
state is globally identifiable, and the network is connected, we prove that
agents eventually learn the true parameter using a randomized gossip scheme. We
demonstrate that with high probability the convergence is exponentially fast
with a rate dependent on the KL divergence of observations under the true state
from observations under the second likeliest state. Furthermore, our work also
highlights the possibility of learning under continuous adaptation of network
which is a consequence of employing constant, unit stepsize for the algorithm.Comment: 6 pages, To appear in Conference on Decision and Control 201