Updating beliefs with imperfect signals: experimental evidence
This article analyses belief updating when agents receive a signal that restricts the number of possible states of the world. We design an experiment on individual choice under uncertainty in which the subject observes an urn, containing yellow and blue balls, whose composition is partially revealed. The subject has to assess the composition of the urn and form an initial belief. He then receives a signal that restricts the set of possible urns from which the initially observed sample could have been drawn, and once again has to estimate the composition of the urn. Our results show that, on the whole, this type of signal increases the frequency of correct assessments. However, differences appear between validating and invalidating signals (i.e. signals that either confirm or disprove the initial belief). The latter significantly increase the probability of making a correct assessment, whereas validating signals reduce the frequency of correct estimations. We find evidence of a lack of persistence in choice under uncertainty. The literature shows that people may persist with their choice even when they are wrong; we show that they may also change even when they are right.
Keywords: Beliefs; Imperfect Information; Experiment
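The updating task in this experiment can be sketched in code. Everything below is a hypothetical instance (three candidate urns, a four-ball sample, a signal ruling out one urn), not the authors' actual design: the initial belief is a Bayesian posterior over urns given the observed sample, and the restricting signal simply conditions that belief on the surviving set of urns.

```python
from fractions import Fraction

# Hypothetical setup: three candidate urns (states of the world), each with a
# different proportion of yellow balls.
urns = {"A": Fraction(1, 4), "B": Fraction(1, 2), "C": Fraction(3, 4)}  # P(yellow)
prior = {u: Fraction(1, 3) for u in urns}  # uniform prior over urns

def posterior_after_sample(prior, urns, yellows, blues):
    """Bayes update of the belief over urns given an observed sample of balls."""
    like = {u: p**yellows * (1 - p)**blues for u, p in urns.items()}
    z = sum(prior[u] * like[u] for u in urns)
    return {u: prior[u] * like[u] / z for u in urns}

def restrict(belief, allowed):
    """Signal that restricts the possible urns: condition the belief on `allowed`."""
    z = sum(belief[u] for u in allowed)
    return {u: (belief[u] / z if u in allowed else Fraction(0)) for u in belief}

b1 = posterior_after_sample(prior, urns, yellows=3, blues=1)  # initial belief
b2 = restrict(b1, allowed={"B", "C"})  # signal rules out urn A
```

After the sample of 3 yellows and 1 blue, urn C is the most likely; the signal then renormalises the belief over the remaining urns.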
A Knowledge Gradient Policy for Sequencing Experiments to Identify the Structure of RNA Molecules Using a Sparse Additive Belief Model
We present a sparse knowledge gradient (SpKG) algorithm for adaptively
selecting the targeted regions within a large RNA molecule to identify which
regions are most amenable to interactions with other molecules. Experimentally,
such regions can be inferred from fluorescence measurements obtained by binding
a complementary probe with fluorescence markers to the targeted regions. We use
a biophysical model which shows that the fluorescence ratio under the log scale
has a sparse linear relationship with the coefficients describing the
accessibility of each nucleotide, since not all sites are accessible (due to
the folding of the molecule). The SpKG algorithm uniquely combines the Bayesian
ranking and selection problem with the frequentist regularized regression
approach, the Lasso. We use this algorithm to identify the sparsity pattern of
the linear model as well as to sequentially decide the best regions to test
before the experimental budget is exhausted. We also develop two other new
algorithms: the batch SpKG algorithm, which generates several suggestions at a
time so that experiments can be run in parallel; and batch SpKG with a
procedure we call length mutagenesis, which dynamically adds new alternatives,
in the form of new probe types, created by inserting, deleting or mutating
nucleotides within existing probes. In simulation, we demonstrate these
algorithms on the Group I intron (a mid-size RNA molecule), showing that they
efficiently learn the correct sparsity pattern, identify the most accessible
region, and outperform several other policies.
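The Lasso step that recovers the sparsity pattern can be illustrated on its own. This is a generic proximal-gradient (ISTA) solver on synthetic data, not the SpKG algorithm itself; the design matrix, penalty and support threshold are all illustrative choices.

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=500):
    """Minimise 0.5*||y - X w||^2 + lam*||w||_1 by ISTA (proximal gradient)."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1/L, L = Lipschitz constant
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = w - step * (X.T @ (X @ w - y))          # gradient step (smooth part)
        w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10))
w_true = np.zeros(10)
w_true[[1, 4]] = [2.0, -1.5]            # only two "accessible" sites
y = X @ w_true + 0.05 * rng.normal(size=60)
w_hat = lasso_ista(X, y, lam=2.0)
support = set(np.flatnonzero(np.abs(w_hat) > 0.1))  # recovered sparsity pattern
```

With enough well-spread measurements, the recovered support matches the truly nonzero coefficients, which is the role the Lasso plays inside SpKG.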
Fast calibrated additive quantile regression
We propose a novel framework for fitting additive quantile regression models,
which provides well calibrated inference about the conditional quantiles and
fast automatic estimation of the smoothing parameters, for model structures as
diverse as those usable with distributional GAMs, while maintaining equivalent
numerical efficiency and stability. The proposed methods are at once
statistically rigorous and computationally efficient, because they are based on
the general belief updating framework of Bissiri et al. (2016) for loss-based
inference, and compute by adapting the stable fitting methods of Wood et al.
(2016). We show how the pinball loss is statistically suboptimal relative to a
novel smooth generalisation, which also gives access to fast estimation
methods. Further, we provide a novel calibration method for efficiently
selecting the 'learning rate' balancing the loss with the smoothing priors
during inference, thereby obtaining reliable quantile uncertainty estimates.
Our work was motivated by a probabilistic electricity load forecasting
application, used here to demonstrate the proposed approach. The methods
described here are implemented in the qgam R package, available on the
Comprehensive R Archive Network (CRAN).
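The contrast between the pinball loss and a smooth generalisation can be sketched as follows. The logistic smoothing below is only an illustrative stand-in (qgam's actual loss differs), but it shows the key property: the smooth loss recovers the pinball loss as the smoothing parameter shrinks, while remaining differentiable so gradient-based fitting applies.

```python
import numpy as np

def pinball(z, tau):
    """Pinball (check) loss: rho_tau(z) = z * (tau - 1{z < 0})."""
    return z * (tau - (z < 0))

def smooth_pinball(z, tau, lam):
    """Logistic smoothing of the pinball loss (illustrative, not qgam's loss);
    recovers the pinball loss as lam -> 0."""
    return tau * z + lam * np.log1p(np.exp(-np.asarray(z) / lam))

# Estimate the 0.9 quantile of a sample by gradient descent on the smooth loss.
rng = np.random.default_rng(1)
y = rng.normal(size=2000)
tau, lam, q = 0.9, 0.05, 0.0
for _ in range(3000):
    # gradient of mean smooth_pinball(y - q, tau, lam) with respect to q
    grad = np.mean(1.0 / (1.0 + np.exp((y - q) / lam))) - tau
    q -= 0.5 * grad
```

At the optimum the smoothed fraction of observations below q equals tau, so q sits close to the empirical 0.9 quantile; the smoothing width lam trades a small bias for differentiability.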
Information Aggregation in Exponential Family Markets
We consider the design of prediction market mechanisms known as automated
market makers. We show that we can design these mechanisms via the mold of
\emph{exponential family distributions}, a popular and well-studied probability
distribution template used in statistics. We give a full development of this
relationship and explore a range of benefits. We draw connections between the
information aggregation of market prices and the belief aggregation of learning
agents that rely on exponential family distributions. We develop a very natural
analysis of the market behavior as well as the price equilibrium under the
assumption that the traders exhibit risk aversion according to exponential
utility. We also consider similar aspects under alternative models, such as
when traders are budget-constrained.
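The categorical case makes the exponential-family connection concrete: the logarithmic market scoring rule (LMSR) cost function is exactly the log-partition function of a categorical exponential family, and its gradient, the price vector, is a probability distribution representing the market's aggregated belief. A minimal sketch (the liquidity parameter b and trade sizes are illustrative):

```python
import numpy as np

def cost(q, b=10.0):
    """LMSR cost function: the log-partition function of a categorical
    exponential family with natural parameters q/b, scaled by b."""
    return b * np.log(np.sum(np.exp(q / b)))

def prices(q, b=10.0):
    """Instantaneous prices = gradient of the cost = softmax(q/b):
    a probability vector, i.e. the market's aggregated belief."""
    e = np.exp(q / b - np.max(q / b))  # stabilised softmax
    return e / e.sum()

q = np.zeros(3)
p0 = prices(q)                                     # uniform belief before trading
pay = cost(q + np.array([5.0, 0, 0])) - cost(q)    # cost of 5 shares of outcome 0
q = q + np.array([5.0, 0, 0])
p1 = prices(q)                                     # belief shifts toward outcome 0
```

A trade's cost is the difference in the potential `cost(q)`, and buying shares of an outcome raises its price, which is how trading aggregates information into the price vector.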
Policy Recognition in the Abstract Hidden Markov Model
In this paper, we present a method for recognising an agent's behaviour in
dynamic, noisy, uncertain domains, and across multiple levels of abstraction.
We term this problem on-line plan recognition under uncertainty and view it
generally as probabilistic inference on the stochastic process representing the
execution of the agent's plan. Our contributions in this paper are twofold. In
terms of probabilistic inference, we introduce the Abstract Hidden Markov Model
(AHMM), a novel type of stochastic process, provide its dynamic Bayesian
network (DBN) structure and analyse the properties of this network. We then
describe an application of the Rao-Blackwellised Particle Filter to the AHMM
which allows us to construct an efficient, hybrid inference method for this
model. In terms of plan recognition, we propose a novel plan recognition
framework based on the AHMM as the plan execution model. The Rao-Blackwellised
hybrid inference for AHMM can take advantage of the independence properties
inherent in a model of plan execution, leading to an algorithm for online
probabilistic plan recognition that scales well with the number of levels in
the plan hierarchy. This illustrates that while stochastic models for plan
execution can be complex, they exhibit special structures which, if exploited,
can lead to efficient plan recognition algorithms. We demonstrate the
usefulness of the AHMM framework via a behaviour recognition system in a
complex spatial environment using distributed video surveillance data.
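A minimal sketch of the Rao-Blackwellised idea on a hypothetical two-level model (not the AHMM itself): particles sample only the high-level policy, while the low-level chain is marginalised exactly with the HMM forward recursion, so sampling noise enters at the top level only. All model parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical two-level model: a hidden "policy" selects which transition
# matrix drives a low-level Markov chain, observed through a noisy channel.
T = [np.array([[0.9, 0.1], [0.1, 0.9]]),   # policy 0: sticky chain
     np.array([[0.2, 0.8], [0.8, 0.2]])]   # policy 1: switchy chain
O = np.array([[0.8, 0.2], [0.2, 0.8]])     # O[s, y] = P(obs y | state s)

def simulate(policy, n):
    s, ys = 0, []
    for _ in range(n):
        s = rng.choice(2, p=T[policy][s])
        ys.append(rng.choice(2, p=O[s]))
    return ys

def rbpf(ys, n_particles=200):
    """Rao-Blackwellised filter: sample the high-level policy per particle,
    marginalise the low-level chain exactly via the HMM forward recursion."""
    policies = rng.choice(2, size=n_particles)   # each particle = a policy draw
    alphas = np.full((n_particles, 2), 0.5)      # exact P(s_t | policy, y_1:t)
    logw = np.zeros(n_particles)
    for y in ys:
        for i in range(n_particles):
            pred = alphas[i] @ T[policies[i]]    # predict low-level state
            post = pred * O[:, y]                # weight by the observation
            logw[i] += np.log(post.sum())        # particle likelihood update
            alphas[i] = post / post.sum()
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return np.array([w[policies == k].sum() for k in (0, 1)])

ys = simulate(policy=1, n=150)
belief = rbpf(ys)   # posterior over the high-level policy
```

Because the low-level state is integrated out analytically, far fewer particles are needed than in a plain particle filter over the joint state, which is the source of the scaling benefit described above.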