Parameter estimation in softmax decision-making models with linear objective functions
With an eye towards human-centered automation, we contribute to the
development of a systematic means to infer features of human decision-making
from behavioral data. Motivated by the common use of softmax selection in
models of human decision-making, we study the maximum likelihood parameter
estimation problem for softmax decision-making models with linear objective
functions. We present conditions under which the likelihood function is convex.
These allow us to provide sufficient conditions for convergence of the
resulting maximum likelihood estimator and to construct its asymptotic
distribution. In the case of models with nonlinear objective functions, we show
how the estimator can be applied by linearizing about a nominal parameter
value. We apply the estimator to fit the stochastic UCL (Upper Credible Limit)
model of human decision-making to human subject data. We show statistically
significant differences in behavior across related, but distinct, tasks.
On Universal Prediction and Bayesian Confirmation
The Bayesian framework is a well-studied and successful framework for
inductive reasoning, which includes hypothesis testing and confirmation,
parameter estimation, sequence prediction, classification, and regression. But
standard statistical guidelines for choosing the model class and prior are
not always available, or they fail, particularly in complex situations. Solomonoff
completed the Bayesian framework by providing a rigorous, unique, formal, and
universal choice for the model class and the prior. We discuss in breadth how
and in which sense universal (non-i.i.d.) sequence prediction solves various
(philosophical) problems of traditional Bayesian sequence prediction. We show
that Solomonoff's model possesses many desirable properties: it satisfies
strong total and weak instantaneous bounds; in contrast to most classical
continuous prior densities, it has no zero p(oste)rior problem, i.e. it can
confirm universal hypotheses; it is reparametrization and regrouping
invariant; and it avoids the old-evidence and updating problems. It even
performs well (in fact, better) in non-computable environments.
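Solomonoff's mixture itself is incomputable, but the mechanism behind the "zero p(oste)rior" point can be sketched with a toy, finite Bayes mixture (an illustrative stand-in, not Solomonoff's construction): give a discrete prior weight to a near-deterministic hypothesis, and Bayesian updating can confirm it, whereas a continuous prior density would assign it zero mass forever.

```python
import numpy as np

# Toy model class (assumed for illustration): Bernoulli(p) models on a grid,
# including a near-deterministic model p = 0.999 standing in for a "universal"
# hypothesis. Each model gets a discrete prior weight, so it can be confirmed.
ps = np.array([0.1, 0.3, 0.5, 0.7, 0.9, 0.999])
prior = np.full(len(ps), 1.0 / len(ps))

def predict(weights):
    """Mixture probability that the next bit is 1: xi(1|x) = sum_nu w_nu * nu(1)."""
    return float(weights @ ps)

def update(weights, bit):
    """Bayes update of posterior weights after observing one bit."""
    lik = ps if bit == 1 else (1.0 - ps)
    w = weights * lik
    return w / w.sum()

# Observe an all-ones sequence: posterior mass concentrates on the
# near-deterministic model, and the mixture's next-bit prediction tends to 1.
w = prior.copy()
for _ in range(50):
    w = update(w, 1)
```

The same concentration argument fails for a continuous prior density over p, which is the contrast the abstract draws.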