Gender effects for loss aversion: yes, no, maybe?
Gender effects in risk taking have attracted much attention from economists, and remain debated. Loss aversion—the stylized finding that a given loss carries substantially greater weight than a monetarily equivalent gain—is a fundamental driver of risk aversion. We deploy four definitions of loss aversion commonly used in the literature to investigate gender effects. Even though the definitions differ only in subtle ways, we find women to be more loss averse than men according to one definition, while another definition results in no gender differences, and the remaining two definitions point to women being less loss averse than men. Conceptually, these contradictory effects can be organized by systematic measurement error resulting from model mis-specifications relative to the true underlying decision process.
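How subtly different definitions can disagree is easy to illustrate. Assuming the standard prospect-theory value function v(x) = x^alpha for gains and v(x) = -lam * (-x)^beta for losses (the parameter values below are illustrative, not estimates from the paper), a loss-aversion coefficient defined at a unit stake and the same ratio defined at a larger stake already diverge whenever alpha differs from beta:

```python
def value(x, alpha=0.8, beta=0.9, lam=1.5):
    """Prospect-theory value function (illustrative parameters):
    v(x) = x**alpha for gains, -lam * (-x)**beta for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def loss_aversion_unit(v):
    """Kahneman-Tversky-style coefficient at a unit stake: -v(-1)/v(1)."""
    return -v(-1) / v(1)

def loss_aversion_at(v, x):
    """The same ratio evaluated at a stake of size x: -v(-x)/v(x)."""
    return -v(-x) / v(x)

print(loss_aversion_unit(value))    # 1.5 at the unit stake
print(loss_aversion_at(value, 10))  # ~1.89: larger, because alpha != beta
```

The same decision maker thus looks more or less loss averse depending purely on which measurement convention is applied, which is the kind of definition-driven divergence the abstract describes.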
Trieste: Efficiently Exploring The Depths of Black-box Functions with TensorFlow
We present Trieste, an open-source Python package for Bayesian optimization
and active learning benefiting from the scalability and efficiency of
TensorFlow. Our library enables the plug-and-play of popular TensorFlow-based
models within sequential decision-making loops, e.g. Gaussian processes from
GPflow or GPflux, or neural networks from Keras. This modular mindset is
central to the package and extends to our acquisition functions and the
internal dynamics of the decision-making loop, both of which can be tailored
and extended by researchers or engineers when tackling custom use cases.
Trieste is a research-friendly and production-ready toolkit backed by a
comprehensive test suite and extensive documentation, and is available at
https://github.com/secondmind-labs/trieste
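Trieste's actual API is documented at the repository above. As a purely illustrative sketch of the modular design the abstract describes (a pluggable surrogate model, a pluggable acquisition function, and an outer sequential decision-making loop), here is a toy Bayesian-optimization-style loop in plain Python. The nearest-neighbour "surrogate" and all names below are invented for this sketch and are not part of Trieste:

```python
import random

def toy_objective(x):
    """Hypothetical 1-D black-box function to minimize."""
    return (x - 0.3) ** 2

class NearestNeighbourModel:
    """Toy stand-in for a GP surrogate: predicts the value of the nearest
    observed point, with 'uncertainty' proportional to the distance to it."""
    def __init__(self):
        self.xs, self.ys = [], []
    def update(self, x, y):
        self.xs.append(x)
        self.ys.append(y)
    def predict(self, x):
        dist, y = min((abs(x - xi), yi) for xi, yi in zip(self.xs, self.ys))
        return y, dist  # (mean, uncertainty)

def lower_confidence_bound(model, x, kappa=1.0):
    """Toy acquisition function: smaller values are more promising."""
    mean, unc = model.predict(x)
    return mean - kappa * unc

def optimize(objective, model, acquisition, num_steps=30, seed=0):
    """Outer loop: evaluate, update the model, pick the next query point."""
    rng = random.Random(seed)
    for _ in range(3):  # small random initial design
        x = rng.random()
        model.update(x, objective(x))
    for _ in range(num_steps):
        candidates = [rng.random() for _ in range(100)]
        x = min(candidates, key=lambda c: acquisition(model, c))
        model.update(x, objective(x))
    y_best, x_best = min(zip(model.ys, model.xs))
    return x_best, y_best

x_best, y_best = optimize(toy_objective, NearestNeighbourModel(),
                          lower_confidence_bound)
print(x_best, y_best)  # close to the minimizer 0.3
```

Because the model, the acquisition function, and the loop are separate plug-in components, any one of them can be swapped without touching the others; that is the mindset the abstract attributes to Trieste itself, where the surrogate would be a GPflow, GPflux, or Keras model.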
Project files accompanying the paper "An empirical evaluation of active inference in multi-armed bandits"
Preprint can be found at: https://arxiv.org/abs/2101.0869
The collective dynamics of sequential search in markets for cultural products
Not everything looks like a nail: Learning to select appropriate decision strategies in multiple environments
How do people choose which decision strategy to use? When facing single
tasks, research shows that people can learn to select appropriate
strategies. However, what happens when, as is typical outside the
psychological laboratory, they face multiple tasks? Participants were
presented with two interleaved decision tasks, one from a nonlinear
environment, the other from a linear environment. The environments were
initially unknown and participants had to learn their properties. Through
cognitive modeling, we examined the types of strategies adopted in both
tasks. Based on out-of-sample predictions, most participants adopted a
cue-based strategy in the linear environment and an exemplar-based strategy
in the nonlinear environment. A context-sensitive reinforcement learning
model accounts for this process. Thus, people associated different
strategies with different types of environments through a trial-and-error
process, and learned to flexibly switch between the strategies as
needed. This evidence further supports the strategy selection approach to
decision making, which assumes that people pick and apply the strategies
available to them according to task demands.
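The trial-and-error association of strategies with environments can be sketched as a small context-sensitive reinforcement learner. Everything below (the payoff probabilities, the parameter values, the epsilon-greedy rule) is invented for illustration; the paper's model is fit to participants' choices rather than simulated like this:

```python
import random

# Hypothetical accuracy of each strategy in each environment:
# a cue-based strategy suits the linear environment, an
# exemplar-based strategy suits the nonlinear one.
ACCURACY = {
    ("linear", "cue"): 0.90, ("linear", "exemplar"): 0.60,
    ("nonlinear", "cue"): 0.55, ("nonlinear", "exemplar"): 0.85,
}

def simulate(trials=4000, alpha=0.05, epsilon=0.1, seed=1):
    """Learn a Q-value for each (environment, strategy) pair from
    trial-by-trial reward feedback on interleaved tasks."""
    rng = random.Random(seed)
    q = {key: 0.5 for key in ACCURACY}
    for _ in range(trials):
        context = rng.choice(["linear", "nonlinear"])  # interleaved tasks
        if rng.random() < epsilon:                     # occasional exploration
            strategy = rng.choice(["cue", "exemplar"])
        else:                                          # pick the better-valued strategy
            strategy = max(["cue", "exemplar"], key=lambda s: q[(context, s)])
        reward = 1.0 if rng.random() < ACCURACY[(context, strategy)] else 0.0
        q[(context, strategy)] += alpha * (reward - q[(context, strategy)])
    return q

q = simulate()
print(q)  # higher value for cue in linear, exemplar in nonlinear
```

After enough trials the learner's values separate by context, so it flexibly switches strategies when the task switches, mirroring the qualitative pattern the abstract reports.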
No adaptive strategy selection without outcome feedback
This work presents results from a project that aimed to replicate the findings of the Dieckmann and Rieskamp (2007) article on strategy selection in decision making, and to point out some crucial flaws in its experimental design. It was presented at CogSci 2013 and SPUDM 2013.
Human behavior in contextual multi-armed bandit problems
In real-life decision environments people learn from their direct
experience with alternative courses of action. Yet they
can accelerate their learning by using functional knowledge
about the features characterizing the alternatives. We designed
a novel contextual multi-armed bandit task where decision
makers chose repeatedly between multiple alternatives characterized
by two informative features. We compared human
behavior in this contextual task with a classic multi-armed
bandit task without feature information. Behavioral analysis
showed that participants in the contextual bandit task used the
feature information to direct their exploration of promising
alternatives. Ex post, we tested participants’ acquired functional
knowledge in one-shot multi-feature choice trilemmas.
We compared a novel function-learning-based reinforcement
learning model to a classic reinforcement learning model. Although
the classic model predicted behavior better in the learning phase,
the new model did better at predicting the trilemma choices.
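A minimal sketch of the contextual side of this design: an agent learns a reward function over the two arm features (assumed linear here) while exploring epsilon-greedily, and then transfers the learned function to a one-shot choice among novel alternatives, the analogue of the trilemma test. The true weights, payoff structure, and learning rule below are all invented for illustration and are unrelated to the paper's actual task:

```python
import random

TRUE_W = (1.0, -0.5)  # hypothetical weights of the two informative features

def reward(features, rng):
    """Noisy reward, assumed linear in the two features."""
    return sum(w * f for w, f in zip(TRUE_W, features)) + rng.gauss(0, 0.1)

def learn_weights(arms, trials, rng, lr=0.1, epsilon=0.2):
    """Epsilon-greedy contextual bandit with an online delta-rule update."""
    w = [0.0, 0.0]
    predict = lambda x: w[0] * x[0] + w[1] * x[1]
    for _ in range(trials):
        if rng.random() < epsilon:
            arm = rng.choice(arms)        # explore
        else:
            arm = max(arms, key=predict)  # exploit the current estimate
        err = reward(arm, rng) - predict(arm)
        w[0] += lr * err * arm[0]         # least-mean-squares update
        w[1] += lr * err * arm[1]
    return w

rng = random.Random(3)
arms = [(rng.random(), rng.random()) for _ in range(8)]  # two features per arm
w = learn_weights(arms, 1000, rng)

# One-shot transfer: rank three novel feature bundles with the learned weights.
novel = [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)]
choice = max(novel, key=lambda x: w[0] * x[0] + w[1] * x[1])
print(choice)  # the bundle favoured by the learned function
```

A classic bandit learner that only tracks per-arm means has no basis for ranking never-before-seen alternatives, which is why function knowledge matters for the one-shot transfer test described above.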