
    On Surprise, Change, and the Effect of Recent Outcomes

    The leading models of human and animal learning rest on the assumption that individuals tend to select the alternatives that led to the best recent outcomes. The current research highlights three boundaries of this “recency” assumption. Analysis of the stock market and simple laboratory experiments suggests that positively surprising obtained payoffs and negatively surprising forgone payoffs reduce the rate of repeating the previous choice. In addition, all previous trial outcomes, except the most recent one, have a similar effect on future choices. We show that these results, and other robust properties of decisions from experience, can be captured with a simple addition to the leading models: the assumption that surprise triggers change.
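
    The “surprise triggers change” idea can be illustrated with a minimal simulation sketch. Everything below is a hypothetical simplification for illustration, not the authors' estimated model: the function names, the form of the surprise term, and the parameter values are assumptions. The agent usually repeats its last choice, but an outcome that deviates sharply from its running payoff expectation raises the probability of switching.

    import random

    def simulate_surprise_triggers_change(payoffs, n_trials=200,
                                          base_repeat=0.9, sensitivity=0.5):
        """Toy multi-alternative choice simulation in which surprise
        (a large gap between an outcome and its running expectation)
        lowers the probability of repeating the previous choice.

        payoffs: dict mapping alternative -> zero-argument payoff sampler.
        """
        expectation = {a: 0.0 for a in payoffs}  # running mean payoff
        counts = {a: 0 for a in payoffs}
        choice = random.choice(list(payoffs))
        history = []
        for _ in range(n_trials):
            outcome = payoffs[choice]()
            # Larger surprise -> lower probability of repeating.
            surprise = abs(outcome - expectation[choice])
            p_repeat = base_repeat / (1.0 + sensitivity * surprise)
            # Update the running mean for the chosen alternative.
            counts[choice] += 1
            expectation[choice] += (outcome - expectation[choice]) / counts[choice]
            history.append((choice, outcome))
            if random.random() > p_repeat:
                choice = random.choice([a for a in payoffs if a != choice])
        return history

    # Example: a safe alternative vs. a risky one with rare large losses.
    runs = simulate_surprise_triggers_change({
        "safe": lambda: 1.0,
        "risky": lambda: 2.0 if random.random() < 0.9 else -10.0,
    })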

    Under-Diversification and the Role of Best Reply to Pattern

    Three experiments are presented that compare alternative explanations for the coexistence of risk aversion and under-diversification in investment decisions. The participants were asked to select one of several assets under two feedback conditions. In each case, one asset was a weighted combination of the other assets, allowing for lower volatility. The frequency with which the composite asset was chosen was highly sensitive to the feedback condition: the composite asset was the least popular asset when the feedback included information concerning forgone payoffs, and it was chosen more frequently when the feedback was limited to the obtained payoff. These results support the assertion that under-diversification can be a product of learning from feedback, and in particular of best reply to pattern.
    Keywords: Risk; Diversification; Learning

    Self-tuning experience weighted attraction learning in games

    Self-tuning experience weighted attraction (EWA) is a one-parameter theory of learning in games. It addresses the criticism that an earlier model (EWA) has too many parameters by fixing some parameters at plausible values and replacing others with functions of experience, so that they no longer need to be estimated. Consequently, it is econometrically simpler than the popular weighted fictitious play and reinforcement learning models. The functions of experience that replace the free parameters “self-tune” over time, adjusting in a way that selects a sensible learning rule to capture subjects’ choice dynamics. For instance, the self-tuning EWA model can shift from weighted fictitious play to averaging reinforcement learning as subjects equilibrate and learn to ignore inferior forgone payoffs. The theory was tested on seven different games and compared to the earlier parametric EWA model and to a one-parameter stochastic equilibrium theory (QRE). Self-tuning EWA does as well as EWA in predicting behavior in new games, even though it has fewer parameters, and fits reliably better than the QRE equilibrium benchmark.
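
    The flavor of the update can be conveyed with a minimal single-player sketch. This is an assumption-laden illustration, not the published specification: in particular, the change-detector phi and the attention weights below are simple stand-ins for the paper's self-tuning functions.

    import math

    def self_tuning_ewa_step(attractions, n_obs, chosen, payoffs, lam=1.0):
        """One simplified self-tuning-EWA-style update.

        attractions: current attraction A_j per strategy.
        n_obs: experience weight N(t-1).
        chosen: index of the strategy actually played.
        payoffs: realized/forgone payoff pi_j of each strategy against
                 the opponents' actual play this round.
        """
        actual = payoffs[chosen]
        # Stand-in change detector: decay more (lower phi) when this
        # round's payoffs deviate from the current attractions.
        surprise = sum(abs(p - a) for p, a in zip(payoffs, attractions)) / len(payoffs)
        phi = 1.0 / (1.0 + surprise)
        new_n = phi * n_obs + 1.0
        new_attractions = []
        for j, (a, pi) in enumerate(zip(attractions, payoffs)):
            # Attention weight: count a forgone payoff only when it was
            # at least as good as the payoff actually received.
            weight = 1.0 if j == chosen else (1.0 if pi >= actual else 0.0)
            new_attractions.append((phi * n_obs * a + weight * pi) / new_n)
        # Logit choice probabilities from the updated attractions.
        exps = [math.exp(lam * a) for a in new_attractions]
        total = sum(exps)
        return new_attractions, new_n, [e / total for e in exps]

    When the attention weights of all unchosen strategies go to zero, the update reduces to averaging reinforcement learning; when they are all one, it behaves like weighted fictitious play, which is the switching behavior described above.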

    Learning and Communication in Sender-Receiver Games: An Econometric Investigation

    Learning and communication play important roles in coordinating activities. Game theory and experiments have contributed significantly to our understanding of the issues surrounding learning and communication in coordination. However, past experimental studies provide conflicting results about the performance of learning models. Moreover, the interaction between learning and communication has not been systematically investigated. Our long-run objective is to resolve these conflicting results and to provide a better understanding of the interaction. To this end, we econometrically investigate a sender-receiver game environment in which communication is necessary for coordination and learning is essential for communication.

    Learning and Communication in Sender-Receiver Games: An Econometric Investigation

    This paper compares the performance of stimulus response (SR) and belief-based learning (BBL) models using data from game theory experiments. The environment, extensive form games played in a population setting, is novel in the empirical literature on learning in games. Both the SR and BBL models fit the data reasonably well in common interest games with history, while the test results accept SR and reject BBL in games with no history and in all but one of the divergent interest games. Estimation is challenging since the likelihood function is not globally concave, and the results may be subject to convergence bias.
    Keywords: econometrics; game theory and experiments
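
    The two model families can be contrasted with a schematic sketch. This is illustrative only; the paper's econometric specification, including its error structure and the extensive form population setting, is not reproduced here. Stimulus response learning reinforces only the action actually taken, while belief-based learning updates beliefs about the opponent's play and evaluates every action against those beliefs.

    import math

    def sr_update(attractions, action, payoff, decay=0.9):
        """Stimulus-response update: only the chosen action's
        attraction moves, driven by the realized payoff."""
        new = [decay * a for a in attractions]
        new[action] += payoff
        return new

    def bbl_update(opponent_counts, opponent_action, payoff_matrix, decay=0.9):
        """Belief-based update: keep (decayed) counts of the opponent's
        actions, then score every own action by its expected payoff
        against the implied empirical belief."""
        counts = [decay * c for c in opponent_counts]
        counts[opponent_action] += 1.0
        total = sum(counts)
        beliefs = [c / total for c in counts]
        expected = [sum(b * payoff_matrix[i][j] for j, b in enumerate(beliefs))
                    for i in range(len(payoff_matrix))]
        return counts, expected

    def logit_choice(values, lam=1.0):
        """Map attractions or expected payoffs to choice probabilities."""
        exps = [math.exp(lam * v) for v in values]
        total = sum(exps)
        return [e / total for e in exps]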

    On the descriptive value of loss aversion in decisions under risk: Six clarifications

    Previous studies of loss aversion in decisions under risk have led to mixed results. Losses appear to loom larger than gains in some settings, but not in others. The current paper clarifies these results by highlighting six experimental manipulations that tend to increase the likelihood of the behavior predicted by loss aversion. These manipulations include: (1) framing the safe alternative as the status quo; (2) ensuring that the choice pattern predicted by loss aversion maximizes the probability of positive (rather than zero or negative) outcomes; (3) the use of high nominal (numerical) payoffs; (4) the use of high stakes; (5) the inclusion of highly attractive risky prospects, which creates a contrast effect; and (6) the use of long experiments in which no feedback is provided and in which the computation of the expected values is difficult. In addition, the results suggest the possibility of learning in the absence of feedback: the tendency to select simple strategies, like “maximize the worst outcome,” which implies “loss aversion,” increases when this behavior is not costly. Theoretical and practical implications are discussed.

    Comparing policy gradient and value function based reinforcement learning methods in simulated electrical power trade

    In electrical power engineering, reinforcement learning algorithms can be used to model the strategies of electricity market participants. However, traditional value function based reinforcement learning algorithms suffer from convergence issues when used with value function approximators. Function approximation is required in this domain to capture the characteristics of the complex and continuous multivariate problem space. The contribution of this paper is a comparison of policy gradient reinforcement learning methods, using artificial neural networks for policy function approximation, with traditional value function based methods in simulations of electricity trade. The methods are compared using an AC optimal power flow based power exchange auction market model and a reference electric power system model.
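
    The contrast between the two method families can be sketched with a REINFORCE-style policy gradient update for a softmax policy that is linear in state features (the simplest neural-style approximator). This is a generic illustration under assumed interfaces, not the paper's market model: the env object and all hyperparameters are hypothetical. A value function based method such as Q-learning would instead fit an action-value function and act greedily on it.

    import numpy as np

    def reinforce_episode(env, theta, lr=0.01, gamma=0.99):
        """One REINFORCE update. Assumes env.reset() -> state and
        env.step(action) -> (state, reward, done); theta has shape
        (n_actions, n_features)."""
        def policy(s):
            logits = theta @ s
            logits -= logits.max()              # numerical stability
            probs = np.exp(logits)
            return probs / probs.sum()

        states, actions, rewards = [], [], []
        s, done = env.reset(), False
        while not done:
            probs = policy(s)
            a = np.random.choice(len(probs), p=probs)
            s_next, r, done = env.step(a)
            states.append(s); actions.append(a); rewards.append(r)
            s = s_next

        # Discounted return G_t from each step to the episode's end.
        G, returns = 0.0, []
        for r in reversed(rewards):
            G = r + gamma * G
            returns.append(G)
        returns.reverse()

        # Gradient ascent on expected return:
        # grad log pi(a|s) = (one_hot(a) - probs) outer s.
        for s, a, G in zip(states, actions, returns):
            probs = policy(s)
            grad = -np.outer(probs, s)
            grad[a] += s
            theta += lr * G * grad
        return theta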