2,528 research outputs found

    Mostly Harmless Simulations? Using Monte Carlo Studies for Estimator Selection

    We consider two recent suggestions for how to perform an empirically motivated Monte Carlo study to help select a treatment effect estimator under unconfoundedness. We show theoretically that neither is likely to be informative except under restrictive conditions that are unlikely to be satisfied in many contexts. To test empirical relevance, we also apply the approaches to a real-world setting where estimator performance is known. Both approaches are worse than random at selecting estimators that minimise absolute bias. They are better when selecting estimators that minimise mean squared error. However, using a simple bootstrap is at least as good and often better. For now, researchers would be best advised to use a range of estimators and compare estimates for robustness.
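
    To make the exercise concrete, the following is a minimal Python sketch of a Monte Carlo comparison of treatment-effect estimators by absolute bias and mean squared error. The data-generating process and the two estimators (a naive difference in means and a regression adjustment) are illustrative placeholders, not the designs or empirically motivated procedures studied in the paper.

```python
# Minimal sketch of a Monte Carlo comparison of treatment-effect estimators.
# The data-generating process and the two estimators below are illustrative
# placeholders, not the designs or estimators studied in the paper.
import numpy as np

rng = np.random.default_rng(0)
TRUE_ATE = 1.0

def simulate(n=500):
    x = rng.normal(size=n)                       # confounder
    p = 1 / (1 + np.exp(-x))                     # treatment depends on x
    d = rng.binomial(1, p)
    y = TRUE_ATE * d + 2.0 * x + rng.normal(size=n)
    return y, d, x

def diff_in_means(y, d, x):
    return y[d == 1].mean() - y[d == 0].mean()   # ignores confounding

def regression_adjustment(y, d, x):
    # OLS of y on (1, d, x); the coefficient on d estimates the ATE
    X = np.column_stack([np.ones_like(y), d, x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

estimators = {"diff_in_means": diff_in_means, "regression": regression_adjustment}
draws = {name: [] for name in estimators}
for _ in range(1000):                            # Monte Carlo replications
    y, d, x = simulate()
    for name, est in estimators.items():
        draws[name].append(est(y, d, x))

for name, vals in draws.items():
    vals = np.array(vals)
    bias = vals.mean() - TRUE_ATE
    mse = ((vals - TRUE_ATE) ** 2).mean()
    print(f"{name:15s}  |bias|={abs(bias):.3f}  MSE={mse:.3f}")
```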

    Econometrics for Learning Agents

    The main goal of this paper is to develop a theory of inference of player valuations from observed data in the generalized second price auction without relying on the Nash equilibrium assumption. Existing work in economics on inferring agent values from data relies on the assumption that all participant strategies are best responses to the observed play of other players, i.e., that they constitute a Nash equilibrium. In this paper, we show how to perform inference under a weaker assumption instead: that players are using some form of no-regret learning. Learning outcomes have emerged in recent years as an attractive alternative to Nash equilibrium in analyzing game outcomes, modeling players who have not reached a stable equilibrium but rather use algorithmic learning, aiming to learn the best way to play from previous observations. In this paper we show how to infer values of players who use algorithmic learning strategies. Such inference is an important first step before testing any learning-theoretic behavioral model on auction data. We apply our techniques to a dataset from Microsoft's sponsored search ad auction system.
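
    The Python sketch below illustrates the underlying idea in a deliberately simplified setting: a repeated single-slot second-price auction rather than the full generalized second price auction, with a grid search over candidate values. The inference rule shown (pick the value under which the observed bid sequence has the smallest regret against the best fixed bid) is a stylized stand-in for the paper's construction, and all data are synthetic.

```python
# Minimal sketch of value inference from no-regret play, heavily simplified:
# a single-slot second-price auction and a grid search over candidate values.
# All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(1)
T = 2000
competing = rng.uniform(0, 10, size=T)          # highest competing bid each round
true_value = 6.0
bids = np.clip(true_value + rng.normal(0, 0.5, size=T), 0, None)  # noisy, roughly truthful play

def avg_regret(value, bids, competing, grid):
    """Average regret of the observed bid sequence for a candidate value.

    Regret is measured against the best fixed bid from `grid`, with
    second-price payoffs: win iff bid > competing bid, pay the competing bid.
    """
    realized = np.where(bids > competing, value - competing, 0.0).mean()
    best_fixed = max(
        np.where(b > competing, value - competing, 0.0).mean() for b in grid
    )
    return best_fixed - realized

grid = np.linspace(0, 10, 101)
candidates = np.linspace(0, 10, 101)
regrets = np.array([avg_regret(v, bids, competing, grid) for v in candidates])
# Values under which the observed play has the lowest regret are the most
# plausible if the bidder is using some no-regret learning rule.
print("value minimizing average regret:", candidates[regrets.argmin()])
```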

    Simple versus optimal rules as guides to policy

    This paper contributes to the policy evaluation literature by developing new strategies to study alternative policy rules. We compare optimal rules to simple rules within canonical monetary policy models. In our context, an optimal rule represents the solution to an intertemporal optimization problem in which a loss function for the policymaker and an explicit model of the macroeconomy are specified. We define a simple rule to be a summary of the intuition policymakers and economists have about how a central bank should react to aggregate disturbances. The policy rules are evaluated under minimax and minimax regret criteria. These criteria force the policymaker to guard against a worst-case scenario, but in different ways. Minimax makes the worst possible model the benchmark for the policymaker, while minimax regret confronts the policymaker with uncertainty about the true model. Our results indicate that the case for a model-specific optimal rule can break down when uncertainty exists about which of several models is true. Further, we show that the assumption that the policymaker’s loss function is known can obscure policy trade-offs that exist in the short, medium, and long run. Thus, policy evaluation is more difficult once it is recognized that model and preference uncertainty can interact.
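
    A toy numerical example helps fix the two criteria. In the Python sketch below, a policymaker chooses among three hypothetical rules given a made-up loss matrix over two candidate models: minimax picks the rule with the smallest worst-case loss, while minimax regret picks the rule with the smallest worst-case shortfall relative to the best rule for each model. The rules and numbers are invented for illustration and are not taken from the paper.

```python
# Toy illustration of choosing a policy rule from a loss matrix under minimax
# and under minimax regret. The rules, models, and losses are made up.
import numpy as np

# rows: candidate policy rules, columns: candidate models of the economy
rules = ["optimal_for_model_A", "optimal_for_model_B", "simple_rule"]
loss = np.array([
    [1.0, 9.0],   # tuned to model A: great there, poor under model B
    [8.0, 1.5],   # tuned to model B
    [3.0, 3.5],   # simple rule: decent under either model
])

# Minimax: guard against the worst loss across models.
worst_loss = loss.max(axis=1)
print("minimax choice:       ", rules[int(worst_loss.argmin())])

# Minimax regret: guard against the worst shortfall relative to the rule
# that would have been best had the true model been known.
regret = loss - loss.min(axis=0)     # column-wise regret
worst_regret = regret.max(axis=1)
print("minimax regret choice:", rules[int(worst_regret.argmin())])
```

    With these invented numbers both criteria favour the simple rule, echoing the abstract's point that the case for a model-specific optimal rule can break down once there is uncertainty about which model is true.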

    Measuring Precision of Statistical Inference on Partially Identified Parameters

    Planners of surveys and experiments that partially identify parameters of interest face trade-offs between using limited resources to reduce sampling error and using them to reduce the extent of partial identification. I evaluate these trade-offs in a simple statistical problem with normally distributed sample data and interval partial identification, using different frequentist measures of inference precision (length of confidence intervals, minimax mean squared error and mean absolute deviation, minimax regret for treatment choice) and analogous Bayes measures with a flat prior. The relative value of collecting data with better identification properties (e.g., increasing response rates in surveys) depends crucially on the choice of the measure of precision. When the extent of partial identification is significant in comparison to sampling error, the length of confidence intervals, which has been used most often, assigns the lowest value to improving identification among the measures considered.
    Keywords: statistical treatment choice; survey planning; nonresponse; mean squared error; mean absolute deviation; minimax regret
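
    The trade-off can be illustrated with a small Python calculation using one simple, conservative confidence interval for an interval-identified parameter (estimated bounds plus or minus 1.96 standard errors). This construction and the numbers below are illustrative assumptions rather than the exact measures compared in the paper.

```python
# Numeric sketch of the trade-off: with an interval-identified parameter, how
# does the expected length of a simple, conservative confidence interval
# respond to lowering sampling error versus narrowing the identification
# interval? The construction (bound estimates +/- 1.96 standard errors) is a
# textbook conservative one, not necessarily the paper's exact measure.
import numpy as np

def ci_length(width, sigma, n, z=1.96):
    """Expected length of [lower_hat - z*se, upper_hat + z*se].

    width : length of the identification interval (e.g., due to nonresponse)
    sigma : standard deviation of the underlying normal data
    n     : sample size
    """
    se = sigma / np.sqrt(n)
    return width + 2 * z * se

sigma = 1.0
baseline = ci_length(width=0.50, sigma=sigma, n=100)

# Option A: double the sample size (reduces sampling error only).
option_a = ci_length(width=0.50, sigma=sigma, n=200)
# Option B: halve nonresponse, shrinking the identification interval.
option_b = ci_length(width=0.25, sigma=sigma, n=100)

print(f"baseline length: {baseline:.3f}")
print(f"more data:       {option_a:.3f}")
print(f"better response: {option_b:.3f}")
# This single measure compares the two investments directly; the abstract's
# point is that other precision measures (minimax MSE, minimax regret, etc.)
# can value the same two investments quite differently.
```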

    A "Quantal Regret" Method for Structural Econometrics in Repeated Games

    We suggest a general method for inferring players' values from their actions in repeated games. The method extends and improves upon the recent suggestion of (Nekipelov et al., EC 2015) and is based on the assumption that players are more likely to exhibit sequences of actions that have lower regret. We evaluate this "quantal regret" method on two different datasets from experiments of repeated games with controlled player values: those of (Selten and Chmura, AER 2008) on a variety of two-player 2x2 games and our own experiment on ad-auctions (Noti et al., WWW 2014). We find that the quantal regret method is consistently and significantly more precise than either "classic" econometric methods based on Nash equilibria or the "min-regret" method of (Nekipelov et al., EC 2015).
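
    As a rough illustration of the contrast between the "min-regret" and "quantal regret" ideas, the Python sketch below reuses the simplified single-slot second-price setting from the earlier sketch: instead of selecting the single value that minimizes regret, candidate values are weighted by exp(-lambda * regret). The weighting scheme and the choice of lambda are assumptions made for illustration, not the paper's exact estimator.

```python
# Rough sketch of a "quantal regret"-style estimate versus a plain min-regret
# estimate in a simplified single-slot second-price auction. The exponential
# weighting and the value of lam are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
T = 2000
competing = rng.uniform(0, 10, size=T)          # highest competing bid each round
true_value = 6.0
bids = np.clip(true_value + rng.normal(0, 0.8, size=T), 0, None)

def avg_regret(value, bids, competing, grid):
    # Regret of the observed bids vs. the best fixed bid, second-price payoffs.
    realized = np.where(bids > competing, value - competing, 0.0).mean()
    best_fixed = max(
        np.where(b > competing, value - competing, 0.0).mean() for b in grid
    )
    return best_fixed - realized

grid = np.linspace(0, 10, 101)
candidates = np.linspace(0, 10, 201)
regrets = np.array([avg_regret(v, bids, competing, grid) for v in candidates])

lam = 50.0                                      # sharpness of the weighting (assumed)
weights = np.exp(-lam * regrets)
weights /= weights.sum()
print("min-regret estimate:    ", candidates[regrets.argmin()])
print("quantal-regret estimate:", (weights * candidates).sum())
```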