
    Identification region of the potential outcome distributions under instrument independence

    This paper examines the identifying power of the instrument exogeneity assumption in the treatment effect model. We derive the identification region: the set of potential outcome distributions that are compatible with the data and the model restriction. The model restrictions whose identifying power is investigated are (i) instrument independence of each of the potential outcomes (marginal independence), (ii) instrument joint independence of the potential outcomes and the selection heterogeneity, and (iii) instrument monotonicity in addition to (ii) (the LATE restriction of Imbens and Angrist (1994)); these restrictions become stronger in the order listed. By comparing the size of the identification region under each restriction, we show that the joint independence restriction can provide more identifying information for the potential outcome distributions than marginal independence, but the LATE restriction never does, since it solely constrains the distribution of the data. We also derive the tightest possible bounds for the average treatment effects under each restriction. Our analysis covers both the discrete and continuous outcome cases, and extends the treatment effect bounds of Balke and Pearl (1997), which are available only for the binary outcome case, to a wider range of settings including the continuous outcome case.
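The bounds under marginal independence can be illustrated numerically. The sketch below computes Manski-type intersection bounds on the average treatment effect for a binary outcome, treatment, and instrument on simulated data; it is a simplified stand-in under the marginal independence restriction, not the paper's derivation of the tightest bounds.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Simulated data: binary instrument Z, treatment D, outcome Y (hypothetical DGP).
z = rng.integers(0, 2, n)
d = (rng.random(n) < 0.3 + 0.4 * z).astype(int)   # Z shifts take-up
y = (rng.random(n) < 0.4 + 0.3 * d).astype(int)   # D shifts the outcome

def bounds_mean_potential(y, d, z, treat):
    """Bounds on E[Y(treat)] for binary Y: within each instrument value the
    unobserved arm lies in [0, 1], and marginal independence lets us
    intersect the resulting bounds across instrument values."""
    lo, hi = 0.0, 1.0
    for zv in np.unique(z):
        m = z == zv
        observed = np.mean(y[m] * (d[m] == treat))   # E[Y 1{D=treat} | Z=zv]
        missing = np.mean(d[m] != treat)             # P(D != treat | Z=zv)
        lo = max(lo, observed)                       # unobserved arm >= 0
        hi = min(hi, observed + missing)             # unobserved arm <= 1
    return lo, hi

l1, u1 = bounds_mean_potential(y, d, z, 1)
l0, u0 = bounds_mean_potential(y, d, z, 0)
ate_lo, ate_hi = l1 - u0, u1 - l0
print(f"ATE bounds: [{ate_lo:.3f}, {ate_hi:.3f}]")
```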

    Instrumental Variables Before and LATEr

    The modern formulation of instrumental variable methods initiated valuable interactions between the economics and statistics literatures on causal inference and fueled new innovations on the idea. It helped resolve the long-standing confusion that statisticians had about the method, and encouraged economists to rethink how to make use of instrumental variables in policy analysis. Published in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org), DOI: http://dx.doi.org/10.1214/14-STS494. [arXiv:1410.0163]

    Mostly Harmless Simulations? Using Monte Carlo Studies for Estimator Selection

    We consider two recent suggestions for how to perform an empirically motivated Monte Carlo study to help select a treatment effect estimator under unconfoundedness. We show theoretically that neither is likely to be informative except under restrictive conditions that are unlikely to be satisfied in many contexts. To test empirical relevance, we also apply the approaches to a real-world setting where estimator performance is known. Both approaches are worse than random at selecting estimators that minimise absolute bias. They are better when selecting estimators that minimise mean squared error. However, using a simple bootstrap is at least as good and often better. For now, researchers would be best advised to use a range of estimators and compare estimates for robustness.
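A simple bootstrap comparison of the kind the abstract recommends can be sketched as follows. This is a hypothetical illustration with two toy estimators (difference in means and OLS regression adjustment) ranked by bootstrap MSE around each estimator's full-sample estimate; it is not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
d = (rng.random(n) < 1 / (1 + np.exp(-x))).astype(int)   # confounded take-up
y = d * 1.0 + x + rng.normal(size=n)                     # true effect = 1.0

def diff_in_means(y, d, x):
    return y[d == 1].mean() - y[d == 0].mean()

def regression_adjust(y, d, x):
    # OLS of y on (1, d, x); return the coefficient on d.
    X = np.column_stack([np.ones_like(x), d, x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

estimators = {"diff_in_means": diff_in_means, "reg_adjust": regression_adjust}

# Bootstrap MSE of each estimator around its own full-sample estimate.
B = 200
full = {k: f(y, d, x) for k, f in estimators.items()}
mse = {k: 0.0 for k in estimators}
for _ in range(B):
    idx = rng.integers(0, n, n)
    for k, f in estimators.items():
        mse[k] += (f(y[idx], d[idx], x[idx]) - full[k]) ** 2 / B

best = min(mse, key=mse.get)
print(best, mse)
```

Note that bootstrapping around the full-sample estimate captures only variance, not bias, which is one reason such criteria can mislead when candidate estimators are biased.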

    Testing Instrument Validity with Covariates

    We develop a novel specification test of the instrumental variable identifying assumptions (instrument validity) for heterogeneous treatment effect models with conditioning covariates. Building on the common empirical settings of local average treatment effect and marginal treatment effect analysis, we assume semiparametric dependence between the potential outcomes and conditioning covariates, and show that this allows us to express the testable implications of instrument validity in terms of equality and inequality restrictions among the subdensities of estimable partial residuals. We propose jointly testing these restrictions. To improve the power of the test, we propose distillation, a process designed to reduce the sample down to the information useful for detecting violations of the instrument validity inequalities. We perform Monte Carlo exercises to demonstrate the gain in power from testing restrictions jointly and from distillation. We apply our test procedure to the college proximity instrument of Card (1993), the same-sex instrument of Angrist and Evans (1998), the school leaving age instrument of Oreopoulos (2006), and the mean land gradient instrument of Dinkelman (2011). We find that the null of instrument validity conditional on covariates cannot be rejected for Card (1993) and Dinkelman (2011), but it can be rejected at the 10% level of significance for Angrist and Evans (1998) for some levels of a tuning parameter, and it is rejected at all conventional levels of significance in the case of Oreopoulos (2006).
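Without covariates, the testable implication underlying tests of this kind is that certain subdensities are ordered across instrument values. A minimal sketch for binary D and Z using coarse outcome bins (a simplified illustration, not the paper's distillation or joint testing procedure):

```python
import numpy as np

def subdensity_violations(y, d, z, bins=10):
    """Largest violation of the Kitagawa-type implication of instrument
    validity (no covariates): within each outcome bin,
    P(Y in bin, D=1 | Z=1) >= P(Y in bin, D=1 | Z=0) and
    P(Y in bin, D=0 | Z=0) >= P(Y in bin, D=0 | Z=1)."""
    edges = np.quantile(y, np.linspace(0, 1, bins + 1))
    edges[-1] += 1e-9                      # include the sample maximum
    worst = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (y >= lo) & (y < hi)
        for dv, z_hi, z_lo in [(1, 1, 0), (0, 0, 1)]:
            p_hi = np.mean(in_bin[z == z_hi] & (d[z == z_hi] == dv))
            p_lo = np.mean(in_bin[z == z_lo] & (d[z == z_lo] == dv))
            worst = max(worst, p_lo - p_hi)  # positive => violation
    return worst

# Hypothetical DGP where the instrument is valid (monotone take-up, exclusion).
rng = np.random.default_rng(2)
n = 20_000
z = rng.integers(0, 2, n)
d = (rng.random(n) < 0.2 + 0.5 * z).astype(int)
y = d + rng.normal(size=n)
viol = subdensity_violations(y, d, z)
print(viol)
```

Under a valid instrument the statistic should be close to zero up to sampling noise; a formal test would calibrate its distribution, which is what the paper's procedure does in a far more refined way.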

    Who Should Get Vaccinated? Individualized Allocation of Vaccines Over SIR Network

    How to allocate vaccines over heterogeneous individuals is an important policy decision in pandemic times. This paper develops a procedure to estimate an individualized vaccine allocation policy under limited supply, exploiting social network data containing individual demographic characteristics and health status. We model spillover effects of the vaccines based on a Heterogeneous-Interacted-SIR network model and estimate an individualized vaccine allocation policy by maximizing an estimated social welfare (public health) criterion incorporating the spillovers. While this optimization problem is generally an NP-hard integer optimization problem, we show that the SIR structure leads to a submodular objective function, and provide a computationally attractive greedy algorithm for approximating a solution that has a theoretical performance guarantee. Moreover, we characterise a finite-sample welfare regret bound and examine how its uniform convergence rate depends on the complexity and riskiness of the social network. In simulations, we illustrate the importance of considering spillovers by comparing our method with targeting without network information.
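The greedy step has a standard form. The sketch below implements greedy maximization of a generic set function under a cardinality budget, with a toy network-coverage objective standing in for the paper's estimated SIR-based welfare criterion (all data here are hypothetical):

```python
import numpy as np

def greedy_allocation(welfare, n_units, budget):
    """Greedy maximization of a set function `welfare` under a cardinality
    budget. When `welfare` is monotone and submodular (as the paper shows
    for its SIR-based criterion), greedy achieves at least (1 - 1/e) of
    the optimum (Nemhauser, Wolsey and Fisher, 1978)."""
    chosen = set()
    for _ in range(budget):
        gains = {i: welfare(chosen | {i}) - welfare(chosen)
                 for i in range(n_units) if i not in chosen}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:          # no unit improves welfare further
            break
        chosen.add(best)
    return chosen

# Toy objective: number of individuals covered by vaccinating a set of
# nodes in a random contact network.
rng = np.random.default_rng(3)
n = 30
adj = rng.random((n, n)) < 0.1
np.fill_diagonal(adj, True)           # each node covers itself

def coverage(s):
    if not s:
        return 0
    return int(adj[list(s)].any(axis=0).sum())

picked = greedy_allocation(coverage, n, budget=5)
print(picked, coverage(picked))
```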

    von Mises-Fisher distributions and their statistical divergence

    The von Mises-Fisher family is a parametric family of distributions on the surface of the unit ball, summarised by a concentration parameter and a mean direction. As a quasi-Bayesian prior, the von Mises-Fisher distribution is a convenient and parsimonious choice when parameter spaces are isomorphic to the hypersphere (e.g., maximum score estimation in semi-parametric discrete choice, estimation of single-index treatment assignment rules via empirical welfare maximisation, under-identifying linear simultaneous equation models). Despite a long history of application, measures of statistical divergence have not been analytically characterised for von Mises-Fisher distributions. This paper provides analytical expressions for the f-divergence of a von Mises-Fisher distribution from another, distinct, von Mises-Fisher distribution in $\mathbb{R}^p$ and from the uniform distribution over the hypersphere. This paper also collects several other results pertaining to the von Mises-Fisher family of distributions, and characterises the limiting behaviour of the measures of divergence that we consider.
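As one concrete instance of such a divergence, the Kullback-Leibler divergence between two von Mises-Fisher distributions has a closed form in terms of the normalizing constant and the mean resultant length. The sketch below uses the commonly stated formula under our own parameterisation, as an illustration rather than the paper's expressions:

```python
import numpy as np
from scipy.special import ive   # exponentially scaled modified Bessel I_v

def log_cp(p, kappa):
    # log normalizing constant of the vMF density on the unit sphere in R^p:
    # C_p(k) = k^{p/2-1} / ((2 pi)^{p/2} I_{p/2-1}(k));
    # ive(v, k) = I_v(k) exp(-k) avoids overflow for large k.
    v = p / 2 - 1
    return (v * np.log(kappa) - (p / 2) * np.log(2 * np.pi)
            - (np.log(ive(v, kappa)) + kappa))

def kl_vmf(mu1, kappa1, mu2, kappa2):
    """KL(vMF(mu1, k1) || vMF(mu2, k2)) in R^p, using the mean identity
    E[x] = A_p(k) mu with A_p(k) = I_{p/2}(k) / I_{p/2-1}(k)."""
    p = len(mu1)
    a1 = ive(p / 2, kappa1) / ive(p / 2 - 1, kappa1)
    return (log_cp(p, kappa1) - log_cp(p, kappa2)
            + (kappa1 - kappa2 * mu2 @ mu1) * a1)

mu = np.array([1.0, 0.0, 0.0])
nu = np.array([0.0, 1.0, 0.0])
print(kl_vmf(mu, 2.0, mu, 2.0))   # identical distributions -> 0
print(kl_vmf(mu, 2.0, nu, 5.0))   # distinct distributions -> positive
```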

    Individualized Treatment Allocation in Sequential Network Games

    Designing individualized allocation of treatments so as to maximize the equilibrium welfare of interacting agents has many policy-relevant applications. Focusing on sequential decision games of interacting agents, this paper develops a method to obtain optimal treatment assignment rules that maximize a social welfare criterion by evaluating stationary distributions of outcomes. Stationary distributions in sequential decision games are given by Gibbs distributions, which are difficult to optimize with respect to a treatment allocation due to analytical and computational complexity. We apply a variational approximation to the stationary distribution and optimize the approximated equilibrium welfare with respect to treatment allocation using a greedy optimization algorithm. We characterize the performance of the variational approximation, deriving a performance guarantee for the greedy optimization algorithm via a welfare regret bound, and establish the convergence rate of this bound. We demonstrate the performance of our proposed method in simulation exercises.
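The variational idea can be illustrated in its simplest form: a naive mean-field approximation to a Gibbs distribution over binary outcomes, iterating the usual fixed-point equations. This is a generic sketch with hypothetical parameters, not the paper's approximation scheme:

```python
import numpy as np

def mean_field(theta, J, n_iter=200):
    """Naive mean-field approximation to a Gibbs distribution over binary
    outcomes y in {0,1}^n with log-density proportional to
    theta' y + y' J y / 2. Iterates the fixed-point equations
    m_i = sigmoid(theta_i + sum_j J_ij m_j) for the marginal means m."""
    m = np.full(len(theta), 0.5)
    for _ in range(n_iter):
        m = 1 / (1 + np.exp(-(theta + J @ m)))
    return m

rng = np.random.default_rng(4)
n = 8
J = rng.normal(scale=0.2, size=(n, n))   # weak symmetric interactions
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)
theta = rng.normal(size=n)               # individual-level incentives

m = mean_field(theta, J)
print(np.round(m, 3))
```

With weak interactions the iteration is a contraction and converges quickly; the resulting product-form approximation replaces the intractable Gibbs distribution when evaluating welfare.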

    Treatment Choice, Mean Square Regret and Partial Identification

    We consider a decision maker who faces a binary treatment choice when their welfare is only partially identified from data. We contribute to the literature by anchoring our finite-sample analysis on mean square regret, a decision criterion advocated by Kitagawa, Lee, and Qiu (2022). We find that optimal rules are always fractional, irrespective of the width of the identified set and the precision of its estimate. The optimal treatment fraction is a simple logistic transformation of the commonly used t-statistic, multiplied by a factor obtained from a constrained optimization. This treatment fraction approaches 0.5 as the identified set widens, implying that the decision maker becomes more cautious against adversarial Nature.
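The shape of such a fractional rule is easy to visualise. The sketch below applies a logistic transformation to a t-statistic, with a hypothetical scale factor standing in for the factor the paper obtains from its constrained optimization; a smaller scale (a wider identified set) pulls the fraction toward 0.5:

```python
import numpy as np

def treatment_fraction(t_stat, scale):
    """Fractional allocation of the form logistic(scale * t_stat).
    `scale` is a hypothetical stand-in for the paper's optimization-based
    factor; smaller values correspond to wider identified sets and push
    the assigned fraction toward the cautious value 0.5."""
    return 1 / (1 + np.exp(-scale * t_stat))

for t in [-2.0, 0.0, 2.0]:
    print(t, round(float(treatment_fraction(t, scale=1.5)), 3))
```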