1,072 research outputs found

    Measuring Employment and Income for Low-Income Populations with Administrative and Survey Data

    We discuss the strengths and weaknesses of income and employment data in national surveys, in unemployment insurance (UI) wage records, and in tax returns. The CPS, SIPP, NLS, and PSID surveys provide valuable information on the behavior of the low-income population. They have broad and fairly accurate measures of income for national samples, and their focus on families as the unit of analysis and their ease of access greatly enhance their value. The value of these data sets for evaluating welfare reform is severely limited, however. With the devolution of responsibility for TANF, the CPS and SIPP sampling frames and sample sizes mean that, at best, they can be only supplementary data sources for understanding the effects of welfare reform at the state and local levels. The apparent decline in program coverage in the CPS is also worrisome. UI data are available at the state level and can be matched to individuals in existing samples at relatively low cost. It is straightforward to do follow-up analyses on income and employment for workers who remain in the state, and UI data are timely. However, earnings are available only for individuals, while changes in family composition upon exit from welfare have been shown to have a large bearing on economic well-being. UI data do not allow us to track these changes. There also appears to be a substantial problem with some workers being classified as independent contractors and hence not entering the UI system. Overall gaps in coverage appear to be at least 13 percent and may be significantly higher. Even when wages are reported, there is some evidence that they are understated by a significant amount. We also present evidence on the degree to which tax data can be used to understand the incomes and employment of low-skilled workers. The paper concludes with brief recommendations for future research that might help fill some of the gaps we have identified.

    Dealing with Limited Overlap in Estimation of Average Treatment Effects

    Estimation of average treatment effects under unconfounded or ignorable treatment assignment is often hampered by lack of overlap in the covariate distributions. This lack of overlap can lead to imprecise estimates and can make commonly used estimators sensitive to the choice of specification. In such cases researchers have often used informal methods for trimming the sample. In this paper we develop a systematic approach to addressing lack of overlap. We characterize optimal subsamples for which the average treatment effect can be estimated most precisely, as well as optimally weighted average treatment effects. Under some conditions the optimal selection rules depend solely on the propensity score. For a wide range of distributions a good approximation to the optimal rule is provided by the simple selection rule to drop all units with estimated propensity scores outside the range [0.1, 0.9].
    Keywords: average treatment effects, causality, unconfoundedness, overlap, treatment effect heterogeneity
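
    The [0.1, 0.9] trimming rule the abstract describes is simple to apply in practice. The following is a minimal sketch of that rule; the function name and example scores are illustrative assumptions, not taken from the paper:

    ```python
    # Sketch of the overlap-trimming rule described above: keep only
    # units whose estimated propensity score lies in [0.1, 0.9].
    # Names and example data are illustrative, not from the paper.

    def trim_by_propensity(e_hat, low=0.1, high=0.9):
        """Return a keep/drop mask for estimated propensity scores e_hat."""
        return [low <= e <= high for e in e_hat]

    # Example: units with extreme estimated scores are dropped.
    scores = [0.02, 0.15, 0.50, 0.85, 0.97]
    mask = trim_by_propensity(scores)
    kept = [e for e, keep in zip(scores, mask) if keep]
    print(kept)  # -> [0.15, 0.5, 0.85]
    ```

    Trimming discards units near 0 or 1, where one treatment arm is nearly unobserved, trading a changed estimand for a more precisely estimable one.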

    Moving the Goalposts: Addressing Limited Overlap in Estimation of Average Treatment Effects by Changing the Estimand

    Estimation of average treatment effects under unconfoundedness or exogenous treatment assignment is often hampered by lack of overlap in the covariate distributions. This lack of overlap can lead to imprecise estimates and can make commonly used estimators sensitive to the choice of specification. In such cases researchers have often used informal methods for trimming the sample. In this paper we develop a systematic approach to addressing such lack of overlap. We characterize optimal subsamples for which the average treatment effect can be estimated most precisely, as well as optimally weighted average treatment effects. Under some conditions the optimal selection rules depend solely on the propensity score. For a wide range of distributions a good approximation to the optimal rule is provided by the simple selection rule to drop all units with estimated propensity scores outside the range [0.1, 0.9].
    Keywords: average treatment effects, causality, unconfoundedness, overlap, treatment effect heterogeneity

    Nonparametric Tests for Treatment Effect Heterogeneity

    A large part of the recent literature on program evaluation has focused on estimation of the average effect of the treatment under assumptions of unconfoundedness or ignorability following the seminal work by Rubin (1974) and Rosenbaum and Rubin (1983). In many cases, however, researchers are interested in the effects of programs beyond estimates of the overall average or the average for the subpopulation of treated individuals. It may be of substantive interest to investigate whether there is any subpopulation for which a program or treatment has a nonzero average effect, or whether there is heterogeneity in the effect of the treatment. The hypothesis that the average effect of the treatment is zero for all subpopulations is also important for researchers interested in assessing assumptions concerning the selection mechanism. In this paper we develop two nonparametric tests. The first test is for the null hypothesis that the treatment has a zero average effect for any subpopulation defined by covariates. The second test is for the null hypothesis that the average effect conditional on the covariates is identical for all subpopulations, in other words, that there is no heterogeneity in average treatment effects by covariates. Sacrificing some generality by focusing on these two specific null hypotheses we derive tests that are straightforward to implement.
    Keywords: average treatment effects, causality, unconfoundedness, treatment effect heterogeneity
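
    The second null hypothesis above, no heterogeneity in average effects across covariate-defined subpopulations, can be illustrated with a crude parametric stand-in: estimate each group's average treatment effect by difference in means, then z-test the gap. This sketch is not the paper's nonparametric procedure, and all names and data are hypothetical:

    ```python
    # Crude illustration of testing effect heterogeneity across two
    # covariate-defined subpopulations (a parametric stand-in, not the
    # paper's nonparametric test).
    import math
    import random

    def _mean(x):
        return sum(x) / len(x)

    def _var(x):
        m = _mean(x)
        return sum((xi - m) ** 2 for xi in x) / (len(x) - 1)

    def diff_in_means(y, t):
        """One group's average treatment effect estimate and its variance."""
        y1 = [yi for yi, ti in zip(y, t) if ti == 1]
        y0 = [yi for yi, ti in zip(y, t) if ti == 0]
        tau = _mean(y1) - _mean(y0)
        v = _var(y1) / len(y1) + _var(y0) / len(y0)
        return tau, v

    def heterogeneity_z(y_a, t_a, y_b, t_b):
        """z-statistic for H0: groups A and B share the same average effect."""
        tau_a, v_a = diff_in_means(y_a, t_a)
        tau_b, v_b = diff_in_means(y_b, t_b)
        return (tau_a - tau_b) / math.sqrt(v_a + v_b)

    # Simulated example: treatment effect 2.0 in group A, 0.0 in group B,
    # so the statistic should be far from zero.
    random.seed(0)
    t_a = [i % 2 for i in range(400)]
    y_a = [2.0 * ti + random.gauss(0, 1) for ti in t_a]
    t_b = [i % 2 for i in range(400)]
    y_b = [random.gauss(0, 1) for _ in t_b]
    print(heterogeneity_z(y_a, t_a, y_b, t_b))  # large |z| suggests heterogeneity
    ```

    A large |z| rejects the common-effect null in this two-group toy; the paper's tests generalize the idea to arbitrary covariate values without a parametric model.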

    How to Coordinate Vaccination and Social Distancing to Mitigate SARS-CoV-2 Outbreaks


    A Search for Scalar Chameleons with ADMX

    Scalar fields with a "chameleon" property, in which the effective particle mass is a function of its local environment, are common to many theories beyond the standard model and could be responsible for dark energy. If these fields couple weakly to the photon, they could be detectable through the "afterglow" effect of photon-chameleon-photon transitions. The ADMX experiment was used in the first chameleon search with a microwave cavity to set a new limit on the scalar chameleon-photon coupling, excluding values between 2×10^9 and 5×10^14 for effective chameleon masses between 1.9510 and 1.9525 micro-eV.