109 research outputs found

    After-School Programs for High School Students: An Evaluation of After School Matters

    Evaluates outcomes for teens in Chicago's After School Matters apprenticeship-like program, finding statistically significant benefits on some measures of youth development and reduced problem behaviors, but not on job skills or school performance.

    A basic introduction to fixed-effect and random-effects models for meta-analysis

    There are two popular statistical models for meta-analysis, the fixed-effect model and the random-effects model. The fact that these two models employ similar sets of formulas to compute statistics, and sometimes yield similar estimates for the various parameters, may lead people to believe that the models are interchangeable. In fact, though, the models represent fundamentally different assumptions about the data. The selection of the appropriate model is important to ensure that the various statistics are estimated correctly. Additionally, and more fundamentally, the model serves to place the analysis in context. It provides a framework for the goals of the analysis as well as for the interpretation of the statistics. In this paper we explain the key assumptions of each model, and then outline the differences between the models. We conclude with a discussion of factors to consider when choosing between the two models.
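    As a concrete illustration of the distinction the abstract draws, the sketch below pools a set of study effects under both models: inverse-variance weights for the fixed-effect estimate, and weights that add a between-study variance term (estimated here with the common DerSimonian-Laird method, one of several options) for the random-effects estimate. The function name and numbers are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def pooled_estimates(effects, variances):
        """Pool study effect sizes under fixed-effect and random-effects models.

        Fixed effect: weights 1/v_i (assumes a single common true effect).
        Random effects: weights 1/(v_i + tau^2), where tau^2 is the
        between-study variance (DerSimonian-Laird estimator).
        """
        effects = np.asarray(effects, dtype=float)
        v = np.asarray(variances, dtype=float)

        w = 1.0 / v
        fixed = np.sum(w * effects) / np.sum(w)

        # DerSimonian-Laird: tau^2 from the heterogeneity statistic Q
        Q = np.sum(w * (effects - fixed) ** 2)
        df = len(effects) - 1
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (Q - df) / c)

        w_re = 1.0 / (v + tau2)
        random = np.sum(w_re * effects) / np.sum(w_re)
        return fixed, random, tau2

    # With perfectly homogeneous studies, tau^2 collapses to zero and the
    # two models agree; heterogeneity pulls them apart.
    fixed, random, tau2 = pooled_estimates([0.1, 0.5, 0.3], [0.04, 0.04, 0.04])
    ```

    When the observed spread of effects is no larger than sampling error alone would predict, the estimated tau² is zero and both models return the same pooled value; the interpretive difference the paper emphasizes remains even then.
    
    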

    A meta-analysis of the investment-uncertainty relationship

    In this article we use meta-analysis to investigate the investment-uncertainty relationship. We focus on the direction and statistical significance of empirical estimates. Specifically, we estimate an ordered probit model and transform the estimated coefficients into marginal effects to reflect the changes in the probability of finding a significantly negative estimate, an insignificant estimate, or a significantly positive estimate. Exploratory data analysis shows that there is little empirical evidence for a positive relationship. The regression results suggest that the source of uncertainty, the level of data aggregation, the underlying model specification, and differences between short- and long-run effects are important sources of variation in study outcomes. These findings are, by and large, robust to the introduction of a trend variable to capture publication trends in the literature. The probability of finding a significantly negative relationship is higher in more recently published studies. JEL Classification: D21, D80, E22.
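    The coefficient-to-marginal-effect step the abstract describes has a standard closed form for the ordered probit. Below is a minimal sketch of that formula for a three-category outcome (significantly negative / insignificant / significantly positive); the coefficients, cutpoints, and covariate values are hypothetical placeholders, not estimates from the paper.

    ```python
    import numpy as np
    from scipy.stats import norm

    def ordered_probit_probs(x, beta, cuts):
        """Category probabilities for a 3-category ordered probit:
        P(y=0) = Phi(c0 - x'b), P(y=1) = Phi(c1 - x'b) - Phi(c0 - x'b),
        P(y=2) = 1 - Phi(c1 - x'b)."""
        xb = np.dot(x, beta)
        c0, c1 = cuts
        p0 = norm.cdf(c0 - xb)
        p1 = norm.cdf(c1 - xb) - p0
        p2 = 1.0 - norm.cdf(c1 - xb)
        return np.array([p0, p1, p2])

    def marginal_effects(x, beta, cuts, k):
        """Analytic dP(y=j)/dx_k at the point x for regressor k."""
        xb = np.dot(x, beta)
        c0, c1 = cuts
        b = beta[k]
        me0 = -norm.pdf(c0 - xb) * b
        me2 = norm.pdf(c1 - xb) * b
        me1 = -(me0 + me2)  # probabilities sum to 1, so effects sum to 0
        return np.array([me0, me1, me2])

    # Hypothetical values purely for illustration
    x = np.array([1.0, 2.0])
    beta = np.array([0.5, -0.2])
    cuts = (-0.5, 0.8)
    ```

    Because the three probabilities must sum to one, the marginal effects sum to zero: a study characteristic that raises the probability of a significantly negative finding must lower the probability of the other outcomes by the same total amount.
    
    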

    [Comment] Redefine statistical significance

    Get PDF
    The lack of reproducibility of scientific studies has caused growing concern over the credibility of claims of new discoveries based on “statistically significant” findings. There has been much progress toward documenting and addressing several causes of this lack of reproducibility (e.g., multiple testing, P-hacking, publication bias, and under-powered studies). However, we believe that a leading cause of non-reproducibility has not yet been adequately addressed: Statistical standards of evidence for claiming discoveries in many fields of science are simply too low. Associating “statistically significant” findings with P < 0.05 results in a high rate of false positives even in the absence of other experimental, procedural and reporting problems. For fields where the threshold for defining statistical significance is P < 0.05, we propose a change to P < 0.005. This simple step would immediately improve the reproducibility of scientific research in many fields. Results that would currently be called “significant” but do not meet the new threshold should instead be called “suggestive.” While statisticians have known the relative weakness of using P ≈ 0.05 as a threshold for discovery and the proposal to lower it to 0.005 is not new (1, 2), a critical mass of researchers now endorse this change. We restrict our recommendation to claims of discovery of new effects. We do not address the appropriate threshold for confirmatory or contradictory replications of existing claims. We also do not advocate changes to discovery thresholds in fields that have already adopted more stringent standards (e.g., genomics and high-energy physics research; see Potential Objections below). We also restrict our recommendation to studies that conduct null hypothesis significance tests.
We have diverse views about how best to improve reproducibility, and many of us believe that other ways of summarizing the data, such as Bayes factors or other posterior summaries based on clearly articulated model assumptions, are preferable to P-values. However, changing the P-value threshold is simple and might quickly achieve broad acceptance.
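The rejection-rate arithmetic behind the proposal is easy to check by simulation: when the null hypothesis is true for every test, the fraction of P-values below a threshold is, by construction, roughly the threshold itself. The sketch below (a generic two-sample t-test simulation, not code from the paper) illustrates this for 0.05 versus 0.005.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_tests, n = 2000, 30
false_pos_05 = 0
false_pos_005 = 0
for _ in range(n_tests):
    # Both samples come from the same distribution: every rejection
    # is a false positive.
    a = rng.normal(size=n)
    b = rng.normal(size=n)
    p = ttest_ind(a, b).pvalue
    false_pos_05 += p < 0.05
    false_pos_005 += p < 0.005

rate_05 = false_pos_05 / n_tests    # close to 0.05
rate_005 = false_pos_005 / n_tests  # close to 0.005
```

This only shows the per-test false-positive rate under the null; the paper's fuller argument, in terms of false discovery rates, also depends on statistical power and on the prior odds that a tested effect is real.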

    Willingness to Pay to Reduce Mortality Risks: Evidence from a Three-Country Contingent Valuation Study
