
    After-School Programs for High School Students: An Evaluation of After School Matters

Evaluates outcomes for teens in Chicago's After School Matters apprenticeship-like program, finding statistically significant benefits on some measures of youth development and reduced problem behaviors, but not in job skills or school performance.

    The Question of School Resources and Student Achievement: A History and Reconsideration

One question posed continually over the past century of education research is to what extent school resources affect student outcomes. From the turn of the century to the present, a diverse set of actors, including politicians, physicians, and researchers from a number of disciplines, have studied whether and how money that is provided for schools translates into increased student achievement. The authors discuss the historical origins of the question of whether school resources relate to student achievement, and report the results of a meta-analysis of studies examining that relationship. They find that policymakers, researchers, and other stakeholders have addressed this question using diverse strategies. The way the question is asked, and the methods used to answer it, are shaped by history, as well as by the scholarly, social, and political concerns of any given time. The diversity of methods has resulted in a body of literature too diverse and too inconsistent to yield reliable inferences through meta-analysis. The authors suggest that a collaborative approach addressing the question from a variety of disciplinary and practice perspectives may lead to more effective interventions to meet the needs of all students.

    A basic introduction to fixed-effect and random-effects models for meta-analysis

There are two popular statistical models for meta-analysis, the fixed-effect model and the random-effects model. The fact that these two models employ similar sets of formulas to compute statistics, and sometimes yield similar estimates for the various parameters, may lead people to believe that the models are interchangeable. In fact, though, the models represent fundamentally different assumptions about the data. The selection of the appropriate model is important to ensure that the various statistics are estimated correctly. Additionally, and more fundamentally, the model serves to place the analysis in context. It provides a framework for the goals of the analysis as well as for the interpretation of the statistics. In this paper we explain the key assumptions of each model, and then outline the differences between the models. We conclude with a discussion of factors to consider when choosing between the two models.
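To make the contrast the abstract describes concrete, here is a minimal sketch that pools the same set of study estimates under both models. The effect sizes, within-study variances, and the DerSimonian-Laird estimator for the between-study variance are illustrative choices of ours, not taken from the paper.

```python
# Illustrative sketch (not from the paper): pooling the same study
# estimates under the fixed-effect and random-effects models.
import numpy as np

y = np.array([0.30, 0.10, 0.45, 0.20, 0.35])      # hypothetical effect sizes
v = np.array([0.010, 0.020, 0.015, 0.025, 0.012])  # hypothetical within-study variances

# Fixed-effect model: one true effect; weight each study by 1/v_i.
w_fe = 1.0 / v
mu_fe = np.sum(w_fe * y) / np.sum(w_fe)
se_fe = np.sqrt(1.0 / np.sum(w_fe))

# Random-effects model: true effects vary across studies; estimate the
# between-study variance tau^2 (DerSimonian-Laird) and add it to each
# study's within-study variance before weighting.
Q = np.sum(w_fe * (y - mu_fe) ** 2)
C = np.sum(w_fe) - np.sum(w_fe ** 2) / np.sum(w_fe)
tau2 = max(0.0, (Q - (len(y) - 1)) / C)

w_re = 1.0 / (v + tau2)
mu_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))

print(f"fixed-effect:   {mu_fe:.3f} (SE {se_fe:.3f})")
print(f"random-effects: {mu_re:.3f} (SE {se_re:.3f}, tau^2 = {tau2:.4f})")
```

Note how the random-effects weights are more nearly equal, so large studies dominate less and the standard error widens; the two pooled estimates coincide only when the estimated tau^2 is zero.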

    Reporting randomised trials of social and psychological interventions: the CONSORT-SPI 2018 Extension.

BACKGROUND: Randomised controlled trials (RCTs) are used to evaluate social and psychological interventions and inform policy decisions about them. Accurate, complete, and transparent reports of social and psychological intervention RCTs are essential for understanding their design, conduct, results, and the implications of the findings. However, the reporting of RCTs of social and psychological interventions remains suboptimal. The CONSORT Statement has improved the reporting of RCTs in biomedicine. A similar high-quality guideline is needed for the behavioural and social sciences. Our objective was to develop an official extension of the Consolidated Standards of Reporting Trials 2010 Statement (CONSORT 2010) for reporting RCTs of social and psychological interventions: CONSORT-SPI 2018. METHODS: We followed best practices in developing the reporting guideline extension. First, we conducted a systematic review of existing reporting guidelines. We then conducted an online Delphi process including 384 international participants. In March 2014, we held a 3-day consensus meeting of 31 experts to determine the content of a checklist specifically targeting social and psychological intervention RCTs. Experts discussed previous research and methodological issues of particular relevance to social and psychological intervention RCTs. They then voted on proposed modifications or extensions of items from CONSORT 2010. RESULTS: The CONSORT-SPI 2018 checklist extends 9 of the 25 items from CONSORT 2010: background and objectives, trial design, participants, interventions, statistical methods, participant flow, baseline data, outcomes and estimation, and funding. In addition, participants added a new item related to stakeholder involvement, and they modified aspects of the flow diagram related to participant recruitment and retention. CONCLUSIONS: Authors should use CONSORT-SPI 2018 to improve reporting of their social and psychological intervention RCTs. Journals should revise editorial policies and procedures to require use of reporting guidelines by authors and peer reviewers to produce manuscripts that allow readers to appraise study quality, evaluate the applicability of findings to their contexts, and replicate effective interventions.

    [Comment] Redefine statistical significance

The lack of reproducibility of scientific studies has caused growing concern over the credibility of claims of new discoveries based on "statistically significant" findings. There has been much progress toward documenting and addressing several causes of this lack of reproducibility (e.g., multiple testing, P-hacking, publication bias, and under-powered studies). However, we believe that a leading cause of non-reproducibility has not yet been adequately addressed: statistical standards of evidence for claiming discoveries in many fields of science are simply too low. Associating "statistically significant" findings with P < 0.05 results in a high rate of false positives even in the absence of other experimental, procedural, and reporting problems. For fields where the threshold for defining statistical significance is P < 0.05, we propose a change to P < 0.005. This simple step would immediately improve the reproducibility of scientific research in many fields. Results that would currently be called "significant" but do not meet the new threshold should instead be called "suggestive." While statisticians have known the relative weakness of using P ≈ 0.05 as a threshold for discovery and the proposal to lower it to 0.005 is not new (1, 2), a critical mass of researchers now endorse this change. We restrict our recommendation to claims of discovery of new effects. We do not address the appropriate threshold for confirmatory or contradictory replications of existing claims. We also do not advocate changes to discovery thresholds in fields that have already adopted more stringent standards (e.g., genomics and high-energy physics research; see Potential Objections below). We also restrict our recommendation to studies that conduct null hypothesis significance tests. We have diverse views about how best to improve reproducibility, and many of us believe that other ways of summarizing the data, such as Bayes factors or other posterior summaries based on clearly articulated model assumptions, are preferable to P-values. However, changing the P-value threshold is simple and might quickly achieve broad acceptance.
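The core claim, that P < 0.05 yields a high false positive rate among claimed discoveries, can be illustrated with a back-of-the-envelope calculation in the spirit of the authors' argument. The prior odds (1 true effect per 10 nulls tested) and the 80% power figure below are our own assumed numbers, not the authors'.

```python
# Back-of-the-envelope illustration (assumed numbers, not the authors'):
# among "discoveries" declared at a given P-value threshold, what share
# are false positives?
def false_positive_share(alpha, power, prior_odds_true):
    # prior_odds_true: ratio of true effects to nulls among hypotheses tested
    false_pos = alpha * 1.0              # nulls crossing the threshold
    true_pos = power * prior_odds_true   # real effects detected
    return false_pos / (false_pos + true_pos)

for alpha in (0.05, 0.005):
    share = false_positive_share(alpha=alpha, power=0.8, prior_odds_true=0.1)
    print(f"alpha = {alpha}: ~{share:.0%} of claimed discoveries are false")
```

Under these assumptions, roughly 38% of discoveries claimed at P < 0.05 are false positives, falling to roughly 6% at P < 0.005, which is the order-of-magnitude improvement the proposal is after.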

    A meta-analysis of the investment-uncertainty relationship

In this article we use meta-analysis to investigate the investment-uncertainty relationship. We focus on the direction and statistical significance of empirical estimates. Specifically, we estimate an ordered probit model and transform the estimated coefficients into marginal effects to reflect the changes in the probability of finding a significantly negative estimate, an insignificant estimate, or a significantly positive estimate. Exploratory data analysis shows that there is little empirical evidence for a positive relationship. The regression results suggest that the source of uncertainty, the level of data aggregation, the underlying model specification, and differences between short- and long-run effects are important sources of variation in study outcomes. These findings are, by and large, robust to the introduction of a trend variable to capture publication trends in the literature. The probability of finding a significantly negative relationship is higher in more recently published studies. JEL Classification: D21, D80, E22.
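The estimation strategy the abstract describes, an ordered probit over three outcome categories with coefficients converted to marginal effects on category probabilities, can be sketched end to end. The sketch below uses synthetic data and a made-up covariate, coefficient, and cutpoints; it is not the authors' specification or dataset.

```python
# Illustrative from-scratch sketch of the abstract's method (synthetic data):
# fit an ordered probit for study outcomes coded 0 (significantly negative),
# 1 (insignificant), 2 (significantly positive), then approximate a marginal
# effect on P(significantly negative) by finite differences.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 400
x = rng.normal(size=n)                    # e.g., a study-design covariate
latent = -0.8 * x + rng.normal(size=n)    # assumed negative association
y = np.digitize(latent, [-0.5, 0.5])      # 0 = neg., 1 = insig., 2 = pos.

def negloglik(params):
    beta, c1, gap = params
    c2 = c1 + np.exp(gap)                 # keep cutpoints ordered
    xb = beta * x
    p = np.column_stack([
        norm.cdf(c1 - xb),                       # P(y = 0)
        norm.cdf(c2 - xb) - norm.cdf(c1 - xb),   # P(y = 1)
        1.0 - norm.cdf(c2 - xb),                 # P(y = 2)
    ])
    return -np.log(p[np.arange(n), y]).sum()

fit = minimize(negloglik, x0=[0.0, -0.5, 0.0], method="BFGS")
beta, c1, gap = fit.x

# Marginal effect of x on P(significantly negative), evaluated at mean(x):
def p_neg(xval):
    return norm.cdf(c1 - beta * xval)

eps = 1e-5
me = (p_neg(x.mean() + eps) - p_neg(x.mean() - eps)) / (2 * eps)
print(f"beta = {beta:.3f}, marginal effect on P(neg.) = {me:.3f}")
```

The probit coefficient itself has no direct probability interpretation, which is why the abstract transforms coefficients into marginal effects: here the finite-difference step recovers d P(y = 0)/dx, the quantity of interest in the meta-regression.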