
    Percent of papers complying with Execution criteria by review discipline.

    The percent of papers complying with each Execution criterion is plotted for each review paper. The colors indicate different subdisciplines of the review papers. The Review ID corresponds to the papers listed in Table 1 of the main manuscript. (TIF)

    Overlap between review papers.

    Additional details and R code used to calculate overlap between review papers, including overlap matrices for each criterion and R code for S1 and S2 Figs. (PDF)
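    For illustration, the following is a minimal R sketch of one way a pairwise overlap matrix between reviews could be computed from their reference lists. The data frame refs and its column names are invented for the example and are not necessarily the structure used in the supplement.

        # Assumed input: one row per (review, paper) pair; the columns
        # review_id and paper_id are hypothetical.
        refs <- data.frame(
          review_id = c("R1", "R1", "R2", "R2", "R3"),
          paper_id  = c("p1", "p2", "p2", "p3", "p1")
        )

        # Incidence matrix: reviews x papers (1 = paper appears in that review)
        inc <- unclass(table(refs$review_id, refs$paper_id))

        # Pairwise overlap: entry [i, j] is the number of papers shared by
        # reviews i and j; the diagonal is each review's total paper count.
        overlap <- inc %*% t(inc)
        overlap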

    Distribution of paper overlap for reporting criteria.

    Distribution of the number of papers shared between reviews for all the Reporting criteria combined. (TIF)

    Percent of papers complying with Execution criteria as a function of the time period analyzed by the review paper.

    Each panel represents an Execution criterion. The line segment indicates the time period covered by each of the review papers that addressed a particular criterion. (TIF)

    Percent of papers that acknowledged non-independence, addressed it, and which methods they used to deal with non-independence.

    (A) Percent of papers that acknowledged at least one type of non-independence in their data (“yes”) or did not acknowledge non-independence (“no”). (B) For the papers that did acknowledge non-independence, the sources of non-independence were classified as “study” or “sample”. (C) Percent of papers that addressed at least one type of non-independence (“yes”) or did not address non-independence (“no”). (D) For the papers that did address non-independence, the methods used are classified as: “choose”, when the authors chose one value from multiple non-independent values; “average”, when the non-independent values were averaged; “model”, when the authors accounted for non-independence within the meta-analytic model; and “tested”, when the authors tested for the effects of non-independence. Papers that used more than one method (or source) were counted in each category, so the percentages across categories in panels B and D can sum to more than 100%.
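    As a concrete illustration of the “model” approach only (not the specification used by any particular paper reviewed here), a multilevel model in the R package metafor can account for non-independence among effect sizes drawn from the same study. The toy data below are invented for the example.

        library(metafor)

        # Toy data (assumed structure): one row per effect size; several
        # effect sizes can come from the same study, a common source of
        # non-independence in ecological meta-analyses.
        dat <- data.frame(
          study = c(1, 1, 2, 2, 3, 4, 4, 5),
          yi    = c(0.30, 0.25, 0.10, 0.05, 0.40, 0.20, 0.15, 0.35),  # effect sizes
          vi    = c(0.02, 0.03, 0.02, 0.02, 0.04, 0.03, 0.02, 0.03)   # sampling variances
        )
        dat$es_id <- seq_len(nrow(dat))

        # Random intercepts for study, and for effect size nested within study,
        # so effect sizes sharing a study are not treated as independent.
        res <- rma.mv(yi, vi, random = ~ 1 | study / es_id, data = dat)
        summary(res)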

    Data compilation from previous review papers.

    This Microsoft Excel Worksheet (.xlsx) includes information on the quality of Reporting and Execution criteria compiled from the review papers listed in Table 1. (XLSX)

    New data from Pappalardo et al. [1].

    This file includes additional data collected by re-reviewing the ecological meta-analyses compiled by Pappalardo et al. [1]. Please cite this publication and Pappalardo et al. [1] if you use the data in this file in your research. (CSV)

    List of references in reviews.

    This Microsoft Excel Worksheet (.xlsx) includes the compilation of all the references analyzed by previous reviews. (XLSX)

    Compilation of 18 papers that reviewed the quality of reporting in ecological meta-analyses.


    PRISMA 2020 checklist.

    Quantitatively summarizing results from a collection of primary studies with meta-analysis can help answer ecological questions and identify knowledge gaps. The accuracy of the answers depends on the quality of the meta-analysis. We reviewed the literature assessing the quality of ecological meta-analyses to evaluate current practices and highlight areas that need improvement. From each of the 18 review papers that evaluated the quality of meta-analyses, we calculated the percentage of meta-analyses that met criteria related to specific steps taken in the meta-analysis process (i.e., execution) and the clarity with which those steps were articulated (i.e., reporting). We also re-evaluated all the meta-analyses available from Pappalardo et al. [1] to extract new information on ten additional criteria and to assess how the meta-analyses recognized and addressed non-independence. In general, we observed better performance for criteria related to reporting than for criteria related to execution; however, there was wide variation among criteria and meta-analyses. Meta-analyses had low compliance with regard to correcting for phylogenetic non-independence, exploring temporal trends in effect sizes, and conducting multifactorial analyses of moderators (i.e., explanatory variables). In addition, although most meta-analyses included multiple effect sizes per study, only 66% acknowledged some type of non-independence. The types of non-independence reported were more often related to the design of the original experiment (e.g., the use of a shared control) than to other sources (e.g., phylogeny). We suggest that providing specific training and encouraging authors to follow the PRISMA EcoEvo checklist recently developed by O’Dea et al. [2] can improve the quality of ecological meta-analyses.