
    Effect estimates can be accurately calculated with data digitally extracted from interrupted time series graphs

    Interrupted time series (ITS) studies are frequently used to examine the impact of population-level interventions or exposures. Systematic reviews with meta-analyses that include ITS designs may inform public health and policy decision-making, and re-analysis of ITS may be required for inclusion in meta-analysis. While publications of ITS rarely provide raw data for re-analysis, graphs are often included, from which time series data can be digitally extracted. However, the accuracy of effect estimates calculated from data digitally extracted from ITS graphs is currently unknown. Forty-three ITS with available datasets and time series graphs were included. Time series data from each graph were extracted by four researchers using digital data extraction software, and data extraction errors were analysed. Segmented linear regression models were fitted to the extracted and provided datasets, from which estimates of immediate level change and slope change (and associated statistics) were calculated and compared across the datasets. Although there were some data extraction errors at individual time points, primarily due to complications in the original graphs, these did not translate into important differences in estimates of interruption effects (and associated statistics). Digital data extraction should therefore be considered as a way to obtain data from ITS graphs in reviews that include ITS: the benefit of including these studies in meta-analyses, even with slight inaccuracy, is likely to outweigh the loss of information from excluding them.
    Simon Lee Turner, Elizabeth Korevaar, Miranda S. Cumpston, Raju Kanukula, Andrew B. Forbes, Joanne E. McKenzie
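    The segmented regression step lends itself to a short illustration. Below is a minimal sketch, in Python with statsmodels, of the standard ITS model Y_t = b0 + b1*t + b2*D_t + b3*(t - T0 + 1)*D_t, where b2 and b3 are the immediate level change and slope change. The simulated series, the interruption point, and the use of statsmodels are illustrative assumptions, not the authors' actual analysis code.

```python
# Minimal sketch of segmented linear regression for an interrupted time
# series. All data here are simulated for illustration; the abstract does
# not specify the software or parameterisation the authors used.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
t = np.arange(1, 25)            # 24 time points, e.g. monthly observations
interruption = 13               # assumed first post-interruption time point
post = (t >= interruption).astype(float)

# Simulated series with a known level change (3.0) and slope change (0.8).
y = (10 + 0.5 * t
     + 3.0 * post
     + 0.8 * (t - interruption + 1) * post
     + rng.normal(0, 1, t.size))

# Design matrix: pre-existing trend, immediate level change, slope change.
X = sm.add_constant(np.column_stack([
    t,                                  # b1: underlying slope
    post,                               # b2: immediate level change
    (t - interruption + 1) * post,      # b3: change in slope
]))
fit = sm.OLS(y, X).fit()
print(fit.params)   # [intercept, pre-slope, level change, slope change]
print(fit.bse)      # standard errors for the same coefficients
```

    Comparing the level-change and slope-change coefficients (and their standard errors) fitted to an extracted dataset against those fitted to the original dataset is the kind of comparison the study describes.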

    Data and code availability statements in systematic reviews of interventions were often missing or inaccurate: a content analysis

    Objectives: To estimate the frequency of data and code availability statements in a random sample of systematic reviews with meta-analysis of aggregate data, summarize the content of the statements, and investigate how often data and code files were shared. Methods: We searched for systematic reviews with meta-analysis of aggregate data on the effects of a health, social, behavioral, or educational intervention that were indexed in PubMed, Education Collection via ProQuest, Scopus via Elsevier, or Social Sciences Citation Index and Science Citation Index Expanded via Web of Science during a 4-week period (between November 2 and December 2, 2020). Records were randomly sorted and screened independently by two authors until our target sample of 300 systematic reviews was reached. Two authors independently recorded whether a data or code availability statement (or both) appeared in each review and coded the content of the statements using an inductive approach. Results: Of the 300 included systematic reviews with meta-analysis, 86 (29%) had a data availability statement, and seven (2%) had both a data and code availability statement. In 12/93 (13%) data availability statements, authors stated that data files were available for download from the journal website or a data repository, which we verified as being true. While 39/93 (42%) authors stated data were available upon request, 37/93 (40%) implied that sharing of data files was not necessary or applicable to them, most often because "all data appear in the article" or "no datasets were generated or analyzed". Discussion: Data and code availability statements appear infrequently in systematic review manuscripts. Authors who do provide a data availability statement often incorrectly imply that data sharing is not applicable to systematic reviews. Our results suggest the need for various interventions to increase data and code sharing by systematic reviewers.
    Matthew J. Page, Phi-Yen Nguyen, Daniel G. Hamilton, Neal R. Haddaway, Raju Kanukula, David Moher, Joanne E. McKenzie