7 research outputs found

    A scoping review of comparisons between abstracts and full reports in primary biomedical research

    No full text
    Abstract
    Background: Evidence shows that research abstracts are commonly inconsistent with their corresponding full reports and may mislead readers. In this scoping review, which is part of our series on the state of reporting of primary biomedical research, we summarized the evidence from systematic reviews and surveys to investigate the current state of inconsistent abstract reporting and to evaluate factors associated with improved reporting by comparing abstracts with their full reports.
    Methods: We searched EMBASE, Web of Science, MEDLINE, and CINAHL from January 1st 1996 to September 30th 2016 to retrieve eligible systematic reviews and surveys. Our primary outcome was the level of inconsistency between abstracts and corresponding full reports, expressed either as a percentage (with a lower percentage indicating better reporting) or as a categorical rating (such as major/minor difference or high/medium/low inconsistency), as reported by the authors. We used medians and interquartile ranges to describe the level of inconsistency across studies. No quantitative syntheses were conducted; data from the included systematic reviews and surveys were summarized qualitatively.
    Results: Seventeen studies that addressed this topic were included. The level of inconsistency had a median of 39% (interquartile range: 14%-54%) and ranged from 4% to 78%. In studies that separated major from minor inconsistencies, the level of major inconsistency ranged from 5% to 45% (median: 19%, interquartile range: 7%-31%); these included discrepancies in specifying the study design or sample size, designating a primary outcome measure, presenting main results, and drawing a conclusion. A longer time interval between conference abstracts and publication of the full reports was the only factor marginally or significantly associated with an increased likelihood of reporting inconsistencies.
    Conclusions: This scoping review revealed that abstracts are frequently inconsistent with full reports, and efforts are needed to improve the consistency of abstract reporting in the primary biomedical community.
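    As a minimal sketch of the descriptive summary used in this review, the Python snippet below computes a median and interquartile range over per-study inconsistency percentages with NumPy. The 17 values are hypothetical placeholders, since the abstract does not list the extracted per-study figures.

        import numpy as np

        # Hypothetical per-study inconsistency percentages (17 values, one per included study);
        # the review's actual extracted figures are not given in the abstract.
        inconsistency = np.array([4, 9, 12, 14, 18, 22, 28, 35, 39, 43, 47, 52, 54, 61, 68, 73, 78])

        median = np.median(inconsistency)              # middle value across studies
        q1, q3 = np.percentile(inconsistency, [25, 75])  # interquartile range bounds
        low, high = inconsistency.min(), inconsistency.max()

        print(f"median = {median:.0f}%, IQR = {q1:.0f}%-{q3:.0f}%, range = {low}%-{high}%")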

    A systematic review of comparisons between protocols or registrations and full reports in primary biomedical research

    No full text
    Abstract
    Background: Prospective study protocols and registrations can play a significant role in reducing incomplete or selective reporting of primary biomedical research, because they are pre-specified blueprints available for evaluation of, and comparison with, full reports. However, inconsistencies between protocols or registrations and full reports have been documented frequently. In this systematic review, which forms part of our series on the state of reporting of primary biomedical research, we aimed to survey the existing evidence of inconsistencies between protocols or registrations (i.e., what was planned and/or what was actually done) and full reports (i.e., what was reported in the literature), based on findings from systematic reviews and surveys in the literature.
    Methods: Electronic databases, including CINAHL, MEDLINE, Web of Science, and EMBASE, were searched to identify eligible surveys and systematic reviews. Our primary outcome was the level of inconsistency (expressed as a percentage, with higher percentages indicating greater inconsistency) between protocols or registrations and full reports. We summarized the findings from the included systematic reviews and surveys qualitatively.
    Results: There were 37 studies (33 surveys and 4 systematic reviews) included in our analyses. Most studies (n = 36) compared protocols or registrations with full reports of clinical trials, while a single survey covered primary studies of both clinical trials and observational research. High levels of inconsistency were found in outcome reporting (ranging from 14% to 100%), subgroup reporting (from 12% to 100%), statistical analyses (from 9% to 47%), and other comparisons. Factors such as outcomes with significant results, sponsorship, type of outcome, and disease speciality were reported to be significantly associated with inconsistent reporting.
    Conclusions: We found that inconsistent reporting between protocols or registrations and full reports of primary biomedical research is frequent and suboptimal. We also identified methodological issues, such as the need for consensus on how to measure inconsistency across sources for trial reports, and for more studies evaluating transparency and reproducibility in reporting all aspects of study design and analysis. A joint effort involving authors, journals, sponsors, regulators, and research ethics committees is required to solve this problem.
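    To illustrate how a level of inconsistency could be expressed as a percentage for a single protocol-versus-report comparison, here is a small Python sketch. The item labels and the counting rule are assumptions made for illustration only; the review does not prescribe this scheme.

        # Illustrative only: one way to express inconsistency between a registration and its
        # full report is the share of pre-specified items that were dropped or changed.
        # Item labels and counting rule below are hypothetical, not the review's method.

        def inconsistency_pct(registered_items: set, reported_items: set) -> float:
            """Percentage of registered items that do not appear in the full report."""
            if not registered_items:
                return 0.0
            discrepant = registered_items - reported_items
            return 100.0 * len(discrepant) / len(registered_items)

        registered = {"primary outcome: pain at 12 weeks", "secondary outcome: function", "subgroup: age"}
        reported = {"primary outcome: pain at 12 weeks", "secondary outcome: function"}

        print(f"{inconsistency_pct(registered, reported):.0f}% of registered items not reported")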

    Supplemental Material, Appendix_2 - Evaluating Completeness of Reporting in Behavioral Interventions Pilot Trials

    No full text
    Supplemental Material, Appendix_2 for Evaluating Completeness of Reporting in Behavioral Interventions Pilot Trials by Meha Bhatt, Laura Zielinski, Nitika Sanger, Ieta Shams, Candice Luo, Bianca Bantoto, Hamnah Shahid, Guowei Li, Luciana P. F. Abbade, Ikunna Nwosu, Yanling Jin, Mei Wang, Yaping Chang, Guangwen Sun, Lawrence Mbuagbaw, Mitchell A. H. Levine, Jonathan D. Adachi, Lehana Thabane, and Zainab Samaan in Research on Social Work Practice.

    Supplemental Material, Appendix_1 - Evaluating Completeness of Reporting in Behavioral Interventions Pilot Trials

    No full text
    Supplemental Material, Appendix_1 for Evaluating Completeness of Reporting in Behavioral Interventions Pilot Trials by Meha Bhatt, Laura Zielinski, Nitika Sanger, Ieta Shams, Candice Luo, Bianca Bantoto, Hamnah Shahid, Guowei Li, Luciana P. F. Abbade, Ikunna Nwosu, Yanling Jin, Mei Wang, Yaping Chang, Guangwen Sun, Lawrence Mbuagbaw, Mitchell A. H. Levine, Jonathan D. Adachi, Lehana Thabane, and Zainab Samaan in Research on Social Work Practice.

    Supplemental Material, Appendix_3 - Evaluating Completeness of Reporting in Behavioral Interventions Pilot Trials

    No full text
    Supplemental Material, Appendix_3 for Evaluating Completeness of Reporting in Behavioral Interventions Pilot Trials by Meha Bhatt, Laura Zielinski, Nitika Sanger, Ieta Shams, Candice Luo, Bianca Bantoto, Hamnah Shahid, Guowei Li, Luciana P. F. Abbade, Ikunna Nwosu, Yanling Jin, Mei Wang, Yaping Chang, Guangwen Sun, Lawrence Mbuagbaw, Mitchell A. H. Levine, Jonathan D. Adachi, Lehana Thabane, and Zainab Samaan in Research on Social Work Practice.

    Supplemental Material, Supplementary_Material - A Systematic Survey of Control Groups in Behavioral and Social Science Trials

    No full text
    Supplemental Material, Supplementary_Material for A Systematic Survey of Control Groups in Behavioral and Social Science Trials by Mei Wang, Guangwen Sun, Yaping Chang, Yanling Jin, Alvin Leenus, Muhammad Maaz, Guowei Li, Meha Bhatt, Luciana P. F. Abbade, Ikunna Nwosu, Laura Zielinski, Nitika Sanger, Bianca Bantoto, Candice Luo, Ieta Shams, Hamnah Shahid, Jonathan Adachi, Lawrence Mbuagbaw, Mitchell Levine, Zainab Samaan, and Lehana Thabane in Research on Social Work Practice.