Reporting guidelines used varying methodology to develop recommendations
Background and Objectives
We investigated the development methods of the reporting guidelines indexed in the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network's database.
Methods
In October 2018, we screened all records and excluded from further investigation those that did not describe reporting guidelines. Twelve researchers performed duplicate data extraction on bibliometrics, scope, development methods, presentation, and dissemination of all publications. Descriptive statistics were used to summarize the findings.
Results
Of the 405 screened records, 262 described the development of a reporting guideline. The number of reporting guidelines increased over the past 3 decades, from 5 in the 1990s and 63 in the 2000s to 157 in the 2010s. Development groups included 2–151 people. Literature appraisal was performed during the development of 56% of the reporting guidelines; 33% used surveys to gather external opinion on items to report; and 42% piloted or sought external feedback on their recommendations. Examples of good reporting for all reporting items were presented in 30% of the reporting guidelines. Eighteen percent of the reviewed publications included some level of spin.
Conclusion
Reporting guidelines have been developed with varying methodology. Reporting guideline developers should use existing guidance and take an evidence-based approach, rather than base their recommendations on the expert opinion of a limited group of individuals.
Evaluation of clinical prediction models (part 1): from development to external validation
Evaluating the performance of a clinical prediction model is crucial to establish its predictive accuracy in the populations and settings intended for use. In this article, the first in a three part series, Collins and colleagues describe the importance of a meaningful evaluation using internal, internal-external, and external validation, as well as exploring heterogeneity, fairness, and generalisability in model performance
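To make the internal-external idea concrete, here is a minimal, hypothetical sketch (simulated data, scikit-learn; not code from the article) that leaves one study site out at a time, refits a logistic regression model on the remaining sites, and checks discrimination in the held-out site:

```python
# Minimal sketch of internal-external (leave-one-site-out) validation.
# All data and variable names are simulated placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
n, n_sites = 2000, 5
X = rng.normal(size=(n, 4))                       # candidate predictors
site = rng.integers(0, n_sites, size=n)           # clustering variable (e.g. hospital)
y = (rng.random(n) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)  # simulated outcome

logo = LeaveOneGroupOut()
for fold, (train_idx, test_idx) in enumerate(logo.split(X, y, groups=site)):
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    auc = roc_auc_score(y[test_idx], model.predict_proba(X[test_idx])[:, 1])
    print(f"held-out site {fold}: AUC = {auc:.2f}")  # variation hints at heterogeneity
```

Examining how performance varies across the held-out sites is what distinguishes this approach from a single random train/test split.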
Reminding peer reviewers of reporting guideline items to improve completeness in published articles: primary results of 2 randomized trials
Importance: Numerous studies have shown that adherence to reporting guidelines is suboptimal.
Objective: To evaluate whether asking peer reviewers to check if specific reporting guideline items were adequately reported would improve adherence to reporting guidelines in published articles.
Design, Setting, and Participants: Two parallel-group, superiority randomized trials were performed using manuscripts submitted to 7 biomedical journals (5 from the BMJ Publishing Group and 2 from the Public Library of Science) as the unit of randomization, with peer reviewers allocated to the intervention or control group.
Interventions: The first trial (CONSORT-PR) focused on manuscripts that presented randomized clinical trial (RCT) results and reported following the Consolidated Standards of Reporting Trials (CONSORT) guideline, and the second trial (SPIRIT-PR) focused on manuscripts that presented RCT protocols and reported following the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) guideline. The CONSORT-PR trial included manuscripts that described RCT primary results (submitted July 2019 to July 2021). The SPIRIT-PR trial included manuscripts that contained RCT protocols (submitted June 2020 to May 2021). Manuscripts in both trials were randomized (1:1) to the intervention or control group; the control group received usual journal practice. In the intervention group of both trials, peer reviewers received an email from the journal that asked them to check whether the 10 most important and poorly reported CONSORT (for CONSORT-PR) or SPIRIT (for SPIRIT-PR) items were adequately reported in the manuscript. Peer reviewers and authors were not informed of the purpose of the study, and outcome assessors were blinded.
Main Outcomes and Measures: The difference between the intervention and control groups in the mean proportion of the 10 CONSORT or SPIRIT items that were adequately reported in the published articles.
Results: In the CONSORT-PR trial, 510 manuscripts were randomized. Of those, 243 were published (122 in the intervention group and 121 in the control group). A mean proportion of 69.3% (95% CI, 66.0%-72.7%) of the 10 CONSORT items was adequately reported in the intervention group and 66.6% (95% CI, 62.5%-70.7%) in the control group (mean difference, 2.7%; 95% CI, −2.6% to 8.0%). In the SPIRIT-PR trial, of the 244 randomized manuscripts, 178 were published (90 in the intervention group and 88 in the control group). A mean proportion of 46.1% (95% CI, 41.8%-50.4%) of the 10 SPIRIT items was adequately reported in the intervention group and 45.6% (95% CI, 41.7%-49.4%) in the control group (mean difference, 0.5%; 95% CI, −5.2% to 6.3%).
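For readers unfamiliar with this kind of primary outcome, the sketch below shows how a difference in mean per-article adherence and a normal-approximation 95% CI can be computed; the numbers are simulated placeholders, not the trial data, and the trials' actual analyses may have differed (e.g. used adjusted models):

```python
# Hedged sketch: difference in mean per-article adherence (% of 10 items
# adequately reported) between two groups, with a normal-approximation 95% CI.
# All values are simulated placeholders, not data from CONSORT-PR/SPIRIT-PR.
import numpy as np

rng = np.random.default_rng(1)
intervention = rng.binomial(10, 0.69, size=122) / 10 * 100  # % of 10 items reported
control = rng.binomial(10, 0.67, size=121) / 10 * 100

diff = intervention.mean() - control.mean()
se = np.sqrt(intervention.var(ddof=1) / intervention.size
             + control.var(ddof=1) / control.size)
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"mean difference = {diff:.1f}% (95% CI {lo:.1f}% to {hi:.1f}%)")
```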
Conclusions and Relevance: These 2 randomized trials found that the tested intervention was not useful for increasing reporting completeness in published articles. Other interventions should be assessed and considered in the future.
Trial Registration: ClinicalTrials.gov Identifiers: NCT05820971 (CONSORT-PR) and NCT05820984 (SPIRIT-PR).
Completeness of reporting in diet- and nutrition-related randomized controlled trials and systematic reviews with meta-analysis: protocol for 2 independent meta-research studies
Background: Journal articles describing randomized controlled trials (RCTs) and systematic reviews with meta-analysis of RCTs are not optimally reported and often miss crucial details. This poor reporting makes assessing these studies’ risk of bias or reproducing their results difficult. However, the reporting quality of diet- and nutrition-related RCTs and meta-analyses has not been explored.
Objective: We aimed to assess the reporting completeness and identify the main reporting limitations of diet- and nutrition-related RCTs and meta-analyses of RCTs, estimate the frequency of reproducible research practices among these RCTs, and estimate the frequency of distorted presentation or spin among these meta-analyses.
Methods: Two independent meta-research studies will be conducted using articles published in PubMed-indexed journals. The first will include a sample of diet- and nutrition-related RCTs; the second will include a sample of systematic reviews with meta-analysis of diet- and nutrition-related RCTs. A validated search strategy will be used to identify RCTs of nutritional interventions and an adapted strategy to identify meta-analyses in PubMed. We will search for RCTs and meta-analyses indexed in 1 calendar year and randomly select 100 RCTs (June 2021 to June 2022) and 100 meta-analyses (July 2021 to July 2022). Two reviewers will independently screen the titles and abstracts of records yielded by the searches, then read the full texts to confirm their eligibility. The general features of these published RCTs and meta-analyses will be extracted into a research electronic data capture database (REDCap; Vanderbilt University). The completeness of reporting of each RCT will be assessed using the items in the CONSORT (Consolidated Standards of Reporting Trials) statement, its extensions, and the TIDieR (Template for Intervention Description and Replication) statement. Information about practices that promote research transparency and reproducibility, such as the publication of protocols and statistical analysis plans, will be collected. The completeness of reporting of each meta-analysis will be assessed using the items in the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement, and information about spin in the abstracts and full texts will be collected. The results will be presented as descriptive statistics in diagrams or tables. These 2 meta-research studies are registered in the Open Science Framework.
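As a rough illustration of how per-article completeness might be summarized from such an extraction export (all column names and values below are hypothetical, not the study's actual REDCap fields), consider:

```python
# Minimal sketch: summarizing reporting completeness from an extraction export.
# Column names and values are hypothetical, not the study's REDCap fields.
import pandas as pd

# 1 = item adequately reported, 0 = not adequately reported
extraction = pd.DataFrame({
    "article_id": ["rct_001", "rct_002", "rct_003"],
    "consort_1a_title": [1, 1, 0],
    "consort_3a_design": [1, 0, 0],
    "tidier_4_materials": [0, 1, 1],
})

items = [c for c in extraction.columns if c != "article_id"]
extraction["completeness_pct"] = extraction[items].mean(axis=1) * 100
print(extraction[["article_id", "completeness_pct"]])
print(extraction["completeness_pct"].describe())  # descriptive statistics per sample
```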
Results: The literature search for the first meta-research study retrieved 20,030 records, of which 2182 were potentially eligible. The literature search for the second meta-research study retrieved 10,918 records, of which 850 were potentially eligible. From these, random samples of 100 RCTs and 100 meta-analyses were selected for data extraction. Data extraction is currently in progress, and completion is expected by the beginning of 2023.
Conclusions: Our meta-research studies will summarize the main limitations in the reporting completeness of nutrition- or diet-related RCTs and meta-analyses and provide comprehensive information regarding the particularities of reporting intervention studies in the nutrition field.
International Registered Report Identifier (IRRID): DERR1-10.2196/4353
Protocol for a meta-research study of protocols for diet- or nutrition-related trials published in indexed journals: general aspects of study design, rationale and reporting limitations
INTRODUCTION: The Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) reporting guideline establishes a minimum set of items to be reported in any randomised controlled trial (RCT) protocol. The Template for Intervention Description and Replication (TIDieR) reporting guideline was developed to improve the reporting of interventions in RCT protocols and results papers. Reporting completeness in protocols of diet or nutrition-related RCTs has not been systematically investigated. We aim to identify published protocols of diet or nutrition-related RCTs, assess their reporting completeness and identify the main reporting limitations remaining in this field.
METHODS AND ANALYSIS: We will conduct a meta-research study of RCT protocols published in journals indexed in at least one of six selected databases between 2012 and 2022. We have run a search in PubMed, Embase, CINAHL, Web of Science, PsycINFO and Global Health using a search strategy designed to identify protocols of diet or nutrition-related RCTs. Two reviewers will independently screen the titles and abstracts of records yielded by the search in Rayyan. The full texts will then be read to confirm protocol eligibility. We will collect general study features (publication information, types of participants, interventions, comparators, outcomes and study design) of all eligible published protocols in this contemporary sample. We will assess reporting completeness in a randomly selected sample of them and identify their main reporting limitations. We will compare this subsample with the items in the SPIRIT and TIDieR statements. For all data collection, we will use data extraction forms in REDCap. This protocol is registered on the Open Science Framework (DOI: 10.17605/OSF.IO/YWEVS).
ETHICS AND DISSEMINATION: This study will undertake a secondary analysis of published data and does not require ethical approval. The results will be disseminated through journals and conferences targeting stakeholders involved in nutrition research.
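As an illustration only, a date-limited PubMed retrieval of the kind described above could be scripted as follows; the query string and email address are placeholders, not the study's validated search strategy:

```python
# Hedged sketch: retrieving PubMed record IDs for a date-limited search.
# The query string and email are placeholders, not the study's validated strategy.
from Bio import Entrez

Entrez.email = "researcher@example.org"  # required by NCBI; placeholder address

query = '("diet" OR "nutrition") AND "randomized controlled trial protocol"'
handle = Entrez.esearch(db="pubmed", term=query, datetype="pdat",
                        mindate="2012", maxdate="2022", retmax=200)
record = Entrez.read(handle)
handle.close()

print(f"records found: {record['Count']}")
print(record["IdList"][:10])  # PubMed IDs to export for screening (e.g. in Rayyan)
```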
"If we use the strength of diversity among researchers we can only improve the quality and impact of our research": Issues of equality, diversity, inclusion, and transparency in the process of applying for research funding
This paper sets out the recommendations that have emerged from a six-month-long exploration and discussion of the processes that take place before research is submitted for funding: the ‘pre-award’ environment. Our work concentrated on how this environment is experienced by researchers at all career stages and from a variety of backgrounds, demographics, and disciplines, as well as by research managers and research support professionals. In the later stages of our exploration, representatives from research funders were also involved in the discussions.
The primary component of this project was an analysis of pre-award activities and processes at UK universities, using information collated from workshops with researchers and research management and support staff. The findings of this analysis were presented as a workflow diagram, which was then used to surface issues relating to equality, diversity, inclusion, and transparency in context. The workflow diagram and the issues highlighted by it were used to structure discussions at a symposium for a range of research stakeholders, held in Bristol, UK, in January 2023. The recommendations set out in this paper are drawn from discussions that took place at that event.
This paper is not an exhaustive landscape analysis, nor a review of existing research and practice in the area of pre-award processes or of recent thinking on the topics of equality, diversity, and inclusion (EDI). Instead, it aims to summarise and encapsulate the suggestions put forward by the stakeholders during the symposium. These recommendations, from experienced professionals working in the field, are based on their encounters with the issues raised in the project. They do not relate solely to those working on pre-award processes, but may also apply to funders, policymakers, university leaders, and professional associations, since many of the challenges flagged in our research are systemic and cultural, and reach far beyond the research office.