116 research outputs found
Clinical Trial Registration Patterns and Changes in Primary Outcomes of Randomized Clinical Trials from 2002 to 2017
This cross-sectional study evaluates the existence and timing of trial registration for randomized clinical trials (RCTs) published from 2002 to 2017 as well as substantive changes to the primary outcomes entered into registry information after those studies started
How to conduct a systematic review and meta-analysis of prognostic model studies
Background: Prognostic models are typically developed to estimate the risk that an individual in a particular health state will develop a particular health outcome, to support (shared) decision making. Systematic reviews of prognostic model studies can help identify prognostic models that need to be further validated or are ready to be implemented in healthcare. Objectives: To provide step-by-step guidance on how to conduct and read a systematic review of prognostic model studies and to provide an overview of the methodology and guidance available for every step of the review process. Sources: Published, peer-reviewed guidance articles. Content: We describe the following steps for conducting a systematic review of prognosis studies: 1) Developing the review question using the Population, Index model, Comparator model, Outcome(s), Timing, Setting format, 2) Searching for and selecting articles, 3) Extracting data using the Critical Appraisal and Data Extraction for Systematic Reviews of Prediction Modelling Studies (CHARMS) checklist, 4) Assessing quality and risk of bias using the Prediction model Risk Of Bias ASsessment (PROBAST) tool, 5) Analysing data and undertaking quantitative meta-analysis, and 6) Presenting a summary of findings, interpreting results, and drawing conclusions. Guidance for each step is described and illustrated using a case study on prognostic models for patients with COVID-19. Implications: Guidance for conducting a systematic review of prognosis studies is available, but the implications of these reviews for clinical practice and further research depend heavily on complete reporting of primary studies
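The meta-analysis step (step 5) for prognostic models commonly pools c-statistics across validation studies on the logit scale with a random-effects model. A minimal sketch of DerSimonian–Laird pooling, using made-up c-statistics and standard errors (not data from any study above):

```python
import math

def pool_c_statistics(c_stats, ses):
    """DerSimonian-Laird random-effects pooling of c-statistics on the logit scale."""
    # Logit transform; delta-method SE: se(logit c) ~= se(c) / (c * (1 - c))
    ys = [math.log(c / (1 - c)) for c in c_stats]
    vs = [(se / (c * (1 - c))) ** 2 for c, se in zip(c_stats, ses)]
    w = [1 / v for v in vs]
    y_fixed = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    # Cochran's Q and the DL estimate of between-study variance tau^2
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, ys))
    df = len(ys) - 1
    tau2 = max(0.0, (q - df) / (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))
    # Random-effects weights, pooled logit, back-transform to the c-statistic scale
    w_re = [1 / (v + tau2) for v in vs]
    y_re = sum(wi * yi for wi, yi in zip(w_re, ys)) / sum(w_re)
    return 1 / (1 + math.exp(-y_re)), tau2

# Illustrative validation results: (c-statistic, standard error) per study
pooled_c, tau2 = pool_c_statistics([0.72, 0.78, 0.65, 0.81], [0.03, 0.04, 0.05, 0.02])
```

The pooled estimate falls within the range of the study-level c-statistics, and tau² quantifies the between-study heterogeneity that the review guidance recommends investigating.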
Prognostic models for radiation-induced complications after radiotherapy in head and neck cancer patients
Objectives: This is a protocol for a Cochrane Review (prognosis). The objectives are as follows. Primary objective: The review question is "Which prognostic models are available to predict the risk of radiation-induced side effects after radiation exposure to patients with head and neck cancer, what is their quality, and what is their predictive performance?". Investigation of sources of heterogeneity between studies: We will assess sources of heterogeneity among the prognostic models developed in the eligible studies. The potential sources are study population (e.g. site/stage of cancer, the use of other treatment [surgery and chemotherapy]), predictors, definition and incidence of the predicted outcomes, and prediction horizons. If there are multiple validation studies for the same model, the same sources of between-study heterogeneity will be investigated
The methodological quality of 176,620 randomized controlled trials published between 1966 and 2018 reveals a positive trend but also an urgent need for improvement
Many randomized controlled trials (RCTs) are biased and difficult to reproduce due to methodological flaws and poor reporting. There is increasing attention to responsible research practices and the implementation of reporting guidelines, but whether these efforts have improved the methodological quality of RCTs (e.g., lower risk of bias) is unknown. We therefore mapped risk-of-bias trends over time in RCT publications in relation to journal and author characteristics. Meta-information of 176,620 RCTs published between 1966 and 2018 was extracted. The risk-of-bias probability (random sequence generation, allocation concealment, blinding of patients/personnel, and blinding of outcome assessment) was assessed using a risk-of-bias machine learning tool. This tool was simultaneously validated using 63,327 human risk-of-bias assessments obtained from 17,394 RCTs evaluated in the Cochrane Database of Systematic Reviews (CDSR). Moreover, RCT registration and CONSORT Statement reporting were assessed using automated searches. Publication characteristics included the number of authors, journal impact factor (JIF), and medical discipline. The annual number of published RCTs substantially increased over 4 decades, accompanied by increases in authors (5.2 to 7.8) and institutions (2.9 to 4.8). The risk of bias remained present in most RCTs but decreased over time for allocation concealment (63% to 51%), random sequence generation (57% to 36%), and blinding of outcome assessment (58% to 52%). Trial registration (37% to 47%) and the use of the CONSORT Statement (1% to 20%) also rapidly increased. In journals with a higher impact factor (>10), the risk of bias was consistently lower, with higher levels of RCT registration and use of the CONSORT Statement.
Automated risk-of-bias predictions had accuracies above 70% for allocation concealment (70.7%), random sequence generation (72.1%), and blinding of patients/personnel (79.8%), but not for blinding of outcome assessment (62.7%). In conclusion, the likelihood of bias in RCTs has generally decreased over the last decades. This optimistic trend may be driven by increased knowledge augmented by mandatory trial registration and more stringent reporting guidelines and journal requirements. Nevertheless, relatively high probabilities of bias remain, particularly in journals with lower impact factors. This emphasizes that further improvement of RCT registration, conduct, and reporting is still urgently needed
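Beyond raw accuracy, validating automated predictions against human assessments is often summarised with a chance-corrected agreement statistic such as Cohen's kappa. A minimal sketch with made-up binary labels (not data from the study above):

```python
def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two sets of binary judgements."""
    n = len(rater_a)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    pa = sum(rater_a) / n  # proportion of "high risk" calls by rater A
    pb = sum(rater_b) / n  # proportion of "high risk" calls by rater B
    p_chance = pa * pb + (1 - pa) * (1 - pb)  # expected agreement by chance
    return (p_observed - p_chance) / (1 - p_chance)

# Illustrative labels: 1 = high risk of bias, 0 = low risk
human = [1, 1, 1, 0, 0, 0]
tool = [1, 1, 0, 0, 0, 1]
kappa = cohens_kappa(human, tool)  # ~0.33: agreement only modestly above chance
```

Kappa penalises agreement that would occur by chance alone, which matters when, as here, most trials fall in the same risk category.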
Transparent reporting of multivariable prediction models for individual prognosis or diagnosis: checklist for systematic reviews and meta-analyses (TRIPOD-SRMA)
Most clinical specialties have a plethora of studies that develop or validate one or more prediction models, for example, to inform diagnosis or prognosis. Having many prediction model studies in a particular clinical field motivates the need for systematic reviews and meta-analyses, to evaluate and summarise the overall evidence available from prediction model studies, in particular about the predictive performance of existing models. Such reviews are fast emerging, and should be reported completely, transparently, and accurately. To help ensure this type of reporting, this article describes a new reporting guideline for systematic reviews and meta-analyses of prediction model research
Systematic review finds "spin" practices and poor reporting standards in studies on machine learning-based prediction models
Objectives
We evaluated the presence and frequency of spin practices and poor reporting standards in studies that developed and/or validated clinical prediction models using supervised machine learning techniques.
Study Design and Setting
We systematically searched PubMed from 01/2018 to 12/2019 to identify diagnostic and prognostic prediction model studies using supervised machine learning. No restrictions were placed on data source, outcome, or clinical specialty.
Results
We included 152 studies: 38% reported diagnostic models and 62% prognostic models. When reported, discrimination was described without precision estimates in 53/71 abstracts (74.6% [95% CI 63.4–83.3]) and 53/81 main texts (65.4% [95% CI 54.6–74.9]). Of the 21 abstracts that recommended the model to be used in daily practice, 20 (95.2% [95% CI 77.3–99.8]) lacked any external validation of the developed models. Likewise, 74/133 (55.6% [95% CI 47.2–63.8]) studies made recommendations for clinical use in their main text without any external validation. Reporting guidelines were cited in 13/152 (8.6% [95% CI 5.1–14.1]) studies.
Conclusion
Spin practices and poor reporting standards are also present in studies on prediction models using machine learning techniques. A tailored framework for the identification of spin will enhance the sound reporting of prediction model studies
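The confidence intervals reported in the results above are consistent with the Wilson score interval for a binomial proportion; a minimal, self-contained sketch:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% CI by default)."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - margin, centre + margin

# e.g. 53 of 71 abstracts describing discrimination without precision estimates
lo, hi = wilson_ci(53, 71)  # ~(0.634, 0.833), matching 74.6% [95% CI 63.4-83.3]
```

Unlike the simple Wald interval, the Wilson interval stays within [0, 1] and behaves well for proportions near 0 or 1, which is why it is a common default for reporting review results such as these.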
Overinterpretation of findings in machine learning prediction model studies in oncology: a systematic review
Objectives
In biomedical research, spin is the overinterpretation of findings, and it is a growing concern. To date, the presence of spin has not been evaluated in prognostic model research in oncology, including studies developing and validating models for individualized risk prediction.
Study Design and Setting
We conducted a systematic review, searching MEDLINE and EMBASE for oncology-related studies that developed and validated a prognostic model using machine learning published between 1st January, 2019, and 5th September, 2019. We used existing spin frameworks and described areas of highly suggestive spin practices.
Results
We included 62 publications (including 152 developed models; 37 validated models). Reporting was inconsistent between the methods and results sections in 27% of studies, owing to additional analyses and selective reporting. Thirty-two studies (out of 36 applicable studies) reported comparisons between developed models in their discussion and predominantly used discrimination measures to support their claims (78%). Thirty-five studies (56%) used an overly strong or leading word in their title, abstract, results, discussion, or conclusion.
Conclusion
The potential for spin needs to be considered when reading, interpreting, and using studies that developed and validated prognostic models in oncology. Researchers should carefully report their prognostic model research using words that reflect their actual results and strength of evidence
Systematic review identifies the design and methodological conduct of studies on machine learning-based prediction models
Background and Objectives
We sought to summarize the study design, modelling strategies, and performance measures reported in studies on clinical prediction models developed using machine learning techniques.
Methods
We searched PubMed for articles published between 01/01/2018 and 31/12/2019, describing the development or the development with external validation of a multivariable prediction model using any supervised machine learning technique. No restrictions were made based on study design, data source, or predicted patient-related health outcomes.
Results
We included 152 studies: 58 (38.2% [95% CI 30.8–46.1]) were diagnostic and 94 (61.8% [95% CI 53.9–69.2]) prognostic studies. Most studies reported only the development of prediction models (n = 133, 87.5% [95% CI 81.3–91.8]), focused on binary outcomes (n = 131, 86.2% [95% CI 79.8–90.8]), and did not report a sample size calculation (n = 125, 82.2% [95% CI 75.4–87.5]). The most common algorithms used were support vector machine (n = 86/522, 16.5% [95% CI 13.5–19.9]) and random forest (n = 73/522, 14.0% [95% CI 11.3–17.2]). Values for the area under the Receiver Operating Characteristic curve ranged from 0.45 to 1.00. Calibration metrics were often missing (n = 494/522, 94.6% [95% CI 92.4–96.3]).
Conclusion
Our review revealed that focus is required on handling of missing values, methods for internal validation, and reporting of calibration to improve the methodological conduct of studies on machine learning–based prediction models.
Systematic review registration
PROSPERO, CRD42019161764
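The two performance measures the review contrasts, discrimination (the c-statistic/AUC) and calibration, can both be computed directly from predictions. A minimal sketch with made-up predictions (not data from the review), using the Mann–Whitney formulation of the AUC and the observed/expected ratio as a simple calibration-in-the-large summary:

```python
def c_statistic(y_true, y_prob):
    """AUC via the Mann-Whitney formulation: the probability that a random
    event case receives a higher predicted risk than a random non-event case."""
    pos = [p for y, p in zip(y_true, y_prob) if y == 1]
    neg = [p for y, p in zip(y_true, y_prob) if y == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0 for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Illustrative outcomes and predicted risks
y = [0, 0, 1, 1]
p = [0.1, 0.4, 0.35, 0.8]
auc = c_statistic(y, p)            # 0.75
oe_ratio = sum(y) / sum(p)         # observed/expected events: >1 means underprediction
```

A model can discriminate well (high AUC) while its predicted risks are systematically too high or too low, which is precisely why the review flags the near-universal absence of calibration metrics as a problem.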
Outcomes of Minimally Invasive Thyroid Surgery - A Systematic Review and Meta-Analysis
Purpose: Conventional thyroidectomy has been the standard of care for thyroid nodules requiring surgery. For cosmetic purposes, different minimally invasive and remote-access surgical approaches have been developed. At present, the most commonly used robotic and endoscopic thyroidectomy approaches are minimally invasive video assisted thyroidectomy (MIVAT), bilateral axillo-breast approach endoscopic thyroidectomy (BABA-ET), bilateral axillo-breast approach robotic thyroidectomy (BABA-RT), transoral endoscopic thyroidectomy via vestibular approach (TOETVA), retro-auricular endoscopic thyroidectomy (RA-ET), retro-auricular robotic thyroidectomy (RA-RT), gasless transaxillary endoscopic thyroidectomy (GTET) and robot assisted transaxillary surgery (RATS). The purpose of this systematic review was to evaluate whether minimally invasive techniques are non-inferior to conventional thyroidectomy. Methods: A systematic search was conducted in Medline, Embase and Web of Science to identify original articles investigating operating time, length of hospital stay, and complication rates regarding recurrent laryngeal nerve injury and hypocalcemia, of the different minimally invasive techniques. Results: Out of 569 identified manuscripts, 98 studies met the inclusion criteria. Most studies were retrospective in nature. The results of the systematic review varied. Thirty-one articles were included in the meta-analysis. Compared to the standard of care, the meta-analysis showed no significant difference in length of hospital stay, except for a longer stay after BABA-ET. No significant difference in the incidence of recurrent laryngeal nerve injury or hypocalcemia was seen. As expected, operating time was significantly longer for most minimally invasive techniques. Conclusions: This is the first comprehensive systematic review and meta-analysis comparing the eight most commonly used minimally invasive thyroid surgeries individually with the standard of care.
It can be concluded that minimally invasive techniques do not lead to more complications or longer hospital stay and are, therefore, not inferior to conventional thyroidectomy