
    Systematic Reviews of Genetic Association Studies

    Gurdeep S. Sagoo and colleagues describe key components of the methodology for undertaking systematic reviews and meta-analyses of genetic association studies.

    Data mining in clinical trial text: transformers for classification and question answering tasks

    This research on data extraction methods applies recent advances in natural language processing to evidence synthesis based on medical texts. Texts of interest include abstracts of clinical trials in English and in multilingual contexts. The main focus is on information characterized via the Population, Intervention, Comparator, and Outcome (PICO) framework, but data extraction is not limited to these fields. Recent neural network architectures based on transformers show capacities for transfer learning and increased performance on downstream natural language processing tasks such as universal reading comprehension, brought forward by this architecture's use of contextualized word embeddings and self-attention mechanisms. This paper contributes to solving problems related to ambiguity in PICO sentence prediction tasks, and highlights how annotations for training named entity recognition systems can be used to train a high-performing yet flexible architecture for question answering in systematic review automation. Additionally, it demonstrates how the problem of insufficient training annotations for PICO entity extraction is tackled by data augmentation. All models in this paper were created with the aim of supporting systematic review (semi)automation. They achieve high F1 scores and demonstrate the feasibility of applying transformer-based classification methods to support data mining in the biomedical literature.
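The self-attention mechanism mentioned above, which produces the contextualized embeddings that transformers rely on, can be illustrated with a minimal numpy sketch. This is a single attention head with random illustrative weights, not the paper's model:

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention over a token sequence.

    x: (seq_len, d_model) token embeddings
    w_q, w_k, w_v: (d_model, d_k) projection matrices
    Returns contextualized embeddings of shape (seq_len, d_k).
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                  # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ v                               # each token mixes in context

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 5, 8, 4
x = rng.normal(size=(seq_len, d_model))
out = self_attention(x,
                     rng.normal(size=(d_model, d_k)),
                     rng.normal(size=(d_model, d_k)),
                     rng.normal(size=(d_model, d_k)))
print(out.shape)  # one contextualized vector per input token
```

In a real transformer this computation is repeated across multiple heads and layers, which is what makes the resulting embeddings context-sensitive.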

    Machine learning to assist risk of bias assessments in systematic reviews

    Background: Risk-of-bias assessments are now a standard component of systematic reviews. At present, reviewers need to manually identify relevant parts of research articles for a set of methodological elements that affect the risk of bias, in order to make a risk-of-bias judgement for each of these elements. We investigate the use of text mining methods to automate risk-of-bias assessments in systematic reviews. We aim to identify relevant sentences within the text of included articles, to rank articles by risk of bias, and to reduce the number of risk-of-bias assessments that the reviewers need to perform by hand. Methods: We use supervised machine learning to train two types of models, for each of the three risk-of-bias properties of sequence generation, allocation concealment, and blinding. The first model predicts whether a sentence in a research article contains relevant information. The second model predicts a risk-of-bias value for each research article. We use logistic regression, where each independent variable is the frequency of a word in a sentence or article, respectively. Results: We found that sentences can be successfully ranked by relevance with area under the receiver operating characteristic (ROC) curve (AUC) > 0.98. Articles can be ranked by risk of bias with AUC > 0.72. We estimate that more than 33% of articles can be assessed by just one reviewer, where two reviewers are normally required. Conclusions: We show that text mining can be used to assist risk-of-bias assessments.
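The method described above, logistic regression where each independent variable is a word frequency, can be sketched with scikit-learn. The sentences and labels below are hypothetical stand-ins for the annotated corpus:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical training sentences: 1 = reports sequence generation, 0 = not.
sentences = [
    "Participants were randomised using a computer generated sequence",
    "Allocation was concealed with sealed opaque envelopes",
    "Randomisation was performed by an independent statistician",
    "The mean age of participants was 54 years",
    "Outcomes were measured at 12 months follow up",
    "Baseline characteristics were similar between groups",
]
labels = [1, 1, 1, 0, 0, 0]

# Bag-of-words features: each independent variable is a word frequency.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(sentences)

model = LogisticRegression()
model.fit(X, labels)

# Rank sentences by predicted relevance (probability of the relevant class);
# in the paper this ranking is what reduces manual assessment work.
scores = model.predict_proba(X)[:, 1]
print(roc_auc_score(labels, scores))
```

With a realistically sized annotated corpus the same pipeline would be evaluated on held-out data rather than the training sentences shown here.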

    CINeMA: Software for semi-automated assessment of the Confidence In the results of Network Meta-Analysis

    Network meta-analysis (NMA) compares several interventions that are linked in a network of comparative studies and estimates the relative treatment effects between all treatments, using both direct and indirect evidence. NMA is increasingly used for decision making in health care; however, a user-friendly system to evaluate the confidence that can be placed in the results of NMA is currently lacking. This paper is a tutorial describing the Confidence In Network Meta-Analysis (CINeMA) web application, which is based on the framework developed by Salanti et al (2014, PLOS One, 9, e99682) and refined by Nikolakopoulou et al (2019, bioRxiv). Six domains that affect the level of confidence in the NMA results are considered: (a) within-study bias, (b) reporting bias, (c) indirectness, (d) imprecision, (e) heterogeneity, and (f) incoherence. CINeMA is freely available and open source, and no login is required. In the configuration step, users upload their data, produce network plots, and define the analysis and effect measure. The dataset should include assessments of study-level risk of bias and judgments on indirectness. CINeMA calls the netmeta routine in R to estimate relative effects and heterogeneity. Users are then guided through a systematic evaluation of the six domains. In this way, reviewers assess the level of concern for each relative treatment effect from NMA, assigning "no concerns," "some concerns," or "major concerns" in each of the six domains; these are graphically summarized on the report page for all effect estimates. Finally, judgments across the domains are summarized into a single confidence rating ("high," "moderate," "low," or "very low"). In conclusion, the user-friendly, web-based CINeMA platform provides a transparent framework to evaluate evidence from systematic reviews with multiple interventions.
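The combination of direct and indirect evidence that NMA rests on can be illustrated in miniature with a Bucher-style indirect comparison through a common comparator, followed by an inverse-variance combination. The effect sizes below are illustrative only; CINeMA itself delegates estimation to the netmeta routine in R:

```python
import math

def indirect_effect(d_ab, se_ab, d_ac, se_ac):
    """Bucher-style indirect comparison: effect of C vs B via common comparator A.
    Effects are on an additive scale (e.g. log odds ratios)."""
    d_bc = d_ac - d_ab
    se_bc = math.sqrt(se_ab**2 + se_ac**2)  # variances add for a difference
    return d_bc, se_bc

def combine_direct_indirect(d_dir, se_dir, d_ind, se_ind):
    """Inverse-variance weighted combination of direct and indirect estimates."""
    w_dir, w_ind = 1 / se_dir**2, 1 / se_ind**2
    d = (w_dir * d_dir + w_ind * d_ind) / (w_dir + w_ind)
    se = math.sqrt(1 / (w_dir + w_ind))
    return d, se

# Illustrative log odds ratios, not from any real dataset.
d_ind, se_ind = indirect_effect(d_ab=-0.3, se_ab=0.15, d_ac=-0.8, se_ac=0.20)
d_net, se_net = combine_direct_indirect(-0.45, 0.25, d_ind, se_ind)
print(round(d_ind, 3), round(se_ind, 3))  # indirect C vs B estimate
print(round(d_net, 3), round(se_net, 3))  # network (combined) estimate
```

The gap between the direct and indirect estimates in this toy calculation is exactly what the "incoherence" domain of CINeMA interrogates.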

    Use of external evidence for design and Bayesian analysis of clinical trials: a qualitative study of trialists' views

    Background: Evidence from previous studies is often used relatively informally in the design of clinical trials: for example, a systematic review may indicate whether a gap in the current evidence base justifies a new trial. External evidence can be used more formally in both trial design and analysis, by explicitly incorporating a synthesis of it in a Bayesian framework. However, it is unclear how common this is in practice or the extent to which it is considered controversial. In this qualitative study, we explored the attitudes and experiences of trialists regarding the incorporation of synthesised external evidence through the Bayesian design or analysis of a trial. Methods: Semi-structured interviews were conducted with 16 trialists: 13 statisticians and three clinicians. Participants were recruited across several universities and trials units in the United Kingdom using snowball and purposeful sampling. Data were analysed using thematic analysis and techniques of constant comparison. Results: Trialists used existing evidence in many ways in trial design, for example, to justify a gap in the evidence base and to inform parameters in sample size calculations. However, no one in our sample reported using such evidence in a Bayesian framework. Participants tended to equate Bayesian analysis with the incorporation of prior information on the intervention effect and were less aware of the potential to incorporate data on other parameters. When introduced to the concepts, many trialists felt they could be making more use of existing data to inform the design and analysis of a trial in particular scenarios. For example, some felt existing data could be used more formally to inform background adverse event rates, rather than relying on clinical opinion as to whether there are potential safety concerns. However, several barriers to implementing these methods in practice were identified, including concerns about the relevance of external data, the acceptability of Bayesian methods, a lack of confidence in Bayesian methods and software, and practical issues such as difficulties accessing relevant data. Conclusions: Although trialists recognised that more formal use of external evidence could be advantageous over current approaches in some areas, and useful in sensitivity analyses, there are still barriers to such use in practice.
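The idea of formally incorporating external data on background adverse event rates, rather than relying on clinical opinion alone, can be sketched as a conjugate beta-binomial update. All counts below are hypothetical:

```python
def update_beta(alpha, beta, events, n):
    """Conjugate beta-binomial update: prior Beta(alpha, beta) plus
    'events' adverse events observed among 'n' participants."""
    return alpha + events, beta + n - events

def beta_mean(alpha, beta):
    """Posterior mean of the event rate under a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Hypothetical external evidence: 12 adverse events in 400 participants,
# folded into a vague Beta(1, 1) starting point to form the trial's prior.
prior = update_beta(1.0, 1.0, events=12, n=400)

# The new trial then observes 5 events in 150 participants.
posterior = update_beta(*prior, events=5, n=150)
print(round(beta_mean(*posterior), 4))  # posterior estimate of the event rate
```

The same mechanism generalises: any parameter with a conjugate prior (event rates, control-arm response, dropout) can absorb a synthesis of external data in this way, which is the formal use of evidence the interviewees were asked about.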

    Risk of neuropsychiatric adverse events associated with varenicline: systematic review and meta-analysis

    Objective: To determine the risk of neuropsychiatric adverse events associated with use of varenicline compared with placebo in randomised controlled trials. Design: Systematic review and meta-analysis comparing study effects using two summary estimates in fixed effects models: risk differences and Peto odds ratios. Data sources: Medline, Embase, PsycINFO, the Cochrane Central Register of Controlled Trials (CENTRAL), and clinicaltrials.gov. Eligibility criteria for selecting studies: Randomised controlled trials with a placebo comparison group that reported on neuropsychiatric adverse events (depression, suicidal ideation, suicide attempt, suicide, insomnia, sleep disorders, abnormal dreams, somnolence, fatigue, anxiety) and death. Studies that did not involve human participants, did not use the maximum recommended dose of varenicline (1 mg twice daily), or were crossover trials were excluded. Results: In the 39 randomised controlled trials (10 761 participants), there was no evidence of an increased risk of suicide or attempted suicide (odds ratio 1.67, 95% confidence interval 0.33 to 8.57), suicidal ideation (0.58, 0.28 to 1.20), depression (0.96, 0.75 to 1.22), irritability (0.98, 0.81 to 1.17), aggression (0.91, 0.52 to 1.59), or death (1.05, 0.47 to 2.38) in varenicline users compared with placebo users. Varenicline was associated with an increased risk of sleep disorders (1.63, 1.29 to 2.07), insomnia (1.56, 1.36 to 1.78), abnormal dreams (2.38, 2.05 to 2.77), and fatigue (1.28, 1.06 to 1.55), but a reduced risk of anxiety (0.75, 0.61 to 0.93). Similar findings were observed when risk differences were reported. There was no evidence of variation in depression and suicidal ideation by age group, sex, ethnicity, smoking status, presence or absence of psychiatric illness, or type of study sponsor (that is, pharmaceutical industry or other). Conclusions: This meta-analysis found no evidence of an increased risk of suicide or attempted suicide, suicidal ideation, depression, or death with varenicline. These findings provide some reassurance for users and prescribers regarding the neuropsychiatric safety of varenicline. There was evidence that varenicline was associated with a higher risk of sleep problems such as insomnia and abnormal dreams. These side effects, however, are already well recognised. Systematic review registration: PROSPERO 2014:CRD42014009224.
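The Peto fixed-effect summary used in this review pools observed-minus-expected event counts and hypergeometric variances across studies. A minimal sketch with illustrative counts (not the review's data):

```python
import math

def peto_pooled_or(studies):
    """Fixed-effect Peto odds ratio across studies.

    Each study: (events_treat, n_treat, events_ctrl, n_ctrl).
    Returns the pooled odds ratio and its 95% confidence interval."""
    sum_o_minus_e, sum_v = 0.0, 0.0
    for a, n1, c, n2 in studies:
        N = n1 + n2
        m = a + c                                      # total events
        e = m * n1 / N                                 # expected treatment-arm events
        v = m * (N - m) * n1 * n2 / (N**2 * (N - 1))   # hypergeometric variance
        sum_o_minus_e += a - e
        sum_v += v
    log_or = sum_o_minus_e / sum_v
    se = 1 / math.sqrt(sum_v)
    ci = (math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se))
    return math.exp(log_or), ci

# Two hypothetical trials: (events on drug, n on drug, events on placebo, n on placebo).
or_, (lo, hi) = peto_pooled_or([
    (4, 100, 2, 100),
    (6, 250, 5, 250),
])
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

The Peto method is attractive for rare outcomes such as suicide attempts because, unlike the ordinary inverse-variance odds ratio, it does not require continuity corrections for zero-event arms.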