7 research outputs found

    Comparison of commonly used methods in random effects meta-analysis: Application to preclinical data in drug discovery research

    Background: Meta-analysis of preclinical data is used to evaluate the consistency of findings and to inform the design and conduct of future studies. Unlike clinical meta-analysis, preclinical data often involve many heterogeneous studies reporting outcomes from small numbers of animals. Here, we review the methodological challenges of preclinical meta-analysis in estimating and explaining heterogeneity in treatment effects. Methods: Assuming aggregate-level data, we focus on two topics: (1) estimation of heterogeneity using methods commonly applied in preclinical meta-analysis: the method of moments (DerSimonian and Laird; DL), maximum likelihood (restricted maximum likelihood; REML) and a Bayesian approach; and (2) comparison of univariate versus multivariable meta-regression for adjusting estimated treatment effects for heterogeneity. Using data from a systematic review of the efficacy of interleukin-1 receptor antagonist in animals with stroke, we compare these methods and explore the impact of multiple covariates on the treatment effects. Results: The three methods for estimating heterogeneity yielded similar estimates of the overall effect but different estimates of between-study variability. The proportion of heterogeneity explained by a covariate was estimated to be larger with REML and the Bayesian method than with DL. Multivariable meta-regression explained more heterogeneity than univariate meta-regression. Conclusions: Our findings highlight the importance of careful selection of the estimation method and of using multivariable meta-regression to explain heterogeneity. There was no difference between REML and the Bayesian method, and both are recommended over DL. Multivariable meta-regression is worthwhile when more than one variable contributes to heterogeneity, reducing more residual variability than any univariate model and increasing the proportion of heterogeneity explained.
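    As an illustration of the two frequentist estimators compared above, the sketch below computes the DerSimonian-Laird (method of moments) and REML estimates of the between-study variance tau^2 and the corresponding random-effects pooled effects. It is a minimal Python sketch with hypothetical effect sizes and variances, not the authors' analysis code; a Bayesian version would instead place a prior on tau^2 and sample its posterior.

# Illustrative sketch (not the authors' code): DerSimonian-Laird and REML
# estimates of tau^2 from aggregate-level effect sizes y and variances v.
import numpy as np
from scipy.optimize import minimize_scalar

def dl_tau2(y, v):
    """DerSimonian-Laird (method of moments) estimate of the between-study variance."""
    w = 1.0 / v
    mu_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fixed) ** 2)              # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (len(y) - 1)) / c)

def reml_tau2(y, v):
    """REML estimate of the between-study variance via the profiled restricted log-likelihood."""
    def neg_restricted_loglik(tau2):
        w = 1.0 / (v + tau2)
        mu = np.sum(w * y) / np.sum(w)
        return -(-0.5 * np.sum(np.log(v + tau2))
                 - 0.5 * np.log(np.sum(w))
                 - 0.5 * np.sum(w * (y - mu) ** 2))
    res = minimize_scalar(neg_restricted_loglik,
                          bounds=(0.0, 10.0 * float(np.var(y))), method="bounded")
    return res.x

def pooled_effect(y, v, tau2):
    """Random-effects pooled estimate and its standard error for a given tau^2."""
    w = 1.0 / (v + tau2)
    return np.sum(w * y) / np.sum(w), np.sqrt(1.0 / np.sum(w))

# Hypothetical standardised mean differences and their within-study variances.
y = np.array([0.8, 1.2, 0.3, 1.5, 0.9])
v = np.array([0.10, 0.15, 0.20, 0.12, 0.18])
for name, tau2 in [("DL", dl_tau2(y, v)), ("REML", reml_tau2(y, v))]:
    mu, se = pooled_effect(y, v, tau2)
    print(f"{name}: tau2 = {tau2:.3f}, pooled effect = {mu:.3f} (SE {se:.3f})")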

    EviAtlas: a tool for visualising evidence synthesis databases

    Systematic mapping assesses the nature of an evidence base, answering how much evidence exists on a particular topic. Perhaps the most useful outputs of a systematic map are an interactive database of studies and their meta-data, along with visualisations of this database. Despite the rapid increase in systematic mapping as an evidence synthesis method, there is currently a lack of Open Source software for producing interactive visualisations of systematic map databases. In April 2018, as attendees at and coordinators of the first ever Evidence Synthesis Hackathon in Stockholm, we decided to address this issue by developing an R-based tool called EviAtlas, an Open Access (i.e. free to use) and Open Source (i.e. software code is freely accessible and reproducible) tool for producing interactive, attractive tables and figures that summarise the evidence base. Here, we present our tool which includes the ability to generate vital visualisations for systematic maps and reviews as follows: a complete data table; a spatially explicit geographical information system (Evidence Atlas); Heat Maps that cross-tabulate two or more variables and display the number of studies belonging to multiple categories; and standard descriptive plots showing the nature of the evidence base, for example the number of studies published per year or number of studies per country. We believe that EviAtlas will provide a stimulus for the development of other exciting tools to facilitate evidence synthesis.
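    EviAtlas itself is an R/Shiny application, but the core of the heat-map output described above is simply a cross-tabulation of two coding variables with a count of studies per cell. The sketch below illustrates that idea in Python with a hypothetical systematic-map data table; the column names and records are invented for illustration only.

# Illustrative sketch only (EviAtlas itself is an R/Shiny tool): a heat-map
# table for a systematic map, cross-tabulating two coding variables and
# counting studies in each cell. Column names are hypothetical.
import pandas as pd

studies = pd.DataFrame({
    "intervention": ["drug A", "drug A", "drug B", "drug C", "drug B"],
    "outcome":      ["infarct volume", "neuroscore", "infarct volume",
                     "neuroscore", "neuroscore"],
    "year":         [2014, 2016, 2016, 2018, 2019],
})

# Heat-map table: number of studies for each intervention/outcome combination.
heat_map = pd.crosstab(studies["intervention"], studies["outcome"])
print(heat_map)

# Descriptive-plot data: number of studies published per year.
print(studies["year"].value_counts().sort_index())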

    Animal models of chemotherapy-induced peripheral neuropathy: a machine-assisted systematic review and meta-analysis

    We report a systematic review and meta-analysis of research using animal models of chemotherapy-induced peripheral neuropathy (CIPN). We systematically searched 5 online databases in September 2012 and updated the search in November 2015, using machine learning and text mining to reduce the screening workload and improve accuracy. For each comparison, we calculated a standardised mean difference (SMD) effect size and then combined effects in a random-effects meta-analysis. We assessed the impact of study design factors and the reporting of measures to reduce risks of bias, and we present power analyses for the most frequently reported behavioural tests. 337 publications were included. Most studies (84%) used male animals only. The most frequently reported outcome measure was evoked limb withdrawal in response to mechanical monofilaments. Reporting of measures to reduce risks of bias was modest. The number of animals required to obtain 80% power at a significance level of 0.05 varied substantially across behavioural tests. In this comprehensive summary of the use of animal models of CIPN, we identify areas in which the value of preclinical CIPN studies might be increased. Using both sexes of animals, ensuring that outcome measures align with those most relevant in the clinic, and assessing the animal's pain in an ethologically relevant context are likely to improve external validity. Measures to reduce risk of bias should be employed to increase the internal validity of studies. Because different outcome measures have different statistical power, power considerations can refine approaches to modelling CIPN.
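    The review summarises comparisons as standardised mean differences and reports power analyses for the common behavioural tests. The sketch below is illustrative only and not the review's code: it shows a standard way to compute Hedges' g from hypothetical group summaries and a normal-approximation estimate of the number of animals per group needed for 80% power at a 5% significance level.

# Illustrative sketch (not the review's code): Hedges' g with its approximate
# variance, and an approximate sample-size calculation for 80% power.
import numpy as np
from scipy.stats import norm

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Standardised mean difference (Hedges' g) and its approximate variance."""
    s_pooled = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)          # small-sample correction
    g = j * d
    var_g = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
    return g, var_g

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate animals per group for a two-sample comparison (normal approximation)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(2 * (z / effect_size) ** 2))

# Hypothetical behavioural-test summary data (treated vs control group).
g, var_g = hedges_g(m1=4.2, s1=1.1, n1=8, m2=5.6, s2=1.3, n2=8)
print(f"Hedges' g = {g:.2f} (variance {var_g:.3f})")
print("Animals per group for 80% power:", n_per_group(abs(g)))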

    Building a Systematic Online Living Evidence Summary of COVID-19 Research

    Throughout the global coronavirus pandemic, we have seen an unprecedented volume of COVID-19 research publications. This vast body of evidence continues to grow, making it difficult for research users to keep up with the pace of evolving research findings. To enable the synthesis of this evidence for timely use by researchers, policymakers, and other stakeholders, we developed an automated workflow to collect, categorise, and visualise the evidence from primary COVID-19 research studies. We trained a crowd of volunteer reviewers to annotate studies by relevance to COVID-19, study objectives, and methodological approaches. Using these human decisions, we are training machine learning classifiers and applying text-mining tools to continually categorise the findings and evaluate the quality of the COVID-19 evidence.
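    The workflow trains machine learning classifiers on human screening decisions; the exact models used are not described here, so the sketch below is only a minimal, hypothetical illustration of that step: a TF-IDF plus logistic regression relevance classifier built with scikit-learn from a handful of invented title strings and labels.

# Minimal sketch (not the project's actual pipeline): training a relevance
# classifier from human screening decisions on titles/abstracts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical human-annotated training data: 1 = relevant to COVID-19, 0 = not.
texts = [
    "Clinical features of patients infected with SARS-CoV-2 in Wuhan",
    "Remdesivir for the treatment of severe COVID-19 pneumonia",
    "Long-term outcomes after hip replacement surgery",
    "Seroprevalence of SARS-CoV-2 antibodies in healthcare workers",
    "Dietary patterns and cardiovascular risk in older adults",
]
labels = [1, 1, 0, 1, 0]

classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),     # unigram + bigram TF-IDF features
    LogisticRegression(max_iter=1000),
)
classifier.fit(texts, labels)

# Predicted probability of relevance for new, unscreened records.
new_records = ["Efficacy of mRNA vaccines against SARS-CoV-2 variants"]
print(classifier.predict_proba(new_records)[:, 1])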

    Advanced statistical methods in meta-analysis with applications in preclinical neurological drug discovery data

    Conventional drug development generally starts with laboratory studies in animals, which are used before testing safety and efficacy in humans to justify the potential costs and risks. In the neurosciences, this has been characterised by substantial efficacy observed in animal studies that does not translate to similar efficacy in the clinic. This "translational failure" may in part be due to poor design and analysis of animal studies. Treatment effects are highly homogeneous within animal studies and highly heterogeneous between animal studies; the opposite is true in clinical trials. Meta-analysis of preclinical data is used to understand the sources of heterogeneity in experimental findings and involves statistical issues different from those in clinical meta-analysis. In this thesis, I investigate the methodological challenges of preclinical meta-analysis for estimating and explaining heterogeneity, based on neurological drug discovery data.
    In the first part of the thesis, assuming aggregate-level data for a continuous outcome, I present a comprehensive summary and comparison of the most common methods for estimating and quantifying heterogeneity in meta-analysis of preclinical data. I focus on two topics: (1) estimation of heterogeneity using the method of moments (DerSimonian-Laird, DL), restricted maximum likelihood (REML) and a Bayesian approach; and (2) comparison of univariate versus multivariable meta-regression for adjusting for heterogeneity in treatment effects between studies. My findings indicate no difference between REML and the Bayesian method, and both are recommended over DL. Moreover, I show that multivariable meta-regression is worthwhile for explaining heterogeneity, reducing more residual variability than any univariate model and increasing the proportion of heterogeneity explained. To further understand heterogeneity in animal studies, and to evaluate whether current preclinical evidence is sufficient for translation, I empirically investigate how measures of heterogeneity from meta-analyses of animal studies change as evidence accumulates, using cumulative meta-analyses and cumulative meta-regression of seven systematic review datasets of varying sizes. The preliminary findings suggest that it may be possible to identify systematic characteristics of heterogeneity within preclinical datasets that can be used alongside other measures to guide decisions on whether to proceed to human testing.
    The second part of the thesis investigates approaches for quantifying heterogeneity using individual animal data meta-analysis. Individual-data meta-analysis is the gold standard for synthesising evidence in clinical trials, as it allows detailed data checking and exploration of factors contributing to heterogeneity; however, it is rarely undertaken because of limited access to original data. To explore its potential benefits in the preclinical setting, I examine, under a Bayesian framework, the relationship between individual-level and aggregate-level meta-analysis in quantifying heterogeneity, based on general linear mixed-effects models, and I consider the impact of distributional assumptions. My findings show that, despite providing results similar to the aggregate-level analysis, individual-level analysis offers more flexibility to explore model assumptions, implement different distributions and examine potential effect modifiers at the animal level. Additionally, I use simulations to illustrate how the number of animals and studies and the magnitude of heterogeneity affect the accuracy of the estimates. Extending the exploration of heterogeneity beyond a single endpoint, I present work on the joint synthesis of multiple correlated endpoints measured repeatedly over a longitudinal profile, based on a multivariate longitudinal meta-analysis of individual animal data. Using linear mixed-model methodology, I illustrate how such a joint synthesis can be applied in preclinical meta-analysis while accounting for the longitudinal structure, and I discuss its advantages over separate univariate meta-analyses that ignore these correlations.
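    One of the empirical questions above is how heterogeneity estimates change as evidence accumulates. The sketch below, using hypothetical data rather than the thesis datasets, illustrates a cumulative meta-analysis in which studies are added in publication order and the DerSimonian-Laird tau^2 and I^2 are re-estimated at each step. The individual-animal-data analyses described in the thesis would instead fit linear mixed-effects models at the animal level, which this aggregate-level sketch does not attempt.

# Illustrative sketch (not the thesis code): cumulative re-estimation of
# between-study heterogeneity (DerSimonian-Laird tau^2 and I^2) as studies
# are added in publication order. Data are hypothetical.
import numpy as np

def dl_heterogeneity(y, v):
    """Return (tau^2, I^2) from effect sizes y and within-study variances v."""
    w = 1.0 / v
    mu = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu) ** 2)                 # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return tau2, i2

# Hypothetical effect sizes (SMD) and variances, ordered by publication year.
y = np.array([1.4, 0.9, 1.1, 0.4, 1.6, 0.7, 1.0, 0.5])
v = np.array([0.20, 0.15, 0.18, 0.25, 0.12, 0.22, 0.16, 0.19])

for k in range(2, len(y) + 1):
    tau2, i2 = dl_heterogeneity(y[:k], v[:k])
    print(f"after {k} studies: tau2 = {tau2:.3f}, I^2 = {i2:.1f}%")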

    EviAtlas: a tool for visualising evidence synthesis databases

    Systematic mapping assesses the nature of an evidence base, answering how much evidence exists on a particular topic. Perhaps the most useful outputs of a systematic map are an interactive database of studies and their meta-data, along with visualisations of this database. Despite the rapid increase in systematic mapping as an evidence synthesis method, there is currently a lack of Open Source software for producing interactive visualisations of systematic map databases. In April 2018, as attendees at and coordinators of the first ever Evidence Synthesis Hackathon in Stockholm, we decided to address this issue by developing an R-based tool called EviAtlas, an Open Access (i.e. free to use) and Open Source (i.e. software code is freely accessible and reproducible) tool for producing interactive, attractive tables and figures that summarise the evidence base. Here, we present our tool which includes the ability to generate vital visualisations for systematic maps and reviews as follows: a complete data table; a spatially explicit geographical information system (Evidence Atlas); Heat Maps that cross-tabulate two or more variables and display the number of studies belonging to multiple categories; and standard descriptive plots showing the nature of the evidence base, for example the number of studies published per year or number of studies per country. We believe that EviAtlas will provide a stimulus for the development of other exciting tools to facilitate evidence synthesis.