Meta-analysis using individual participant data: one-stage and two-stage approaches, and why they may differ.
Meta-analysis using individual participant data (IPD) obtains and synthesises the raw, participant-level data from a set of relevant studies. The IPD approach is becoming an increasingly popular tool as an alternative to traditional aggregate data meta-analysis, especially as it avoids reliance on published results and provides an opportunity to investigate individual-level interactions, such as treatment-effect modifiers. There are two statistical approaches for conducting an IPD meta-analysis: one-stage and two-stage. The one-stage approach analyses the IPD from all studies simultaneously, for example, in a hierarchical regression model with random effects. The two-stage approach derives aggregate data (such as effect estimates) in each study separately and then combines these in a traditional meta-analysis model. There have been numerous comparisons of the one-stage and two-stage approaches via theoretical consideration, simulation and empirical examples, yet there remains confusion regarding when each approach should be adopted, and indeed why they may differ. In this tutorial paper, we outline the key statistical methods for one-stage and two-stage IPD meta-analyses, and provide 10 key reasons why they may produce different summary results. We explain that most differences arise because of different modelling assumptions, rather than the choice of one-stage or two-stage itself. We illustrate the concepts with recently published IPD meta-analyses, summarise key statistical software and provide recommendations for future IPD meta-analyses.
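To make the two-stage approach concrete, here is a minimal Python sketch. Stage one (fitting a model within each study) is assumed already done; stage two pools the resulting estimates, here with a DerSimonian-Laird random-effects model as one common choice for the "traditional meta-analysis model" mentioned above. The function name and the study estimates are illustrative, not taken from the paper.

```python
import numpy as np

def two_stage_meta(estimates, std_errors):
    """Stage two of a two-stage IPD meta-analysis: pool per-study
    effect estimates with a DerSimonian-Laird random-effects model."""
    y = np.asarray(estimates, dtype=float)
    v = np.asarray(std_errors, dtype=float) ** 2
    w = 1.0 / v                                  # fixed-effect weights
    fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - fixed) ** 2)             # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)      # between-study variance
    w_star = 1.0 / (v + tau2)                    # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

# Illustrative stage-one output from five hypothetical studies:
est = [0.42, 0.31, 0.55, 0.18, 0.47]
se = [0.12, 0.15, 0.20, 0.10, 0.18]
print(two_stage_meta(est, se))
```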
Deriving percentage study weights in multi-parameter meta-analysis models: with application to meta-regression, network meta-analysis and one-stage individual participant data models.
Many meta-analysis models contain multiple parameters, for example due to multiple outcomes, multiple treatments or multiple regression coefficients. In particular, meta-regression models may contain multiple study-level covariates, and one-stage individual participant data meta-analysis models may contain multiple patient-level covariates and interactions. Here, we propose how to derive percentage study weights for such situations, in order to reveal the (otherwise hidden) contribution of each study toward the parameter estimates of interest. We assume that studies are independent, and utilise a decomposition of Fisher's information matrix to decompose the total variance matrix of parameter estimates into study-specific contributions, from which percentage weights are derived. This approach generalises how percentage weights are calculated in a traditional, single parameter meta-analysis model. Application is made to one- and two-stage individual participant data meta-analyses, meta-regression and network (multivariate) meta-analysis of multiple treatments. These reveal percentage study weights toward clinically important estimates, such as summary treatment effects and treatment-covariate interactions, and are especially useful when some studies are potential outliers or at high risk of bias. We also derive percentage study weights toward methodologically interesting measures, such as the magnitude of ecological bias (difference between within-study and across-study associations) and the amount of inconsistency (difference between direct and indirect evidence in a network meta-analysis).
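The decomposition described above can be sketched in a few lines of numpy. Assuming each independent study i contributes a Fisher information matrix I_i for the shared parameter vector (the matrices below are invented for illustration), the total variance matrix is V = (sum_i I_i)^-1, and the identity V = sum_i V I_i V splits the variance of each parameter estimate into study-specific parts:

```python
import numpy as np

def percentage_weights(info_matrices):
    """Decompose Var(beta_hat) = (sum_i I_i)^-1 into per-study
    contributions V @ I_i @ V, and convert the diagonal entries into
    percentage weights (rows = studies, columns = parameters)."""
    total_info = sum(info_matrices)
    V = np.linalg.inv(total_info)            # total variance matrix
    contrib = np.array([np.diag(V @ I @ V) for I in info_matrices])
    return 100.0 * contrib / np.diag(V)      # each column sums to 100

# Two hypothetical studies, two parameters (e.g. effect + interaction):
I1 = np.array([[50.0, 5.0], [5.0, 8.0]])
I2 = np.array([[20.0, 2.0], [2.0, 30.0]])
print(percentage_weights([I1, I2]))
```

Because sum_i V @ I_i @ V equals V exactly, the weights for each parameter are guaranteed to sum to 100%, mirroring how weights behave in a single-parameter meta-analysis.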
Simulation-based power calculations for planning a two-stage individual participant data meta-analysis
BACKGROUND
Researchers and funders should consider the statistical power of planned Individual Participant Data (IPD) meta-analysis projects, as they are often time-consuming and costly. We propose simulation-based power calculations utilising a two-stage framework, and illustrate the approach for a planned IPD meta-analysis of randomised trials with continuous outcomes where the aim is to identify treatment-covariate interactions.
METHODS
The simulation approach has four steps: (i) specify an underlying (data generating) statistical model for trials in the IPD meta-analysis; (ii) use readily available information (e.g. from publications) and prior knowledge (e.g. number of studies promising IPD) to specify model parameter values (e.g. control group mean, intervention effect, treatment-covariate interaction); (iii) simulate an IPD meta-analysis dataset of a particular size from the model, and apply a two-stage IPD meta-analysis to obtain the summary estimate of interest (e.g. interaction effect) and its associated p-value; (iv) repeat the previous step (e.g. thousands of times), then estimate the power to detect a genuine effect by the proportion of summary estimates with a significant p-value.
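A stripped-down Python sketch of steps (i)-(iv), assuming continuous outcomes, a common treatment-covariate interaction across trials, and a fixed-effect stage two; every parameter value below is an illustrative placeholder rather than a value from the paper:

```python
import numpy as np
from scipy import stats

def simulate_power(n_trials=14, n_per_trial=85, effect=-1.0,
                   interaction=-0.1, sd=4.0, n_sims=1000, alpha=0.05,
                   seed=1):
    """Steps (i)-(iv): simulate IPD for continuous-outcome trials from a
    data-generating model, estimate the treatment-covariate interaction
    within each trial (stage one), pool the estimates by fixed-effect
    inverse-variance meta-analysis (stage two), and report the fraction
    of simulations with a significant pooled interaction."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        ests, variances = [], []
        for _ in range(n_trials):
            x = rng.normal(0.0, 1.0, n_per_trial)      # centred covariate
            t = rng.integers(0, 2, n_per_trial)        # 1:1 randomisation
            y = (effect * t + interaction * t * x
                 + rng.normal(0.0, sd, n_per_trial))
            X = np.column_stack([np.ones(n_per_trial), t, x, t * x])
            beta = np.linalg.lstsq(X, y, rcond=None)[0]
            resid = y - X @ beta
            cov = (resid @ resid / (n_per_trial - 4)) * np.linalg.inv(X.T @ X)
            ests.append(beta[3])
            variances.append(cov[3, 3])
        w = 1.0 / np.asarray(variances)
        pooled = np.sum(w * np.asarray(ests)) / np.sum(w)
        z = pooled / np.sqrt(1.0 / np.sum(w))
        hits += 2 * stats.norm.sf(abs(z)) < alpha
    return hits / n_sims

print(simulate_power())
```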
RESULTS
In a planned IPD meta-analysis of lifestyle interventions to reduce weight gain in pregnancy, 14 trials (1183 patients) promised their IPD to examine a treatment-BMI interaction (i.e. whether baseline BMI modifies the intervention effect on weight gain). Using our simulation-based approach, a two-stage IPD meta-analysis has <60% power to detect a reduction of 1 kg weight gain for a 10-unit increase in BMI. Additional IPD from ten other published trials (containing 1761 patients) would improve power to over 80%, but only if a fixed-effect meta-analysis was appropriate. Pre-specified adjustment for prognostic factors would increase power further. Incorrect dichotomisation of BMI would reduce power by over 20%, similar to immediately throwing away IPD from ten trials.
CONCLUSIONS
Simulation-based power calculations could inform the planning and funding of IPD projects, and should be used routinely
Association of maternal serum PAPP-A levels, nuchal translucency and crown rump length in first trimester with adverse pregnancy outcomes: Retrospective cohort study.
OBJECTIVE: Are first trimester serum pregnancy-associated plasma protein-A (PAPP-A), nuchal translucency (NT) and crown rump length (CRL) prognostic factors for adverse pregnancy outcomes? METHOD: Retrospective cohort study of women with singleton pregnancies (UK, 2011-2015). Unadjusted and multivariable logistic regression; outcomes: small for gestational age (SGA), pre-eclampsia (PE), pre-term birth (PTB), miscarriage, stillbirth, perinatal mortality and neonatal death (NND). RESULTS: 12,592 pregnancies: 852 (6.8%) PTB, 352 (2.8%) PE, 1824 (14.5%) SGA, 73 (0.6%) miscarriages, 37 (0.3%) stillbirths, 73 (0.6%) perinatal deaths and 38 (0.3%) NND. Multivariable analysis: lower odds of SGA [adjusted odds ratio (aOR) 0.88 (95% CI 0.85, 0.91)], PTB [0.92 (95% CI 0.88, 0.97)], PE [0.91 (95% CI 0.85, 0.97)] and stillbirth [0.71 (95% CI 0.52, 0.98)] as PAPP-A increases. Lower odds of SGA [aOR 0.79 (95% CI 0.70, 0.89)] but higher odds of miscarriage [aOR 1.75 (95% CI 1.12, 2.72)] as NT increases, and lower odds of stillbirth as CRL increases [aOR 0.94 (95% CI 0.89, 0.99)]. Multivariable analysis of the three factors together demonstrated strong associations between: a) PAPP-A, NT, CRL and SGA; b) PAPP-A and PTB; c) PAPP-A, CRL and PE; d) NT and miscarriage. CONCLUSIONS: PAPP-A, NT and CRL are independent prognostic factors for adverse pregnancy outcomes, especially PAPP-A for SGA, with lower PAPP-A associated with increased risk.
Perioperative supplementation with a fruit and vegetable juice powder concentrate and postsurgical morbidity: a double-blind, randomised, placebo controlled clinical trial
Aims
Surgical trauma leads to an inflammatory response that causes surgical morbidity. Reduced antioxidant micronutrient (AM) levels and/or excessive levels of reactive oxygen species (ROS) have previously been linked to delayed wound healing and the presence of chronic wounds. We aimed to evaluate the effect of pre-operative supplementation with an encapsulated fruit and vegetable juice powder concentrate (Juice Plus+®) on postoperative morbidity and quality of life (QoL).
Methods
We conducted a randomised, double-blind, placebo-controlled two-arm parallel clinical trial evaluating postoperative morbidity following lower third molar surgery. Patients aged between 18 and 65 years were randomised to take verum or placebo for 10 weeks prior to surgery and during the first postoperative week. The primary endpoint was the between-group difference in QoL over the first postoperative week, with secondary endpoints being related to other measures of postoperative morbidity (pain and trismus).
Results
One-hundred and eighty-three out of 238 randomised patients received surgery (Intention-To-Treat population). Postoperative QoL tended to be higher in the active compared to the placebo group (p=0.059). Furthermore, reduction in mouth opening 2 days after surgery was 3.1 mm smaller (p=0.042), the mean pain score over the postoperative week was 9.4 mm lower (p=0.007) and patients were less likely to experience moderate to severe pain on postoperative day 2 (RR 0.58, p=0.030), comparing verum to placebo groups.
Conclusion
Pre-operative supplementation with a fruit and vegetable supplement rich in AM may improve postoperative QoL and reduce surgical morbidity and postoperative complications.
Optimization viewpoint on Kalman smoothing, with applications to robust and sparse estimation
In this paper, we present the optimization formulation of the Kalman filtering and smoothing problems, and use this perspective to develop a variety of extensions and applications. We first formulate classic Kalman smoothing as a least squares problem, highlight special structure, and show that the classic filtering and smoothing algorithms are equivalent to a particular algorithm for solving this problem. Once this equivalence is established, we present extensions of Kalman smoothing to systems with nonlinear process and measurement models, systems with linear and nonlinear inequality constraints, systems with outliers in the measurements or sudden changes in the state, and systems where the sparsity of the state sequence must be accounted for. All extensions preserve the computational efficiency of the classic algorithms, and most of the extensions are illustrated with numerical examples, which are part of an open source Kalman smoothing Matlab/Octave package.
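The least-squares formulation the abstract starts from can be illustrated directly. The following Python/numpy sketch is an independent illustration (the paper's accompanying package is Matlab/Octave, so this is not the authors' code): the smoothed state sequence for a linear Gaussian state-space model is obtained by stacking whitened prior, process and measurement residuals into one regression over all states and solving it.

```python
import numpy as np

def kalman_smooth_ls(y, A, H, Q, R, m0, P0):
    """Kalman smoothing recast as one weighted least-squares problem:
    whiten each residual block by the inverse Cholesky factor of its
    covariance and solve for the whole state sequence at once."""
    N, n, m = len(y), A.shape[0], H.shape[0]
    Lp = np.linalg.inv(np.linalg.cholesky(P0))
    Lq = np.linalg.inv(np.linalg.cholesky(Q))
    Lr = np.linalg.inv(np.linalg.cholesky(R))
    rows, rhs = [], []
    # prior residual on x_1
    row = np.zeros((n, N * n)); row[:, :n] = Lp
    rows.append(row); rhs.append(Lp @ m0)
    # process residuals: x_k - A x_{k-1}
    for k in range(1, N):
        row = np.zeros((n, N * n))
        row[:, (k - 1) * n:k * n] = -Lq @ A
        row[:, k * n:(k + 1) * n] = Lq
        rows.append(row); rhs.append(np.zeros(n))
    # measurement residuals: y_k - H x_k
    for k in range(N):
        row = np.zeros((m, N * n))
        row[:, k * n:(k + 1) * n] = Lr @ H
        rows.append(row); rhs.append(Lr @ y[k])
    G, b = np.vstack(rows), np.concatenate(rhs)
    x = np.linalg.lstsq(G, b, rcond=None)[0]
    return x.reshape(N, n)

# Toy example: a 1-D random walk observed in noise.
A = np.array([[1.0]]); H = np.array([[1.0]])
Q = np.array([[0.1]]); R = np.array([[1.0]])
truth = np.cumsum(np.random.default_rng(0).normal(0, 0.3, 50))
y = truth[:, None] + np.random.default_rng(1).normal(0, 1.0, (50, 1))
xs = kalman_smooth_ls(y, A, H, Q, R, np.zeros(1), np.eye(1))
```

The dense matrix here hides the structure the paper exploits: the normal equations of this problem are block tridiagonal, which is what makes the classic recursions (and the robust and sparse variants) efficient.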
Altering fatty acid availability does not impair prolonged, continuous running to fatigue: evidence for carbohydrate dependence
We determined the effect of suppressing lipolysis via administration of nicotinic acid (NA) on fuel substrate selection and half-marathon running capacity. In a single-blinded, Latin square design, 12 competitive runners completed four trials involving treadmill running until volitional fatigue at a pace based on 95% of personal best half-marathon time. Trials were completed in a fed or overnight fasted state: 1) carbohydrate (CHO) ingestion before (2 g CHO/kg body mass) and during (44 g/h) exercise [CFED]; 2) CFED plus NA ingestion [CFED-NA]; 3) fasted with placebo ingestion during exercise [FAST]; and 4) FAST plus NA ingestion [FAST-NA]. There was no difference in running distance (CFED, 21.53 ± 1.07; CFED-NA, 21.29 ± 1.69; FAST, 20.60 ± 2.09; FAST-NA, 20.11 ± 1.71 km) or time to fatigue between the four trials. Concentrations of plasma free fatty acids (FFA) and glycerol were suppressed following NA ingestion irrespective of preexercise nutritional intake but were higher throughout exercise in FAST compared with all other trials (P < 0.05). Rates of whole-body CHO oxidation were unaffected by NA ingestion in the CFED and FAST trials, but were lower in the FAST trial compared with the CFED-NA trial (P < 0.05). CHO was the primary substrate for exercise in all conditions, contributing 83-91% to total energy expenditure with only a small contribution from fat-based fuels. Blunting the exercise-induced increase in FFA via NA ingestion did not impair intense running capacity lasting ~85 min, nor did it alter patterns of substrate oxidation in competitive athletes. Although there was a small but obligatory use of fat-based fuels, the oxidation of CHO-based fuels predominates during half-marathon running
Automatic extraction of candidate nomenclature terms using the doublet method
BACKGROUND: New terminology continuously enters the biomedical literature. How can curators identify new terms that can be added to existing nomenclatures? The most direct method, and one that has served well, involves reading the current literature. The scholarly curator adds new terms as they are encountered. Present-day scholars are severely challenged by the enormous volume of biomedical literature. Curators of medical nomenclatures need computational assistance if they hope to keep their terminologies current. The purpose of this paper is to describe a method of rapidly extracting new, candidate terms from huge volumes of biomedical text. The resulting lists of terms can be quickly reviewed by curators and added to nomenclatures, if appropriate. The candidate term extractor uses a variation of the previously described doublet coding method. The algorithm, which operates on virtually any nomenclature, derives from the observation that most terms within a knowledge domain are composed entirely of word combinations found in other terms from the same knowledge domain. Terms can be expressed as sequences of overlapping word doublets that have more specific meaning than the individual words that compose the term. The algorithm parses through text, finding contiguous sequences of word doublets that are known to occur somewhere in the reference nomenclature. When a sequence of matching word doublets is encountered, it is compared with whole terms already included in the nomenclature. If the doublet sequence is not already in the nomenclature, it is extracted as a candidate new term. Candidate new terms can be reviewed by a curator to determine if they should be added to the nomenclature. An implementation of the algorithm is demonstrated, using a corpus of published abstracts obtained through the National Library of Medicine's PubMed query service and using "The developmental lineage classification and taxonomy of neoplasms" as a reference nomenclature. RESULTS: A 31+ Megabyte corpus of pathology journal abstracts was parsed using the doublet extraction method. This corpus consisted of 4,289 records, each containing an abstract title. The total number of words included in the abstract titles was 50,547. New candidate terms for the nomenclature were automatically extracted from the titles of abstracts in the corpus. Total execution time on a desktop computer with CPU speed of 2.79 GHz was 2 seconds. The resulting output consisted of 313 new candidate terms, each consisting of concatenated doublets found in the reference nomenclature. Human review of the 313 candidate terms yielded a list of 285 terms approved by a curator. A final automatic extraction of duplicate terms yielded a final list of 222 new terms (71% of the original 313 extracted candidate terms) that could be added to the reference nomenclature. CONCLUSION: The doublet method for automatically extracting candidate nomenclature terms can be used to quickly find new terms from vast amounts of text. The method can be immediately adapted for virtually any text and any nomenclature. An implementation of the algorithm, in the Perl programming language, is provided with this article
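The core of the doublet method is easy to prototype. The article's implementation is in Perl; the sketch below is an independent Python rendering with a toy nomenclature and title, so all names and data are illustrative:

```python
import re

def doublets(term):
    """Overlapping word pairs of a term, e.g. 'basal cell carcinoma'
    -> {('basal', 'cell'), ('cell', 'carcinoma')}."""
    words = term.lower().split()
    return {(a, b) for a, b in zip(words, words[1:])}

def candidate_terms(text, nomenclature):
    """Scan text for maximal runs of contiguous word doublets that all
    occur somewhere in the reference nomenclature; runs not already
    present as whole terms are returned as candidate new terms."""
    known_doublets = set().union(*(doublets(t) for t in nomenclature))
    known_terms = {t.lower() for t in nomenclature}
    words = re.findall(r"[a-z0-9'-]+", text.lower())
    candidates, run = set(), []
    for a, b in zip(words, words[1:]):
        if (a, b) in known_doublets:
            run = run + [b] if run else [a, b]   # extend or start a run
        else:
            if len(run) >= 2 and " ".join(run) not in known_terms:
                candidates.add(" ".join(run))
            run = []
    if len(run) >= 2 and " ".join(run) not in known_terms:
        candidates.add(" ".join(run))
    return candidates

nomenclature = ["basal cell carcinoma", "squamous cell carcinoma",
                "renal cell carcinoma"]
title = "A case of basal cell carcinoma and renal squamous cell tumours"
print(candidate_terms(title, nomenclature))
```

On this toy input the function returns {'squamous cell'}: a run of nomenclature doublets that is not itself a whole term, which is exactly the kind of phrase the method surfaces for curator review.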