Adjusting for multiple prognostic factors in the analysis of randomised trials
Background: When multiple prognostic factors are adjusted for in the analysis of a randomised trial, it is unclear (1) whether, when randomisation has been balanced within each stratum (stratified randomisation), it is necessary to account for each of the strata formed by all combinations of the prognostic factors (stratified analysis), or whether adjusting for the main effects alone will suffice, and (2) which method of adjustment is best in terms of type I error rate and power, irrespective of the randomisation method.
Methods: We used simulation to (1) determine whether a stratified analysis is necessary after stratified randomisation, and (2) compare different methods of adjustment in terms of power and type I error rate. We considered the following methods of analysis: adjusting for covariates in a regression model; adjusting for each stratum using either fixed or random effects; and Mantel-Haenszel or a stratified Cox model, depending on the outcome.
Results: A stratified analysis is required after stratified randomisation to maintain correct type I error rates when (a) there are strong interactions between prognostic factors, and (b) there are approximately equal numbers of patients in each stratum. However, simulations based on real trial data found that type I error rates were unaffected by the method of analysis (stratified vs unstratified), indicating these conditions were not met in real datasets. Comparison of different analysis methods found that with small sample sizes and a binary or time-to-event outcome, most analysis methods lead to either inflated type I error rates or a reduction in power; the lone exception was a stratified analysis using random effects for strata, which gave nominal type I error rates and adequate power.
Conclusions: It is unlikely that a stratified analysis is necessary after stratified randomisation except in extreme scenarios. Therefore, the method of analysis (accounting for the strata, or adjusting only for the covariates) will not generally need to depend on the method of randomisation used. Most methods of analysis work well with large sample sizes; however, treating strata as random effects should be the analysis method of choice with binary or time-to-event outcomes and a small sample size.
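As a minimal illustration of the stratified randomisation discussed above (not the authors' simulation code), the sketch below uses permuted blocks within each stratum so that treatment allocation stays balanced for every combination of prognostic factors. The stratum labels and block size are illustrative assumptions.

```python
import random

def stratified_randomise(strata_counts, block_size=4, seed=1):
    """Permuted-block randomisation within each stratum.

    strata_counts maps a stratum label (one combination of prognostic
    factors) to the number of patients in that stratum.  Within each
    stratum, patients are allocated in blocks containing equal numbers
    of treatment ("T") and control ("C"), keeping the arms balanced
    within every stratum.
    """
    rng = random.Random(seed)
    allocation = {}
    for stratum, n in strata_counts.items():
        arms = []
        while len(arms) < n:
            block = ["T"] * (block_size // 2) + ["C"] * (block_size // 2)
            rng.shuffle(block)
            arms.extend(block)
        allocation[stratum] = arms[:n]
    return allocation

# Example: two binary prognostic factors give four strata.
alloc = stratified_randomise({("age<65", "stage I"): 12,
                              ("age<65", "stage II"): 12,
                              ("age>=65", "stage I"): 12,
                              ("age>=65", "stage II"): 12})
for stratum, arms in alloc.items():
    print(stratum, arms.count("T"), arms.count("C"))
```

The question the abstract addresses is whether an analysis of such a trial must then model each stratum explicitly, or may simply adjust for the prognostic factors as covariates.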
Logistic random effects regression models: a comparison of statistical packages for binary and ordinal outcomes
Background: Logistic random effects models are a popular tool to analyze multilevel (also called hierarchical) data with a binary or ordinal outcome. Here, we aim to compare different statistical software implementations of these models.
Methods: We used individual patient data from 8509 patients in 231 centers with moderate and severe Traumatic Brain Injury (TBI) enrolled in eight Randomized Controlled Trials (RCTs) and three observational studies. We fitted logistic random effects regression models with the 5-point Glasgow Outcome Scale (GOS) as outcome, both dichotomized and ordinal, with center and/or trial as random effects, and with age, motor score, pupil reactivity or trial as covariates. We then compared the implementations of frequentist and Bayesian methods to estimate the fixed and random effects. Frequentist approaches included R (lme4), Stata (GLLAMM), SAS (GLIMMIX and NLMIXED), MLwiN ([R]IGLS) and MIXOR; Bayesian approaches included WinBUGS, MLwiN (MCMC), the R package MCMCglmm and the SAS experimental procedure MCMC. Three data sets (the full data set and two sub-datasets) were analysed using essentially two logistic random effects models, with either one random effect for center or two random effects for center and trial. For the ordinal outcome in the full data set, a proportional odds model with a random center effect was also fitted.
Results: The packages gave similar parameter estimates for both the fixed and random effects, and for the binary (and ordinal) models for the main study, when based on a relatively large number of level-1 (patient-level) data units compared to the number of level-2 (hospital-level) data units. However, on relatively sparse data, i.e. when the numbers of level-1 and level-2 data units were about the same, the frequentist and Bayesian approaches showed somewhat different results. The software implementations differ considerably in flexibility, computation time, and usability. There are also differences in the availability of additional tools for model evaluation, such as diagnostic plots. The experimental SAS (version 9.2) procedure MCMC appeared to be inefficient.
Conclusions: On relatively large data sets, the different software implementations of logistic random effects regression models produced similar results. Thus, for a large data set there seems to be no clear preference (absent a preference on philosophical grounds) for either a frequentist or a Bayesian approach (if based on vague priors). The choice of a particular implementation may largely depend on the desired flexibility and the usability of the package. For small data sets the random effects variances are difficult to estimate. In the frequentist approaches the maximum likelihood estimate of this variance was often zero, with a standard error that was either zero or could not be determined, while for Bayesian methods the estimates could depend on the chosen "non-informative" prior for the variance parameter. The starting value for the variance parameter may also be critical for the convergence of the Markov chain.
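The model all of these packages fit can be sketched in a few lines of plain Python (this is not the internals of any listed package): the marginal likelihood contribution of one center in a logistic random-intercept model, with the random effect integrated out by Gauss-Hermite quadrature, the kind of approximation the frequentist implementations rely on. The data, coefficients, and 5-point quadrature order are illustrative.

```python
import math

# 5-point Gauss-Hermite nodes and weights for integrals of the
# form  integral exp(-z^2) f(z) dz  (standard tabulated values).
GH_NODES = [-2.0201828704560856, -0.9585724646138185, 0.0,
            0.9585724646138185, 2.0201828704560856]
GH_WEIGHTS = [0.019953242059045913, 0.39361932315224116,
              0.9453087204829419, 0.39361932315224116,
              0.019953242059045913]

def cluster_likelihood(y, x, beta, sigma):
    """Marginal likelihood of one cluster (e.g. one center) in a
    logistic random-intercept model:

        logit P(y_ij = 1 | u_j) = beta0 + beta1 * x_ij + u_j,
        u_j ~ N(0, sigma^2),

    with u_j integrated out by Gauss-Hermite quadrature
    (substitution u = sqrt(2) * sigma * z)."""
    total = 0.0
    for z, w in zip(GH_NODES, GH_WEIGHTS):
        u = math.sqrt(2.0) * sigma * z
        lik = 1.0
        for yi, xi in zip(y, x):
            eta = beta[0] + beta[1] * xi + u
            p = 1.0 / (1.0 + math.exp(-eta))
            lik *= p if yi == 1 else 1.0 - p
        total += w * lik
    return total / math.sqrt(math.pi)

print(cluster_likelihood([1, 0, 1], [1.0, 0.0, 0.5], (0.2, 0.4), 1.0))
```

With sigma = 0 this collapses to the ordinary logistic likelihood; implementations differ mainly in how many quadrature points (or, for the Bayesian packages, MCMC draws) they use, which is one source of the small discrepancies the abstract reports on sparse data.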
Imaging findings in noncraniofacial childhood rhabdomyosarcoma
Rhabdomyosarcoma (RMS) is the most common soft-tissue sarcoma of childhood. This paper focuses on imaging for the diagnosis, staging, and follow-up of noncraniofacial RMS.
The risks and rewards of covariate adjustment in randomized trials: an assessment of 12 outcomes from 8 studies
Adjustment for prognostic covariates can lead to increased power in the analysis of randomized trials. However, adjusted analyses are not often performed in practice.
Replication Pauses of the Wild-Type and Mutant Mitochondrial DNA Polymerase Gamma: A Simulation Study
The activity of polymerase γ is complicated, involving both correct and incorrect DNA polymerization events, exonuclease activity, and the disassociation of the polymerase:DNA complex. Pausing of pol-γ might increase the chance of deletion and depletion of mitochondrial DNA. We have developed a stochastic simulation of pol-γ that models its activities on the level of individual nucleotides for the replication of mtDNA. This method gives us insights into the pausing of two pol-γ variants: the A467T substitution that causes PEO and Alpers syndrome, and the exonuclease deficient pol-γ (exo−) in premature aging mouse models. To measure the pausing, we analyzed simulation results for the longest time for the polymerase to move forward one nucleotide along the DNA strand. Our model of the exo− polymerase had extremely long pauses, with a 30 to 300-fold increase in the time required for the longest single forward step compared to the wild-type, while the naturally occurring A467T variant showed at most a doubling in the length of the pauses compared to the wild-type. We identified the cause of these differences in the polymerase pausing time to be the number of disassociations occurring in each forward step of the polymerase
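A stripped-down version of the kind of nucleotide-level stochastic simulation described above can be written as a simple event loop: at each template position the polymerase either incorporates the next nucleotide or disassociates and must rebind before trying again, and the quantity of interest is the longest single forward step. All rates below are invented placeholders, not the paper's fitted values, and the full model's misincorporation and exonuclease events are omitted.

```python
import random

def simulate_replication(n_sites=200, k_pol=50.0, k_off=0.02,
                         k_on=0.5, seed=2):
    """Stochastic simulation of a polymerase advancing one nucleotide
    at a time.  At every site the polymerase either incorporates the
    next nucleotide (rate k_pol, per second) or disassociates from
    the DNA (rate k_off); after disassociation it must rebind
    (rate k_on) before it can try again.  Returns the time spent at
    each site, so the longest single forward step (the longest pause)
    is max(times).  All rates are illustrative placeholders."""
    rng = random.Random(seed)
    times = []
    for _ in range(n_sites):
        t = 0.0
        while True:
            total = k_pol + k_off
            t += rng.expovariate(total)          # waiting time to next event
            if rng.random() < k_pol / total:     # nucleotide incorporated
                break
            t += rng.expovariate(k_on)           # pause: rebinding delay
        times.append(t)
    return times

times = simulate_replication()
print("longest pause (s):", max(times))
```

Raising k_off or lowering k_on, mimicking a disassociation-prone variant such as the exo- polymerase, lengthens the longest pauses, which is exactly the statistic the abstract uses to compare variants.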
Influence of a Montmorency cherry juice blend on indices of exercise-induced stress and upper respiratory tract symptoms following marathon running—a pilot investigation
Background: Prolonged exercise, such as marathon running, has been associated with an increase in respiratory mucosal inflammation. The aim of this pilot study was to examine the effects of Montmorency cherry juice on markers of stress, immunity and inflammation following a marathon.
Methods: Twenty recreational marathon runners consumed either cherry juice (CJ) or placebo (PL) before and after a marathon race. Markers of mucosal immunity (secretory immunoglobulin A (sIgA), immunoglobulin G (IgG)), salivary cortisol, inflammation (C-reactive protein, CRP) and self-reported incidence and severity of upper respiratory tract symptoms (URTS) were measured before and following the race.
Results: All variables except secretory IgA and IgG concentrations in saliva showed a significant time effect (P < 0.01). Serum CRP showed significant interaction and treatment effects (P < 0.01). The CRP increase at 24 and 48 h post-marathon was lower (P < 0.01) in the CJ group than in the PL group. Mucosal immunity and salivary cortisol showed no interaction or treatment effect. The incidence and severity of URTS were significantly greater than baseline at 24 h and 48 h following the race in the PL group, and also greater than in the CJ group (P < 0.05). No URTS were reported in the CJ group, whereas 50% of runners in the PL group reported URTS at 24 h and 48 h post-marathon.
Conclusions: This is the first study to provide encouraging evidence of a potential role for Montmorency cherries in reducing the development of URTS post-marathon, possibly caused by exercise-induced hyperventilation trauma and/or other infectious and non-infectious factors.
A novel outbred mouse model of 2009 pandemic influenza and bacterial co-infection severity
Influenza viruses pose a significant health risk and annually impose a great cost on patients and the health care system. The molecular determinants of influenza severity, often exacerbated by secondary bacterial infection, are largely unclear. We generated a novel outbred mouse model of influenza virus infection, Staphylococcus aureus infection, and co-infection, utilizing influenza A/CA/07/2009 virus and S. aureus (USA300). Outbred mice displayed pathologic phenotypes ranging broadly in severity following influenza virus infection or co-infection. Influenza viral burden correlated positively with weight loss, although lung histopathology did not. Inflammatory cytokines including IL-6, TNF-α, G-CSF, and CXCL10 correlated positively with both weight loss and viral burden. In S. aureus infection, IL-1β, G-CSF, TNF-α, and IL-6 correlated positively with weight loss and bacterial burden. In co-infection, IL-1β production correlated with decreased weight loss, suggesting a protective role. The data demonstrate an approach to identify biomarkers of severe disease and to understand pathogenic mechanisms in pneumonia. © 2013 McHugh et al.
Formation of a morphine-conditioned place preference does not change the size of evoked potentials in the ventral hippocampus–nucleus accumbens projection
Abstract In opioid addiction, cues and contexts associated with drug reward can be powerful triggers for drug craving and relapse. The synapses linking ventral hippocampal outputs to medium spiny neurons of the accumbens may be key sites for the formation and storage of associations between place or context and reward, both drug-related and natural. To assess this, we implanted rats with electrodes in the accumbens shell to record synaptic potentials evoked by electrical stimulation of the ventral hippocampus, as well as continuous local-field-potential activity. Rats then underwent morphine-induced (10 mg/kg) conditioned-place-preference training, followed by extinction. Morphine caused an acute increase in the slope and amplitude of accumbens evoked responses, but no long-term changes were evident after conditioning or extinction of the place preference, suggesting that the formation of this type of memory does not lead to a net change in synaptic strength in the ventral hippocampal output to the accumbens. However, analysis of the local field potential revealed a marked sensitization of theta- and high-gamma-frequency activity with repeated morphine administration. This phenomenon may be linked to the behavioral changes—such as psychomotor sensitization and the development of drug craving—that are associated with chronic use of addictive drugs
Evaluation of fast atmospheric dispersion models in a regular street network
The need to balance computational speed and simulation accuracy is a key challenge in designing atmospheric dispersion models that can be used in scenarios where near real-time hazard predictions are needed. This challenge is aggravated in cities, where models need to have some degree of building-awareness, alongside the ability to capture effects of dominant urban flow processes. We use a combination of high-resolution large-eddy simulation (LES) and wind-tunnel data of flow and dispersion in an idealised, equal-height urban canopy to highlight important dispersion processes and evaluate how these are reproduced by representatives of the most prevalent modelling approaches: (i) a Gaussian plume model, (ii) a Lagrangian stochastic model and (iii) street-network dispersion models. Concentration data from the LES, validated against the wind-tunnel data, were averaged over the volumes of streets in order to provide a high-fidelity reference suitable for evaluating the different models on the same footing. For the particular combination of forcing wind direction and source location studied here, the strongest deviations from the LES reference were associated with mean over-predictions of concentrations by approximately a factor of 2 and with a relative scatter larger than a factor of 4 of the mean, corresponding to cases where the mean plume centreline also deviated significantly from the LES. This was linked to low accuracy of the underlying flow models/parameters that resulted in a misrepresentation of pollutant channelling along streets and of the uneven plume branching observed in intersections. The agreement of model predictions with the LES (which explicitly resolves the turbulent flow and dispersion processes) was greatly improved by increasing the accuracy of building-induced modifications of the driving flow field.
When provided with a limited set of representative velocity parameters, the comparatively simple street-network models performed as well as or better than the Lagrangian model run on full 3D wind fields. The study showed that street-network models capture the dominant building-induced dispersion processes in the canopy layer through parametrisations of horizontal advection and vertical exchange processes at scales of practical interest. At the same time, the computational costs and computing times associated with the network approach are ideally suited for emergency-response applications.
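The simplest of the three compared approaches, the Gaussian plume model, can be stated in a few lines. The sketch below implements the textbook steady-state plume formula with ground reflection; the linear growth of the spread parameters with downwind distance is a simplifying assumption of this example, not a parameterisation from the study.

```python
import math

def gaussian_plume(x, y, z, Q=1.0, u=5.0, H=10.0):
    """Steady-state Gaussian plume concentration at (x, y, z) for a
    point source of strength Q (g/s) at height H (m) in a uniform
    wind of speed u (m/s) blowing along x.  Includes the usual
    ground-reflection (image-source) term.  The linear sigma(x)
    relations are illustrative; operational models use
    stability-dependent dispersion curves."""
    sigma_y = 0.08 * x          # illustrative lateral spread (m)
    sigma_z = 0.06 * x          # illustrative vertical spread (m)
    coeff = Q / (2.0 * math.pi * u * sigma_y * sigma_z)
    lateral = math.exp(-y * y / (2.0 * sigma_y ** 2))
    vertical = (math.exp(-((z - H) ** 2) / (2.0 * sigma_z ** 2)) +
                math.exp(-((z + H) ** 2) / (2.0 * sigma_z ** 2)))
    return coeff * lateral * vertical

# Concentration 200 m downwind, on the centreline, at 2 m height.
print(gaussian_plume(200.0, 0.0, 2.0))
```

A model of this form has no building-awareness at all, which is consistent with the large deviations reported above wherever channelling along streets and plume branching at intersections dominate the dispersion.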
Parallel Computational Subunits in Dentate Granule Cells Generate Multiple Place Fields
A fundamental question in understanding neuronal computations is how dendritic events influence the output of the neuron. Different forms of integration of neighbouring and distributed synaptic inputs, isolated dendritic spikes and local regulation of synaptic efficacy suggest that individual dendritic branches may function as independent computational subunits. In the present paper, we study how these local computations influence the output of the neuron. Using a simple cascade model, we demonstrate that triggering somatic firing by a relatively small dendritic branch requires the amplification of local events by dendritic spiking and synaptic plasticity. The moderately branching dendritic tree of granule cells seems optimal for this computation since larger dendritic trees favor local plasticity by isolating dendritic compartments, while reliable detection of individual dendritic spikes in the soma requires a low branch number. Finally, we demonstrate that these parallel dendritic computations could contribute to the generation of multiple independent place fields of hippocampal granule cells
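The cascade model described above can be sketched as a two-layer computation: each dendritic branch applies its own steep sigmoid (standing in for the all-or-none dendritic spike) to its summed synaptic input, and the soma applies a second threshold to the sum of branch outputs. This is a generic illustration of the subunit idea, not the paper's fitted model, and all thresholds and gains are illustrative.

```python
import math

def sigmoid(x, threshold, gain):
    """Smooth threshold nonlinearity."""
    return 1.0 / (1.0 + math.exp(-gain * (x - threshold)))

def granule_cell_output(branch_inputs, branch_threshold=4.0,
                        branch_gain=3.0, soma_threshold=0.8,
                        soma_gain=8.0):
    """Two-layer cascade model of a granule cell.  Each dendritic
    branch is an independent subunit: its summed synaptic input
    passes through a steep sigmoid representing the dendritic spike.
    The soma then thresholds the sum of branch outputs, so a single
    strongly activated branch can trigger somatic firing.
    All parameters are illustrative."""
    branch_out = [sigmoid(sum(inputs), branch_threshold, branch_gain)
                  for inputs in branch_inputs]
    return sigmoid(sum(branch_out), soma_threshold, soma_gain)

# Clustered input on one of four branches drives the soma ...
clustered = granule_cell_output([[2, 2, 2], [0], [0], [0]])
# ... while the same total input scattered across branches does not.
scattered = granule_cell_output([[1.5], [1.5], [1.5], [1.5]])
print(clustered, scattered)
```

Because each branch can be driven by a different input cluster, several such subunits operating in parallel on one cell could each support its own place field, in line with the abstract's conclusion.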