Michigan Motor Vehicle Service and Repair Act of 1974
This note will analyze the Michigan Motor Vehicle Service and Repair Act, examining the differences between it and prior Michigan and federal legislation. The new legislation will be compared with similar statutes in other states. Finally, the possible drawbacks of repair shop and mechanic certification programs will be discussed, and suggestions for improvement will be made.
Exposure to ambient particulate matter is associated with accelerated functional decline in idiopathic pulmonary fibrosis
BACKGROUND:
Idiopathic pulmonary fibrosis (IPF), a progressive disease with an unknown pathogenesis, may be due in part to an abnormal response to injurious stimuli by alveolar epithelial cells. Air pollution and inhalation of particulate matter evoke a wide variety of pulmonary and systemic inflammatory diseases. We therefore hypothesized that increased average ambient particulate matter (PM) concentrations would be associated with an accelerated rate of decline in FVC in IPF.
METHODS:
We identified a cohort of subjects seen at a single university referral center from 2007 to 2013. Average concentrations of particulate matter < 10 μm and < 2.5 μm in aerodynamic diameter (PM10 and PM2.5, respectively) were assigned to each patient based on geocoded residential addresses. A linear multivariable mixed-effects model determined the association between the rate of decline in FVC and average PM concentration, controlling for baseline FVC and other covariates.
RESULTS:
One hundred thirty-five subjects were included in the final analysis after exclusion of subjects missing repeated spirometry measurements and those for whom exposure data were not available. There was a significant association between PM10 levels and the rate of decline in FVC during the study period, with each μg/m3 increase in PM10 corresponding with an additional 46 cc/y decline in FVC (P = .008).
CONCLUSIONS:
Ambient air pollution, as measured by average PM10 concentration, is associated with an increase in the rate of decline of FVC in IPF, suggesting a potential mechanistic role for air pollution in the progression of disease.
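To make the modelling approach concrete, here is a minimal Python sketch of the kind of linear mixed-effects model the methods describe. It assumes a hypothetical long-format file ipf_spirometry_long.csv with columns subject_id, years (time since first spirometry), fvc_ml, pm10 (average PM10 exposure), and baseline_fvc; the study's actual covariate set is larger, so this is an illustration, not the authors' analysis.

```python
# Minimal sketch (not the authors' code): random intercept and slope per
# subject; the years:pm10 coefficient is the additional FVC decline per
# 1 ug/m3 increase in average PM10. All column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ipf_spirometry_long.csv")  # one row per spirometry visit

model = smf.mixedlm(
    "fvc_ml ~ years * pm10 + baseline_fvc",  # PM10 modifies the time slope
    data=df,
    groups=df["subject_id"],   # repeated measures clustered within subject
    re_formula="~years",       # random intercept and slope for time
)
result = model.fit(reml=True)
print(result.summary())
print("extra decline per ug/m3 PM10:", result.params["years:pm10"])
```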
Adjusting for multiple prognostic factors in the analysis of randomised trials
Background: When multiple prognostic factors are adjusted for in the analysis of a randomised trial, it is unclear (1) whether it is necessary to account for each of the strata formed by all combinations of the prognostic factors (a stratified analysis) when randomisation has been balanced within each stratum (stratified randomisation), or whether adjusting for the main effects alone suffices, and (2) which method of adjustment is best in terms of type I error rate and power, irrespective of the randomisation method.
Methods: We used simulation to (1) determine whether a stratified analysis is necessary after stratified randomisation, and (2) compare different methods of adjustment in terms of power and type I error rate. We considered the following methods of analysis: adjusting for covariates in a regression model; adjusting for each stratum using either fixed or random effects; and Mantel-Haenszel or a stratified Cox model, depending on the outcome.
Results: A stratified analysis is required after stratified randomisation to maintain correct type I error rates when (a) there are strong interactions between prognostic factors, and (b) there are approximately equal numbers of patients in each stratum. However, simulations based on real trial data found that type I error rates were unaffected by the method of analysis (stratified vs unstratified), indicating that these conditions were not met in real datasets. Comparison of different analysis methods found that with small sample sizes and a binary or time-to-event outcome, most analysis methods led to either inflated type I error rates or a reduction in power; the lone exception was a stratified analysis using random effects for strata, which gave nominal type I error rates and adequate power.
Conclusions: It is unlikely that a stratified analysis is necessary after stratified randomisation except in extreme scenarios. Therefore, the method of analysis (accounting for the strata, or adjusting only for the covariates) will not generally need to depend on the method of randomisation used. Most methods of analysis work well with large sample sizes; however, treating strata as random effects should be the analysis method of choice with binary or time-to-event outcomes and a small sample size.
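A toy Python simulation can illustrate the central comparison under a deliberately simple hypothetical set-up (continuous outcome, two binary prognostic factors with a strong interaction, null treatment effect); the paper's scenarios are far broader, so this only sketches the contrast between main-effects adjustment and a fully stratified fixed-effects analysis after stratified randomisation.

```python
# Toy simulation (hypothetical data-generating model, not the paper's):
# type I error of main-effects adjustment vs a fully stratified analysis
# after stratified randomisation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n, n_sims = 200, 500
rejections = {"main_effects": 0, "stratified": 0}

for _ in range(n_sims):
    f1 = rng.integers(0, 2, n)      # two binary prognostic factors
    f2 = rng.integers(0, 2, n)
    stratum = 2 * f1 + f2           # four strata from all combinations
    treat = np.empty(n, dtype=int)  # stratified (balanced) randomisation
    for s in np.unique(stratum):
        idx = np.flatnonzero(stratum == s)
        block = np.zeros(idx.size, dtype=int)
        block[: idx.size // 2] = 1
        treat[idx] = rng.permutation(block)
    # Null treatment effect; outcome includes an f1 x f2 interaction.
    y = f1 + f2 + 1.5 * f1 * f2 + rng.normal(size=n)
    df = pd.DataFrame(dict(y=y, treat=treat, f1=f1, f2=f2, stratum=stratum))

    p_main = smf.ols("y ~ treat + f1 + f2", df).fit().pvalues["treat"]
    p_strat = smf.ols("y ~ treat + C(stratum)", df).fit().pvalues["treat"]
    rejections["main_effects"] += p_main < 0.05
    rejections["stratified"] += p_strat < 0.05

for method, count in rejections.items():
    print(f"type I error, {method}: {count / n_sims:.3f}")
```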
Accounting for centre-effects in multicentre trials with a binary outcome - when, why, and how?
BACKGROUND:
It is often desirable to account for centre-effects in the analysis of multicentre randomised trials; however, it is unclear which analysis methods are best in trials with a binary outcome.
METHODS:
We compared the performance of four methods of analysis (fixed-effects models, random-effects models, generalised estimating equations (GEE), and Mantel-Haenszel) using a re-analysis of a previously reported randomised trial (MIST2) and a large simulation study.
RESULTS:
The re-analysis of MIST2 found that fixed-effects and Mantel-Haenszel led to many patients being dropped from the analysis due to over-stratification (up to 69% dropped for Mantel-Haenszel, and up to 33% dropped for fixed-effects). Conversely, random-effects and GEE included all patients in the analysis; however, GEE did not reach convergence. Estimated treatment effects and p-values were highly variable across different analysis methods. The simulation study found that most methods of analysis performed well with a small number of centres. With a large number of centres, fixed-effects led to biased estimates and inflated type I error rates in many situations, and Mantel-Haenszel lost power compared with other analysis methods in some situations. Conversely, both random-effects and GEE gave nominal type I error rates and good power across all scenarios, and were usually as good as or better than either fixed-effects or Mantel-Haenszel. However, this was only true for GEE with non-robust standard errors (SEs); using a robust ‘sandwich’ estimator led to inflated type I error rates across most scenarios.
CONCLUSIONS:
With a small number of centres, we recommend the use of fixed-effects, random-effects, or GEE with non-robust SEs. Random-effects or GEE with non-robust SEs should be used with a moderate or large number of centres.
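For illustration, here is a Python sketch of three of the four approaches on a hypothetical multicentre dataset with columns outcome, treat, and centre; a random-effects logistic model is omitted because it needs a dedicated GLMM fitter. The file name and columns are assumptions, and the Mantel-Haenszel step presumes every centre observes both arms and both outcome levels.

```python
# Sketch of centre-adjusted analyses of a binary outcome (hypothetical data).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.contingency_tables import StratifiedTable

df = pd.read_csv("multicentre_trial.csv")  # columns: outcome, treat, centre

# (1) Fixed centre effects: one indicator per centre.
fixed = smf.logit("outcome ~ treat + C(centre)", df).fit(disp=False)

# (2) GEE with exchangeable within-centre correlation; the paper recommends
# non-robust ("naive") SEs over the robust sandwich estimator here.
gee = smf.gee("outcome ~ treat", groups="centre", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable())
gee_naive = gee.fit(cov_type="naive")
gee_robust = gee.fit(cov_type="robust")

# (3) Mantel-Haenszel: one 2x2 table per centre.
tables = [pd.crosstab(g["treat"], g["outcome"]).values
          for _, g in df.groupby("centre")]
mh = StratifiedTable(tables)

print("fixed-effects log-OR:", fixed.params["treat"])
print("GEE SE, naive vs robust:", gee_naive.bse["treat"], gee_robust.bse["treat"])
print("Mantel-Haenszel pooled OR:", mh.oddsratio_pooled)
```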
Prevention of haematoma progression by tranexamic acid in intracerebral haemorrhage patients with and without spot sign on admission scan: a statistical analysis plan of a pre-specified sub-study of the TICH-2 trial
Objective
We present the statistical analysis plan of a prespecified Tranexamic Acid for Hyperacute Primary Intracerebral Haemorrhage (TICH)-2 sub-study aiming to investigate whether tranexamic acid has a different effect in intracerebral haemorrhage patients with the spot sign on admission compared with spot sign negative patients. The TICH-2 trial recruited over 2000 participants with intracerebral haemorrhage arriving in hospital within 8 h after symptom onset. They were included irrespective of radiological signs of ongoing haematoma expansion. Participants were randomised to tranexamic acid versus matching placebo. In this subgroup analysis, we will include all participants in TICH-2 with a computed tomography angiography on admission allowing adjudication of the participants’ spot sign status.
Results
The primary outcome will be the ability of tranexamic acid to limit absolute haematoma volume on computed tomography at 24 h (± 12 h) after randomisation among spot sign positive and spot sign negative participants, respectively. For all outcome measures, the effect of tranexamic acid in spot sign positive/negative participants will be compared using tests of interaction. This sub-study will investigate the important clinical hypothesis that spot sign positive patients might benefit more from administration of tranexamic acid than spot sign negative patients.
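The planned test of interaction can be sketched in Python as below, assuming hypothetical column names volume_24h (absolute haematoma volume at 24 h), txa (1 = tranexamic acid, 0 = placebo), and spot_sign (1 = positive); the actual plan prespecifies the model and covariates, so this is only a schematic.

```python
# Illustrative interaction test (hypothetical columns, not the trial's code).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("tich2_spotsign.csv")  # hypothetical file

# The txa:spot_sign term asks whether the effect of tranexamic acid on
# 24-h haematoma volume differs by spot sign status.
fit = smf.ols("volume_24h ~ txa * spot_sign", df).fit()
print(fit.summary())
print("interaction p-value:", fit.pvalues["txa:spot_sign"])
```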
Assessing potential sources of clustering in individually randomised trials
Recent reviews have shown that while clustering is extremely common in individually randomised trials (for example, clustering within centre, therapist, or surgeon), it is rarely accounted for in the trial analysis. Our aim is to develop a general framework for assessing whether potential sources of clustering must be accounted for in the trial analysis to obtain valid type I error rates (non-ignorable clustering), with a particular focus on individually randomised trials.
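One common ingredient of such an assessment is the intraclass correlation (ICC) for a candidate clustering source. A minimal sketch, assuming a hypothetical dataset with columns outcome and therapist, estimates it from a random-intercept model; the framework in the paper goes well beyond this single quantity.

```python
# Minimal ICC sketch for one candidate clustering source (hypothetical data).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("trial_with_therapists.csv")  # columns: outcome, therapist

result = smf.mixedlm("outcome ~ 1", df, groups=df["therapist"]).fit()
between = float(result.cov_re.iloc[0, 0])  # between-therapist variance
within = result.scale                      # residual (within) variance
print(f"estimated ICC for therapist: {between / (between + within):.3f}")
```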
Multinational development and validation of an early prediction model for delirium in ICU patients
Rationale
Delirium incidence in intensive care unit (ICU) patients is high and associated with poor outcome. Identification of high-risk patients may facilitate its prevention.
Purpose
To develop and validate a model based on data available at ICU admission to predict delirium development during a patient’s complete ICU stay and to determine the predictive value of this model in relation to the time of delirium development.
Methods
Prospective cohort study in 13 ICUs from seven countries. Multiple logistic regression analysis was used to develop the early prediction (E-PRE-DELIRIC) model on data from the first two-thirds of patients from every participating ICU; the model was validated on data from the last one-third.
Results
In total, 2914 patients were included. Delirium incidence was 23.6 %. The E-PRE-DELIRIC model consists of nine predictors assessed at ICU admission: age, history of cognitive impairment, history of alcohol abuse, blood urea nitrogen, admission category, urgent admission, mean arterial blood pressure, use of corticosteroids, and respiratory failure. The area under the receiver operating characteristic curve (AUROC) was 0.76 [95 % confidence interval (CI) 0.73–0.77] in the development dataset and 0.75 (95 % CI 0.71–0.79) in the validation dataset. The model was well calibrated. AUROC increased from 0.70 (95 % CI 0.67–0.74) for delirium that developed early after admission to higher values for delirium that developed after 6 days.
Conclusion
Patients’ delirium risk for the complete ICU length of stay can be predicted at admission using the E-PRE-DELIRIC model, allowing early preventive interventions aimed at reducing the incidence and severity of ICU delirium.
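A sketch of the development/validation split and AUROC computation described above, in Python; the predictor column names are hypothetical stand-ins for the nine E-PRE-DELIRIC predictors, and the real model development involved more than a single logistic fit.

```python
# Sketch: fit on the first two-thirds of patients per ICU, validate on the
# last third (hypothetical columns; delirium is the 0/1 outcome).
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

df = pd.read_csv("icu_admissions.csv")  # ordered by admission within each ICU

df["seq"] = df.groupby("icu").cumcount()
df["n_icu"] = df.groupby("icu")["icu"].transform("size")
dev = df[df["seq"] < 2 * df["n_icu"] / 3]
val = df[df["seq"] >= 2 * df["n_icu"] / 3]

formula = ("delirium ~ age + cognitive_impairment + alcohol_abuse + bun"
           " + C(admission_category) + urgent_admission + map_mmHg"
           " + corticosteroids + respiratory_failure")
fit = smf.logit(formula, dev).fit(disp=False)

for name, part in (("development", dev), ("validation", val)):
    auroc = roc_auc_score(part["delirium"], fit.predict(part))
    print(f"AUROC ({name}): {auroc:.2f}")
```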
Screening for data clustering in multicenter studies: the residual intraclass correlation
Reproducibility of preclinical animal research improves with heterogeneity of study samples
Single-laboratory studies conducted under highly standardized conditions are the gold standard in preclinical animal research. Using simulations based on 440 preclinical studies across 13 different interventions in animal models of stroke, myocardial infarction, and breast cancer, we compared the accuracy of effect size estimates between single-laboratory and multi-laboratory study designs. Single-laboratory studies generally failed to predict effect size accurately, and larger sample sizes rendered effect size estimates even less accurate. By contrast, multi-laboratory designs including as few as 2 to 4 laboratories increased coverage probability by up to 42 percentage points without a need for larger sample sizes. These findings demonstrate that within-study standardization is a major cause of poor reproducibility. More representative study samples are required to improve the external validity and reproducibility of preclinical animal research and to prevent wasting animals and resources on inconclusive research.
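A toy Python simulation of the core idea: when the treatment effect varies between laboratories, a single-laboratory confidence interval clusters tightly around that laboratory's idiosyncratic effect and often misses the population effect, while pooling even a few laboratories restores coverage. All variance components here are hypothetical choices, not values from the paper.

```python
# Toy coverage simulation (hypothetical variance components).
import numpy as np

rng = np.random.default_rng(1)
true_effect, lab_sd, noise_sd = 1.0, 0.5, 1.0
n_per_arm, n_sims = 20, 2000

def coverage(n_labs):
    hits = 0
    for _ in range(n_sims):
        # Each lab has its own treatment effect (between-lab heterogeneity).
        effects = true_effect + rng.normal(0, lab_sd, n_labs)
        ctrl = rng.normal(0, noise_sd, (n_labs, n_per_arm)).ravel()
        trt = np.concatenate(
            [rng.normal(e, noise_sd, n_per_arm) for e in effects])
        diff = trt.mean() - ctrl.mean()
        se = np.sqrt(trt.var(ddof=1) / trt.size + ctrl.var(ddof=1) / ctrl.size)
        hits += abs(diff - true_effect) < 1.96 * se
    return hits / n_sims

for labs in (1, 2, 4):
    print(f"{labs} lab(s): 95% CI coverage = {coverage(labs):.2f}")
```

Increasing n_per_arm in the single-laboratory case only shrinks the interval around the wrong target, which mirrors the finding above that larger sample sizes made single-laboratory estimates less accurate.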
Shared decision making and behavioral impairment: a national study among children with special health care needs
BACKGROUND:
The Institute of Medicine has prioritized shared decision making (SDM), yet little is known about the impact of SDM over time on behavioral outcomes for children. This study examined the longitudinal association of SDM with behavioral impairment among children with special health care needs (CSHCN).
METHODS:
CSHCN aged 5-17 years in the 2002-2006 Medical Expenditure Panel Survey were followed for 2 years. The validated Columbia Impairment Scale measured impairment. SDM was measured with 7 items addressing the 4 components of SDM. The main exposures were (1) the mean level of SDM across the 2 study years and (2) the change in SDM over the 2 years. Using linear regression, we measured the association of SDM and behavioral impairment.
RESULTS:
Among 2,454 subjects representing 10.2 million CSHCN, SDM increased among 37% of the population, decreased among 36%, and remained unchanged among 27%. For CSHCN impaired at baseline, the change in SDM was significant, with each 1-point increase in SDM over time associated with a 2-point decrease in impairment (95% CI: 0.5, 3.4), whereas the mean level of SDM was not associated with impairment. In contrast, among those below the impairment threshold, the mean level of SDM was significant, with each 1-point increase in the mean level of SDM associated with a 1.1-point decrease in impairment (95% CI: 0.4, 1.7), but the change was not associated with impairment.
CONCLUSION:
Although the change in SDM may be more important for children with behavioral impairment, and the mean level over time for those below the impairment threshold, results suggest that both the change in SDM and the mean level may impact behavioral health for CSHCN.
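As a minimal sketch of the regression described in the methods (hypothetical column names; survey weights, design effects, and the stratification by baseline impairment are omitted for brevity):

```python
# Sketch of the SDM regression (hypothetical columns, unweighted).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("meps_cshcn.csv")  # one row per child

# impairment_y2: Columbia Impairment Scale at follow-up;
# sdm_mean: mean SDM over the 2 years; sdm_change: year 2 minus year 1.
fit = smf.ols("impairment_y2 ~ sdm_mean + sdm_change + impairment_y1", df).fit()
print(fit.params[["sdm_mean", "sdm_change"]])
```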
