
    Selection of confounding variables should not be based on observed associations with exposure

    In observational studies, selection of confounding variables for adjustment is often based on observed baseline incomparability. The aim of this study was to evaluate this selection strategy. We used clinical data on the effects of inhaled long-acting beta-agonist (LABA) use on the risk of mortality among patients with obstructive pulmonary disease to illustrate the impact of selecting confounding variables for adjustment based on baseline comparisons. Among 2,394 asthma and COPD patients included in the analyses, the LABA ever-users were considerably older than never-users, but cardiovascular co-morbidity was equally prevalent (19.9% vs. 19.9%). Adjustment for cardiovascular co-morbidity status did not affect the crude risk ratio (RR) for mortality: crude RR 1.19 (95% CI 0.93–1.51) versus RR 1.19 (95% CI 0.94–1.50) after adjustment for cardiovascular co-morbidity. However, after adjustment for age (RR 0.95, 95% CI 0.76–1.19), additional adjustment for cardiovascular co-morbidity status did affect the association between LABA use and mortality (RR 1.01, 95% CI 0.80–1.26). Confounding variables should not be discarded because of balanced distributions among exposure groups: residual confounding due to their omission from the adjustment model can still be relevant.
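The pattern this abstract reports (a covariate balanced at baseline that matters only once age is adjusted for) can be reproduced with a toy stratified analysis. The numbers below are hypothetical, not the study's data, and `mh_rr` is a plain Mantel-Haenszel risk-ratio sketch; the point is only that marginal balance does not imply balance within age strata.

```python
# Toy numbers (hypothetical, not the study's data): cardiovascular
# co-morbidity (CVD) is balanced across exposure groups (25% in both), so
# adjusting for it alone changes nothing, yet it still confounds once age is
# adjusted for, because its distribution differs *within* age strata.
def mh_rr(strata):
    """Mantel-Haenszel risk ratio; each stratum is (d1, n1, d0, n0) =
    (deaths exposed, n exposed, deaths unexposed, n unexposed)."""
    num = sum(d1 * n0 / (n1 + n0) for d1, n1, d0, n0 in strata)
    den = sum(d0 * n1 / (n1 + n0) for d1, n1, d0, n0 in strata)
    return num / den

# group sizes keyed by (exposed, old, cvd); the true exposure effect is RR = 1:
# mortality risk is 0.05, tripled if old, doubled if CVD
n = {(1, 0, 1): 10, (1, 0, 0): 90, (1, 1, 1): 90, (1, 1, 0): 210,
     (0, 0, 1): 90, (0, 0, 0): 210, (0, 1, 1): 10, (0, 1, 0): 90}
deaths = {k: n[k] * 0.05 * (3 if k[1] else 1) * (2 if k[2] else 1) for k in n}

def collapse(stratum_of):
    """Aggregate cells into (d1, n1, d0, n0) strata defined by stratum_of."""
    out = {}
    for k in n:
        row = out.setdefault(stratum_of(k), [0.0, 0, 0.0, 0])
        i = 0 if k[0] == 1 else 2          # exposed cells fill d1/n1
        row[i] += deaths[k]
        row[i + 1] += n[k]
    return list(out.values())

crude    = mh_rr(collapse(lambda k: 0))             # ~1.78: confounded by age
cvd_adj  = mh_rr(collapse(lambda k: k[2]))          # equals crude: CVD balanced
age_adj  = mh_rr(collapse(lambda k: k[1]))          # ~1.09: residual confounding
full_adj = mh_rr(collapse(lambda k: (k[1], k[2])))  # 1.00: true null recovered
print(crude, cvd_adj, age_adj, full_adj)
```

Adjusting for CVD alone leaves the crude ratio untouched, age adjustment alone leaves residual confounding, and only joint adjustment recovers the true null, mirroring the abstract's RR sequence.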

    Simple estimators of the intensity of seasonal occurrence

    Background: Edwards's method is a widely used approach for fitting a sine curve to a time-series of monthly frequencies. From this fitted curve, estimates of the seasonal intensity of occurrence (i.e., the peak-to-low ratio of the fitted curve) can be generated. Methods: We discuss various approaches to the estimation of seasonal intensity assuming Edwards's periodic model, including maximum likelihood estimation (MLE), least squares, weighted least squares, and a new closed-form estimator based on a second-order moment statistic and non-transformed data. Through an extensive Monte Carlo simulation study, we compare the finite sample performance characteristics of the estimators discussed in this paper. Finally, all estimators and confidence interval procedures discussed are compared in a re-analysis of data on the seasonality of monocytic leukemia. Results: We find that Edwards's estimator is substantially biased, particularly for small numbers of events and very large or small amounts of seasonality. For the common setting of rare events and moderate seasonality, the new estimator proposed in this paper yields less finite sample bias and better mean squared error than either the MLE or weighted least squares. For large studies and strong seasonality, MLE or weighted least squares appears to be the optimal analytic method among those considered. Conclusion: Edwards's estimator of the seasonal relative risk can exhibit substantial finite sample bias. The alternative estimators considered in this paper should be preferred.
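For readers unfamiliar with Edwards's model, the basic harmonic estimator the paper critiques (not its improved second-order moment estimator) can be sketched as follows; the function name and the month-midpoint angle convention are assumptions for illustration.

```python
import math

def seasonal_intensity(counts):
    """Edwards-style harmonic estimate of seasonal amplitude and peak-to-low
    ratio from 12 monthly counts (equal month lengths assumed). The model is
    n_i proportional to 1 + a*cos(theta_i - phi)."""
    N = sum(counts)
    thetas = [2 * math.pi * (i + 0.5) / 12 for i in range(12)]  # month midpoints
    C = sum(c * math.cos(t) for c, t in zip(counts, thetas))
    S = sum(c * math.sin(t) for c, t in zip(counts, thetas))
    a = 2 * math.sqrt(C * C + S * S) / N       # amplitude of the fitted sine
    return a, (1 + a) / (1 - a)                # amplitude, peak-to-low ratio

# sanity check: counts generated exactly from the model with amplitude 0.4
thetas = [2 * math.pi * (i + 0.5) / 12 for i in range(12)]
counts = [1000 * (1 + 0.4 * math.cos(t)) for t in thetas]
a, ratio = seasonal_intensity(counts)
print(round(a, 3), round(ratio, 3))
```

On noise-free counts the estimator recovers the amplitude exactly; the paper's point is that on small, random counts it is biased upward, which this sketch does not show.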

    Low dose radiation and cancer in A-bomb survivors: latency and non-linear dose-response in the 1950–90 mortality cohort

    BACKGROUND: Analyses of Japanese A-bomb survivors' cancer mortality risks are used to establish recommended annual dose limits, currently set at 1 mSv (public) and 20 mSv (occupational). Do radiation doses below 20 mSv have a significant impact on cancer mortality in Japanese A-bomb survivors, and is the dose-response linear? METHODS: I analyse stomach, liver, lung, colon, uterus, and all-solid cancer mortality in the 0–20 mSv colon dose subcohort of the 1950–90 (grouped) mortality cohort, by Poisson regression using a time-lagged colon dose to detect latency, while controlling for gender, attained age, and age-at-exposure. I compare linear and non-linear models, including one adapted from the cellular bystander effect for α particles. RESULTS: With a lagged linear model, Excess Relative Risk (ERR) for the liver and all-solid cancers is significantly positive and several orders of magnitude above extrapolations from the Life Span Study Report 12 analysis of the full cohort. Non-linear models are strongly superior to the linear model for the stomach (latency 11.89 years), liver (36.90), lung (13.60) and all-solid (43.86) in fitting the 0–20 mSv data and show significant positive ERR at 0.25 mSv and 10 mSv lagged dose. The slope of the dose-response near zero is several orders of magnitude above the slope at high doses. CONCLUSION: The standard linear model applied to the full 1950–90 cohort greatly underestimates the risks at low doses, which are significant when the 0–20 mSv subcohort is modelled with latency. Non-linear models give a much better fit and are compatible with a bystander effect.
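The lagged linear ERR model described here has the form rate = λ0·(1 + β·lagged dose). A grid-search Poisson fit on synthetic grouped data can sketch the idea; all counts, doses, and names below are illustrative assumptions, not the study's data or code.

```python
import math

# Illustrative sketch (synthetic data, assumed names) of a linear Excess
# Relative Risk model: rate = lambda0 * (1 + beta * dose), with beta
# estimated by maximising the grouped-data Poisson log-likelihood on a grid.
def poisson_loglik(beta, cells, lambda0):
    """Poisson log-likelihood over (deaths, person-years, dose) cells,
    dropping the beta-independent log-factorial term."""
    ll = 0.0
    for d, py, dose in cells:
        mu = py * lambda0 * (1 + beta * dose)  # expected deaths in the cell
        ll += d * math.log(mu) - mu
    return ll

lambda0 = 0.002    # assumed baseline mortality rate per person-year
true_beta = 2.0    # assumed ERR slope used to generate the synthetic cells
# synthetic cells: deaths set to their expected values so the MLE is exact
cells = [(py * lambda0 * (1 + true_beta * dose), py, dose)
         for py, dose in [(50000, 0.0), (40000, 0.005), (30000, 0.01), (20000, 0.02)]]

grid = [i / 100 for i in range(0, 501)]  # candidate beta values 0.00 .. 5.00
beta_hat = max(grid, key=lambda b: poisson_loglik(b, cells, lambda0))
print(beta_hat)
```

In the study itself the lag enters by shifting which dose a person-year cell is assigned, and the baseline is stratified by gender, attained age, and age-at-exposure; this sketch collapses all of that into a single free slope.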

    A primary care, multi-disciplinary disease management program for opioid-treated patients with chronic non-cancer pain and a high burden of psychiatric comorbidity

    BACKGROUND: Chronic non-cancer pain is a common problem that is often accompanied by psychiatric comorbidity and disability. The effectiveness of a multi-disciplinary pain management program was tested in a 3-month before-and-after trial. METHODS: Providers in an academic general medicine clinic referred patients with chronic non-cancer pain for participation in a program that combined the skills of internists, clinical pharmacists, and a psychiatrist. Patients were either receiving opioids or being considered for opioid therapy. The intervention consisted of structured clinical assessments, monthly follow-up, pain contracts, medication titration, and psychiatric consultation. Pain, mood, and function were assessed at baseline and 3 months using the Brief Pain Inventory (BPI), the Center for Epidemiological Studies-Depression Scale (CESD) and the Pain Disability Index (PDI). Patients were monitored for substance misuse. RESULTS: Eighty-five patients were enrolled. Mean age was 51 years, 60% were male, 78% were Caucasian, and 93% were receiving opioids. Baseline average pain was 6.5 on an 11-point scale. The average CESD score was 24.0, and the mean PDI score was 47.0. Sixty-three patients (73%) completed 3-month follow-up. Fifteen withdrew from the program after identification of substance misuse. Among those completing 3-month follow-up, the average pain score improved to 5.5 (p = 0.003). The mean PDI score improved to 39.3 (p < 0.001). Mean CESD score was reduced to 18.0 (p < 0.001), and the proportion of depressed patients fell from 79% to 54% (p = 0.003). Substance misuse was identified in 27 patients (32%). CONCLUSIONS: A primary care disease management program improved pain, depression, and disability scores over three months in a cohort of opioid-treated patients with chronic non-cancer pain.
Substance misuse and depression were common, and many patients in whom substance misuse was identified left the program when they were no longer prescribed opioids. Effective care of patients with chronic pain should include rigorous assessment and treatment of these comorbid disorders and intensive efforts to ensure follow-up.

    Anticipated impact of the 2009 Four Corners raid and arrests

    Archaeological looting on United States federal land has been illegal for over a century. Regardless, the activity has continued in the Four Corners region. This paper discusses how the 1979 Archaeological Resources Protection Act (ARPA) can be viewed as sumptuary law, and how, within a sumptuary context, subversion can be anticipated. An analysis of the 1986 and June 2009 federal raids in the Four Corners exemplifies this point by identifying local discourses found in newspapers both before and after each raid, which demonstrate a sumptuary effect. Ultimately, this paper concludes that looting adapted, rather than halted, after each federal raid, and that understanding this social context of continued local justification and validation of illegal digging is a potential asset for cultural resource protection.

    Literacy and blood pressure – do healthcare systems influence this relationship? A cross-sectional study

    Background: Limited literacy is common among patients with chronic conditions and is associated with poor health outcomes. We sought to determine the association between literacy and blood pressure in primary care patients with hypertension, and to determine whether this relationship was consistent across distinct systems of healthcare delivery. Methods: We conducted a cross-sectional study of 1224 patients with hypertension utilizing baseline data from two separate, but similar, randomized controlled trials. Patients were enrolled from primary care clinics in the Veterans Affairs healthcare system (VAHS) and a university healthcare system (UHS) in Durham, North Carolina. We compared the association between literacy and the primary outcome of systolic blood pressure (SBP) and the secondary outcomes of diastolic blood pressure (DBP) and blood pressure (BP) control across the two healthcare systems. Results: Patients who read below a 9th-grade level comprised 38.4% of patients in the VAHS and 27.5% of patients in the UHS. There was a significant interaction between literacy and healthcare system for SBP. In adjusted analyses, SBP for patients with limited literacy was 1.2 mmHg lower than for patients with adequate literacy in the VAHS (95% CI -4.8 to 2.3), but 6.1 mmHg higher than for patients with adequate literacy in the UHS (95% CI 2.1 to 10.1; p = 0.003 for test of interaction). This literacy-by-healthcare-system interaction was not statistically significant for DBP or BP control. Conclusion: The relationship between patient literacy and systolic blood pressure varied significantly across different models of healthcare delivery. The attributes of the healthcare delivery system may influence the relationship between literacy and health outcomes.

    Knowledge of ghostwriting and financial conflicts-of-interest reduces the perceived credibility of biomedical research

    Background: While the impact of conflicts-of-interest (COI) is of increasing concern in academic medicine, there is little research on the reaction of practicing clinicians to the disclosure of such conflicts. We developed two research vignettes presenting a fictional antidepressant medication study, one in which the principal investigator had no COI and another in which multiple COI were disclosed. We confirmed the face validity of the COI vignette through consultation with experts. Hospital-based clinicians were randomly assigned to read one of these two vignettes and then administered a credibility scale. Findings: Perceived credibility ratings were much lower in the COI group, with a difference of 11.00 points (31.42%) on the credibility scale total as assessed by the Mann-Whitney U test (95% CI 6.99–15.00, p < .001). Clinicians in the COI group were also less likely to recommend the antidepressant medication discussed in the vignette (odds ratio 0.163, 95% CI 0.03–0.875). Conclusions: In this study, increased disclosure of COI resulted in lower credibility ratings.

    Dental general anaesthetic receipt among Australians aged 15+ years, 1998–1999 to 2004–2005

    Background: Adults receive dental general anaesthetic (DGA) care when standard dental treatment is not possible. Receipt of DGA care is resource-intensive and not without risk. This study explores DGA receipt among 15+-year-old Australians by a range of risk indicators. Methods: DGA data were obtained from Australia's Hospital Morbidity Database from 1998–1999 to 2004–2005. Poisson regression modelling was used to examine DGA rates in relation to age, sex, Indigenous status, location and procedure. Results: The overall DGA rate was 472.79 per 100,000 (95% CI 471.50–474.09). Treatment of impacted teeth (63.7%) was the most common reason for DGA receipt, followed by dental caries treatment (12.4%), although marked variations were seen by age-group. After adjusting for other covariates, DGA rates among 15–19-year-olds were 13.20 (95% CI 12.65–13.78) times higher than among their 85+-year-old counterparts. Females had 1.46 (95% CI 1.45–1.47) times the rate of their male counterparts, while those living in rural/remote areas had 2.70 (95% CI 2.68–2.72) times the rate of metropolitan-dwellers. DGA rates for non-Indigenous persons were 4.88 (95% CI 4.73–5.03) times those of Indigenous persons. The DGA rate for 1+ extractions was 461.9 per 100,000 (95% CI 460.6–463.2), compared with a rate of 23.6 per 100,000 (95% CI 23.3–23.9) for 1+ restorations. Conclusion: Nearly two-thirds of DGAs were for treatment of impacted teeth. Persons aged 15–19 years were disproportionately represented among those receiving DGA care, along with females, rural/remote-dwellers and those identifying as non-Indigenous. More research is required to better understand the public health implications of DGA care among 15+-year-olds, and how the demand for such care might be reduced.
    Lisa M Jamieson and Kaye F Roberts-Thomson
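The form of estimate this abstract reports, a rate per 100,000 with a tight 95% CI, can be illustrated with a log-scale Wald interval for a Poisson count; the event and person-year numbers below are hypothetical, not the study's data.

```python
import math

# Hypothetical counts showing a rate per 100,000 with a Wald-style 95% CI
# computed on the log scale, treating the event count as Poisson
# (SE of log(rate) is approximately 1/sqrt(events)).
def rate_per_100k(events, person_years):
    rate = events / person_years * 100_000
    factor = math.exp(1.96 / math.sqrt(events))  # 95% CI multiplier
    return rate, rate / factor, rate * factor

rate, lo, hi = rate_per_100k(944, 200_000)
print(round(rate, 1), round(lo, 1), round(hi, 1))
```

With the very large event counts behind the study's national rates, the multiplier is close to 1, which is why the reported intervals are so narrow.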