
    IMPACT OF CFTA/NAFTA ON U.S. AND CANADIAN AGRICULTURE

    CFTA/NAFTA is estimated annually to add $1,430 million of U.S. agricultural exports to Canada and $1,884 million of Canadian agricultural exports to the United States. Thus CFTA/NAFTA contributed an estimated 25 percent of the $5.8 billion of U.S. agricultural exports to Canada in 1995. Classical welfare analysis was used to estimate the implications of free trade in the dairy, poultry, sugar, and other industries that continue to be protected. In aggregate, consumers benefit from liberalization by nearly $1 billion per year in each country. Losses to Canadian producers are absolutely and relatively greater than those to U.S. producers. Overall deadweight gains are positive for each country. The annual combined two-country addition to national income ($292 million) totals a present value of $5.8 billion when discounted in perpetuity at a 5 percent rate. (International Relations/Trade)
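
    The present-value figure follows from the standard perpetuity formula PV = A/r. A minimal Python check of the arithmetic, using only the numbers quoted in the abstract:

```python
# Present value of a perpetual annuity: PV = A / r
annual_gain = 292e6    # combined two-country annual gain in national income (USD)
discount_rate = 0.05   # 5 percent, discounted in perpetuity

present_value = annual_gain / discount_rate
print(f"PV = ${present_value / 1e9:.2f} billion")  # ~$5.84 billion, i.e. the quoted $5.8 billion
```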

    Bayesian regression discontinuity designs: Incorporating clinical knowledge in the causal analysis of primary care data

    The regression discontinuity (RD) design is a quasi-experimental design that estimates the causal effects of a treatment by exploiting naturally occurring treatment rules. It can be applied in any context where a particular treatment or intervention is administered according to a pre-specified rule linked to a continuous variable. Such thresholds are common in primary care drug prescription, where the RD design can be used to estimate the causal effect of medication in the general population. Such results can then be contrasted with those obtained from randomised controlled trials (RCTs) and inform prescription policy and guidelines based on a more realistic and less expensive context. In this paper we focus on statins, a class of cholesterol-lowering drugs; however, the methodology can be applied to many other drugs provided these are prescribed in accordance with pre-determined guidelines. NHS guidelines state that statins should be prescribed to patients with 10-year cardiovascular disease risk scores in excess of 20%. If we consider patients whose scores are close to this threshold, we find that there is an element of random variation in both the risk score itself and its measurement. We can thus consider the threshold a randomising device that assigns the prescription to units just above the threshold and withholds it from those just below. Thus we are effectively replicating the conditions of an RCT in the area around the threshold, removing or at least mitigating confounding. We frame the RD design in the language of conditional independence, which clarifies the assumptions necessary to apply it to data and makes the links with instrumental variables clear. We also have context-specific knowledge about the expected sizes of the effects of statin prescription and are thus able to incorporate this into Bayesian models by formulating informative priors on our causal parameters. (Comment: 21 pages, 5 figures, 2 tables)
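
    As a rough illustration of the approach, the sketch below simulates a sharp RD at the 20% risk threshold and combines the local-linear estimate of the effect with an informative normal prior via a conjugate update. All data and parameter values are simulated and hypothetical; the paper's actual analysis is richer (the prescription rule is fuzzy and is handled through its links with instrumental variables).

```python
# Minimal sharp-RD sketch with an informative prior on the causal effect.
# Simulated data; not the authors' model.
import numpy as np

rng = np.random.default_rng(1)
n, c, h = 5000, 0.20, 0.05                   # sample size, threshold, bandwidth
risk = rng.uniform(0.05, 0.35, n)            # 10-year CVD risk score
treated = (risk >= c).astype(float)          # prescription rule: statins above 20%
y = 1.0 + 2.0 * risk - 0.15 * treated + rng.normal(0, 0.2, n)  # hypothetical outcome

def local_fit(mask):
    """Local linear fit; intercept at the threshold and its sampling variance."""
    X = np.column_stack([np.ones(mask.sum()), risk[mask] - c])
    beta, res, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
    sigma2 = res[0] / (mask.sum() - 2)
    return beta[0], sigma2 * np.linalg.inv(X.T @ X)[0, 0]

a0, v0 = local_fit((risk >= c - h) & (risk < c))   # just below the threshold
a1, v1 = local_fit((risk >= c) & (risk < c + h))   # just above the threshold
effect_hat, var_hat = a1 - a0, v0 + v1             # RD estimate and its variance

# Conjugate normal update with an informative prior on the effect size
prior_mean, prior_var = -0.10, 0.05**2
post_var = 1.0 / (1.0 / prior_var + 1.0 / var_hat)
post_mean = post_var * (prior_mean / prior_var + effect_hat / var_hat)
print(f"posterior effect: {post_mean:.3f} (sd {post_var**0.5:.3f})")
```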

    Survival extrapolation in the presence of cause specific hazards.

    Health economic evaluations require estimates of expected survival from patients receiving different interventions, often over a lifetime. However, data on the patients of interest are typically only available for a much shorter follow-up time, from randomised trials or cohorts. Previous work showed how to use general population mortality to improve extrapolations of the short-term data, assuming a constant additive or multiplicative effect on the hazards for all-cause mortality for study patients relative to the general population. A more plausible assumption may be a constant effect on the hazard for the specific cause of death targeted by the treatments. To address this problem, we use independent parametric survival models for cause-specific mortality among the general population. Because causes of death are unobserved for the patients of interest, a polyhazard model is used to express their all-cause mortality as a sum of latent cause-specific hazards. Assuming proportional cause-specific hazards between the general and study populations then allows us to extrapolate mortality of the patients of interest to the long term. A Bayesian framework is used to jointly model all sources of data. By simulation, we show that ignoring cause-specific hazards leads to biased estimates of mean survival when the proportion of deaths due to the cause of interest changes through time. The methods are applied to an evaluation of implantable cardioverter defibrillators for the prevention of sudden cardiac death among patients with cardiac arrhythmia. After accounting for cause-specific mortality, substantial differences are seen in estimates of life years gained from implantable cardioverter defibrillators.
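
    The core idea can be sketched as follows: express the study population's all-cause hazard as a sum of cause-specific hazards, apply a proportional effect to the cause of interest only, and integrate to extrapolate survival. The Weibull forms and every numerical value below are hypothetical placeholders, not the fitted models from the paper.

```python
# Polyhazard extrapolation sketch: proportional hazards on one cause only.
import numpy as np

t = np.linspace(0, 40, 4001)                 # follow-up extrapolated to 40 years
dt = t[1] - t[0]

def weibull_hazard(t, shape, scale):
    return (shape / scale) * (t / scale) ** (shape - 1)

h_cardiac = weibull_hazard(t, 1.6, 30.0)     # cause targeted by the treatment
h_other = weibull_hazard(t, 1.3, 25.0)       # all remaining causes of death
hr_cardiac = 2.5                             # study vs general population, cardiac only

# All-cause hazard of the study population = sum of latent cause-specific hazards
h_study = hr_cardiac * h_cardiac + h_other
S = np.exp(-np.cumsum(h_study) * dt)         # S(t) = exp(-cumulative hazard), crude Riemann sum
print(f"restricted mean survival over 40 years: {S.sum() * dt:.1f} years")
```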

    Using Parameter Constraints to Choose State Structures in Cost-Effectiveness Modelling.

    BACKGROUND: This article addresses the choice of state structure in a cost-effectiveness multi-state model. Key model outputs, such as treatment recommendations and prioritisation of future research, may be sensitive to state structure choice. For example, it may be uncertain whether to consider similar disease severities or similar clinical events as the same state or as separate states. Standard statistical methods for comparing models require a common reference dataset, but merging states in a model aggregates the data, rendering these methods invalid. METHODS: We propose a method that involves re-expressing a model with merged states as a model on the larger state space in which particular transition probabilities, costs and utilities are constrained to be equal between states. This produces a model that gives identical estimates of cost effectiveness to the model with merged states, while leaving the data unchanged. The comparison of state structures can be achieved by comparing maximised likelihoods or information criteria between constrained and unconstrained models. We can thus test whether the costs and/or health consequences for a patient in two states are the same, and hence if the states can be merged. We note that different structures can be used for rates, costs and utilities, as appropriate. APPLICATION: We illustrate our method with applications to two recent models evaluating the cost effectiveness of prescribing anti-depressant medications by depression severity and the cost effectiveness of diagnostic tests for coronary artery disease. CONCLUSIONS: State structures in cost-effectiveness models can be compared using standard methods to compare constrained and unconstrained models.
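
    In the simplest case the idea reduces to a standard constrained-versus-unconstrained comparison of multinomial likelihoods. A toy sketch with hypothetical transition counts, testing whether two states share the same transition probabilities and can therefore be merged:

```python
# Can states A and B be merged? Compare unconstrained and constrained MLEs.
import numpy as np
from scipy.stats import chi2

counts = np.array([[120, 30, 10],    # from state A: stay, progress, die (hypothetical)
                   [110, 45, 15]])   # from state B

def loglik(counts, probs):
    return float(np.sum(counts * np.log(probs)))

# Unconstrained: each state keeps its own transition probabilities (4 free parameters)
p_sep = counts / counts.sum(axis=1, keepdims=True)
ll_sep = loglik(counts, p_sep)

# Constrained: probabilities equal across states, i.e. the states are merged (2 free parameters)
p_merged = counts.sum(axis=0) / counts.sum()
ll_merged = loglik(counts, np.tile(p_merged, (2, 1)))

lr = 2 * (ll_sep - ll_merged)        # likelihood ratio statistic, df = 4 - 2
print(f"LR = {lr:.2f}, p = {chi2.sf(lr, df=2):.3f}")
print(f"AIC separate = {2 * 4 - 2 * ll_sep:.1f}, AIC merged = {2 * 2 - 2 * ll_merged:.1f}")
```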

    The role of statistics in the era of big data: Electronic health records for healthcare research

    The transfer of medical records into huge electronic databases has opened up opportunities for research but requires attention to data quality, study design, and issues of bias and confounding.

    Exploring mechanisms of action in clinical trials of complex surgical interventions using mediation analysis.

    BACKGROUND: Surgical interventions allow for tailoring of treatment to individual patients and implementation may vary with surgeon and healthcare provider. In addition, in clinical trials assessing two competing surgical interventions, the treatments may be accompanied by co-interventions. AIMS: This study explores the use of causal mediation analysis to (1) delineate the treatment effect that results directly from the surgical intervention under study and the indirect effect acting through a co-intervention and (2) evaluate the benefit of the surgical intervention if either everybody in the trial population received the co-intervention or nobody received it. METHODS: Within a counterfactual framework, relevant direct and indirect effects of a surgical intervention are estimated and adjusted for confounding via parametric regression models, for the situation where both mediator and outcome are binary, with baseline stratification factors included as fixed effects and surgeons as random intercepts. The causal difference in probability of a successful outcome (estimand of interest) is calculated using Monte Carlo simulation, with bootstrapping for confidence intervals. Packages for estimation within standard statistical software are reviewed briefly. A step-by-step application of the methods is illustrated using the Amaze randomised trial of ablation as an adjunct to cardiac surgery in patients with irregular heart rhythm, with a co-intervention (removal of the left atrial appendage) administered to a subset of participants at the surgeon's discretion. The primary outcome was return to normal heart rhythm at one year post-surgery. RESULTS: In Amaze, 17% (95% confidence interval: 6%, 28%) more patients in the active arm had a successful outcome, but there was a large difference between active and control arms in the proportion of patients who received the co-intervention (55% and 30%, respectively). Causal mediation analysis suggested that around 1% of the treatment effect was attributable to the co-intervention (16% natural direct effect). The controlled direct effect ranged from 18% (6%, 30%) if the co-intervention were mandated, to 14% (2%, 25%) if it were prohibited. Including age as a moderator of the mediation effects showed that the natural direct effect of ablation appeared to decrease with age. CONCLUSIONS: Causal mediation analysis is a useful quantitative tool to explore mediating effects of co-interventions in surgical trials. In Amaze, investigators could be reassured that the effect of the active treatment, not explainable by differential use of the co-intervention, was significant across analyses.
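
    The estimation strategy can be illustrated with a stripped-down g-computation sketch for a binary mediator (the co-intervention) and a binary outcome. The data are simulated, confounder adjustment and the surgeon random intercepts are omitted for brevity, and all coefficients are hypothetical:

```python
# Natural direct/indirect and controlled direct effects via parametric g-computation.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 4000
z = rng.integers(0, 2, n)                                    # randomised treatment arm
m = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 1.2 * z))))     # co-intervention (mediator)
y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 0.8 * z + 0.1 * m))))  # success at 1 year

m_fit = sm.GLM(m, sm.add_constant(z), family=sm.families.Binomial()).fit()
y_fit = sm.GLM(y, np.column_stack([np.ones(n), z, m]),
               family=sm.families.Binomial()).fit()

def p_m(z_val):                      # P(M = 1 | Z = z)
    return m_fit.predict([[1, z_val]])[0]

def p_y(z_val, m_val):               # P(Y = 1 | Z = z, M = m)
    return y_fit.predict([[1, z_val, m_val]])[0]

def e_y(z_y, z_m):                   # E[Y(z_y, M(z_m))], counterfactual composition
    pm = p_m(z_m)
    return p_y(z_y, 1) * pm + p_y(z_y, 0) * (1 - pm)

nde = e_y(1, 0) - e_y(0, 0)          # natural direct effect
nie = e_y(1, 1) - e_y(1, 0)          # natural indirect effect via the co-intervention
cde1 = p_y(1, 1) - p_y(0, 1)         # controlled direct effect, co-intervention mandated
cde0 = p_y(1, 0) - p_y(0, 0)         # controlled direct effect, co-intervention prohibited
print(nde, nie, cde1, cde0)          # bootstrap over this pipeline for confidence intervals
```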

    Calibration of complex models through Bayesian evidence synthesis: a demonstration and tutorial.

    Decision-analytic models must often be informed using data that are only indirectly related to the main model parameters. The authors outline how to implement a Bayesian synthesis of diverse sources of evidence to calibrate the parameters of a complex model. A graphical model is built to represent how observed data are generated from statistical models with unknown parameters and how those parameters are related to quantities of interest for decision making. This forms the basis of an algorithm to estimate a posterior probability distribution, which represents the updated state of evidence for all unknowns given all data and prior beliefs. This process calibrates the quantities of interest against data and, at the same time, propagates all parameter uncertainties to the results used for decision making. To illustrate these methods, the authors demonstrate how a previously developed Markov model for the progression of human papillomavirus (HPV-16) infection was rebuilt in a Bayesian framework. Transition probabilities between states of disease severity are inferred indirectly from cross-sectional observations of prevalence of HPV-16 and HPV-16-related disease by age, cervical cancer incidence, and other published information. Previously, a discrete collection of plausible scenarios was identified, but with no further indication of which of these are more plausible. Instead, the authors derive a Bayesian posterior distribution, in which scenarios are implicitly weighted according to how well they are supported by the data. In particular, they emphasize the appropriate choice of prior distributions and the checking and comparison of fitted models.
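
    The flavour of the calibration can be conveyed with a toy example: a single unknown transition probability feeds a simple dynamic model, whose predicted prevalence is confronted with cross-sectional count data; prior times likelihood gives the posterior. The disease model, prior and data below are all invented for illustration:

```python
# Bayesian calibration of a model parameter against indirectly related data.
import numpy as np
from scipy import stats

def model_prevalence(p_progress, years=10, clearance=0.4):
    """Toy dynamic model: maps an annual progression probability to prevalence."""
    prev = 0.0
    for _ in range(years):
        prev = prev + p_progress * (1 - prev) - clearance * prev
    return prev

observed_cases, sample_size = 130, 1000        # hypothetical cross-sectional survey

grid = np.linspace(0.001, 0.30, 600)           # candidate parameter values
dx = grid[1] - grid[0]
prior = stats.beta(2, 20).pdf(grid)            # informative prior on progression
lik = np.array([stats.binom.pmf(observed_cases, sample_size, model_prevalence(p))
                for p in grid])
post = prior * lik
post /= post.sum() * dx                        # normalise on the grid
print(f"posterior mean progression probability: {(grid * post).sum() * dx:.3f}")
```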

    Accuracy of time to treatment estimates in the CRASH-3 clinical trial: impact on the trial results.

    BACKGROUND: Early treatment with tranexamic acid may reduce deaths after traumatic brain injury (TBI). In mild and moderate TBI, there is a time to treatment interaction, with early treatment being most beneficial. Time to treatment was recorded by clinicians and is subject to error. Using monitoring data from the CRASH-3 trial, we examine the impact of errors in time to treatment on estimated treatment effects. METHODS: The CRASH-3 trial was a randomised trial of the effect of tranexamic acid on death and vascular occlusive events in 12,737 TBI patients. This analysis includes the 8107 patients with a Glasgow coma scale score of 9 to 15, since previous analyses showed that these patients benefit most from early treatment. Clinician-recorded time to treatment was checked against ambulance and hospital records for 1368/12,737 (11%) patients. Patients who died were preferentially selected for monitoring and we monitored 36% of head injury deaths. We describe measurement errors using Bland-Altman graphs. We model the effect of tranexamic acid on head injury death using logistic regression with a time-treatment interaction term. We use regression calibration, multiple imputation and Bayesian analysis to estimate the impact of time to treatment errors. RESULTS: Clinicians rounded times to the nearest half or full hour in 66% of cases. Monitored times were also rounded and were identical to clinician times in 63% of patients. Times were underestimated by an average of 9 min (95% CI -85, 66). There was more variability between clinician-recorded and monitored times in low- and middle-income countries than in high-income countries. The treatment effect estimate at 1 h was greater for monitored times (OR = 0.61, 95% CI 0.47, 0.81) than for clinician-recorded times (OR = 0.63, 95% CI 0.48, 0.83). All three adjustment methods gave similar time to treatment interactions. For Bayesian methods, the treatment effect at 1 h was OR = 0.58 (95% CI 0.43, 0.78). Using monitored times increased the time-treatment interaction term from 1.15 (95% CI 1.03, 1.27) to 1.16 (95% CI 1.05, 1.28). CONCLUSIONS: Accurate estimation of time from injury to treatment is challenging, particularly in low-resource settings. Adjustment for known errors in time to treatment had minimal impact on the trial results. TRIAL REGISTRATION: ClinicalTrials.gov NCT01402882. Registered on 25 July 2011.
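
    A simplified version of the regression-calibration adjustment is sketched below: fit a logistic model with a time-treatment interaction, then refit after replacing the error-prone recorded times with their expected true values. Everything is simulated with hypothetical coefficients and a known error variance; the trial analysis is considerably more involved.

```python
# Regression calibration for an error-prone time-to-treatment variable.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 8000
tx = rng.integers(0, 2, n)                   # tranexamic acid vs placebo
t_true = rng.gamma(2.0, 1.0, n)              # true time to treatment (hours)
t_obs = t_true + rng.normal(0, 0.5, n)       # clinician-recorded time, with error

# Benefit wanes with time: interaction on the log-odds scale (hypothetical values)
logit = -2.0 + (-0.5 + 0.15 * t_true) * tx
death = rng.binomial(1, 1 / (1 + np.exp(-logit)))

def or_at_1h(t):
    X = sm.add_constant(np.column_stack([tx, t, tx * t]))
    fit = sm.GLM(death, X, family=sm.families.Binomial()).fit()
    return np.exp(fit.params[1] + fit.params[3])   # OR for treatment at 1 hour

# Regression calibration: replace t_obs with E[t_true | t_obs]
var_u = 0.5 ** 2                             # assumed known measurement error variance
lam = (t_obs.var() - var_u) / t_obs.var()
t_cal = t_obs.mean() + lam * (t_obs - t_obs.mean())

print("naive OR at 1 h:     ", or_at_1h(t_obs))
print("calibrated OR at 1 h:", or_at_1h(t_cal))
```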

    Mapping of the EQ-5D index from clinical outcome measures and demographic variables in patients with coronary heart disease.

    BACKGROUND: The EuroQoL 5D (EQ-5D) is a questionnaire that provides a measure of utility for cost-effectiveness analysis. The EQ-5D has been widely used in many patient groups, including those with coronary heart disease. Studies often require patients to complete many questionnaires and the EQ-5D may not be gathered. This study aimed to assess whether demographic and clinical outcome variables, including scores from a disease-specific measure, the Seattle Angina Questionnaire (SAQ), could be used to predict, or map, the EQ-5D index value where it is not available. METHODS: Patient-level data from 5 studies of cardiac interventions were used. The data were split into two groups: approximately 60% of the data were used as an estimation dataset for building models, and 40% were used as a validation dataset. Forward ordinary least squares linear regression methods and measures of prediction error were used to build a model to map to the EQ-5D index. Age, sex, a proxy measure of disease stage, Canadian Cardiovascular Society (CCS) angina severity class, treadmill exercise time (ETT) and scales of the SAQ were examined. RESULTS: The exertional capacity (ECS), disease perception (DPS) and anginal frequency (AFS) scales of the SAQ were the strongest predictors of the EQ-5D index and gave the smallest root mean square errors. A final model was chosen with age, gender, disease stage and the ECS, DPS and AFS scales of the SAQ. ETT and CCS did not improve prediction in the presence of the SAQ scales. Bland-Altman agreement between predicted and observed EQ-5D index values was reasonable for values greater than 0.4, but below this level predicted values were higher than observed. The 95% limits of agreement were wide (-0.34, 0.33). CONCLUSIONS: Mapping of the EQ-5D index in cardiac patients from demographics and commonly measured cardiac outcome variables is possible; however, prediction for values of the EQ-5D index below 0.4 was not accurate. The newly designed 5-level version of the EQ-5D, with its increased ability to discriminate health states, may improve prediction of EQ-5D index values.
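
    The mapping exercise is, at its core, ordinary least squares with a held-out validation split. A minimal sketch with simulated data (the coefficients and distributions are invented; variable names follow the abstract):

```python
# OLS mapping of the EQ-5D index from SAQ scales and demographics,
# with a 60/40 estimation/validation split as in the abstract.
import numpy as np

rng = np.random.default_rng(11)
n = 1500
age = rng.normal(63, 9, n)
male = rng.integers(0, 2, n)
ecs, dps, afs = (rng.uniform(0, 100, n) for _ in range(3))   # SAQ scales
eq5d = np.clip(0.2 + 0.004 * ecs + 0.002 * dps + 0.002 * afs
               - 0.002 * (age - 60) + rng.normal(0, 0.12, n), -0.3, 1.0)

X = np.column_stack([np.ones(n), age, male, ecs, dps, afs])
split = int(0.6 * n)                          # ~60% estimation, 40% validation
beta, *_ = np.linalg.lstsq(X[:split], eq5d[:split], rcond=None)
pred = X[split:] @ beta
rmse = np.sqrt(np.mean((pred - eq5d[split:]) ** 2))
print(f"validation RMSE: {rmse:.3f}")
```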