
    Shrinkage Bayesian Causal Forests for Heterogeneous Treatment Effects Estimation

    This article develops a sparsity-inducing version of Bayesian Causal Forests, a recently proposed nonparametric causal regression model that employs Bayesian Additive Regression Trees and is specifically designed to estimate heterogeneous treatment effects from observational data. The sparsity-inducing component we introduce is motivated by empirical studies where not all of the available covariates are relevant, leading to different degrees of sparsity underlying the surfaces of interest in the estimation of individual treatment effects. The extended version presented in this work, which we name Shrinkage Bayesian Causal Forest, is equipped with an additional pair of priors that allow the model to adjust the weight of each covariate through the corresponding number of splits in the tree ensemble. These priors improve the model's adaptability to sparse data-generating processes and allow fully Bayesian feature shrinkage to be performed within a treatment effects estimation framework, thus uncovering the moderating factors that drive heterogeneity. In addition, the method allows prior knowledge about the relevant confounding covariates, and the relative magnitude of their impact on the outcome, to be incorporated into the model. We illustrate the performance of our method in simulation studies, in comparison to Bayesian Causal Forest and other state-of-the-art models, demonstrating how it scales with an increasing number of covariates and how it handles strongly confounded scenarios. Finally, we provide an application using real-world data. Supplementary materials for this article are available online.
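
    As a rough illustration of the covariate-weighting mechanism described above, the sketch below draws covariate splitting probabilities from a Dirichlet posterior updated with the ensemble's split counts, so that rarely used covariates are shrunk away. The function name and the concentration value are our assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_split_probs(split_counts, alpha):
    """Draw covariate splitting probabilities from their Dirichlet posterior.

    split_counts : times each covariate appears as a splitting rule in the
                   current tree ensemble (the sufficient statistic here).
    alpha        : concentration of the Dirichlet prior; values below 1
                   induce sparsity, concentrating mass on few covariates.
    """
    return rng.dirichlet(alpha + split_counts)

# Toy example: 10 covariates, only the first two actually used for splits.
counts = np.array([12.0, 9.0, 0, 1, 0, 0, 0, 0, 0, 0])
probs = sample_split_probs(counts, alpha=np.full(10, 0.5))
print(probs.round(3))  # mass piles onto the two active covariates
```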

    Marginalization of Regression-Adjusted Treatment Effects in Indirect Comparisons with Limited Patient-Level Data

    Population adjustment methods such as matching-adjusted indirect comparison (MAIC) are increasingly used to compare marginal treatment effects when there are cross-trial differences in effect modifiers and limited patient-level data. MAIC is sensitive to poor covariate overlap and cannot extrapolate beyond the observed covariate space. Current outcome regression-based alternatives can extrapolate, but they target a conditional treatment effect that is incompatible with the indirect comparison. When adjusting for covariates, one must integrate or average the conditional estimate over the population of interest to recover a compatible marginal treatment effect. We propose a marginalization method based on parametric G-computation that can be easily applied where the outcome regression is a generalized linear model or a Cox model. In addition, we introduce a novel general-purpose method based on multiple imputation, which we term multiple imputation marginalization (MIM) and which is applicable to a wide range of models. Both methods can be implemented in a Bayesian statistical framework, which naturally integrates the analysis into a probabilistic setting. A simulation study provides proof of principle for the methods and benchmarks their performance against MAIC and the conventional outcome regression. The marginalized outcome regression approaches achieve more precise and more accurate estimates than MAIC, particularly when covariate overlap is poor, and yield unbiased marginal treatment effect estimates when no assumptions fail. Furthermore, the marginalized regression-adjusted estimates provide greater precision and accuracy than the conditional estimates produced by the conventional outcome regression, which are systematically biased because the measure of effect is non-collapsible.
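
    As a concrete illustration of the G-computation step, the sketch below fits a logistic outcome regression to simulated patient-level data, predicts everyone's outcome under each treatment over a target covariate distribution, and contrasts the averaged predictions on the natural scale. All data and names are illustrative assumptions, not the paper's code.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

def design(t_val, x):
    """Design matrix (intercept, treatment, covariate) for the outcome model."""
    return np.column_stack([np.ones_like(x), np.full_like(x, t_val), x])

# Illustrative patient-level trial data: covariate x, treatment t, binary y.
n = 2000
x = rng.normal(size=n)
t = rng.integers(0, 2, size=n).astype(float)
y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 0.8 * t + 0.6 * x))))

# Step 1: fit the conditional (covariate-adjusted) outcome regression.
fit = sm.GLM(y, design(0.0, x) * 0 + np.column_stack([np.ones(n), t, x]),
             family=sm.families.Binomial()).fit()

# Step 2: predict under each treatment for the target population's
# covariate distribution, averaging on the natural (probability) scale.
x_target = rng.normal(loc=0.3, size=5000)  # stand-in for the comparator trial
p1 = fit.predict(design(1.0, x_target)).mean()
p0 = fit.predict(design(0.0, x_target)).mean()

# Step 3: contrast the averaged potential outcomes to obtain a marginal
# effect that is compatible with the indirect comparison.
marginal_log_or = np.log(p1 / (1 - p1)) - np.log(p0 / (1 - p0))
print(marginal_log_or)
```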

    Estimating Individual Treatment Effects using Non-Parametric Regression Models: a Review

    Large observational datasets are increasingly available in disciplines such as the health, economic, and social sciences, where researchers are interested in causal questions rather than prediction. In this paper, we investigate the problem of estimating heterogeneous treatment effects using non-parametric regression-based methods. First, we introduce the setup and the issues involved in conducting causal inference with observational or not-fully-randomized data, and how these issues can be tackled with the help of statistical learning tools. We then review state-of-the-art methods, with a particular focus on non-parametric modeling, and cast them within a unifying taxonomy. After a brief overview of the model selection problem, we illustrate the performance of some of the methods in three simulation studies and in a real-world example investigating the effect of participation in school meal programs on health indicators.
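
    One of the simplest regression-based strategies in this family fits a separate outcome model per treatment arm and differences the predictions (often called a T-learner). The sketch below is a generic illustration with simulated data, not any specific method from the review.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)

# Illustrative observational data with a heterogeneous effect tau(x) = x2.
n = 4000
X = rng.normal(size=(n, 3))
t = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))   # confounded assignment
y = X[:, 0] + t * X[:, 1] + rng.normal(scale=0.5, size=n)

# T-learner: one outcome model per treatment arm.
m1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[t == 1], y[t == 1])
m0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[t == 0], y[t == 0])

# Individual treatment effect estimates: difference of arm-wise predictions.
tau_hat = m1.predict(X) - m0.predict(X)
print(np.corrcoef(tau_hat, X[:, 1])[0, 1])  # should track the true moderator
```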

    Extended brief intervention to address alcohol misuse in people with mild to moderate intellectual disabilities living in the community (EBI-ID): study protocol for a randomised controlled trial.

    There is some evidence that people with intellectual disabilities who live in the community are exposed to the same risks of alcohol use as the rest of the population. Various interventions have been evaluated in the general population to tackle hazardous or harmful drinking and alcohol dependence, but the literature evaluating such interventions in people with intellectual disabilities is very limited. The National Institute for Health and Clinical Excellence recommends that brief and extended brief interventions be used to help young people and adults who have screened positive for hazardous and harmful drinking. The objective of this trial is to investigate the feasibility of adapting and delivering an extended brief intervention (EBI) to people with mild to moderate intellectual disabilities who live in the community and whose level of drinking is harmful or hazardous.

    Joint Longitudinal Models for Dealing With Missing at Random Data in Trial-Based Economic Evaluations

    OBJECTIVES: In trial-based economic evaluation, some individuals typically have missing data at some time point, so that their corresponding aggregated outcomes (eg, quality-adjusted life-years) cannot be evaluated. Restricting the analysis to the complete cases is inefficient and can result in biased estimates, while imputation methods are often implemented under a missing at random (MAR) assumption. We propose the use of joint longitudinal models to extend standard approaches, taking the longitudinal structure into account to improve estimation of the targeted quantities under MAR. METHODS: We compare the results from methods that handle missingness at an aggregated (case deletion, baseline imputation, and joint aggregated models) and a disaggregated (joint longitudinal models) level under MAR. The methods are compared in a simulation study and applied to data from 2 real case studies. RESULTS: Simulations show that, depending on which data affect the missingness process, aggregated methods may lead to biased results, while joint longitudinal models lead to valid inferences under MAR. The analysis of the 2 case studies supports these results, as both parameter estimates and cost-effectiveness results vary with the amount of data incorporated into the model. CONCLUSIONS: Our analyses suggest that methods implemented at the aggregated level are potentially biased under MAR because they ignore the information in the partially observed follow-up data. This limitation can be overcome by extending the analysis to a longitudinal framework using joint models, which can incorporate all the available evidence.
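
    To illustrate the disaggregated idea, the sketch below simulates utilities at three visits with dropout that depends on the observed baseline (a MAR mechanism) and fits a linear mixed model to all observed measurements, so partially observed individuals still contribute. It is a simplified stand-in for the paper's joint longitudinal cost-utility models, with all names and values chosen for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

# Illustrative trial: utilities at 3 visits; dropout probability depends on
# the observed baseline utility, i.e. a missing-at-random (MAR) mechanism.
n = 400
arm = rng.integers(0, 2, n)
base = 0.7 + 0.05 * arm + rng.normal(0, 0.1, n)

rows = []
for i in range(n):
    rows.append((i, 0, arm[i], base[i]))
    p_drop = 1 / (1 + np.exp(10 * (base[i] - 0.7)))  # low baseline -> dropout
    for tpt in (1, 2):
        u = base[i] + 0.02 * tpt + rng.normal(0, 0.05)
        rows.append((i, tpt, arm[i], np.nan if rng.random() < p_drop else u))
df = pd.DataFrame(rows, columns=["id", "time", "arm", "utility"])

# Longitudinal (mixed) model: uses every observed measurement, so the
# partially observed follow-up data inform the estimates, valid under MAR.
fit = smf.mixedlm("utility ~ arm + time", df.dropna(), groups="id").fit()
print(fit.params["arm"])
```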

    Estimating the Expected Value of Partial Perfect Information in Health Economic Evaluations using Integrated Nested Laplace Approximation

    The Expected Value of Partial Perfect Information (EVPPI) is a decision-theoretic measure of the "cost" of parametric uncertainty, used principally in health economic decision making. Despite this decision-theoretic grounding, the uptake of EVPPI calculations in practice has been slow. This is in part due to the prohibitive computational time required to estimate the EVPPI via Monte Carlo simulation. However, recent developments have demonstrated that the EVPPI can be estimated by non-parametric regression methods, which have significantly decreased the computation time required to approximate it. Under certain circumstances, high-dimensional Gaussian Process regression is suggested, but this can still be prohibitively expensive. Applying fast computation methods developed in spatial statistics using Integrated Nested Laplace Approximations (INLA), and projecting from a high-dimensional into a low-dimensional input space, allows us to decrease the time needed to fit these high-dimensional Gaussian Processes, often substantially. We demonstrate that the EVPPI calculated with our method is in line with the standard Gaussian Process regression method and that, despite the apparent methodological complexity of this new approach, R functions are available in the BCEA package to implement it simply and efficiently.
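
    The regression idea behind the speed-up can be shown compactly: regress each option's simulated net benefit on the parameter(s) of interest, then compare the expected maximum of the fitted values with the maximum of the expected net benefits. The sketch below uses a standard Gaussian Process from scikit-learn on toy PSA output; it is a minimal illustration, not the INLA-based BCEA implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)

# Toy probabilistic sensitivity analysis: two options whose net benefits
# depend on the parameter of interest phi plus residual uncertainty.
n = 500
phi = rng.normal(size=n)
nb = np.column_stack([
    1000 * phi + rng.normal(0, 500, size=n),        # option 1
    200 + 600 * phi + rng.normal(0, 500, size=n),   # option 2
])

# Regress each option's net benefit on phi; the fitted values approximate
# the conditional expectation E[NB_d | phi].
kernel = RBF() + WhiteKernel()
fitted = np.column_stack([
    GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    .fit(phi[:, None], nb[:, d])
    .predict(phi[:, None])
    for d in range(2)
])

# EVPPI = E_phi[ max_d E(NB_d | phi) ] - max_d E(NB_d).
evppi = fitted.max(axis=1).mean() - nb.mean(axis=0).max()
print(round(evppi, 1))
```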

    Additive energy forward curves in a Heath-Jarrow-Morton framework

    One of the peculiarities of power and gas markets is the delivery mechanism of forward contracts. The seller of a futures contract commits to deliver, say, power over a certain period, while the classical forward is a financial agreement settled on a maturity date. Our purpose is to design a Heath-Jarrow-Morton framework for an additive, mean-reverting, multicommodity market consisting of forward contracts with arbitrary delivery periods. The main assumption is that forward prices can be represented as affine functions of a universal source of randomness. This allows us to completely characterize the models that prevent arbitrage opportunities: the problem boils down to finding a density between a risk-neutral measure $\mathbb{Q}$, such that the prices of traded assets like forward contracts are true $\mathbb{Q}$-martingales, and the real-world probability measure $\mathbb{P}$, under which forward prices are mean-reverting. The Girsanov kernel for such a transformation turns out to be stochastic and unbounded in the diffusion part, while in the jump part it must be deterministic and bounded: in this respect, we prove two results on the martingale property of stochastic exponentials. The first allows us to validate measure changes composed of two components: an Esscher-type density and a Girsanov transform with a stochastic, unbounded kernel. The second uses a different approach and works for the case of a continuous density. We apply this framework to two models: a generalized Lucia-Schwartz model and a cross-commodity cointegrated market.
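
    Schematically, the affine assumption and the resulting no-arbitrage requirement can be written as follows. The notation is generic and chosen for illustration; the paper's precise formulation, including the jump and mean-reversion structure, is richer.

```latex
% Affine representation: the time-t price of a forward delivering over
% [T_1, T_2], driven by a universal source of randomness X_t.
F(t, T_1, T_2) = a(t, T_1, T_2) + b(t, T_1, T_2)\, X_t

% Absence of arbitrage requires an equivalent measure Q ~ P under which
% every traded forward price is a true martingale:
F(t, T_1, T_2) = \mathbb{E}^{\mathbb{Q}}\!\left[\, F(s, T_1, T_2) \mid \mathcal{F}_t \,\right],
\qquad t \le s \le T_1,
% while under the physical measure P the forward prices are mean-reverting.
```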

    "Fishing na everybody business": women's work and gender relations in Sierra Leone's fisheries

    While small-scale marine fisheries in many developing countries are "everybody's business", a strong gendered division of labour sees production concentrated in the hands of male fishermen, while women ('fish mammies') invariably dominate the post-harvest processing and retailing sector. Consequently, the production bias of many fisheries management programmes has not only largely overlooked the critical role that fisherwomen play in the sector, but has also seen 'fish mammies' marginalised in terms of resource and training support. This paper employs a gender-aware livelihoods framework to make the economic space occupied by women in the small-scale fisheries sector in Sierra Leone more 'visible', and highlights how their variegated access to different livelihood capitals and resources interacts with gendered social norms and women's reproductive work. We argue for more social and economic investment in women's fish processing and reproductive work, so as to enable them to reconcile both roles more effectively.

    Regulatory approval of pharmaceuticals without a randomised controlled study: analysis of EMA and FDA approvals 1999-2014

    INTRODUCTION: The efficacy of pharmaceuticals is most often demonstrated by randomised controlled trials (RCTs); however, in some cases, regulatory applications lack RCT evidence. OBJECTIVE: To investigate the number and type of such approvals over the past 15 years by the European Medicines Agency (EMA) and the US Food and Drug Administration (FDA). METHODS: Drug approval data were downloaded from the EMA website and the 'Drugs@FDA' database for all decisions on pharmaceuticals published from 1 January 1999 to 8 May 2014. The details of eligible applications were extracted, including the therapeutic area, type of approval and review period. RESULTS: Over the study period, 76 unique indications were granted without RCT results (44 by the EMA and 60 by the FDA), demonstrating that a substantial number of treatments reach the market without undergoing an RCT. The most common area was haematological malignancies (34), followed by oncology (15) and metabolic conditions (15). Of the applications made to both agencies with a comparable data package, the FDA granted more approvals (43/44 vs 35/44) and took less time to review products (8.7 vs 15.5 months). Products reached the market first in the USA in 30 of 34 cases (by a mean of 13.1 months), because companies made FDA submissions before EMA submissions and because of the FDA's faster review times. DISCUSSION: Despite the frequency with which approvals are granted without RCT results, there is no systematic monitoring of such treatments to confirm their effectiveness, nor any consistency about when this form of evidence is appropriate. We recommend a more open debate on the role of marketing authorisations granted without RCT results, and the development of guidelines on what constitutes an acceptable data package for regulators.