
    Dynamic regime marginal structural mean models for estimation of optimal dynamic treatment regimes, Part I: main content

    Dynamic treatment regimes are prespecified rules for sequential decision making based on a patient's covariate history. Observational studies are well suited to investigating the effects of dynamic treatment regimes because of the variability in treatment decisions found in them: different physicians make different decisions when faced with similar patient histories. In this article we describe an approach to estimating the optimal dynamic treatment regime among a set of enforceable regimes. This set comprises regimes defined by simple rules based on a subset of past information, and the regimes in the set are indexed by a Euclidean vector. The optimal regime is the one that maximizes the expected counterfactual utility over all regimes in the set. We discuss assumptions under which it is possible to identify the optimal regime from observational longitudinal data. Murphy et al. (2001) developed efficient augmented inverse probability weighted estimators of the expected utility of one fixed regime. Our methods are based on an extension of the marginal structural mean model of Robins (1998, 1999) which incorporates the estimation ideas of Murphy et al. (2001). Our models, which we call dynamic regime marginal structural mean models, are especially suitable for estimating the optimal treatment regime in a moderately small class of enforceable regimes of interest. We consider both parametric and semiparametric dynamic regime marginal structural mean models, and we discuss locally efficient, doubly robust estimation of the model parameters and of the index of the optimal treatment regime in the set. In a companion paper in this issue of the journal we provide proofs of the main results.
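    The core estimation idea can be illustrated with a minimal inverse-probability-weighted sketch. Below, regimes are indexed by a single scalar threshold theta ("treat while the biomarker is below theta"); the paper's Euclidean index, augmentation, and doubly robust refinements are omitted, and every name in the code is an illustrative assumption rather than notation from the article.

```python
import numpy as np

def ipw_utility(data, theta):
    """Hajek-type IPW estimate of the expected utility E[Y(g_theta)].

    data is assumed to hold (n, T) arrays 'treated', 'biomarker' and
    'pscore' (fitted treatment probabilities), plus an (n,) 'utility'.
    """
    # indicator that the observed treatment history is compatible with
    # regime g_theta at every decision point
    consistent = np.all(data["treated"] == (data["biomarker"] < theta), axis=1)
    # product over time of the probability of the treatment actually received
    prob_obs = np.prod(
        np.where(data["treated"], data["pscore"], 1 - data["pscore"]), axis=1
    )
    weights = consistent / prob_obs
    return np.sum(weights * data["utility"]) / np.sum(weights)

def optimal_regime(data, theta_grid):
    # the estimated optimal regime maximizes expected utility over the class
    utilities = [ipw_utility(data, t) for t in theta_grid]
    return theta_grid[int(np.argmax(utilities))]
```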

    Tactile Interactions with a Humanoid Robot: Novel Play Scenario Implementations with Children with Autism

    The work presented in this paper was part of our investigation in the ROBOSKIN project. The project has developed new robot capabilities based on the tactile feedback provided by novel robotic skin, with the aim of providing cognitive mechanisms that improve human-robot interaction capabilities. This article presents two novel tactile play scenarios developed for robot-assisted play for children with autism. The play scenarios were developed against specific educational and therapeutic objectives that were discussed with teachers and therapists. These objectives were classified with reference to the ICF-CY, the International Classification of Functioning – version for Children and Youth. The article presents a detailed description of the play scenarios, together with case study examples of their implementation in HRI studies with children with autism and the humanoid robot KASPAR.

    Causal inference for long-term survival in randomised trials with treatment switching: Should re-censoring be applied when estimating counterfactual survival times?

    Treatment switching often has a crucial impact on estimates of the effectiveness and cost-effectiveness of new oncology treatments. Rank preserving structural failure time models (RPSFTM) and two-stage estimation (TSE) methods estimate ‘counterfactual’ survival times (i.e. the survival times that would have been observed had there been no switching) and incorporate re-censoring to guard against informative censoring in the counterfactual dataset. However, re-censoring causes a loss of longer-term survival information, which is problematic when estimates of long-term survival effects are required, as is often the case for health technology assessment decision making. We present a simulation study designed to investigate applications of the RPSFTM and TSE with and without re-censoring, to determine whether re-censoring should always be recommended within adjustment analyses. We investigate a context where switching is from the control group onto the experimental treatment, in scenarios with varying switch proportions, treatment effect sizes and time-dependencies, disease severity, and switcher prognosis. Methods were assessed according to their estimation of control group restricted mean survival (that which would be observed in the absence of switching) at the end of the simulated trial follow-up. We found that RPSFTM and TSE analyses which incorporated re-censoring usually produced negative bias (i.e. under-estimating control group restricted mean survival and therefore over-estimating the treatment effect). RPSFTM and TSE analyses that did not incorporate re-censoring consistently produced positive bias (i.e. under-estimating the treatment effect), which was often smaller in magnitude than the bias associated with the re-censored analyses. We believe that analyses should be conducted with and without re-censoring, as this may provide decision makers with useful information on where the true treatment effect is likely to lie. Analyses that incorporate re-censoring should not always represent the default approach when the objective is to estimate long-term survival times and treatment effects on long-term survival.
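    The RPSFTM counterfactual construction and the re-censoring step it motivates can be summarized in a few lines. The sketch below assumes the standard one-parameter model in which time on treatment is rescaled by an acceleration factor exp(psi); the variable names and the restriction to a known administrative censoring time are our simplifying assumptions, not details from the paper.

```python
import numpy as np

def counterfactual_times(t_off, t_on, c, psi, recensor=True):
    """Counterfactual survival times U(psi) = t_off + exp(psi) * t_on.

    t_off : array of time spent off the experimental treatment
    t_on  : array of time spent on the experimental treatment
    c     : array of administrative censoring times
    psi   : log acceleration factor (psi < 0 if treatment extends life)
    """
    u = t_off + np.exp(psi) * t_on
    event = np.ones_like(u, dtype=bool)
    if recensor:
        # re-censor at the earliest censoring time achievable under any
        # treatment pattern, so counterfactual censoring is noninformative;
        # this is precisely the step that discards long-term information
        d_star = np.minimum(c, np.exp(psi) * c)
        event = u <= d_star
        u = np.minimum(u, d_star)
    return u, event
```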

    Assessing methods for dealing with treatment switching in clinical trials: A follow-up simulation study

    When patients randomised to the control group of a randomised controlled trial are allowed to switch onto the experimental treatment, intention-to-treat analyses of the treatment effect are confounded because the separation of the randomised groups is lost. Previous research has investigated statistical methods that aim to estimate the treatment effect that would have been observed had this treatment switching not occurred, and has demonstrated their performance in a limited set of scenarios. Here, we investigate these methods in a new range of realistic scenarios, allowing conclusions to be drawn from a broader evidence base. We simulated randomised controlled trials incorporating prognosis-related treatment switching and investigated the impact of sample size, reduced switching proportions, disease severity, and alternative data-generating models on the performance of adjustment methods, assessed through a comparison of bias, mean squared error, and coverage, related to the estimation of true restricted mean survival in the absence of switching in the control group. Rank preserving structural failure time models, inverse probability of censoring weights, and two-stage methods consistently produced less bias than the intention-to-treat analysis. The switching proportion was confirmed to be a key determinant of bias; sample size and censoring proportion were relatively less important. It is critical to determine the size of the treatment effect in terms of an acceleration factor (rather than a hazard ratio) to provide information on the likely bias associated with rank preserving structural failure time model adjustments. In general, inverse probability of censoring weight methods are more volatile than other adjustment methods.
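    Of the adjustment methods compared, inverse probability of censoring weighting is the simplest to sketch: control patients are artificially censored at the moment they switch, and the remaining person-time is up-weighted by the inverse of the estimated probability of having remained unswitched. The sketch below is a bare-bones version under assumed column names; in practice one would use stabilized weights and a richer covariate history.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

def ipcw_weights(long_df):
    """long_df is assumed to have one row per patient-interval, with an
    'id' column, a 'switched' 0/1 indicator at the end of the interval,
    and time-varying prognostic covariates 'x1', 'x2'. Rows after a
    switch are assumed already dropped (artificial censoring)."""
    model = LogisticRegression().fit(long_df[["x1", "x2"]], long_df["switched"])
    p_stay = 1.0 - model.predict_proba(long_df[["x1", "x2"]])[:, 1]
    # cumulative probability of remaining unswitched within each patient
    cum_stay = long_df.assign(p_stay=p_stay).groupby("id")["p_stay"].cumprod()
    return 1.0 / cum_stay  # weights for a weighted Cox or RMST analysis
```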

    The Risk of Virologic Failure Decreases with Duration of HIV Suppression, at Greater than 50% Adherence to Antiretroviral Therapy

    Background: We hypothesized that the percent adherence to antiretroviral therapy necessary to maintain HIV suppression would decrease with longer duration of viral suppression. Methodology: Eligible participants were identified from the REACH cohort of marginally housed, HIV-infected adults in San Francisco. Adherence to antiretroviral therapy was measured through pill counts obtained at unannounced visits by research staff to each participant's usual place of residence. Marginal structural models and targeted maximum likelihood estimation were used to determine the effect of adherence to antiretroviral therapy on the probability of virologic failure during early and late viral suppression. Principal Findings: A total of 221 subjects were studied (median age 44.1 years; median CD4+ T cell nadir 206 cells/mm³). Most subjects were taking one of the following types of antiretroviral regimens: non-nucleoside reverse transcriptase inhibitor based (37%), ritonavir-boosted protease inhibitor based (28%), or unboosted protease inhibitor based (25%). Comparing the probability of failure just after achieving suppression vs. after 12 consecutive months of suppression, there was a statistically significant decrease in the probability of virologic failure for each range of adherence proportions we considered, as long as adherence was greater than 50%. The estimated risk difference, comparing the probability of virologic failure after 1 month vs. after 12 months of continuous viral suppression, was 0.47 (95% CI 0.23–0.63) at 50–74% adherence, 0.29 (CI 0.03–0.50) at 75–89% adherence, and 0.36 (CI 0.23–0.48) at 90–100% adherence. Conclusions: The risk of virologic failure at adherence greater than 50% declines with longer duration of continuous suppression. While high adherence is required to maximize the probability of durable viral suppression, the range of adherence capable of sustaining viral suppression is wider after prolonged periods of viral suppression.
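    The headline contrast is a counterfactual risk difference within each adherence band. A plain weighted version is sketched below; the paper itself uses marginal structural models and targeted maximum likelihood estimation, which we do not reproduce, and all column names are assumptions.

```python
import numpy as np

def risk_difference(df):
    """df is assumed to hold 'failure' (0/1), 'months_suppressed', and
    'w' (inverse-probability-of-adherence-history weights) for subjects
    in one adherence band, e.g. 50-74%."""
    def weighted_risk(sub):
        return np.average(sub["failure"], weights=sub["w"])
    early = df[df["months_suppressed"] == 1]   # just after suppression
    late = df[df["months_suppressed"] == 12]   # after a year suppressed
    return weighted_risk(early) - weighted_risk(late)
```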

    Comparative quantification of health risks: Conceptual framework and methodological issues

    Reliable and comparable analysis of risks to health is key to preventing disease and injury. Causal attribution of morbidity and mortality to risk factors has traditionally been conducted within the methodological traditions of individual risk factors, often in a limited number of settings, restricting comparability. In this paper, we discuss the conceptual and methodological issues in quantifying the population health effects of individual or groups of risk factors at various levels of causality, using knowledge from different scientific disciplines. The issues include: comparing the burden of disease due to the observed exposure distribution in a population with the burden from a hypothetical distribution or series of distributions, rather than a single reference level such as non-exposed; considering the multiple stages in the causal network of interactions among risk factor(s) and disease outcome, to allow inferences about combinations of risk factors for which epidemiological studies have not been conducted, including the joint effects of multiple risk factors; calculating the health loss due to risk factor(s) as a time-indexed "stream" of disease burden due to a time-indexed "stream" of exposure, including consideration of discounting; and the sources of uncertainty.
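    The first of these issues, comparing an observed with a counterfactual exposure distribution, is commonly summarized by the potential impact fraction. A standard continuous-exposure form (our notation, not reproduced from the paper) is

```latex
\mathrm{PIF} \;=\; \frac{\int RR(x)\,P(x)\,dx \;-\; \int RR(x)\,P'(x)\,dx}{\int RR(x)\,P(x)\,dx}
```

    where $RR(x)$ is the relative risk at exposure level $x$, $P(x)$ is the observed exposure distribution, and $P'(x)$ is the counterfactual exposure distribution; setting $P'$ to a point mass at the non-exposed level recovers the usual population attributable fraction.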

    Informative noncompliance in endpoint trials

    Noncompliance with study medications is an important issue in the design of endpoint clinical trials. Including noncompliant patient data in an intention-to-treat analysis can seriously decrease study power. Standard methods for calculating sample size account for noncompliance, but all assume that noncompliance is noninformative, i.e., that the risk of discontinuation is independent of the risk of experiencing a study endpoint. Using data from several published clinical trials (OPTIMAAL, LIFE, RENAAL, SOLVD-Prevention and SOLVD-Treatment), we demonstrate that this assumption is often untrue, and we discuss the effect of informative noncompliance on power and sample size.
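    Standard sample-size methods of the kind referred to typically dilute the assumed treatment effect and inflate the sample size by a factor of 1/(1 - dropout - drop-in)^2, which is only valid under noninformative noncompliance. A textbook-style sketch (the two-proportion approximation and the default values are our assumptions, not taken from the paper):

```python
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.9, dropout=0.0, dropin=0.0):
    """Approximate per-group sample size for comparing two event
    proportions, with the classical noncompliance inflation factor."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    pbar = (p1 + p2) / 2
    n = (z_a + z_b) ** 2 * 2 * pbar * (1 - pbar) / (p1 - p2) ** 2
    # inflate for dropout/drop-in; assumes discontinuation is
    # *noninformative* -- the assumption the paper calls into question
    return n / (1 - dropout - dropin) ** 2
```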

    Principled Selection of Baseline Covariates to Account for Censoring in Randomized Trials with a Survival Endpoint

    The analysis of randomized trials with time-to-event endpoints is nearly always plagued by censoring. As the censoring mechanism is usually unknown, analyses typically invoke the assumption of non-informative censoring. While this assumption usually becomes more plausible as more baseline covariates are adjusted for, such adjustment also raises concerns. Pre-specifying which covariates will be adjusted for (and how) is difficult, prompting the use of data-driven variable selection procedures, which may impede valid inference. Covariate adjustment moreover adds concerns about model misspecification, and each change in the adjustment set also changes the censoring assumption and the treatment effect estimand. In this paper, we discuss these concerns and propose a simple variable selection strategy that aims to produce a valid test of the null in large samples. The proposal can be implemented using off-the-shelf software for (penalized) Cox regression, and is empirically found to work well in simulation studies and real data analyses.
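    As a concrete illustration of the "off-the-shelf (penalized) Cox regression" ingredient, the snippet below fits a lasso-penalized Cox model to the censoring times and keeps covariates with non-negligible coefficients. The package choice (lifelines) and all column names are our assumptions; the paper's actual selection strategy and tuning are not reproduced here.

```python
import pandas as pd
from lifelines import CoxPHFitter

def censoring_covariates(df, duration_col="time", event_col="event"):
    """Select baseline covariates predictive of censoring.

    df is assumed to contain the follow-up time, an event indicator
    (1 = endpoint, 0 = censored) and baseline covariates only.
    """
    # flip the indicator so that *censoring* is the modelled event
    cens = df.assign(censored=1 - df[event_col]).drop(columns=[event_col])
    cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)  # pure L1 (lasso) penalty
    cph.fit(cens, duration_col=duration_col, event_col="censored")
    # lifelines' penalty is smooth, so coefficients are shrunk towards zero
    # rather than set exactly to it; a small threshold stands in for selection
    return cph.params_[cph.params_.abs() > 1e-3].index.tolist()
```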

    Causal inference based on counterfactuals

    BACKGROUND: The counterfactual, or potential outcome, model has become increasingly standard for causal inference in epidemiological and medical studies. DISCUSSION: This paper provides an overview of the counterfactual and related approaches. A variety of conceptual as well as practical issues in estimating causal effects are reviewed. These include causal interactions, imperfect experiments, adjustment for confounding, time-varying exposures, competing risks and the probability of causation. It is argued that the counterfactual model of causal effects captures the main aspects of causality in the health sciences and relates to many statistical procedures. SUMMARY: Counterfactuals are the basis of causal inference in medicine and epidemiology. Nevertheless, the estimation of counterfactual differences poses several difficulties, primarily in observational studies. These problems, however, reflect fundamental barriers only when learning from observations, and they do not invalidate the counterfactual concept.
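    In potential outcome notation, the average causal effect contrasts the outcome each subject would have under exposure with the outcome under non-exposure, E[Y(1)] - E[Y(0)]. The toy calculation below illustrates this via standardization (the g-formula) over one binary confounder; all numbers are invented for the example.

```python
# P(L) and P(Y=1 | A, L) for a hypothetical study with binary
# exposure A and binary confounder L
p_l = {0: 0.6, 1: 0.4}
p_y = {(1, 0): 0.30, (1, 1): 0.60,   # exposed
       (0, 0): 0.20, (0, 1): 0.50}   # unexposed

# standardize over the confounder distribution (g-formula)
ey1 = sum(p_y[(1, l)] * p_l[l] for l in (0, 1))  # E[Y(1)] = 0.42
ey0 = sum(p_y[(0, l)] * p_l[l] for l in (0, 1))  # E[Y(0)] = 0.32
print(f"average causal effect: {ey1 - ey0:.2f}")  # 0.10
```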