A sensitivity analysis for causal parameters in structural proportional hazards models
Deviations from assigned treatment occur often in clinical trials. In such a setting, the traditional intent-to-treat analysis does not measure biological efficacy but rather programmatic effectiveness. For the all-or-nothing compliance situation, Loeys and Goetghebeur (2003) recently proposed a structural proportional hazards method. It allows for causal estimation in the complier subpopulation provided the exclusion restriction holds: randomization per se has no effect unless exposure has changed. This assumption is typically made with structural models for noncompliance but is questioned when the trial is not blinded. In this paper we extend the structural PH model to allow for an effect of randomization per se. This enables analyzing the sensitivity of conclusions to deviations from the exclusion restriction. In a colorectal cancer trial we find the causal estimator of the effect of an arterial device implantation to be remarkably insensitive to such deviations.
Enhanced analysis of real-time PCR data by using a variable efficiency model: FPK-PCR
Current methodology in real-time polymerase chain reaction (PCR) analysis performs well provided the PCR efficiency remains constant over reactions. Yet small changes in efficiency can lead to large quantification errors. The possible presence of inhibitors, particularly in biological samples, poses a challenge.
We present a new approach to single-reaction efficiency calculation, called Full Process Kinetics-PCR (FPK-PCR). It combines a kinetically more realistic model with flexible adaptation to the full range of data. By reconstructing the entire chain of cycle efficiencies, rather than restricting the focus to a ‘window of application’, it extracts additional information and removes a level of arbitrariness.
The maximal efficiency estimates returned by the model are comparable in accuracy and precision to both the gold standard of serial dilution and other single-reaction efficiency methods. The cycle-to-cycle changes in efficiency, as described by the FPK-PCR procedure, stay considerably closer to the data than those from other S-shaped models. The assessment of individual cycle efficiencies returns more information than other single-efficiency methods: it allows in-depth interpretation of real-time PCR data and reconstruction of the fluorescence data, providing quality control. Finally, by implementing a global efficiency model, reproducibility is improved because the selection of a window of application is avoided.
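To make the notion of cycle-by-cycle efficiency concrete, here is a minimal R sketch (simulated data, naive ratio estimator; it illustrates the quantity being modelled, not the FPK-PCR model itself) that recovers per-cycle efficiencies from background-corrected fluorescence readings:

    # Naive per-cycle efficiency from background-corrected qPCR fluorescence:
    # E_c = F_c / F_{c-1} - 1. This is NOT the FPK-PCR estimator, which fits a
    # full kinetic model; it only illustrates the cycle-efficiency concept.
    cycle_efficiency <- function(fluor) {
      fluor[-1] / fluor[-length(fluor)] - 1
    }

    # Simulated example: efficiency decays from ~1 toward 0 as reagents deplete
    cycles   <- 1:40
    eff_true <- 1 / (1 + exp(0.5 * (cycles - 25)))  # logistic efficiency decay
    fluor    <- cumprod(1 + eff_true) * 1e-3        # fluorescence ~ product amount
    round(cycle_efficiency(fluor), 3)               # recovers eff_true[-1]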
Evaluation of tuberculosis diagnostic test accuracy using Bayesian latent class analysis in the presence of conditional dependence between the diagnostic tests used in a community-based tuberculosis screening study
Diagnostic accuracy studies in pulmonary tuberculosis (PTB) are complicated by the lack of a perfect reference standard. This limitation can be handled using latent class analysis (LCA), assuming independence between diagnostic test results conditional on the true unobserved PTB status. Test results could remain dependent, however, e.g. with diagnostic tests that share a similar biological basis; if ignored, this yields misleading inferences. Our secondary analysis of data collected during the first year (May 2018 to May 2019) of a community-based multi-morbidity screening program conducted in the rural uMkhanyakude district of KwaZulu-Natal, South Africa, used Bayesian LCA. Residents of the catchment area, aged ≥15 years and eligible for microbiological testing, were analyzed. Probit regression methods for dependent binary data sequentially regressed each binary test outcome on the other observed test results, measured covariates and the true unobserved PTB status. Unknown model parameters were assigned Gaussian priors to evaluate overall PTB prevalence and the diagnostic accuracy of six tests used to screen for PTB: any TB symptom, radiologist conclusion, Computer Aided Detection for TB version 5 (CAD4TBv5 ≥53), CAD4TBv6 ≥53, Xpert Ultra (excluding trace) and culture. Before applying our proposed model, we evaluated its performance using a previously published childhood pulmonary TB (CPTB) dataset. Standard LCA assuming conditional independence yielded an unrealistic prevalence estimate of 18.6%, which was not resolved by accounting for conditional dependence among the true PTB cases only. Also allowing for conditional dependence among the true non-PTB cases produced a plausible prevalence of 1.1%. After incorporating age, sex and HIV status in the analysis, we obtained an overall prevalence of 0.9% (95% CrI: 0.6, 1.3). Males had a higher PTB prevalence than females (1.2% vs. 0.8%). Similarly, HIV-positive individuals had a higher PTB prevalence than HIV-negative individuals (1.3% vs. 0.8%). The overall sensitivities of Xpert Ultra (excluding trace) and culture were 62.2% (95% CrI: 48.7, 74.4) and 75.9% (95% CrI: 61.9, 89.2), respectively. Any chest X-ray abnormality, CAD4TBv5 ≥53 and CAD4TBv6 ≥53 had similar overall sensitivity. Up to 73.3% (95% CrI: 61.4, 83.4) of all true PTB cases did not report TB symptoms. Our flexible modelling approach yields plausible, easy-to-interpret estimates of sensitivity, specificity and PTB prevalence under more realistic assumptions. Failure to fully account for diagnostic test dependence can yield misleading inferences.
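For readers unfamiliar with the conditional-independence assumption being relaxed here, a minimal R sketch of the standard two-component LCA likelihood for one subject's binary test pattern (parameter values hypothetical, not the fitted model):

    # Standard latent class likelihood for a pattern t of binary test results
    # (1 = positive), assuming tests are independent given true disease status.
    # pi = prevalence; se, sp = per-test sensitivity and specificity.
    lca_lik <- function(t, pi, se, sp) {
      pi       * prod(se^t * (1 - se)^(1 - t)) +   # diseased component
      (1 - pi) * prod((1 - sp)^t * sp^(1 - t))     # non-diseased component
    }

    # Hypothetical example: 6 tests, two positive results
    t_pat <- c(1, 1, 0, 0, 0, 0)
    lca_lik(t_pat, pi = 0.01, se = rep(0.7, 6), sp = rep(0.95, 6))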
On Testing Dependence between Time to Failure and Cause of Failure when Causes of Failure Are Missing
The hypothesis of independence between the failure time and the cause of failure is studied using the conditional probabilities of failure due to a specific cause given that there is no failure up to a certain fixed time. In practice, there are situations where the failure times are available for all units but the causes of failure are missing for some units. We propose tests based on U-statistics for independence of the failure time and the cause of failure in the competing risks model when not all causes of failure can be observed. The asymptotic distribution is normal in each case. Simulation studies compare the power of the proposed tests for two families of distributions. The one-sided and two-sided tests based on a Kendall-type statistic perform exceedingly well in detecting departures from independence.
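As a naive point of comparison only (a complete-case Kendall test, not the proposed U-statistics, which use all failure times including those whose cause is missing), such dependence can be probed in R as follows:

    # Naive complete-case check of dependence between failure time and a binary
    # cause-of-failure indicator; units with missing cause are simply discarded.
    # Simulated data for illustration; the paper's tests avoid this discarding.
    set.seed(1)
    time  <- rexp(200)
    cause <- rbinom(200, 1, plogis(time - 1))  # cause depends on failure time
    cause[sample(200, 40)] <- NA               # causes missing for some units
    cc <- complete.cases(time, cause)
    cor.test(time[cc], cause[cc], method = "kendall")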
Inverse probability of treatment weighting with generalized linear outcome models for doubly robust estimation
There are now many options for doubly robust estimation; however, there is a concerning trend in the applied literature to believe that the combination of a propensity score and an adjusted outcome model automatically results in a doubly robust estimator, and/or to misuse more complex established doubly robust estimators. A simple alternative, canonical-link generalized linear models (GLM) fit via inverse probability of treatment (propensity score) weighted maximum likelihood estimation followed by standardization (the g-formula) for the average causal effect, is a doubly robust estimation method. Our aim is for the reader not just to be able to use this method, which we refer to as IPTW GLM, for doubly robust estimation, but to fully understand why it has the doubly robust property. For this reason, we define clearly, and in multiple ways, all concepts needed to understand the method and why it is doubly robust. In addition, we want to make very clear that the mere combination of propensity score weighting and an adjusted outcome model does not generally result in a doubly robust estimator. Finally, we hope to dispel the misconception that one can adjust for residual confounding remaining after propensity score weighting by adjusting in the outcome model for what remains ‘unbalanced’, even when using doubly robust estimators. We provide R code for our simulations and real open-source data examples that can be followed step by step to use, and hopefully understand, the IPTW GLM method. We also compare it to a much better-known but still simple doubly robust estimator.
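The recipe the abstract describes is short enough to sketch in full; the following R code (simulated data, variable names hypothetical, not the authors' own scripts) fits a propensity model, weights a canonical-link GLM by inverse probability of treatment, and standardizes over the confounder distribution:

    # IPTW GLM sketch: propensity-weighted ML fit of a canonical-link GLM,
    # then standardization (g-formula) to the average causal effect.
    set.seed(42)
    n <- 5000
    x <- rnorm(n)                               # confounder
    a <- rbinom(n, 1, plogis(0.5 * x))          # exposure
    y <- rbinom(n, 1, plogis(-1 + a + 0.8 * x)) # binary outcome
    dat <- data.frame(y, a, x)

    ps <- fitted(glm(a ~ x, family = binomial, data = dat))  # propensity score
    dat$w <- dat$a / ps + (1 - dat$a) / (1 - ps)             # IPT weights

    # quasibinomial keeps the canonical (logit) link while avoiding the
    # non-integer-weights warning; point estimates match binomial
    fit <- glm(y ~ a + x, family = quasibinomial, weights = w, data = dat)

    # g-formula: average predictions with everyone set to a = 1 vs a = 0
    mu1 <- mean(predict(fit, transform(dat, a = 1), type = "response"))
    mu0 <- mean(predict(fit, transform(dat, a = 0), type = "response"))
    mu1 - mu0                                   # average causal risk difference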
Impact of allelic dropout on evidential value of forensic DNA profiles using RMNE
Motivation: Two methods are commonly used to report on evidence carried by forensic DNA profiles: the ‘Random Man Not Excluded’ (RMNE) approach and the likelihood ratio (LR) approach. It is often claimed as a major advantage of the LR method that dropout can be assessed probabilistically.
Propensity score weighting plus an adjusted proportional hazards model does not equal doubly robust away from the null
Recently, it has become common for applied work to combine commonly used survival analysis modeling methods, such as the multivariable Cox model, with propensity score weighting, with the intention of forming a doubly robust estimator that is unbiased in large samples when either the Cox model or the propensity score model is correctly specified. This combination does not, in general, produce a doubly robust estimator, even after regression standardization, when there is truly a causal effect. We demonstrate via simulation this lack of double robustness for the semiparametric Cox model, the Weibull proportional hazards model, and a simple proportional hazards flexible parametric model, with the latter two models fit via maximum likelihood. We provide a novel proof that the combination of propensity score weighting and a proportional hazards survival model, fit either via full or partial likelihood, is consistent under the null of no causal effect of the exposure on the outcome under particular censoring mechanisms if either the propensity score or the outcome model is correctly specified and contains all confounders. Given our results suggesting that double robustness only exists under the null, we outline two simple alternative estimators that are doubly robust for the survival difference at a given time point (in the above sense), provided the censoring mechanism can be correctly modeled, and one doubly robust method of estimation for the full survival curve. We provide R code to use these estimators for estimation and inference in the supplementary materials.
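To fix ideas, here is a minimal R sketch (simulated data; names hypothetical, not the paper's code) of the combination the abstract analyzes, an IPT-weighted Cox fit followed by regression standardization of the survival curve; per the abstract's result, this is consistent under the null but not doubly robust away from it:

    # The combination under study: IPT-weighted Cox model, then standardized
    # survival difference at t = 1. NOT doubly robust away from the null.
    library(survival)
    set.seed(7)
    n <- 500
    x <- rnorm(n)                                  # confounder
    a <- rbinom(n, 1, plogis(x))                   # exposure
    t_event <- rexp(n, rate = exp(-0.5 * a + 0.7 * x))
    t_cens  <- rexp(n, rate = 0.2)                 # independent censoring
    dat <- data.frame(time = pmin(t_event, t_cens),
                      status = as.numeric(t_event <= t_cens), a, x)

    ps <- fitted(glm(a ~ x, family = binomial, data = dat))
    dat$w <- dat$a / ps + (1 - dat$a) / (1 - ps)   # IPT weights

    fit <- coxph(Surv(time, status) ~ a + x, data = dat, weights = w)
    # Standardize: average model-based curves with a set to 1 and to 0
    s1 <- summary(survfit(fit, newdata = transform(dat, a = 1)), times = 1)$surv
    s0 <- summary(survfit(fit, newdata = transform(dat, a = 0)), times = 1)$surv
    mean(s1) - mean(s0)                            # survival difference at t = 1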
PROPEL: implementation of an evidence-based pelvic floor muscle training intervention for women with pelvic organ prolapse: a realist evaluation and outcomes study protocol
Background
Pelvic Organ Prolapse (POP) is estimated to affect 41%–50% of women aged over 40. Findings from the multi-centre randomised controlled “Pelvic Organ Prolapse PhysiotherapY” (POPPY) trial showed that individualised pelvic floor muscle training (PFMT) was effective in reducing symptoms of prolapse, improved quality of life and showed clear potential to be cost-effective. However, provision of PFMT for prolapse continues to vary across the UK, with limited numbers of women’s health physiotherapists specialising in its delivery. Implementation of this robust evidence from the POPPY trial will require attention to different models of delivery (e.g. staff skill mix) to fit with differing care environments.
Methods
A Realist Evaluation (RE) of the implementation and outcomes of PFMT delivery in contrasting NHS settings will be conducted using multiple case study sites. Substantial local stakeholder engagement will permit a detailed exploration of how local sites make decisions on how to deliver PFMT and how these decisions lead to service change. The RE will track how implementation is working; identify what influences outcomes; and, guided by the RE-AIM framework, collect robust outcomes data. This will require mixed-methods data collection and analysis. Qualitative data will be collected at four time points across each site to understand local contexts and decisions regarding options for intervention delivery, and to monitor implementation, uptake, adherence and outcomes. Patient outcome data will be collected at baseline, six months and one-year follow-up for 120 women. The primary outcome will be the Pelvic Organ Prolapse Symptom Score (POP-SS). An economic evaluation will assess the costs and benefits associated with different delivery models, taking account of further health care resource use by the women. Cost data will be combined with the primary outcome in a cost-effectiveness analysis, and with the EQ-5D-5L data in a cost-utility analysis, for each of the different models of delivery.
Discussion
Studying the implementation of varying models of PFMT service delivery across contrasting sites, combined with outcomes data and a cost-effectiveness analysis, will provide insight into the implementation and value of different models of PFMT service delivery and the cost benefits to the NHS in the longer term.
Exploring the perspectives and preferences for HTA across German healthcare stakeholders using a multi-criteria assessment of a pulmonary heart sensor as a case study
Background
Health technology assessment and healthcare decision-making are based on multiple criteria, heterogeneous evidence, and the differing opinions of participating stakeholders. Multi-criteria decision analysis (MCDA) offers a potential framework to systematize this process and take different perspectives into account. The objectives of this study were to explore perspectives and preferences across German stakeholders when appraising healthcare interventions, using a multi-criteria assessment of a pulmonary heart sensor as a case study.
Methods
An online survey of 100 German healthcare stakeholders was conducted using a comprehensive MCDA framework (EVIDEM V2.2). Participants were asked to provide (i) relative weights for each criterion of the framework; (ii) performance scores for the pulmonary heart sensor, based on available data synthesized for each criterion; and (iii) qualitative feedback on the consideration of contextual criteria. Normalized weights and scores were combined using a linear model to calculate a value estimate across different stakeholders. Differences across types of stakeholders were explored.
Results
The survey was completed by 54 participants. The most important criteria were efficacy, patient-reported outcomes, disease severity, safety, and quality of evidence (relative weight >0.075 each). Compared to all participants, policymakers gave more weight to budget impact and quality of evidence. The quantitative appraisal of the pulmonary heart sensor revealed differences between stakeholder groups in how the intervention's performance was scored at the criterion level. The highest value estimate of the sensor, 0.68 (on a scale of 0 to 1, with 1 representing maximum value), was reached for industry representatives, and the lowest, 0.40, for policymakers, compared with 0.48 for all participants. Participants indicated that most qualitative criteria should be considered, and their impact on the quantitative appraisal was captured transparently.
Conclusions
The study identified important variations in perspectives across German stakeholders when appraising a healthcare intervention and revealed that MCDA can demonstrate the value of a specified technology for all participating stakeholders. A better understanding of these differences at the criterion level, in particular between policymakers and industry representatives, is important in order to align innovation with patient health and with healthcare system values and constraints.
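The linear value model described in the Methods above is simple to state; a minimal R sketch with hypothetical weights and scores (not the EVIDEM values):

    # Linear MCDA value estimate: normalized criterion weights combined with
    # performance scores on a 0-to-1 scale. All numbers are hypothetical.
    weights <- c(efficacy = 0.12, safety = 0.10, pro = 0.11,
                 severity = 0.10, evidence = 0.09)
    scores  <- c(efficacy = 0.7, safety = 0.6, pro = 0.5,
                 severity = 0.8, evidence = 0.4)
    w_norm <- weights / sum(weights)   # weights normalized to sum to 1
    sum(w_norm * scores)               # value estimate on a 0-to-1 scale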