A Flexible Joint Longitudinal-Survival Model for Analysis of End-Stage Renal Disease Data
We propose a flexible joint longitudinal-survival framework to examine the
association between longitudinally collected biomarkers and a time-to-event
endpoint. More specifically, we use our method for analyzing the survival
outcome of end-stage renal disease patients with time-varying serum albumin
measurements. Our proposed method is robust to common parametric assumptions in
that it avoids explicit distributional assumptions on longitudinal measures and
allows for subject-specific baseline hazard in the survival component. Fully
joint estimation is performed to account for the uncertainty in the estimated
longitudinal biomarkers included in the survival model.
On the Use of Local Assessments for Monitoring Centrally Reviewed Endpoints with Missing Data in Clinical Trials
Due to ethical and logistical concerns, it is common for data monitoring committees to periodically monitor accruing clinical trial data to assess the safety, and possibly efficacy, of a new experimental treatment. When formalized, monitoring is typically implemented using group sequential methods. In some cases, regulatory agencies have required that primary trial analyses be based solely on the judgment of an independent review committee (IRC). The IRC assessments can produce difficulties for trial monitoring given the time lag typically associated with receiving assessments from the IRC. This results in a missing data problem wherein a surrogate measure of response may provide useful information for interim decisions and future monitoring strategies. In this paper, we present statistical tools that are helpful for monitoring a group sequential clinical trial with missing IRC data. We illustrate the proposed methodology in the case of binary endpoints under various missingness mechanisms, including missing completely at random assessments and when missingness depends on the IRC's measurement.
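The contrast between the two missingness mechanisms mentioned in the abstract can be illustrated with a small simulation (a hypothetical sketch, not the paper's actual method; the response rate and observation probabilities below are arbitrary assumptions): when IRC assessments are delayed completely at random, the complete-case estimate of the response rate is unbiased, but when the delay depends on the IRC measurement itself, it is not.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
irc = rng.binomial(1, 0.40, size=n)   # true IRC-assessed response (rate 0.40)

# MCAR: IRC assessment delayed for a random 50% of subjects.
mcar_obs = rng.random(n) < 0.5
est_mcar = irc[mcar_obs].mean()       # complete-case estimate, ~0.40

# Missingness depends on the IRC measurement itself (responders
# assessed sooner): the complete-case estimate is biased upward.
p_obs = np.where(irc == 1, 0.8, 0.3)
mnar_obs = rng.random(n) < p_obs
est_mnar = irc[mnar_obs].mean()       # ~0.64 rather than 0.40
```

Under the second mechanism the complete-case rate converges to 0.4·0.8 / (0.4·0.8 + 0.6·0.3) = 0.64, which is why interim decisions need methods that account for the missingness mechanism.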
Evaluating a Group Sequential Design in the Setting of Nonproportional Hazards
Group sequential methods have been widely described and implemented in a clinical trial setting where parametric and semiparametric models are deemed suitable. In these situations, the evaluation of the operating characteristics of a group sequential stopping rule remains relatively straightforward. However, in the presence of nonproportional hazards, nonparametric methods are often used for survival data, and the evaluation of stopping rules is no longer a trivial task. Specifically, nonparametric test statistics do not necessarily correspond to a parameter of clinical interest, thus making it difficult to characterize alternatives at which operating characteristics are to be computed. We describe an approach for constructing alternatives under nonproportional hazards using pre-existing pilot data, allowing one to evaluate various operating characteristics of candidate group sequential stopping rules. The method is illustrated via a case study in which testing is based upon a weighted logrank statistic.
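As one concrete instance of such a nonparametric statistic, a Fleming–Harrington-style weighted logrank Z statistic can be sketched as follows (an illustrative implementation, not the code used in the case study; the weight is the pooled Kaplan–Meier survival raised to a power `rho`, with `rho = 0` reducing to the ordinary logrank test):

```python
import numpy as np

def weighted_logrank(time, event, group, rho=1.0):
    """Fleming-Harrington G^rho weighted logrank Z statistic.
    time, event, group: numpy arrays; event 1=event, 0=censored;
    group is a 0/1 arm indicator."""
    order = np.argsort(time)
    time, event, group = time[order], event[order], group[order]
    n = len(time)
    at_risk, at_risk1 = n, int(group.sum())
    S = 1.0          # pooled Kaplan-Meier, left-continuous in t
    num = var = 0.0
    i = 0
    while i < n:
        t = time[i]
        j = i
        d = d1 = removed = removed1 = 0
        while j < n and time[j] == t:   # collect ties at time t
            removed += 1
            removed1 += int(group[j])
            if event[j]:
                d += 1
                d1 += int(group[j])
            j += 1
        if d > 0 and at_risk > 1:
            w = S ** rho                         # weight at t-
            e1 = d * at_risk1 / at_risk          # expected arm-1 events
            v = (d * (at_risk1 / at_risk) * (1 - at_risk1 / at_risk)
                 * (at_risk - d) / (at_risk - 1))  # hypergeometric variance
            num += w * (d1 - e1)
            var += w * w * v
            S *= 1 - d / at_risk
        at_risk -= removed
        at_risk1 -= removed1
        i = j
    return num / np.sqrt(var)
```

Because the weight function changes which portions of follow-up drive the statistic, the implied alternative is not a single hazard ratio, which is exactly the difficulty the abstract describes.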
Choosing the Right Approach at the Right Time: A Comparative Analysis of Causal Effect Estimation using Confounder Adjustment and Instrumental Variables
In observational studies, unobserved confounding is a major barrier in
isolating the average causal effect (ACE). In these scenarios, two main
approaches are often used: confounder adjustment for causality (CAC) and
instrumental variable analysis for causation (IVAC). Nevertheless, both are
subject to untestable assumptions and, therefore, it may be unclear in which
assumption-violation scenarios one method is superior to the other in terms of
mitigating inconsistency for the ACE. Although general guidelines exist, direct
theoretical comparisons of the trade-offs between the CAC and IVAC assumptions
are limited. Using ordinary least squares (OLS) for CAC and two-stage least
squares (2SLS) for IVAC, we analytically compare the relative inconsistency for
the ACE of each approach under a variety of assumption violation scenarios and
discuss rules of thumb for practice. Additionally, a sensitivity framework is
proposed to guide analysts in determining which approach may result in less
inconsistency for estimating the ACE with a given dataset. We demonstrate our
findings both through simulation and an application examining whether maternal
stress during pregnancy affects a neonate's birthweight. The implications of
our findings for causal inference practice are discussed, providing guidance
for analysts for judging whether CAC or IVAC may be more appropriate for a
given situation.
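The trade-off can be seen in a toy simulation (illustrative only; the coefficients below are arbitrary assumptions, not the paper's study design): with an unobserved confounder and a valid instrument, OLS without the confounder is inconsistent for the ACE, while the single-instrument 2SLS estimator, which reduces to the Wald/IV ratio, recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
beta = 1.0                         # true ACE of x on y (assumed)

u = rng.normal(size=n)             # unobserved confounder
z = rng.normal(size=n)             # instrument: affects y only through x
x = 0.8 * z + 1.0 * u + rng.normal(size=n)
y = beta * x + 1.5 * u + rng.normal(size=n)

# OLS slope of y on x; confounder adjustment is impossible (u unobserved),
# so this is inconsistent for beta.
ols = np.cov(x, y)[0, 1] / np.var(x)

# 2SLS with one instrument and one exposure is the IV ratio,
# consistent for beta when z is a valid instrument.
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]
```

Here OLS converges to roughly 1.57 rather than 1.0; if instead the exclusion restriction were violated, the IV ratio would be the inconsistent one, which is the kind of trade-off the analytical comparison formalizes.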
An Alternative Perspective on Consensus Priors with Applications to Phase I Clinical Trials
We occasionally need to make a decision or a series of decisions based on a small sample. In some cases, an investigator is knowledgeable to some degree about a parameter of interest or has access to various sources of prior information. Yet two or more experts will rarely hold an identical prior distribution for the parameter. In this manuscript, we discuss the use of a consensus prior and compare two classes of Bayes estimators. In the first class of Bayes estimators, the contribution of each prior opinion is fixed before observing data. In the second class, the contribution of each prior opinion is determined after observing data. Bayesian designs for Phase I clinical trials allocate trial participants to new experimental doses based on accumulated information, while the typical sample sizes are fairly small. Using simulations, we illustrate the usefulness of a combined estimate in early-phase clinical trials.
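The idea of letting the data determine each expert's contribution can be sketched with a Beta-mixture consensus prior for a binomial proportion (a generic illustration under assumed expert priors, not the manuscript's specific estimators): each component's posterior weight is proportional to its prior weight times its marginal likelihood, so the expert whose opinion better matches the data dominates the posterior.

```python
from math import lgamma, exp

def log_beta(a, b):
    # log of the Beta function B(a, b)
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def mixture_posterior_mean(priors, weights, x, n):
    """Posterior mean of a binomial proportion under a mixture of
    Beta expert priors, for x successes in n trials. Mixture weights
    are updated by each component's marginal likelihood (the binomial
    coefficient cancels across components)."""
    log_m = [log_beta(a + x, b + n - x) - log_beta(a, b)
             for (a, b) in priors]
    mx = max(log_m)                               # stabilize exponentiation
    post_w = [w * exp(lm - mx) for w, lm in zip(weights, log_m)]
    s = sum(post_w)
    post_w = [w / s for w in post_w]
    comp_means = [(a + x) / (a + b + n) for (a, b) in priors]
    return sum(w * m for w, m in zip(post_w, comp_means)), post_w
```

For example, with an optimistic expert Beta(8, 2) and a pessimistic expert Beta(2, 8) given equal prior weight, observing 1 success in 10 trials shifts nearly all posterior weight to the pessimist, and the combined estimate lands near that component's posterior mean.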
A Bayesian Framework for Non-Collapsible Models
In this paper, we discuss the non-collapsibility concept and propose a new
approach based on Dirichlet process mixtures to estimate the conditional effect
of covariates in non-collapsible models. Using synthetic data, we evaluate the
performance of our proposed method and examine its sensitivity under different
settings. We also apply our method to real data on access failure among
hemodialysis patients.
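Non-collapsibility itself is easy to demonstrate numerically (a generic illustration, unrelated to the hemodialysis data or the Dirichlet-process method): in a logistic model, the covariate-conditional odds ratio differs from the marginal odds ratio even when the covariate is independent of treatment, so no confounding is involved.

```python
import math
import numpy as np

def expit(v):
    return 1.0 / (1.0 + np.exp(-v))

# Logistic model: logit P(Y=1 | A, X) = -1 + log(2)*A + 2*X, so the
# conditional odds ratio for treatment A is exactly 2 at every X.
# X ~ N(0,1) independent of A: no confounding, yet marginalizing
# over X attenuates the odds ratio toward 1.
rng = np.random.default_rng(1)
x = rng.normal(size=1_000_000)

p1 = expit(-1 + math.log(2.0) + 2 * x).mean()   # marginal P(Y=1 | A=1)
p0 = expit(-1 + 2 * x).mean()                   # marginal P(Y=1 | A=0)
marginal_or = (p1 / (1 - p1)) / (p0 / (1 - p0))  # < 2, despite no confounding
```

The gap between the conditional value of 2 and the marginal value (about 1.6 here) is what makes "the" effect ambiguous in non-collapsible models.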
Frequentist Evaluation of Group Sequential Clinical Trial Designs
Group sequential stopping rules are often used as guidelines in the monitoring of clinical trials in order to address the ethical and efficiency issues inherent in human testing of a new treatment or preventive agent for disease. Such stopping rules have been proposed based on a variety of different criteria, both scientific (e.g., estimates of treatment effect) and statistical (e.g., frequentist type I error, Bayesian posterior probabilities, stochastic curtailment). It is easily shown, however, that a stopping rule based on one of those criteria induces a stopping rule on all other criteria. Thus the basis used to initially define a stopping rule is relatively unimportant so long as the operating characteristics of the stopping rule are fully investigated. In this paper we describe how the frequentist operating characteristics of a particular stopping rule might be evaluated in order to ensure that the selected clinical trial design satisfies the constraints imposed by the many different disciplines represented by the clinical trial collaborators.
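Frequentist operating characteristics of this kind can be evaluated by simulation under the canonical joint distribution of group sequential test statistics. A minimal sketch (assuming four equally spaced analyses and a constant Pocock-type one-sided boundary, not a design taken from the paper):

```python
import numpy as np

def crossing_probability(theta, bound, n_looks=4, n_sim=100_000, seed=0):
    """Monte Carlo probability that a one-sided group sequential test
    stops for efficacy at any analysis. Z-statistics at equally spaced
    looks follow the canonical joint distribution: Z_k = B(t_k)/sqrt(t_k),
    with B a Brownian motion with drift theta. 'bound' is a constant
    (Pocock-type) critical value applied at every look."""
    rng = np.random.default_rng(seed)
    t = np.arange(1, n_looks + 1) / n_looks
    incr = rng.normal(size=(n_sim, n_looks)) * np.sqrt(1 / n_looks)
    b = np.cumsum(incr, axis=1) + theta * t      # B(t_k) with drift theta
    z = b / np.sqrt(t)
    return (z > bound).any(axis=1).mean()        # P(reject at any look)

# 2.361 is the standard Pocock constant for 4 analyses (one-sided 0.025)
size = crossing_probability(0.0, bound=2.361)    # type I error, ~0.025
power = crossing_probability(3.24, bound=2.361)  # power at an assumed drift
```

Repeating such computations across candidate boundaries and alternatives is what lets collaborators check that a design meets each discipline's constraints (size, power, expected sample size, and so on).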
Bayesian Evaluation of Group Sequential Clinical Trial Designs
Clinical trial designs often incorporate a sequential stopping rule to serve as a guide in the early termination of a study. When choosing a particular stopping rule, it is most common to examine frequentist operating characteristics such as type I error, statistical power, and precision of confidence intervals (Emerson et al. [1]). Increasingly, however, clinical trials are designed and analyzed in the Bayesian paradigm. In this paper we describe how the Bayesian operating characteristics of a particular stopping rule might be evaluated and communicated to the scientific community. In particular, we consider a choice of probability models and a family of prior distributions that allows concise presentation of Bayesian properties for a specified sampling plan.
On the Use of Stochastic Curtailment in Group Sequential Clinical Trials
Many different criteria have been proposed for the selection of a stopping rule for group sequential trials. These include both scientific (e.g., estimates of treatment effect) and statistical (e.g., frequentist type I error, Bayesian posterior probabilities, stochastic curtailment) measures of the evidence for or against beneficial treatment effects. Because a stopping rule based on one of those criteria induces a stopping rule on all other criteria, the utility of any particular scale relates to the ease with which it allows a clinical trialist to search for sequential sampling plans having desirable operating characteristics. In this paper we examine the use of such measures as conditional power and predictive power in the definition of stopping rules, especially as they apply to decisions to terminate a study early for “futility”. We illustrate that stopping criteria based on stochastic curtailment are relatively difficult to interpret on the scientifically relevant scale of estimated treatment effects, as well as with respect to commonly used statistical measures such as unconditional power. We further argue that neither conditional power nor predictive power adheres to the standard optimality criteria within either the frequentist or Bayesian data analysis paradigms. Thus when choosing a stopping rule for “futility”, we recommend the definition of stopping rules based on other criteria and careful evaluation of the frequentist and Bayesian operating characteristics that are of greatest scientific and statistical relevance.
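For reference, conditional power has a closed form on the Brownian-motion scale used in group sequential theory (a standard textbook formula, sketched here rather than taken from the paper): with interim statistic Z_t at information fraction t and an assumed drift theta for the remainder of the trial,

```python
from math import erf, sqrt

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def conditional_power(z_t, t, theta, z_alpha=1.96):
    """Conditional power on the Brownian-motion scale.
    z_t: interim Z at information fraction t; theta: drift such that
    E[B(1)] = theta (the 'current trend' sets theta = z_t*sqrt(t)/t);
    z_alpha: final critical value. Returns P(final Z > z_alpha)."""
    b_t = z_t * sqrt(t)                          # B(t) = Z_t * sqrt(t)
    return 1.0 - norm_cdf(
        (z_alpha - b_t - theta * (1.0 - t)) / sqrt(1.0 - t))
```

For example, a middling interim result z_t = 1.5 at t = 0.5 gives conditional power near 0.59 under the current trend; the difficulty the paper highlights is that such numbers do not translate directly into a clinically interpretable treatment-effect scale.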