Partially-Latent Class Models (pLCM) for Case-Control Studies of Childhood Pneumonia Etiology
In population studies on the etiology of disease, one goal is the estimation
of the fraction of cases attributable to each of several causes. For example,
pneumonia is a clinical diagnosis of lung infection that may be caused by
viral, bacterial, fungal, or other pathogens. The study of pneumonia etiology
is challenging because directly sampling from the lung to identify the
etiologic pathogen is not standard clinical practice in most settings. Instead,
measurements from multiple peripheral specimens are made. This paper introduces
the statistical methodology designed for estimating the population etiology
distribution and the individual etiology probabilities in the Pneumonia
Etiology Research for Child Health (PERCH) study of 9,500 children at 7 sites
around the world. We formulate the scientific problem in statistical terms as
estimating the mixing weights and latent class indicators under a
partially-latent class model (pLCM) that combines heterogeneous measurements
with different error rates obtained from a case-control study. We introduce the
pLCM as an extension of the latent class model. We also introduce graphical
displays of the population data and inferred latent-class frequencies. The
methods are tested with simulated data, and then applied to PERCH data. The
paper closes with a brief description of extensions of the pLCM to the
regression setting and to the case where conditional independence among the
measures is relaxed.

Comment: 25 pages, 4 figures, 1 supplementary material
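The base model the pLCM extends can be made concrete. In a standard latent class model, each case's unobserved cause is a latent indicator drawn with the mixing weights (the etiology fractions), and the binary measurements are error-prone signals of that class. Below is a minimal EM fit of such a model with entirely invented class sizes and error rates; this is the plain LCM, not the case-control pLCM of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented setup: 2 latent causes, 3 error-prone binary measurements.
# theta[c, j] = P(measurement j positive | cause c); pi = mixing weights.
true_pi = np.array([0.7, 0.3])
true_theta = np.array([[0.9, 0.8, 0.2],
                       [0.1, 0.3, 0.85]])
n = 2000
z = rng.choice(2, size=n, p=true_pi)      # latent class indicators
y = rng.binomial(1, true_theta[z])        # observed measurements

# EM for the standard latent class model.
pi = np.array([0.5, 0.5])
theta = np.array([[0.6, 0.6, 0.4],
                  [0.4, 0.4, 0.6]])
for _ in range(200):
    # E-step: posterior probability of each cause for each case.
    logp = np.log(pi) + (y[:, None, :] * np.log(theta)
                         + (1 - y[:, None, :]) * np.log(1 - theta)).sum(-1)
    w = np.exp(logp - logp.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    # M-step: update mixing weights and per-class positivity rates.
    pi = w.mean(axis=0)
    theta = (w.T @ y) / w.sum(axis=0)[:, None]

print(np.round(pi, 2))  # estimated etiology fractions
```

In the paper's setting, control data additionally anchor the measurement error rates; that case-control extension is not shown in this sketch.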
Willingness to Pay for Biodiesel in Diesel Engines: A Stochastic Double Bounded Contingent Valuation Survey
The double bounded dichotomous choice format has been proven to improve efficiency in contingent valuation models. However, this format has been criticized for a lack of behavioral and statistical consistency between the first and second responses. In this study, a split-sampling methodology was used to determine whether allowing respondents to express uncertainty in the follow-up question would alleviate such inconsistencies. Results indicate that allowing respondents to express uncertainty in the follow-up question was effective at reducing both types of inconsistencies while the efficiency gain is maintained.

Keywords: biodiesel, diesel, environmental benefits, contingent valuation, willingness to pay, double bounded model, statistical and behavioral inconsistencies; Demand and Price Analysis; Resource/Energy Economics and Policy. JEL codes: I18, L91, Q42, Q51, Q53
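For readers unfamiliar with the double-bounded format: the follow-up bid brackets each respondent's willingness to pay (WTP) into an interval, and a parametric WTP distribution is then fit by interval-censored maximum likelihood. A minimal sketch with invented bids and a normal WTP distribution (the stochastic, uncertainty-augmented follow-up studied in the paper is not modeled here):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)

# Invented data: true WTP ~ Normal(5, 2); first bids drawn from a small set.
mu_true, sd_true = 5.0, 2.0
n = 1500
wtp = rng.normal(mu_true, sd_true, n)
bid1 = rng.choice([3.0, 5.0, 7.0], n)
yes1 = wtp >= bid1
bid2 = np.where(yes1, bid1 * 1.5, bid1 * 0.5)   # higher bid after yes, lower after no
yes2 = wtp >= bid2

# Each response pair brackets WTP into an interval [lo, hi):
#   yes-yes: [bid2, inf)   yes-no: [bid1, bid2)
#   no-yes:  [bid2, bid1)  no-no:  (-inf, bid2)
lo = np.where(yes1, np.where(yes2, bid2, bid1), np.where(yes2, bid2, -np.inf))
hi = np.where(yes1, np.where(yes2, np.inf, bid2), np.where(yes2, bid1, bid2))

def negll(params):
    mu, log_sd = params
    sd = np.exp(log_sd)
    p = norm.cdf(hi, mu, sd) - norm.cdf(lo, mu, sd)
    return -np.log(np.clip(p, 1e-300, None)).sum()

fit = minimize(negll, x0=[4.0, 0.0], method="Nelder-Mead")
mu_hat, sd_hat = fit.x[0], np.exp(fit.x[1])
print(round(mu_hat, 2), round(sd_hat, 2))  # should be near 5 and 2
```

The efficiency gain of the double-bounded format comes from these intervals being tighter than the single yes/no split a one-shot question provides.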
A new approach to hierarchical data analysis: Targeted maximum likelihood estimation for the causal effect of a cluster-level exposure
We often seek to estimate the impact of an exposure naturally occurring or
randomly assigned at the cluster-level. For example, the literature on
neighborhood determinants of health continues to grow. Likewise, community
randomized trials are applied to learn about real-world implementation,
sustainability, and population effects of interventions with proven
individual-level efficacy. In these settings, individual-level outcomes are
correlated due to shared cluster-level factors, including the exposure, as well
as social or biological interactions between individuals. To flexibly and
efficiently estimate the effect of a cluster-level exposure, we present two
targeted maximum likelihood estimators (TMLEs). The first TMLE is developed
under a non-parametric causal model, which allows for arbitrary interactions
between individuals within a cluster. These interactions include direct
transmission of the outcome (i.e. contagion) and influence of one individual's
covariates on another's outcome (i.e. covariate interference). The second TMLE
is developed under a causal sub-model assuming the cluster-level and
individual-specific covariates are sufficient to control for confounding.
Simulations compare the alternative estimators and illustrate the potential
gains from pairing individual-level risk factors and outcomes during
estimation, while avoiding unwarranted assumptions. Our results suggest that
estimation under the sub-model can result in bias and misleading inference in
an observational setting. Incorporating working assumptions during estimation
is more robust than assuming they hold in the underlying causal model. We
illustrate our approach with an application to HIV prevention and treatment.
On optimal designs for clinical trials: An updated review
Optimization of clinical trial designs can help investigators achieve higher quality results for the given resource constraints. The present paper gives an overview of optimal designs for various important problems that arise in different stages of clinical drug development, including phase I dose–toxicity studies; phase I/II studies that consider early efficacy and toxicity outcomes simultaneously; phase II dose–response studies driven by multiple comparisons (MCP), modeling techniques (Mod), or their combination (MCP–Mod); phase III randomized controlled multi-arm multi-objective clinical trials to test differences among several treatment groups; and population pharmacokinetics–pharmacodynamics experiments. We find that modern literature is very rich with optimal design methodologies that can be utilized by clinical researchers to improve the efficiency of drug development.
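As one concrete instance of such results: for a two-parameter logistic dose-toxicity model, the locally D-optimal design is known to place equal weight on the two doses at which the toxicity probability is roughly 0.176 and 0.824. A brute-force sketch with an invented parameter guess recovers this numerically:

```python
import numpy as np

# Two-parameter logistic dose-toxicity model p(x) = expit(a + b*x),
# evaluated at an invented local guess a = -2, b = 1. A real phase I
# design would base this guess on prior data.
a, b = -2.0, 1.0

def expit(t):
    return 1.0 / (1.0 + np.exp(-t))

# For an equally weighted two-point design {x1, x2}, the determinant of
# the Fisher information is 0.25 * w(x1) * w(x2) * (x2 - x1)^2, where
# w(x) = p(x) * (1 - p(x)) is the binomial information weight.
def det_info(x1, x2):
    w1 = expit(a + b * x1) * (1 - expit(a + b * x1))
    w2 = expit(a + b * x2) * (1 - expit(a + b * x2))
    return 0.25 * w1 * w2 * (x2 - x1) ** 2

# Grid search over all two-point designs for the D-optimal pair.
grid = np.linspace(-4.0, 8.0, 241)
best = max(((x1, x2) for x1 in grid for x2 in grid if x1 < x2),
           key=lambda x: det_info(*x))
print(best, [round(expit(a + b * xi), 3) for xi in best])
```

The grid optimum lands where the fitted toxicity probabilities are close to 0.176 and 0.824, matching the classical analytical result for this model.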
Statistical Methods to Improve Efficiency in Composite Endpoint Analysis
Composite endpoints combine a number of outcomes to assess the efficacy of a treatment.
They are used in situations where it is difficult to identify a single relevant endpoint,
such as in complex multisystem diseases. Our focus in this thesis is on composite
responder endpoints, which allocate patients as either "responders" or "non-responders"
based on whether they cross predefined thresholds in the individual outcomes. These
composites are often combinations of continuous and discrete measures and are typically
collapsed into a single binary endpoint and analysed using logistic regression. However,
this is at the expense of losing information on how close each patient was to the responder
threshold. As well as being inefficient, the analysis is sensitive to misclassification
due to measurement error. The augmented binary method was introduced to improve
the analysis of composite responder endpoints comprised of a single continuous and
binary endpoint, by making use of the continuous information.
In this thesis we build on this work to address some of the existing limitations. We
implement small sample corrections for the standard binary and augmented binary
methods and assess the performance for application in rare disease trials, where the
gains are most needed. We find that employing the small sample corrected augmented
binary method results in a reduction of required sample size of 32%. Motivated by
systemic lupus erythematosus (SLE), we consider the case where the composite has
multiple continuous, ordinal and binary components. We adapt latent variable models
for application to these endpoints and assess the performance in simulated data and
phase IIb trial data in SLE. Our findings show reductions in required sample size of at
least 60%; however, the magnitude of the gains depends on which components drive
response. Finally, we develop a method for sample size estimation so that the model
may be used as a primary analysis method in clinical trials. We assess the impact of
correlation structure and drivers of response on the sample size required.
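The core efficiency argument of the thesis, that dichotomising a continuous measure discards information, can be illustrated with a toy comparison: estimating a responder probability from the dichotomised data versus backing it out from a model of the continuous endpoint. This is a deliberate simplification of the augmented binary method, with invented distributions and thresholds:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Invented trial arm: continuous endpoint Z, responder if Z > c.
mu, sd, c = 0.3, 1.0, 0.0
reps, n = 2000, 100
binary_est, augmented_est = [], []
for _ in range(reps):
    z = rng.normal(mu, sd, n)
    # Standard analysis: dichotomise first, then estimate the proportion.
    binary_est.append((z > c).mean())
    # Simplified "augmented" analysis: model the continuous endpoint and
    # back out the responder probability from the fitted distribution.
    augmented_est.append(norm.sf(c, loc=z.mean(), scale=z.std(ddof=1)))

eff = np.var(binary_est) / np.var(augmented_est)
print(round(eff, 2))  # ratio > 1: the continuous route is more precise
```

The gain here mirrors, in miniature, the sample-size reductions reported above; the actual augmented binary method handles treatment comparisons, covariates, and mixed continuous, ordinal, and binary composite components jointly.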