Approximating Probability Densities by Iterated Laplace Approximations
The Laplace approximation is an old but frequently used method to
approximate integrals for Bayesian calculations. In this paper we develop an
extension of the Laplace approximation by applying it iteratively to the
residual, i.e., the difference between the current approximation and the true
function. The final approximation is thus a linear combination of multivariate
normal densities, where the coefficients are chosen to achieve a good fit to
the target distribution. We illustrate on real and artificial examples that the
proposed procedure is a computationally efficient alternative to current
approaches for approximation of multivariate probability densities. The
R-package iterLap implementing the methods described in this article is
available from the CRAN servers.
Comment: to appear in the Journal of Computational and Graphical Statistics, http://pubs.amstat.org/loi/jcg
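The iterated scheme can be illustrated with a small sketch. The Python code below is a minimal one-dimensional toy version of the idea, under several simplifying assumptions (a made-up target density, a fixed evaluation grid, and grid-based mode and curvature estimates); the iterLap R package itself handles the multivariate case with more careful numerics.

```python
# Minimal 1-D sketch of the iterated-Laplace idea: repeatedly fit a normal
# component to the residual and refit all mixture weights to the target.
import numpy as np
from scipy.optimize import nnls
from scipy.stats import norm

def target(x):
    # toy skewed, unnormalized target density (an assumption for illustration)
    return np.exp(-0.5 * x ** 2) * (1.0 + 0.9 * np.tanh(2.0 * x))

grid = np.linspace(-6.0, 6.0, 481)
dx = grid[1] - grid[0]
t_vals = target(grid)

components = []                 # list of (mean, sd) of normal components
approx = np.zeros_like(grid)

for _ in range(4):
    # residual between the target and the current mixture approximation
    resid = np.maximum(t_vals - approx, 0.0)
    if resid.max() < 1e-3 * t_vals.max():
        break
    # Laplace step on the residual: grid mode + curvature of the log-residual
    i = int(np.clip(np.argmax(resid), 1, len(grid) - 2))
    log_r = np.log(resid + 1e-300)
    prec = -(log_r[i + 1] - 2.0 * log_r[i] + log_r[i - 1]) / dx ** 2
    sd = 1.0 / np.sqrt(max(prec, 1e-6))
    components.append((grid[i], sd))
    # refit all mixture weights jointly by non-negative least squares
    basis = np.column_stack([norm.pdf(grid, m, s) for m, s in components])
    weights, _ = nnls(basis, t_vals)
    approx = basis @ weights

# 'components' and 'weights' now define a normal-mixture approximation to target()
```
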
Subgroup identification in dose-finding trials via model-based recursive partitioning
An important task in early-phase drug development is to identify patients who
respond better or worse to an experimental treatment. While a variety of
subgroup identification methods have been developed for trials comparing an
experimental treatment with a control, much less work has been done when
patients are randomized to different dose groups. In this article we propose
new strategies for performing subgroup analyses in dose-finding trials and
discuss the challenges that arise in this new
setting. We consider model-based recursive partitioning, which has recently
been applied to subgroup identification in two-arm trials, as a promising
method to tackle these challenges and assess its viability using a real trial
example and simulations. Our results show that model-based recursive
partitioning can be used to identify subgroups of patients with different
dose-response curves and improves estimation of treatment effects and minimum
effective doses when heterogeneity among patients is present.
Comment: 23 pages, 6 figures
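As a rough illustration of the partitioning idea, the sketch below fits a simple per-node dose-response model (a plain linear model in dose rather than the parametric dose-response models one would typically use) and searches a single baseline covariate for a binary split that clearly improves the fit. The toy data, the split rule, and the thresholds are illustrative stand-ins, not the model-based recursive partitioning algorithm of the paper.

```python
# Simplified sketch of splitting on a baseline covariate when a per-node
# dose-response fit improves enough to suggest subgroup-specific curves.
import numpy as np

def fit_node(dose, y):
    """Least-squares fit of y = a + b*dose; returns (params, residual SS)."""
    X = np.column_stack([np.ones_like(dose), dose])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return beta, rss

def split_node(dose, y, covariate, min_node=20, min_gain=0.05):
    """Search one covariate for the binary split that most reduces the RSS."""
    _, rss_parent = fit_node(dose, y)
    best = None
    for cut in np.unique(covariate)[1:]:
        left = covariate < cut
        if left.sum() < min_node or (~left).sum() < min_node:
            continue
        rss = fit_node(dose[left], y[left])[1] + fit_node(dose[~left], y[~left])[1]
        if best is None or rss < best[1]:
            best = (cut, rss)
    if best is not None and best[1] < (1.0 - min_gain) * rss_parent:
        return best[0]          # cutpoint defining two candidate subgroups
    return None                 # no worthwhile split: keep a single group

# toy data: a subgroup (biomarker >= 1) with a steeper dose-response slope
rng = np.random.default_rng(1)
n = 300
dose = rng.choice([0.0, 0.5, 1.0, 2.0, 4.0], size=n)
biomarker = rng.normal(size=n)
slope = np.where(biomarker >= 1.0, 1.5, 0.4)
y = 0.2 + slope * dose + rng.normal(scale=0.8, size=n)

print("suggested subgroup cutpoint:", split_node(dose, y, biomarker))
```
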
On Nonparametric Bayesian Analysis under Shape Constraints with Applications in Biostatistics
Response-adaptive dose-finding under model uncertainty
Dose-finding studies are frequently conducted to evaluate the effect of
different doses or concentration levels of a compound on a response of
interest. Applications include the investigation of a new medicinal drug, a
herbicide or fertilizer, a molecular entity, an environmental toxin, or an
industrial chemical. In pharmaceutical drug development, dose-finding studies
are of critical importance because of regulatory requirements that marketed
doses are safe and provide clinically relevant efficacy. Motivated by a
dose-finding study in moderate persistent asthma, we propose response-adaptive
designs addressing two major challenges in dose-finding studies: uncertainty
about the dose-response models and large variability in parameter estimates. To
allocate new cohorts of patients in an ongoing study, we use optimal designs
that are robust under model uncertainty. In addition, we use a Bayesian
shrinkage approach to stabilize the parameter estimates over the successive
interim analyses used in the adaptations. This approach allows us to calculate
updated parameter estimates and model probabilities that can then be used to
calculate the optimal design for subsequent cohorts. The resulting designs are
hence robust with respect to model misspecification and additionally can
efficiently adapt to the information accrued in an ongoing study. We focus on
adaptive designs for estimating the minimum effective dose, although
alternative optimality criteria or mixtures thereof could be used, enabling the
design to address multiple objectives.
Comment: Published at http://dx.doi.org/10.1214/10-AOAS445 in the Annals of
Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
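A schematic sketch of one interim update is given below: candidate dose-response models are fitted to the accrued data and turned into approximate model probabilities, which could then feed the optimal-design step for the next cohort. BIC-based weights are used here as a crude stand-in for the Bayesian computation in the paper, and the candidate models and toy data are assumptions for illustration only.

```python
# Interim update sketch: fit candidate dose-response models, form approximate
# model probabilities, and model-average the estimated dose-response curve.
import numpy as np
from scipy.optimize import curve_fit

def linear(d, e0, delta):        return e0 + delta * d
def emax(d, e0, emax_, ed50):    return e0 + emax_ * d / (ed50 + d)
def quadratic(d, e0, b1, b2):    return e0 + b1 * d + b2 * d ** 2

candidates = {
    "linear":    (linear,    [0.0, 0.5]),
    "emax":      (emax,      [0.0, 1.0, 1.0]),
    "quadratic": (quadratic, [0.0, 0.5, -0.05]),
}

def interim_update(dose, resp):
    """BIC-based model weights and a model-averaged curve on a dose grid."""
    n, grid = len(resp), np.linspace(0.0, dose.max(), 101)
    bic, pred = {}, {}
    for name, (f, p0) in candidates.items():
        theta, _ = curve_fit(f, dose, resp, p0=p0, maxfev=10000)
        rss = float(np.sum((resp - f(dose, *theta)) ** 2))
        bic[name] = n * np.log(rss / n) + len(theta) * np.log(n)
        pred[name] = f(grid, *theta)
    b = np.array([bic[name] for name in candidates])
    w = np.exp(-0.5 * (b - b.min()))
    w /= w.sum()                       # approximate posterior model probabilities
    averaged = sum(wi * pred[name] for wi, name in zip(w, candidates))
    return dict(zip(candidates, w)), grid, averaged

# toy interim data generated from an Emax-like truth
rng = np.random.default_rng(7)
dose = np.repeat([0.0, 0.5, 1.0, 2.0, 4.0], 10)
resp = 0.2 + 1.4 * dose / (0.8 + dose) + rng.normal(scale=0.5, size=dose.size)

weights, grid, curve = interim_update(dose, resp)
print(weights)  # the design step would use these to choose doses for the next cohort
```
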
Model Selection versus Model Averaging in Dose Finding Studies
Phase II dose-finding studies in clinical drug development are typically
conducted to adequately characterize the dose-response relationship of a new
drug. An important decision is then the choice of a suitable dose-response
function to support dose selection for the subsequent Phase III studies. In
this paper we compare different approaches for model selection and model
averaging using mathematical properties as well as simulations. Accordingly, we
review and illustrate asymptotic properties of model selection criteria and
investigate their behavior when changing the sample size but keeping the effect
size constant. In a large scale simulation study we investigate how the various
approaches perform in realistically chosen settings. Finally, the different
methods are illustrated with a recently conducted Phase II dose-finding study in
patients with chronic obstructive pulmonary disease.
Comment: Keywords and Phrases: Model selection; model averaging; clinical
trials; simulation study
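Once per-model information criteria are in hand, the computational difference between the two strategies is small, as the sketch below shows: selection keeps the single best model, averaging weights all candidates. The AIC values and per-model minimum effective dose (MED) estimates are made-up numbers used only to demonstrate the calculation, and averaging the MED directly is just one of several possible averaging schemes.

```python
# Minimal illustration of model selection versus model averaging given
# hypothetical AIC values and per-model MED estimates.
import numpy as np

models = ["linear", "emax", "quadratic", "sigmoid-emax"]
aic    = np.array([312.4, 309.1, 310.6, 309.8])    # hypothetical fitted AICs
med    = np.array([1.9, 1.2, 1.5, 1.3])            # hypothetical per-model MEDs

# model selection: take the single model with the smallest AIC
best = int(np.argmin(aic))

# model averaging: Akaike weights w_i ∝ exp(-0.5 * (AIC_i - AIC_min))
delta = aic - aic.min()
w = np.exp(-0.5 * delta)
w /= w.sum()

print("selected model:", models[best], "-> MED =", med[best])
print("averaging weights:", dict(zip(models, np.round(w, 3))))
print("model-averaged MED:", float(np.round(w @ med, 3)))
```
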
On the efficiency of adaptive designs
In this paper we develop a method to investigate the efficiency of two-stage adaptive designs from
a theoretical point of view. Our approach is based on an explicit expansion of the information matrix
for an adaptive design. The results enable one to compare the performance of adaptive designs
with non-adaptive designs, without having to rely on extensive simulation studies. We demonstrate
that their relative efficiency depends sensitively on the statistical problem under investigation and
derive some general conclusions about when to prefer an adaptive or a non-adaptive design. In particular,
we show that in nonlinear regression models with moderate or large variances the first stage sample
size of an adaptive design should be chosen sufficiently large in order to address variability in the
interim parameter estimates. We illustrate the methodology with several examples.
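A rough numerical analogue of this comparison can be sketched as follows: compute the Fisher information of a fixed design and of a two-stage design whose second stage is re-optimized at a noisy interim estimate, and compare their D-efficiencies. The regression model, design region, and interim-noise level below are illustrative assumptions, and the comparison is done by brute-force simulation rather than by the explicit information-matrix expansion derived in the paper.

```python
# Sketch: compare a fixed design with a two-stage adaptive design for a
# nonlinear regression model E[y] = th1 * exp(-th2 * x) via Fisher information.
import numpy as np

def info_matrix(x, theta, sigma2=0.25):
    """Fisher information of design points x for E[y] = th1*exp(-th2*x)."""
    th1, th2 = theta
    g = np.column_stack([np.exp(-th2 * x), -th1 * x * np.exp(-th2 * x)])
    return g.T @ g / sigma2

def best_two_point_design(theta, n, grid=np.linspace(0.0, 10.0, 41)):
    """Crude stand-in for a locally D-optimal design: equal allocation to the
    best pair of grid points by determinant of the information matrix."""
    best_x, best_det = None, -np.inf
    for j, a in enumerate(grid):
        for b in grid[j + 1:]:
            x = np.repeat([a, b], n // 2)
            d = np.linalg.det(info_matrix(x, theta))
            if d > best_det:
                best_x, best_det = x, d
    return best_x

theta_true = np.array([1.0, 0.7])      # unknown truth
theta_guess = np.array([1.0, 0.3])     # prior guess used at the planning stage
n1, n2 = 20, 20

# non-adaptive: all n1 + n2 observations placed using only the prior guess
x_fixed = best_two_point_design(theta_guess, n1 + n2)
det_fixed = np.linalg.det(info_matrix(x_fixed, theta_true))

# adaptive: stage 1 at the guess, stage 2 re-optimized at a noisy interim estimate
rng = np.random.default_rng(3)
x1 = best_two_point_design(theta_guess, n1)
dets = []
for _ in range(100):
    interim = theta_true + rng.normal(scale=0.15, size=2)   # crude interim noise
    x2 = best_two_point_design(interim, n2)
    dets.append(np.linalg.det(info_matrix(np.concatenate([x1, x2]), theta_true)))

# D-efficiency of the fixed design relative to the adaptive one (p = 2 parameters)
print("relative D-efficiency:", (det_fixed / np.mean(dets)) ** 0.5)
```
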
Bayesian outlier detection in INGARCH time series
INGARCH models for time series of counts arising, e.g., in
epidemiology assume the observations to be Poisson distributed conditionally
on the past, with the conditional mean being an affine-linear
function of the previous observations and the previous conditional
means. We model outliers within such processes, assuming that
we observe a contaminated process with additive Poisson-distributed
contamination affecting each observation with a small probability. Our
particular concern is additive outliers, which do not enter the dynamics
of the process and can represent measurement artifacts and other
singular events influencing a single observation. Such outliers are difficult
to handle within a non-Bayesian framework since the uncontaminated
values entering the dynamics of the process at contaminated time
points are unobserved. We propose a Bayesian approach to outlier modeling
in INGARCH processes, approximating the posterior distribution
of the model parameters by application of a componentwise Metropolis-Hastings
algorithm. Analyzing real and simulated data sets, we find
Bayesian outlier detection with non-informative priors to work well if
there are some outliers in the data.
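For concreteness, the sketch below simulates a contaminated INGARCH(1,1) process of the kind described above and performs a single random-walk Metropolis-Hastings update for one parameter, as an illustration of the componentwise sampler's basic building block. The priors, proposal scale, and the handling of the latent uncontaminated values are simplified assumptions and not the full sampler of the paper.

```python
# Simulate an INGARCH(1,1) process with additive Poisson outliers and perform
# one componentwise random-walk Metropolis-Hastings update for beta0.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(42)

def simulate_ingarch(n, beta0=1.0, beta1=0.3, alpha1=0.4, eps=0.02, out_mean=15.0):
    """Y_t ~ Poisson(lam_t) with lam_t = beta0 + beta1*Y_{t-1} + alpha1*lam_{t-1};
    the contaminated series Z_t adds a Poisson outlier with probability eps."""
    y, lam, z = np.zeros(n, dtype=int), np.zeros(n), np.zeros(n, dtype=int)
    lam[0] = beta0 / (1.0 - beta1 - alpha1)       # stationary mean as starting value
    y[0] = rng.poisson(lam[0])
    z[0] = y[0]
    for t in range(1, n):
        lam[t] = beta0 + beta1 * y[t - 1] + alpha1 * lam[t - 1]
        y[t] = rng.poisson(lam[t])
        z[t] = y[t] + (rng.poisson(out_mean) if rng.random() < eps else 0)
    return y, z

def loglik(y, beta0, beta1, alpha1):
    """Conditional Poisson log-likelihood of an (uncontaminated) INGARCH(1,1) path."""
    lam = np.empty(len(y))
    lam[0] = max(beta0 / (1.0 - beta1 - alpha1), 1e-8)
    for t in range(1, len(y)):
        lam[t] = beta0 + beta1 * y[t - 1] + alpha1 * lam[t - 1]
    return float(poisson.logpmf(y, lam).sum())

y, z = simulate_ingarch(500)

# one random-walk MH step for beta0 (flat prior on beta0 > 0), evaluated here on
# the clean series y; the full sampler in the paper also updates the latent
# uncontaminated values at suspected outlier positions
beta0, beta1, alpha1 = 0.8, 0.3, 0.4
proposal = beta0 + rng.normal(scale=0.1)
if proposal > 0:
    log_acc = loglik(y, proposal, beta1, alpha1) - loglik(y, beta0, beta1, alpha1)
    if np.log(rng.random()) < log_acc:
        beta0 = proposal
print("beta0 after one MH step:", beta0)
```
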
Using data mining to analyze job reviews
Job review websites like Glassdoor are not always clear about how well a company operates, especially as viewed from different levels of employment. For instance, a middle or upper manager at Amazon may give the company an overall positive review with only minor complaints, while someone who works in a warehouse may report a mixed experience. To address this issue and determine whether there is a correlation between an employee's level and their review, data mining techniques such as web scraping and neural network training were used to develop a model that analyzes employee reviews.
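A toy sketch of the modeling half of such a pipeline is shown below: review texts are turned into TF-IDF features and a small neural network is trained to predict review sentiment, whose predictions could then be compared across job levels. The handful of example reviews, the feature choices, and the network size are all illustrative assumptions; a real analysis would use scraped Glassdoor data and a much larger corpus.

```python
# Tiny text-classification sketch: TF-IDF features plus a small neural network.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# a handful of made-up reviews standing in for scraped Glassdoor data
reviews = [
    "great leadership opportunities and strong compensation",
    "long shifts and constant pressure to meet quotas",
    "supportive team and a clear path to promotion",
    "exhausting warehouse work with little recognition",
    "interesting strategic projects and good work-life balance",
    "injuries are common and breaks are too short",
]
labels = ["positive", "negative", "positive", "negative", "positive", "negative"]

model = make_pipeline(
    TfidfVectorizer(),                                           # bag-of-words features
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(reviews, labels)

# predicted sentiment for a new review; aggregating such predictions by the
# reviewer's job level is one way to examine the level/review relationship
print(model.predict(["repetitive physical work and unrealistic targets"]))
```
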
