Fused kernel-spline smoothing for repeatedly measured outcomes in a generalized partially linear model with functional single index
We propose a generalized partially linear functional single index risk score
model for repeatedly measured outcomes where the index itself is a function of
time. We fuse the nonparametric kernel method and regression spline method, and
modify the generalized estimating equation to facilitate estimation and
inference. We use a local smoothing kernel to estimate the unspecified
coefficient functions of time, and use B-splines to estimate the unspecified
function of the single index component. The covariance structure is taken into
account via a working model, which provides valid estimation and inference
procedure whether or not it captures the true covariance. The estimation method
is applicable to both continuous and discrete outcomes. We derive large sample
properties of the estimation procedure and show a different convergence rate
for each component of the model. The asymptotic properties when the kernel and
regression spline methods are combined in a nested fashion has not been studied
prior to this work, even in the independent data case.Comment: Published at http://dx.doi.org/10.1214/15-AOS1330 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org
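One of the two building blocks fused above, local kernel smoothing, can be illustrated with a minimal Nadaraya-Watson smoother. This is only a generic sketch of kernel smoothing on made-up data, not the paper's estimator for time-varying coefficient functions; all names and values below are hypothetical.

```python
import math

def nw_smooth(t_grid, t_obs, y_obs, bandwidth):
    """Nadaraya-Watson kernel smoother: at each evaluation point, return a
    locally weighted average of y_obs with Gaussian weights in t."""
    est = []
    for t0 in t_grid:
        w = [math.exp(-0.5 * ((t - t0) / bandwidth) ** 2) for t in t_obs]
        sw = sum(w)
        est.append(sum(wi * yi for wi, yi in zip(w, y_obs)) / sw)
    return est

# Noisy observations of a smooth function of time (toy example)
t_obs = [i / 50 for i in range(51)]
y_obs = [math.sin(2 * math.pi * t) + 0.1 * math.cos(40 * t) for t in t_obs]
fit = nw_smooth([0.25], t_obs, y_obs, bandwidth=0.05)
```

The bandwidth controls the bias-variance trade-off; in the paper's setting the analogous smoothing parameter drives the component-specific convergence rates.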
Marginal analysis of longitudinal count data in long sequences: Methods and applications to a driving study
Most of the available methods for longitudinal data analysis are designed and
validated for the situation where the number of subjects is large and the
number of observations per subject is relatively small. Motivated by the
Naturalistic Teenage Driving Study (NTDS), which represents the exact opposite
situation, we examine standard and propose new methodology for marginal
analysis of longitudinal count data in a small number of very long sequences.
We consider standard methods based on generalized estimating equations, under
working independence or an appropriate correlation structure, and find them
unsatisfactory for dealing with time-dependent covariates when the counts are
low. For this situation, we explore a within-cluster resampling (WCR) approach
that involves repeated analyses of random subsamples with a final analysis that
synthesizes results across subsamples. This leads to a novel WCR method which
operates on separated blocks within subjects and which performs better than all
of the previously considered methods. The methods are applied to the NTDS data
and evaluated in simulation experiments mimicking the NTDS.
Comment: Published at http://dx.doi.org/10.1214/11-AOAS507 in the Annals of
Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org).
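The within-cluster resampling idea can be sketched in a few lines: repeatedly draw one observation per subject (breaking within-subject dependence), compute the estimate on each subsample, and average the estimates. This hypothetical Python sketch targets a simple marginal mean, not the paper's block-based WCR variant for long sequences.

```python
import random

def wcr_mean(clusters, n_resamples=500, seed=0):
    """Within-cluster resampling for a marginal mean: average the
    per-subsample estimates, where each subsample holds one randomly
    chosen observation per cluster."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_resamples):
        subsample = [rng.choice(obs) for obs in clusters]
        estimates.append(sum(subsample) / len(subsample))
    return sum(estimates) / len(estimates)

# Three "subjects" with unbalanced count sequences (toy data)
clusters = [
    [0, 1, 0, 2, 1, 0, 0, 1],
    [3, 2, 4, 3],
    [1, 1, 0],
]
est = wcr_mean(clusters)
```

Because each subsample takes exactly one observation per cluster, the WCR estimate targets the average of the cluster-level means rather than the pooled mean, which is the point of the resampling.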
Detection of risk factors for obesity in early childhood with quantile regression methods for longitudinal data
This article compares and discusses three different statistical methods for investigating risk factors for overweight and obesity in early childhood by means of the LISA study, a recent German birth cohort study with 3097 children. Since the definition of overweight and obesity is typically based on upper quantiles (90% and 97%) of the age-specific body mass index (BMI) distribution, our aim was to model the influence of risk factors and age on these quantiles while taking the longitudinal data structure into account as far as possible. The following statistical regression models were chosen: additive mixed models, generalized additive models for location, scale and shape (GAMLSS), and distribution-free quantile regression models. The methods were compared empirically by cross-validation, and for the data at hand no model could be rated superior. Motivated by previous studies, we explored whether there is an age-specific skewness of the BMI distribution. The investigated data do not suggest such an effect, even after adjusting for risk factors. Concerning risk factors, our results mainly confirm results obtained in previous studies. From a methodological point of view, we conclude that GAMLSS and distribution-free quantile regression are promising approaches for longitudinal quantile regression, requiring, however, further extensions to fully account for longitudinal data structures.
A review of R-packages for random-intercept probit regression in small clusters
Generalized Linear Mixed Models (GLMMs) are widely used to model clustered categorical outcomes. To tackle the intractable integration over the random-effects distributions, several approximation approaches have been developed for likelihood-based inference. As these seldom yield satisfactory results when analyzing binary outcomes from small clusters, estimation within the Structural Equation Modeling (SEM) framework is proposed as an alternative. We compare the performance of R packages for random-intercept probit regression relying on the Laplace approximation, adaptive Gaussian quadrature (AGQ), Penalized Quasi-Likelihood (PQL), an MCMC implementation, and integrated nested Laplace approximation within the GLMM framework, and on robust diagonally weighted least squares estimation within the SEM framework. In terms of bias for the fixed- and random-effect estimators, SEM usually performs best for cluster size two, while AGQ prevails in terms of precision (mainly because of SEM's robust standard errors). As the cluster size increases, however, AGQ becomes the best choice for both bias and precision.
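The "intractable integration" can be made concrete: in a random-intercept probit model, each cluster's likelihood integrates the random intercept out of a product of probit probabilities. Below is a rough sketch of plain (non-adaptive) Gauss-Hermite quadrature for one cluster, assuming NumPy is available; AGQ and the Laplace approximation refine this same idea by recentring the nodes. All parameter values are made up.

```python
import math
import numpy as np

def probit_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def cluster_marginal_lik(y, x, beta0, beta1, sigma_b, n_nodes=15):
    """Marginal likelihood of one cluster in a random-intercept probit
    model, integrating the N(0, sigma_b^2) intercept out with
    Gauss-Hermite quadrature."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    lik = 0.0
    for z, w in zip(nodes, weights):
        b = math.sqrt(2.0) * sigma_b * z  # change of variables for GH weight
        contrib = 1.0
        for yj, xj in zip(y, x):
            p = probit_cdf(beta0 + beta1 * xj + b)
            contrib *= p if yj == 1 else (1.0 - p)
        lik += w * contrib
    return lik / math.sqrt(math.pi)

# A cluster of size two, the small-cluster setting discussed above
lik = cluster_marginal_lik(y=[1, 0], x=[0.5, -0.5],
                           beta0=0.2, beta1=1.0, sigma_b=0.8)
```

With only a handful of observations per cluster the integrand is flat and badly centred at zero, which is one intuition for why the fixed-node approximations struggle exactly where the SEM alternative is proposed.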
Functional Regression
Functional data analysis (FDA) involves the analysis of data whose ideal
units of observation are functions defined on some continuous domain, and the
observed data consist of a sample of functions taken from some population,
sampled on a discrete grid. Ramsay and Silverman's 1997 textbook sparked the
development of this field, which has accelerated in the past 10 years to become
one of the fastest growing areas of statistics, fueled by the growing number of
applications yielding this type of data. One unique characteristic of FDA is
the need to combine information both across and within functions, which Ramsay
and Silverman called replication and regularization, respectively. This article
will focus on functional regression, the area of FDA that has received the most
attention in applications and methodological development. The article first
introduces basis functions, key building blocks for regularization in
functional regression methods, then gives an overview of functional regression
methods, split into three types: [1] functional predictor regression
(scalar-on-function), [2] functional response regression (function-on-scalar)
and [3] function-on-function regression. For each, the role of replication and
regularization will be discussed and the methodological development described
in a roughly chronological manner, at times deviating from the historical
timeline to group together similar methods. The primary focus is on modeling
and methodology, highlighting the modeling structures that have been developed
and the various regularization approaches employed. At the end is a brief
discussion describing potential areas of future development in this field.
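The basis-function idea behind scalar-on-function regression can be sketched concretely: expand the coefficient function beta(t) in a small basis, so the functional model y_i = ∫ X_i(t) beta(t) dt + e_i reduces to ordinary least squares on basis-projected predictors. This is a hypothetical illustration on simulated data, with a Fourier basis chosen purely for convenience.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 101)  # common sampling grid for the functions
dt = t[1] - t[0]

# Small basis for the coefficient function: constant + two Fourier terms
B = np.vstack([np.ones_like(t),
               np.sin(2 * np.pi * t),
               np.cos(2 * np.pi * t)]).T  # shape (101, 3)

# Simulate functional predictors and scalar responses with a known beta(t)
n = 200
X = rng.normal(size=(n, t.size))
beta_true = 1.0 + 2.0 * np.sin(2 * np.pi * t)
y = X @ beta_true * dt + 0.01 * rng.normal(size=n)

# Riemann-sum design matrix: Z[i, k] approximates the integral of X_i * B_k
Z = X @ B * dt
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
beta_hat = B @ coef  # estimated coefficient function on the grid
```

Restricting beta(t) to a low-dimensional basis is the simplest form of regularization mentioned above; richer approaches add a roughness penalty on the basis coefficients instead of truncating the basis.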