Estimation in Dirichlet random effects models
We develop a new Gibbs sampler for a linear mixed model with a Dirichlet
process random effect term, which is easily extended to a generalized linear
mixed model with a probit link function. Our Gibbs sampler exploits the
properties of the multinomial and Dirichlet distributions, and is shown to be
an improvement, in terms of operator norm and efficiency, over other commonly
used MCMC algorithms. We also investigate methods for the estimation of the
precision parameter of the Dirichlet process, finding that maximum likelihood
may not be desirable, but a posterior mode is a reasonable approach. Examples
are given to show how these models perform on real data. Our results complement
both the theoretical basis of the Dirichlet process nonparametric prior and the
computational work that has been done to date.

Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) by the
Institute of Mathematical Statistics (http://www.imstat.org),
http://dx.doi.org/10.1214/09-AOS731.
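The kind of label update that underlies such samplers can be illustrated with a generic collapsed Gibbs sweep for a Dirichlet process mixture of normals. This is a standard textbook-style sketch, not the paper's linear-mixed-model sampler: the base measure N(0, tau2), known observation variance sigma2, concentration alpha, and all function names are assumptions for illustration.

```python
import math
import numpy as np

def normal_pdf(x, mean, var):
    """Density of N(mean, var) at x."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def gibbs_sweep(y, z, theta, alpha=1.0, sigma2=1.0, tau2=1.0, rng=None):
    """One collapsed-Gibbs sweep: resample each label z[i] given the rest.
    theta maps cluster label -> cluster mean; z and theta are modified in place."""
    rng = rng if rng is not None else np.random.default_rng()
    for i in range(len(y)):
        old = z[i]
        z[i] = -1                          # hold observation i out
        counts = {k: 0 for k in theta}
        for lbl in z:
            if lbl >= 0:
                counts[lbl] += 1
        if counts.get(old, 0) == 0:        # cluster emptied: drop it
            theta.pop(old, None)
            counts.pop(old, None)
        labels = list(theta)
        # CRP-style weights: existing clusters, then a brand-new one
        w = [counts[k] * normal_pdf(y[i], theta[k], sigma2) for k in labels]
        w.append(alpha * normal_pdf(y[i], 0.0, sigma2 + tau2))  # marginal for a new cluster
        w = np.array(w)
        w /= w.sum()
        choice = rng.choice(len(w), p=w)
        if choice == len(labels):          # open a new cluster
            new = max(theta, default=-1) + 1
            post_var = 1.0 / (1.0 / tau2 + 1.0 / sigma2)   # conjugate normal update
            post_mean = post_var * y[i] / sigma2
            theta[new] = rng.normal(post_mean, math.sqrt(post_var))
            z[i] = new
        else:
            z[i] = labels[choice]
    return z, theta
```

The multinomial draw over existing clusters plus one "new table" is exactly where the multinomial and Dirichlet properties mentioned in the abstract enter; the paper's contribution is a sampler that exploits them more efficiently than updates of this generic form.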
Hierarchical spatial models for predicting tree species assemblages across large domains
Spatially explicit data layers of tree species assemblages, referred to as
forest types or forest type groups, are a key component in large-scale
assessments of forest sustainability, biodiversity, timber biomass, carbon
sinks and forest health monitoring. This paper explores the utility of coupling
georeferenced national forest inventory (NFI) data with readily available and
spatially complete environmental predictor variables through spatially-varying
multinomial logistic regression models to predict forest type groups across
large forested landscapes. These models exploit underlying spatial associations
within the NFI plot array and the spatially-varying impact of predictor
variables to improve the accuracy of forest type group predictions. The
richness of these models incurs an onerous computational burden, and we discuss
dimension-reducing spatial processes that retain that richness in the modeling. We
illustrate using NFI data from Michigan, USA, where we provide a comprehensive
analysis of this large study area and demonstrate improved prediction with
associated measures of uncertainty.

Comment: Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/)
by the Institute of Mathematical Statistics (http://www.imstat.org),
http://dx.doi.org/10.1214/09-AOAS250.
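At the core of such models, a multinomial logistic regression maps a plot's predictor vector to forest-type-group probabilities through a softmax. A minimal non-spatial sketch follows; the paper's models additionally let the coefficients vary over space, and the names X, B, and predict_proba are illustrative, not from the paper.

```python
import numpy as np

def predict_proba(X, B):
    """Multinomial logistic regression probabilities.
    X: (n, p) predictor matrix; B: (p, K) coefficients, one column per
    forest type group. Returns an (n, K) matrix of class probabilities."""
    eta = X @ B                              # linear predictors
    eta = eta - eta.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    p = np.exp(eta)
    return p / p.sum(axis=1, keepdims=True)  # softmax: rows sum to 1
```

Subtracting the row maximum before exponentiating leaves the softmax unchanged but avoids overflow when linear predictors are large, a standard trick in multinomial likelihood code.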
Bias in parametric estimation: reduction and useful side-effects
The bias of an estimator is defined as the difference between its expected
value and the parameter to be estimated, where the expectation is with respect to
the model. Loosely speaking, small bias reflects the desire that if an
experiment is repeated indefinitely then the average of all the resultant
estimates will be close to the parameter value that is estimated. The current
paper is a review of the still-expanding repository of methods that have been
developed to reduce bias in the estimation of parametric models. The review
provides a unifying framework where all those methods are seen as attempts to
approximate the solution of a simple estimating equation. Of particular focus
is the maximum likelihood estimator, which despite being asymptotically
unbiased under the usual regularity conditions, has finite-sample bias that can
result in significant loss of performance of standard inferential procedures.
An informal comparison of the methods is made, revealing some useful practical
side-effects in the estimation of popular models, including: i)
shrinkage of the estimators in binomial and multinomial regression models that
guarantees finiteness even in cases of data separation where the maximum
likelihood estimator is infinite, and ii) inferential benefits for models that
require the estimation of dispersion or precision parameters.
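The dispersion-parameter point is visible in the simplest possible case: the maximum likelihood estimator of a normal variance divides by n and so is biased downward by the factor (n - 1)/n; rescaling by n/(n - 1) removes the bias. A small Monte Carlo sketch (the sample sizes and replication count are illustrative choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, true_var = 5, 200_000, 1.0

# Draw many small samples from N(0, 1) and compute the variance MLE
# (divisor n) for each one.
samples = rng.normal(0.0, np.sqrt(true_var), size=(reps, n))
centered = samples - samples.mean(axis=1, keepdims=True)
mle = (centered ** 2).mean(axis=1)          # divisor n: biased
corrected = mle * n / (n - 1)               # divisor n - 1: unbiased

# Theory: E[mle] = (n - 1)/n * true_var = 0.8 here, E[corrected] = 1.0.
print(mle.mean(), corrected.mean())
```

With n = 5 the bias is a full 20% of the true variance, which is the kind of finite-sample distortion the review's bias-reduction methods target in richer models with dispersion or precision parameters.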