Flexible and practical modeling of animal telemetry data: hidden Markov models and extensions
We discuss hidden Markov-type models for fitting a variety of multistate random walks to wildlife movement data. Discrete-time hidden Markov models (HMMs) achieve considerable computational gains by focusing on observations that are regularly spaced in time, and for which the measurement error is negligible. These conditions are often met, in particular for data related to terrestrial animals, so that a likelihood-based HMM approach is feasible. We describe a number of extensions of HMMs for animal movement modeling, including more flexible state transition models and individual random effects (fitted in a non-Bayesian framework). In particular we consider so-called hidden semi-Markov models, which may substantially improve the goodness of fit and provide important insights into the behavioral state switching dynamics. To showcase the expediency of these methods, we consider an application of a hierarchical hidden semi-Markov model to multiple bison movement paths.
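The likelihood computation that makes discrete-time HMMs tractable is the forward algorithm. Below is a minimal Python sketch for a hypothetical two-state movement model ("encamped" vs. "transit") with gamma-distributed step lengths; the state labels, distributions, and function names are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from scipy.stats import gamma

def hmm_log_likelihood(steps, Gamma, delta, shapes, scales):
    """Forward algorithm: log-likelihood of observed step lengths.

    steps          : step lengths at regular time intervals
    Gamma          : (2, 2) state transition probability matrix
    delta          : (2,) initial state distribution
    shapes, scales : per-state gamma step-length parameters
    """
    # State-dependent densities evaluated at every observation
    probs = np.column_stack(
        [gamma.pdf(steps, a=shapes[j], scale=scales[j]) for j in range(2)]
    )
    # Scaled forward recursion to avoid numerical underflow
    alpha = delta * probs[0]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for t in range(1, len(steps)):
        alpha = (alpha @ Gamma) * probs[t]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik
```

In practice this log-likelihood would be handed to a numerical optimizer (e.g. `scipy.optimize.minimize`) to fit the transition and step-length parameters.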
Penalized Clustering of Large Scale Functional Data with Multiple Covariates
In this article, we propose a penalized clustering method for large scale
data with multiple covariates through a functional data approach. In the
proposed method, responses and covariates are linked together through
nonparametric multivariate functions (fixed effects), which have great
flexibility in modeling a variety of function features, such as jump points,
branching, and periodicity. Functional ANOVA is employed to further decompose
multivariate functions in a reproducing kernel Hilbert space and provide
associated notions of main effect and interaction. Parsimonious random effects
are used to capture various correlation structures. The mixed-effect models are
nested under a general mixture model, in which the heterogeneity of functional
data is characterized. We propose a penalized Henderson's likelihood approach
for model-fitting and design a rejection-controlled EM algorithm for the
estimation. Our method selects smoothing parameters through generalized
cross-validation. Furthermore, the Bayesian confidence intervals are used to
measure the clustering uncertainty. Simulation studies and real-data examples
are presented to investigate the empirical performance of the proposed method.
Open-source code is available in the R package MFDA.
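The penalized Henderson's likelihood and rejection-controlled EM of the paper go well beyond a short sketch, but the E-step/M-step backbone they extend can be illustrated with a plain two-component Gaussian mixture. Everything here (function name, initialization, fixed component count) is an illustrative assumption:

```python
import numpy as np
from scipy.stats import norm

def em_gaussian_mixture(x, n_iter=50):
    """Plain two-component Gaussian-mixture EM: the E/M skeleton that
    penalized and rejection-controlled variants build on."""
    # Crude initialization at the data extremes
    mu = np.array([x.min(), x.max()], dtype=float)
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior membership probabilities for each point
        dens = np.column_stack(
            [pi[j] * norm.pdf(x, mu[j], sigma[j]) for j in range(2)]
        )
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted parameter updates
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma
```

The full method replaces these closed-form updates with penalized mixed-model fits per cluster and thins low-responsibility terms via rejection control.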
Functional Regression
Functional data analysis (FDA) involves the analysis of data whose ideal
units of observation are functions defined on some continuous domain, and the
observed data consist of a sample of functions taken from some population,
sampled on a discrete grid. Ramsay and Silverman's 1997 textbook sparked the
development of this field, which has accelerated in the past 10 years to become
one of the fastest growing areas of statistics, fueled by the growing number of
applications yielding this type of data. One unique characteristic of FDA is
the need to combine information both across and within functions, which Ramsay
and Silverman called replication and regularization, respectively. This article
will focus on functional regression, the area of FDA that has received the most
attention in applications and methodological development. First will be an
introduction to basis functions, key building blocks for regularization in
functional regression methods, followed by an overview of functional regression
methods, split into three types: [1] functional predictor regression
(scalar-on-function), [2] functional response regression (function-on-scalar)
and [3] function-on-function regression. For each, the role of replication and
regularization will be discussed and the methodological development described
in a roughly chronological manner, at times deviating from the historical
timeline to group together similar methods. The primary focus is on modeling
and methodology, highlighting the modeling structures that have been developed
and the various regularization approaches employed. At the end is a brief
discussion describing potential areas of future development in this field.
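Scalar-on-function regression (type [1] above) can be sketched concretely: expand the coefficient function in a basis, which reduces the functional model to ordinary penalized regression. The sketch below assumes a Fourier basis and a plain ridge penalty standing in for the roughness penalties used in practice; the function names and defaults are hypothetical:

```python
import numpy as np

def fourier_basis(t, n_basis):
    """Simple Fourier basis, assuming t is rescaled to [0, 1]."""
    B = [np.ones_like(t)]
    for k in range(1, (n_basis + 1) // 2 + 1):
        B.append(np.sin(2 * np.pi * k * t))
        B.append(np.cos(2 * np.pi * k * t))
    return np.column_stack(B[:n_basis])

def fit_scalar_on_function(X, y, t, n_basis=5, lam=1e-4):
    """Fit y_i = alpha + \\int X_i(t) beta(t) dt + e_i with beta(t) = B(t) b.

    The integral is approximated on the observation grid, so the model
    reduces to ridge regression on Z = X @ B * dt.
    """
    B = fourier_basis(t, n_basis)
    dt = t[1] - t[0]
    Z = np.column_stack([np.ones(len(y)), X @ B * dt])
    P = lam * np.eye(Z.shape[1])
    P[0, 0] = 0.0                       # do not penalize the intercept
    coef = np.linalg.solve(Z.T @ Z + P, Z.T @ y)
    return coef[0], B @ coef[1:]        # intercept, fitted beta(t) on the grid
```

Spline bases with derivative-based penalties play the same role as this Fourier/ridge pair, which is exactly the "regularization within functions" the article emphasizes.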
General Design Bayesian Generalized Linear Mixed Models
Linear mixed models are able to handle an extraordinary range of
complications in regression-type analyses. Their most common use is to account
for within-subject correlation in longitudinal data analysis. They are also the
standard vehicle for smoothing spatial count data. However, when treated in
full generality, mixed models can also handle spline-type smoothing and closely
approximate kriging. This allows for nonparametric regression models (e.g.,
additive models and varying coefficient models) to be handled within the mixed
model framework. The key is to allow the random effects design matrix to have
general structure; hence our label general design. For continuous response
data, particularly when Gaussianity of the response is reasonably assumed,
computation is now quite mature and supported by the R, SAS and S-PLUS
packages. Such is not the case for binary and count responses, where
generalized linear mixed models (GLMMs) are required, but are hindered by the
presence of intractable multivariate integrals. Software known to us supports
special cases of the GLMM (e.g., PROC NLMIXED in SAS or glmmML in R) or relies
on the sometimes crude Laplace-type approximation of integrals (e.g., the SAS
macro glimmix or glmmPQL in R). This paper describes the fitting of general
design generalized linear mixed models. A Bayesian approach is taken and Markov
chain Monte Carlo (MCMC) is used for estimation and inference. In this
generalized setting, MCMC requires sampling from nonstandard distributions. In
this article, we demonstrate that the MCMC package WinBUGS facilitates sound
fitting of general design Bayesian generalized linear mixed models in practice.
Comment: Published at http://dx.doi.org/10.1214/088342306000000015 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
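The intractable integrals over the random effects are what MCMC sidesteps: the sampler draws the random effects jointly with the fixed effects instead of integrating them out. Below is a toy random-walk Metropolis sketch for a random-intercept logistic GLMM, standing in for the WinBUGS machinery the paper actually uses; the fixed random-effect standard deviation, flat priors on the coefficients, and the function name are simplifying assumptions:

```python
import numpy as np

def sample_logistic_glmm(y, x, group, n_groups, n_iter=2000, seed=0):
    """Random-walk Metropolis for the toy model
        logit P(y_ij = 1) = beta0 + beta1 * x_ij + u_j,  u_j ~ N(0, tau^2).
    """
    rng = np.random.default_rng(seed)
    tau = 1.0                     # assumption: random-effect SD held fixed
    beta = np.zeros(2)
    u = np.zeros(n_groups)

    def log_post(beta, u):
        eta = beta[0] + beta[1] * x + u[group]
        # Bernoulli log-likelihood (logaddexp for stability) + N(0, tau^2) prior on u
        return (y * eta - np.logaddexp(0.0, eta)).sum() - (u**2).sum() / (2 * tau**2)

    draws = np.empty((n_iter, 2))
    lp = log_post(beta, u)
    for i in range(n_iter):
        prop_b = beta + 0.1 * rng.standard_normal(2)
        prop_u = u + 0.1 * rng.standard_normal(n_groups)
        lp_new = log_post(prop_b, prop_u)
        if np.log(rng.uniform()) < lp_new - lp:   # Metropolis accept/reject
            beta, u, lp = prop_b, prop_u, lp_new
        draws[i] = beta
    return draws
```

A real analysis would also sample tau, tune the proposal scales, and monitor convergence, which is precisely the bookkeeping WinBUGS automates.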
A New Method for Protecting Interrelated Time Series with Bayesian Prior Distributions and Synthetic Data
Organizations disseminate statistical summaries of administrative data via the Web for unrestricted public use. They balance the trade-off between confidentiality protection and inference quality. Recent developments in disclosure avoidance techniques include the incorporation of synthetic data, which capture the essential features of underlying data by releasing altered data generated from a posterior predictive distribution. The United States Census Bureau collects millions of interrelated time series micro-data that are hierarchical and contain many zeros and suppressions. Rule-based disclosure avoidance techniques often require the suppression of count data for small magnitudes and the modification of data based on a small number of entities. Motivated by this problem, we use zero-inflated extensions of Bayesian Generalized Linear Mixed Models (BGLMM) with privacy-preserving prior distributions to develop methods for protecting and releasing synthetic data from time series about thousands of small groups of entities without suppression based on the magnitudes or number of entities. We find that as the prior distributions of the variance components in the BGLMM become more precise toward zero, confidentiality protection increases and inference quality deteriorates. We evaluate our methodology using a strict privacy measure, empirical differential privacy, and a newly defined risk measure, Probability of Range Identification (PoRI), which directly measures attribute disclosure risk. We illustrate our results with the U.S. Census Bureau's Quarterly Workforce Indicators.
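A heavily simplified sketch of the core idea, posterior-predictive synthesis for zero-inflated counts, is shown below. It is far cruder than the zero-inflated BGLMMs of the paper: the conjugate Beta/Gamma updates, the conflation of structural and sampling zeros, and the function name are all simplifying assumptions for illustration:

```python
import numpy as np

def synthesize_zip_counts(counts, n_draws=1, a=1.0, b=1.0, seed=0):
    """Release synthetic counts drawn from a toy posterior predictive:
       y_i = 0 with prob. pi, else Poisson(lam).
    """
    rng = np.random.default_rng(seed)
    counts = np.asarray(counts)
    zeros = int((counts == 0).sum())
    nonzero = counts[counts > 0]
    synth = np.empty((n_draws, len(counts)), dtype=int)
    for d in range(n_draws):
        # Posterior draw of the zero probability (Beta(1, 1) prior)
        pi = rng.beta(1 + zeros, 1 + len(nonzero))
        # Posterior draw of the Poisson rate from nonzero counts (Gamma(a, b) prior)
        lam = rng.gamma(a + nonzero.sum(), 1.0 / (b + len(nonzero)))
        # Posterior predictive: new counts, zeroed out with probability pi
        y = rng.poisson(lam, size=len(counts))
        y[rng.uniform(size=len(counts)) < pi] = 0
        synth[d] = y
    return synth
```

The released values never reproduce the confidential counts directly, which is what replaces cell suppression; the paper's point is that tightening the variance-component priors pushes these draws toward the prior and away from the data.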
Bayesian correction for covariate measurement error: a frequentist evaluation and comparison with regression calibration
Bayesian approaches for handling covariate measurement error are well
established, and yet arguably are still relatively little used by researchers.
For some this is likely due to unfamiliarity or disagreement with the Bayesian
inferential paradigm. For others a contributory factor is the inability of
standard statistical packages to perform such Bayesian analyses. In this paper
we first give an overview of the Bayesian approach to handling covariate
measurement error, and contrast it with regression calibration (RC), arguably
the most commonly adopted approach. We then argue why the Bayesian approach has
a number of statistical advantages compared to RC, and demonstrate that
implementing the Bayesian approach is usually quite feasible for the analyst.
Next we describe the closely related maximum likelihood and multiple imputation
approaches, and explain why we believe the Bayesian approach to generally be
preferable. We then empirically compare the frequentist properties of RC and
the Bayesian approach through simulation studies. The flexibility of the
Bayesian approach to handle both measurement error and missing data is then
illustrated through an analysis of data from the Third National Health and
Nutrition Examination Survey.
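Regression calibration itself is simple enough to sketch: replace the error-prone covariate by its best linear predictor given the observed value, then fit the outcome model as usual. A minimal Python sketch for a single covariate, assuming the measurement-error variance is known (in practice it is estimated from replicates or validation data); the function name is hypothetical:

```python
import numpy as np

def regression_calibration(w, y, sigma_u2):
    """RC for the classical error model W = X + U, U ~ N(0, sigma_u2).

    E[X | W] = mu_w + lam * (W - mu_w) with attenuation factor
    lam = (var(W) - sigma_u2) / var(W); OLS on the calibrated covariate
    then corrects the attenuated naive slope.
    """
    lam = (w.var() - sigma_u2) / w.var()      # attenuation factor
    x_hat = w.mean() + lam * (w - w.mean())   # calibrated covariate E[X | W]
    Z = np.column_stack([np.ones_like(x_hat), x_hat])
    return np.linalg.lstsq(Z, y, rcond=None)[0]   # (intercept, slope)
```

The Bayesian alternative the paper advocates instead places a model on X, W, and Y jointly and propagates the uncertainty in the calibration step, which RC ignores.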
Embedding Population Dynamics Models in Inference
Increasing pressures on the environment are generating an ever-increasing
need to manage animal and plant populations sustainably, and to protect and
rebuild endangered populations. Effective management requires reliable
mathematical models, so that the effects of management action can be predicted,
and the uncertainty in these predictions quantified. These models must be able
to predict the response of populations to anthropogenic change, while handling
the major sources of uncertainty. We describe a simple "building block"
approach to formulating discrete-time models. We show how to estimate the
parameters of such models from time series of data, and how to quantify
uncertainty in those estimates and in numbers of individuals of different types
in populations, using computer-intensive Bayesian methods. We also discuss
advantages and pitfalls of the approach, and give an example using the British
grey seal population.
Comment: Published at http://dx.doi.org/10.1214/088342306000000673 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
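The "building block" idea is to compose a population model from simple stage-transition pieces. Below is a toy discrete-time projection with hypothetical pup/juvenile/adult stages, loosely echoing the grey-seal example; the stage structure and all rates are made-up illustrations, and the Bayesian estimation step the paper describes is omitted:

```python
import numpy as np

def project_population(n0, survival, fecundity, n_years):
    """Deterministic projection of a three-stage population.

    n0        : initial counts [pups, juveniles, adults]
    survival  : [pup, juvenile, adult] annual survival probabilities
    fecundity : expected pups per adult per year
    """
    # Assemble the projection matrix from its building blocks
    A = np.zeros((3, 3))
    A[0, 2] = fecundity       # adults produce new pups
    A[1, 0] = survival[0]     # surviving pups become juveniles
    A[2, 1] = survival[1]     # surviving juveniles become adults
    A[2, 2] = survival[2]     # adults survive in place
    traj = [np.asarray(n0, dtype=float)]
    for _ in range(n_years):
        traj.append(A @ traj[-1])
    return np.array(traj)     # (n_years + 1, 3) trajectory
```

In the full approach each block (survival, birth, movement) becomes a stochastic transition, and MCMC is used to estimate the rates and the unobserved stage counts from time-series data.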