Bayesian Inference under Cluster Sampling with Probability Proportional to Size
Cluster sampling is common in survey practice, and the corresponding
inference has been predominantly design-based. We develop a Bayesian framework
for cluster sampling and account for the design effect in the outcome modeling.
We consider a two-stage cluster sampling design where the clusters are first
selected with probability proportional to cluster size, and then units are
randomly sampled inside selected clusters. Challenges arise when the sizes of
nonsampled clusters are unknown. We propose nonparametric and parametric
Bayesian approaches for predicting the unknown cluster sizes, with this
inference performed jointly with the model for the survey outcome.
Simulation studies show that the integrated Bayesian approach outperforms
classical methods with efficiency gains. We use Stan for computing and apply
the proposal to the Fragile Families and Child Wellbeing study as an
illustration of complex survey inference in health surveys.
Fast, non-Monte-Carlo estimation of transient performance variation due to device mismatch
This paper describes an efficient way of simulating the effects of random device mismatch on circuit transient characteristics, such as variations in delay or in frequency. The proposed method models DC random offsets as equivalent AC pseudo-noises and leverages the fast, linear periodically time-varying (LPTV) noise analysis available from RF circuit simulators. The method can therefore be considered an extension of DC match analysis and offers a large speed-up over traditional Monte-Carlo analysis. Although the assumed linear perturbation model is valid only for small variations, it enables easy ways to estimate correlations among variations and to identify the design parameters most sensitive to mismatch, all at no additional simulation cost. Three benchmarks measuring the variations in the input offset voltage of a clocked comparator, the delay of a logic path, and the frequency of an oscillator demonstrate a speed improvement of about 100-1000x compared to a 1000-point Monte-Carlo method.
Bayesian Item Response Modeling in R with brms and Stan
Item Response Theory (IRT) is widely applied in the human sciences to model
persons' responses on a set of items measuring one or more latent constructs.
While several R packages have been developed that implement IRT models, they
tend to be restricted to prespecified classes of models. Further,
most implementations are frequentist while the availability of Bayesian methods
remains comparably limited. We demonstrate how to use the R package brms
together with the probabilistic programming language Stan to specify and fit a
wide range of Bayesian IRT models using flexible and intuitive multilevel
formula syntax. Further, item and person parameters can be related in either a
linear or a non-linear manner. Various distributions for categorical, ordinal,
and continuous responses are supported. Users may even define their own custom
response distribution for use in the presented framework. Common IRT model
classes that can be specified natively in the presented framework include 1PL
and 2PL logistic models optionally also containing guessing parameters, graded
response and partial credit ordinal models, as well as drift diffusion models
of response times coupled with binary decisions. Posterior distributions of
item and person parameters can be conveniently extracted and post-processed.
Model fit can be evaluated and compared using Bayes factors and efficient
cross-validation procedures.
Comment: 54 pages, 16 figures, 3 tables
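As an illustration of the multilevel formula syntax the abstract describes, a minimal sketch of a Bayesian 1PL model fitted with brms might look as follows. This is not the paper's own code: the data frame `irt_data` (long format, with columns `person`, `item`, and a binary `response`) is hypothetical, and priors and sampler settings are left at defaults for brevity.

```r
library(brms)

# 1PL (Rasch-type) model: varying intercepts for items (difficulty)
# and persons (ability), Bernoulli likelihood with a logit link.
# brms translates this formula into Stan code and fits it by MCMC.
fit_1pl <- brm(
  response ~ 1 + (1 | item) + (1 | person),
  data   = irt_data,
  family = bernoulli(link = "logit"),
  chains = 4, iter = 2000
)

# Posterior summaries of item and person parameters
summary(fit_1pl)
ranef(fit_1pl)
```

Extending the formula (e.g., adding item discriminations via non-linear syntax, or swapping the response family) yields the other model classes the abstract lists, without leaving the same interface.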