473 research outputs found

    Discussion of "Estimating Random Effects via Adjustment for Density Maximization" by C. Morris and R. Tang

    Discussion of "Estimating Random Effects via Adjustment for Density Maximization" by C. Morris and R. Tang [arXiv:1108.3234]. Comment: Published at http://dx.doi.org/10.1214/11-STS349A in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).

    Introducing Monte Carlo Methods with R Solutions to Odd-Numbered Exercises

    This is the solution manual to the odd-numbered exercises in our book "Introducing Monte Carlo Methods with R", published by Springer Verlag on December 10, 2009, and made freely available to everyone. Comment: 87 pages, 11 figures.

    Comment: On Random Scan Gibbs Samplers

    Comment on ``On Random Scan Gibbs Samplers'' [arXiv:0808.3852]. Comment: Published at http://dx.doi.org/10.1214/08-STS252B in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).

    Estimation in Dirichlet random effects models

    We develop a new Gibbs sampler for a linear mixed model with a Dirichlet process random effect term, which is easily extended to a generalized linear mixed model with a probit link function. Our Gibbs sampler exploits the properties of the multinomial and Dirichlet distributions, and is shown to be an improvement, in terms of operator norm and efficiency, over other commonly used MCMC algorithms. We also investigate methods for the estimation of the precision parameter of the Dirichlet process, finding that maximum likelihood may not be desirable, but a posterior mode is a reasonable approach. Examples are given to show how these models perform on real data. Our results complement both the theoretical basis of the Dirichlet process nonparametric prior and the computational work that has been done to date. Comment: Published at http://dx.doi.org/10.1214/09-AOS731 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
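    The paper's sampler targets a Dirichlet-process mixed model; as a generic illustration of the alternating-conditional-draws idea behind any Gibbs sampler (a minimal sketch, not the authors' algorithm), here is a two-block Gibbs sampler for a normal model with unknown mean and precision under the Jeffreys prior:

```python
import math
import random

random.seed(1)

# Simulated data: y_i ~ N(2, 1)
y = [2 + random.gauss(0, 1) for _ in range(200)]
n, ybar = len(y), sum(y) / len(y)

# Under the Jeffreys prior p(mu, tau) ∝ 1/tau, the full conditionals are
#   mu  | tau, y ~ N(ybar, 1/(n*tau))
#   tau | mu,  y ~ Gamma(n/2, rate = sum((y_i - mu)^2)/2)
def gamma_draw(shape, rate):
    # random.gammavariate takes a scale parameter, hence the 1/rate
    return random.gammavariate(shape, 1.0 / rate)

mu, tau = 0.0, 1.0
mus = []
for it in range(3000):
    mu = random.gauss(ybar, math.sqrt(1.0 / (n * tau)))
    ss = sum((yi - mu) ** 2 for yi in y)
    tau = gamma_draw(n / 2.0, ss / 2.0)
    if it >= 500:                 # discard burn-in draws
        mus.append(mu)

post_mu = sum(mus) / len(mus)     # posterior mean estimate, near 2
print(round(post_mu, 2))
```

The same alternate-and-draw pattern extends to the mixed-model setting, where extra blocks draw the random effects and the Dirichlet-process configuration.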

    Shrinkage Confidence Procedures

    The possibility of improving on the usual multivariate normal confidence set was first discussed in Stein (1962). Using the ideas of shrinkage, through Bayesian and empirical Bayesian arguments, domination results, both analytic and numerical, have been obtained. Here we trace some of the developments in confidence set estimation. Comment: Published at http://dx.doi.org/10.1214/10-STS319 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).

    Consistency of objective Bayes factors as the model dimension grows

    In the class of normal regression models with a finite number of regressors, and for a wide class of prior distributions, a Bayesian model selection procedure based on the Bayes factor is consistent [Casella and Moreno J. Amer. Statist. Assoc. 104 (2009) 1261--1271]. However, in models where the number of parameters increases as the sample size increases, properties of the Bayes factor are not totally understood. Here we study consistency of the Bayes factors for nested normal linear models when the number of regressors increases with the sample size. We pay attention to two successful tools for model selection: the Schwarz [Ann. Statist. 6 (1978) 461--464] approximation to the Bayes factor, and the Bayes factor for intrinsic priors [Berger and Pericchi J. Amer. Statist. Assoc. 91 (1996) 109--122, Moreno, Bertolino and Racugno J. Amer. Statist. Assoc. 93 (1998) 1451--1460]. We find that the Schwarz approximation and the Bayes factor for intrinsic priors are consistent when the rate of growth of the dimension of the bigger model is O(n^b) for b < 1. When b = 1 the Schwarz approximation is always inconsistent under the alternative while the Bayes factor for intrinsic priors is consistent except for a small set of alternative models which is characterized. Comment: Published at http://dx.doi.org/10.1214/09-AOS754 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
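    The Schwarz (BIC) approximation to the Bayes factor mentioned above can be sketched on a toy nested comparison (a generic illustration with simulated data, not the paper's intrinsic-prior computation): for a normal linear model with p coefficients and unknown variance, BIC = n log(RSS/n) + p log n, and exp((BIC_0 - BIC_1)/2) approximates the Bayes factor of the bigger model M1 against the smaller M0.

```python
import math
import random

random.seed(0)

# Simulated data from the bigger model: y = 1 + 0.8 x + N(0, 1) noise
n = 200
x = [random.gauss(0, 1) for _ in range(n)]
y = [1 + 0.8 * xi + random.gauss(0, 1) for xi in x]

def rss_intercept_only(y):
    ybar = sum(y) / len(y)
    return sum((yi - ybar) ** 2 for yi in y)

def rss_simple_regression(x, y):
    m = len(y)
    xbar, ybar = sum(x) / m, sum(y) / m
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = ybar - b * xbar
    return sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))

def bic(rss, n, p):
    # Schwarz criterion for a normal linear model, variance profiled out
    return n * math.log(rss / n) + p * math.log(n)

bic0 = bic(rss_intercept_only(y), n, 1)       # M0: intercept only
bic1 = bic(rss_simple_regression(x, y), n, 2)  # M1: intercept + slope

log_bf_10 = (bic0 - bic1) / 2   # log Bayes factor of M1 vs M0 (Schwarz approx.)
print(log_bf_10 > 0)            # evidence favors the true, bigger model
```

Consistency in the paper's sense asks whether this quantity diverges in the right direction as n grows, which becomes delicate when the number of regressors grows with n.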

    Explaining the saddlepoint approximation

    Saddlepoint approximations are powerful tools for obtaining accurate expressions for densities and distribution functions. We give an elementary motivation and explanation of saddlepoint approximation techniques, stressing the connection with the familiar Taylor series expansions and the Laplace approximation of integrals. Saddlepoint methods are applied to the convolution of simple densities and, using the Fourier inversion formula, the saddlepoint approximation to the density of a random variable is derived. We then apply the method to densities of sample means of iid random variables, and also demonstrate the technique for approximating the density of a maximum likelihood estimator in exponential families.
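    The sample-mean application can be made concrete. Taking iid Exponential(1) variables as a worked case (our choice of distribution, not one named in the abstract), the cumulant generating function is K(t) = -log(1 - t), the saddlepoint equation K'(t̂) = x solves in closed form, and the approximation can be checked against the exact Gamma-based density of the mean:

```python
import math

def saddlepoint_mean_exp(x, n):
    """Saddlepoint approximation to the density of the mean of n iid
    Exponential(1) variables at x > 0.

    CGF: K(t) = -log(1 - t), K'(t) = 1/(1 - t), so K'(t_hat) = x gives
    t_hat = 1 - 1/x. The density of the mean is approximated by
    sqrt(n / (2*pi*K''(t_hat))) * exp(n*(K(t_hat) - t_hat*x)).
    """
    t_hat = 1.0 - 1.0 / x
    K = -math.log(1.0 - t_hat)        # equals log(x)
    K2 = 1.0 / (1.0 - t_hat) ** 2     # equals x**2
    return math.sqrt(n / (2 * math.pi * K2)) * math.exp(n * (K - t_hat * x))

def exact_mean_exp(x, n):
    """Exact density: the sum of n iid Exp(1) is Gamma(n, 1), so the
    mean has density n * (n*x)**(n-1) * exp(-n*x) / (n-1)!."""
    return n * (n * x) ** (n - 1) * math.exp(-n * x) / math.factorial(n - 1)

for n in (5, 20):
    x = 1.2
    approx, exact = saddlepoint_mean_exp(x, n), exact_mean_exp(x, n)
    print(n, round(approx / exact, 3))
```

The ratio of approximate to exact density here is the Stirling correction for n!, so it tightens toward 1 as n grows, illustrating the accuracy the abstract emphasizes.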

    Testing for the existence of clusters

    Detecting and determining clusters present in a certain sample has been an important concern, among researchers from different fields, for a long time. In particular, assessing whether the clusters are statistically significant is a question that has been asked by a number of experimenters. Recently, this question arose again in a study in maize genetics, where determining the significance of clusters is crucial as a primary step in the identification of a genome-wide collection of mutants that may affect the kernel composition. Although several efforts have been made in this direction, not much has been done with the aim of developing an actual hypothesis test in order to assess the significance of clusters. In this paper, we propose a new methodology that allows the examination of the hypothesis test H0: κ = 1 vs. H1: κ = k, where κ denotes the number of clusters present in a certain population. Our procedure, based on Bayesian tools, permits us to obtain closed form expressions for the posterior probabilities corresponding to the null hypothesis. From here, we calibrate our results by estimating the frequentist null distribution of the posterior probabilities in order to obtain the p-values associated with the observed posterior probabilities. In most cases, actual evaluation of the posterior probabilities is computationally intensive and several algorithms have been discussed in the literature. Here, we propose a simple estimation procedure, based on MCMC techniques, that permits an efficient and easily implementable evaluation of the test. Finally, we present simulation studies that support our conclusions, and we apply our method to the analysis of NIR spectroscopy data coming from the genetic study that motivated this work. Peer Reviewed.

    Consistency of Bayesian procedures for variable selection

    It has long been known that for the comparison of pairwise nested models, a decision based on the Bayes factor produces a consistent model selector (in the frequentist sense). Here we go beyond the usual consistency for nested pairwise models, and show that for a wide class of prior distributions, including intrinsic priors, the corresponding Bayesian procedure for variable selection in normal regression is consistent in the entire class of normal linear models. We find that the asymptotics of the Bayes factors for intrinsic priors are equivalent to those of the Schwarz (BIC) criterion. Also, recall that the Jeffreys--Lindley paradox refers to the well-known fact that a point null hypothesis on the normal mean parameter is always accepted when the variance of the conjugate prior goes to infinity. This implies that some limiting forms of proper prior distributions are not necessarily suitable for testing problems. Intrinsic priors are limits of proper prior distributions, and for finite sample sizes they have been proved to behave extremely well for variable selection in regression; a consequence of our results is that for intrinsic priors Lindley's paradox does not arise. Comment: Published at http://dx.doi.org/10.1214/08-AOS606 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
    • …