    Comment on Article by Berger, Bernardo, and Sun

    Discussion of "Overall Objective Priors" by James O. Berger, Jose M. Bernardo, and Dongchu Sun [arXiv:1504.02689]. Published at http://dx.doi.org/10.1214/14-BA938 in Bayesian Analysis (http://projecteuclid.org/euclid.ba) by the International Society for Bayesian Analysis (http://bayesian.org/).

    A two-component normal mixture alternative to the Fay-Herriot model

    This article considers a robust hierarchical Bayesian approach to deal with random effects of small area means when some of these effects take extreme values, resulting in outliers. In the presence of outliers, the standard Fay-Herriot model, used for modeling area-level data under normality assumptions on the random effects, may overestimate the random effects variance, thus providing less than ideal shrinkage towards the synthetic regression predictions and inhibiting the borrowing of information. Even a small number of substantive outliers among the random effects results in a large estimate of the random effects variance in the Fay-Herriot model, thereby achieving little shrinkage to the synthetic part of the model and little reduction in the posterior variance associated with the regular Bayes estimator for any of the small areas. While a scale mixture of normal distributions with a known mixing distribution for the random effects has been found to be effective in the presence of outliers, the solution depends on the mixing distribution. As an alternative solution to the problem, a two-component normal mixture model is proposed, based on noninformative priors on the model variance parameters, the regression coefficients, and the mixing probability. Data analyses and simulation studies based on real, simulated, and synthetic data show an advantage of the proposed method over the standard Bayesian Fay-Herriot solution derived under normality of the random effects.
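
    For orientation, the area-level Fay-Herriot model referred to above can be written, in generic notation (mine, not necessarily the paper's), as

        y_i = \theta_i + e_i, \qquad e_i \sim N(0, D_i), \quad D_i \text{ known},
        \theta_i = x_i^\top \beta + u_i, \qquad u_i \sim N(0, A),

    and the proposed alternative replaces the single normal law for the random effects u_i by a two-component normal mixture with mixing probability p,

        u_i \sim p\, N(0, \sigma_1^2) + (1 - p)\, N(0, \sigma_2^2),

    so that a small second component with large variance can absorb outlying areas without inflating the variance that drives shrinkage for the remaining areas.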

    An alternative derivation of the distributions of the maximum likelihood estimators of the parameters in an inverse Gaussian distribution

    We provide a simpler derivation of the sampling properties of the maximum likelihood estimators of the parameters in an inverse Gaussian distribution.
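
    For reference, the classical results in question (usually attributed to Tweedie) are as follows; this is the standard textbook statement, not the paper's own derivation. If X_1, ..., X_n are i.i.d. IG(\mu, \lambda), the maximum likelihood estimators are \hat{\mu} = \bar{X} and \hat{\lambda}^{-1} = n^{-1} \sum_i (X_i^{-1} - \bar{X}^{-1}), and

        \bar{X} \sim IG(\mu, n\lambda), \qquad \lambda \sum_{i=1}^n (X_i^{-1} - \bar{X}^{-1}) \sim \chi^2_{n-1},

    with the two statistics independent, a close analogue of the normal-theory facts about (\bar{X}, s^2).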

    Probability matching priors: higher order asymptotics

    Pitman's measure of closeness for symmetric stable distributions

    This paper considers symmetric stable distributions with different exponents γ (0 < γ ≤ 2).
    Keywords: Pitman closeness; measure of concentration; stable distributions; symmetric; sample averages; weighted; normal; Cauchy
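
    For reference, Pitman's closeness criterion in its standard form: an estimator \delta_1 of \theta is Pitman-closer than \delta_2 if

        P_\theta\big( |\delta_1 - \theta| < |\delta_2 - \theta| \big) \ge \tfrac{1}{2} \quad \text{for all } \theta,

    with strict inequality for some \theta. This is the textbook definition, not a statement of the paper's results.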

    Small Area Estimation With Uncertain Random Effects

    Random effects models play an important role in model-based small area estimation. Random effects account for any lack of fit of a regression model for the population means of small areas on a set of explanatory variables. In a recent article, Datta, Hall, and Mandal showed that if the random effects can be dispensed with via a suitable test, then the model parameters and the small area means may be estimated with substantially higher accuracy. The work of Datta, Hall, and Mandal is most useful when the number of small areas, m, is moderately large. For large m, the null hypothesis of no random effects will likely be rejected. Rejection of the null hypothesis is usually caused by a few large residuals signifying a departure of the direct estimator from the synthetic regression estimator. As a flexible alternative to the Fay-Herriot random effects model and the approach in Datta, Hall, and Mandal, in this article we consider a mixture model for random effects. It is reasonable to expect that small areas with population means explained adequately by covariates have little model error, while the other areas, whose means are not adequately explained by covariates, require a random component added to the regression model. This model is a useful alternative to the usual random effects model: the data determine the extent of lack of fit of the regression model for a particular small area and include a random effect only if needed. Unlike the Datta, Hall, and Mandal approach, which recommends excluding random effects from all small areas if a test of the null hypothesis of no random effects is not rejected, the present model is more flexible. We used this mixture model to estimate poverty ratios for related children aged 5-17 in the 50 U.S. states and Washington, DC, an application motivated by the SAIPE project of the U.S. Census Bureau. We empirically evaluated the accuracy of the direct estimates and the estimates obtained from our mixture model and the Fay-Herriot random effects model. These empirical evaluations and a simulation study, in conjunction with a lower posterior variance of the new estimates, show that the new estimates are more accurate than both the frequentist and the Bayes estimates resulting from the standard Fay-Herriot model. Supplementary materials for this article are available online.
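
    One standard way to formalize "include a random effect only if needed," consistent with the mixture described above (the notation is illustrative, not necessarily the paper's exact specification), is a spike-and-slab form for the area-level model error:

        \theta_i = x_i^\top \beta + \delta_i u_i, \qquad \delta_i \sim \text{Bernoulli}(p), \quad u_i \sim N(0, A),

    so that areas whose means are well explained by the covariates effectively get \delta_i = 0 (no random effect), while poorly fitted areas get \delta_i = 1, with the data driving p and the individual \delta_i.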

    Estimation, prediction and the Stein phenomenon under divergence loss

    We consider two problems: (1) estimate a normal mean under a general divergence loss introduced in [S. Amari, Differential geometry of curved exponential families -- curvatures and information loss, Ann. Statist. 10 (1982) 357-387] and [N. Cressie, T.R.C. Read, Multinomial goodness-of-fit tests, J. Roy. Statist. Soc. Ser. B 46 (1984) 440-464], and (2) find a predictive density of a new observation drawn independently of observations sampled from a normal distribution with the same mean but possibly with a different variance, under the same loss. The general divergence loss includes as special cases both the Kullback-Leibler and Bhattacharyya-Hellinger losses. The sample mean, which is a Bayes estimator of the population mean under this loss and the improper uniform prior, is shown to be minimax in any arbitrary dimension. A counterpart of this result for the predictive density is also proved in any arbitrary dimension. The admissibility of these rules holds in one dimension, and we conjecture that the result is true in two dimensions as well. However, the general Baranchik [A.J. Baranchik, A family of minimax estimators of the mean of a multivariate normal distribution, Ann. Math. Statist. 41 (1970) 642-645] class of estimators, which includes the James-Stein estimator and the Strawderman [W.E. Strawderman, Proper Bayes minimax estimators of the multivariate normal mean, Ann. Math. Statist. 42 (1971) 385-388] class of estimators, dominates the sample mean in three or higher dimensions for the estimation problem. An analogous class of predictive densities is defined, and any member of this class is shown to dominate the predictive density corresponding to a uniform prior in three or higher dimensions. For the prediction problem, in the special case of Kullback-Leibler loss, our results complement, to a certain extent, some of the recent important work of Komaki [F. Komaki, A shrinkage predictive distribution for multivariate normal observations, Biometrika 88 (2001) 859-864] and George, Liang, and Xu [E.I. George, F. Liang, X. Xu, Improved minimax predictive densities under Kullback-Leibler loss, Ann. Statist. 34 (2006) 78-92], while our proposed approach produces a general class of predictive densities (not necessarily Bayes, but not excluding Bayes predictors) dominating the predictive density under a uniform prior. We also show that various modifications of the James-Stein estimator continue to dominate the sample mean and, by the duality of estimation and predictive density results that we establish, similar results continue to hold for the prediction problem as well.
    MSC: 62C15; 62C20; 62C12. Keywords: Admissibility; Baranchik class; Bhattacharyya-Hellinger loss; Empirical Bayes; Kullback-Leibler loss; Minimaxity
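
    Two standard ingredients mentioned in the abstract, stated here for reference in conventional notation (these are the classical definitions, not necessarily the paper's exact parameterization). One common divergence family containing both named losses is

        D_\beta(f, g) = \frac{1 - \int f^\beta g^{1-\beta}\, dx}{\beta(1-\beta)}, \qquad 0 < \beta < 1,

    which tends to the Kullback-Leibler divergence as \beta \to 0 or 1 and reduces, up to a constant, to the squared Bhattacharyya-Hellinger distance at \beta = 1/2. The Baranchik class for X \sim N_p(\theta, I) consists of estimators

        \delta_r(X) = \left(1 - \frac{r(\|X\|^2)}{\|X\|^2}\right) X,

    with r(\cdot) nondecreasing and 0 \le r(\cdot) \le 2(p - 2); the choice r \equiv p - 2 gives the James-Stein estimator.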