
    Likelihood Inference for Models with Unobservables: Another View

    There have been controversies among statisticians on (i) what to model and (ii) how to make inferences from models with unobservables. One such controversy concerns the difference between estimation methods for marginal means, which need not have a probabilistic basis, and statistical models with unobservables, which do. Another concerns likelihood-based inference for statistical models with unobservables. This requires an extended-likelihood framework, and we show how one such extension, hierarchical likelihood, allows this to be done. Modeling of unobservables leads to rich classes of new probabilistic models from which likelihood-type inferences can be made naturally with hierarchical likelihood. Comment: Discussed in [arXiv:1010.0804], [arXiv:1010.0807] and [arXiv:1010.0810]; rejoinder at [arXiv:1010.0814]. Published in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org), DOI: http://dx.doi.org/10.1214/09-STS277.
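    As a minimal sketch of the idea (the notation here is assumed for illustration, not taken from the paper): for data y, unobservables v and fixed parameters theta, the hierarchical likelihood is the joint log density of the data and the unobservables, and the usual marginal likelihood is recovered by integrating the unobservables out:

        h(\theta, v; y) = \log f_\theta(y \mid v) + \log f_\theta(v),
        \qquad
        L(\theta; y) = \int \exp\{ h(\theta, v; y) \}\, dv .

    Inference about v can then be based on h directly, while inference about theta corresponds to the integral above, often handled via a Laplace-type adjustment of h rather than explicit integration.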

    Bayesian inference for quantiles and conditional means in log-normal models

    The main topic of the thesis is the proper execution of Bayesian inference when log-normality is assumed for the data. Particular care is known to be required in this context, since the most common prior distributions for the variance on the log scale produce posteriors for the log-normal mean that do not have finite moments. Hence, classical summary measures of the posterior such as the expectation and variance cannot be computed for these distributions. The thesis proposes solutions for carrying out Bayesian inference inside a mathematically coherent framework, focusing on the estimation of two quantities: log-normal quantiles (first part of the thesis) and conditional expectations under a general log-normal linear mixed model (second part of the thesis). Moreover, in the latter part, a further investigation of a unit-level small area model is presented, considering the problem of estimating the well-known log-transformed Battese, Harter and Fuller model in the hierarchical Bayes context. Once existence conditions for the posterior moments of the target functionals are proved, new strategies to specify prior distributions are suggested. The frequentist properties of the resulting Bayes estimators and credible intervals are then evaluated through extensive simulation studies: the proposed methodologies improve on Bayesian estimates under naive prior settings and are competitive with the frequentist solutions available in the literature. To conclude, applications of the developed inferential strategies are illustrated on real datasets. The work is completed by the implementation of an R package named BayesLN, which allows users to easily carry out Bayesian inference for log-normal data.
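    A one-line illustration of the non-existence issue (a standard argument, not specific to the thesis): if Y is log-normal with parameters mu and sigma^2, its mean is exp(mu + sigma^2/2); under, say, an inverse-gamma posterior for sigma^2, the posterior expectation of this quantity diverges because the exponential factor grows faster than the inverse-gamma tail decays:

        E[Y] = e^{\mu + \sigma^2/2},
        \qquad
        \sigma^2 \mid \text{data} \sim \mathrm{Inv\text{-}Gamma}(a, b)
        \;\Rightarrow\;
        E\!\left[ e^{\sigma^2/2} \mid \text{data} \right]
        = \int_0^\infty e^{s/2}\, \frac{b^a}{\Gamma(a)}\, s^{-a-1} e^{-b/s}\, ds = \infty .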

    Cleaning large correlation matrices: tools from random matrix theory

    This review covers recent results concerning the estimation of large covariance matrices using tools from Random Matrix Theory (RMT). We introduce several RMT methods and analytical techniques, such as the Replica formalism and Free Probability, with an emphasis on the Marchenko-Pastur equation that provides information on the resolvent of multiplicatively corrupted noisy matrices. Special care is devoted to the statistics of the eigenvectors of the empirical correlation matrix, which turn out to be crucial for many applications. We show in particular how these results can be used to build consistent "Rotationally Invariant" estimators (RIE) for large correlation matrices when there is no prior on the structure of the underlying process. The last part of this review is dedicated to some real-world applications within financial markets as a case in point. We establish empirically the efficacy of the RIE framework, which is found to be superior in this case to all previously proposed methods. The case of additively (rather than multiplicatively) corrupted noisy matrices is also dealt with in a special Appendix. Several open problems and interesting technical developments are discussed throughout the paper. Comment: 165 pages, article submitted to Physics Reports.
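    A minimal sketch of RMT-based cleaning by Marchenko-Pastur eigenvalue clipping, a simpler baseline than the rotationally invariant estimator developed in the review; the function name, the (T, N) input layout and the unit-variance MP edge are illustrative assumptions.

        import numpy as np

        def mp_clip_clean(returns):
            """Clean a sample correlation matrix by Marchenko-Pastur eigenvalue clipping.

            `returns` is a hypothetical (T, N) array of standardized observations
            (T dates, N assets). Eigenvalues below the unit-variance MP upper edge
            are treated as noise and replaced by their average (trace-preserving),
            while the empirical eigenvectors are kept unchanged.
            """
            T, N = returns.shape
            q = N / T                                        # aspect ratio of the data matrix
            C = np.corrcoef(returns, rowvar=False)           # empirical correlation matrix
            eigvals, eigvecs = np.linalg.eigh(C)
            lambda_plus = (1.0 + np.sqrt(q)) ** 2            # upper Marchenko-Pastur edge (unit variance)
            noise = eigvals < lambda_plus                    # eigenvalues compatible with pure noise
            cleaned = eigvals.copy()
            if noise.any():
                cleaned[noise] = cleaned[noise].mean()       # flatten the noise band
            C_clean = (eigvecs * cleaned) @ eigvecs.T        # rebuild with original eigenvectors
            d = np.sqrt(np.diag(C_clean))
            return C_clean / np.outer(d, d)                  # renormalize to unit diagonal

    Averaging the sub-edge eigenvalues preserves their total sum, so the cleaned matrix keeps the trace of the noise band while removing its spurious structure.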

    Bayesian Regularisation in Structured Additive Regression Models for Survival Data

    During recent years, penalized likelihood approaches have attracted a lot of interest both in the area of semiparametric regression and for the regularization of high-dimensional regression models. In this paper, we introduce a Bayesian formulation that allows us to combine both aspects into a joint regression model, with a focus on hazard regression for survival times. While Bayesian penalized splines form the basis for estimating nonparametric and flexible time-varying effects, regularization of high-dimensional covariate vectors is based on scale-mixture-of-normals priors. This class of priors allows us to keep a (conditional) Gaussian prior for the regression coefficients at the predictor stage of the model, but introduces suitable mixture distributions for the Gaussian variance to achieve regularization. This scale mixture property allows us to devise general and adaptive Markov chain Monte Carlo simulation algorithms for fitting a variety of hazard regression models. In particular, unifying algorithms based on iteratively weighted least squares proposals can be employed both for regularization and for penalized semiparametric function estimation. Since sampling-based estimates no longer have the variable selection property well known for the Lasso in frequentist analyses, we additionally consider spike and slab priors that introduce a further mixing stage and thereby allow us to separate influential from redundant parameters. We demonstrate the different shrinkage properties in three simulation settings and apply the methods to the PBC Liver dataset.
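    For concreteness, the scale-mixture construction can be illustrated with the Laplace (Bayesian Lasso) prior, a standard member of this class; the specific mixing distributions used in the paper may differ. Conditional on the mixing variance, the coefficient prior stays Gaussian, which is what keeps IWLS-type proposals available:

        \beta_j \mid \tau_j^2 \sim \mathcal{N}(0, \tau_j^2),
        \qquad
        \tau_j^2 \sim \mathrm{Exp}\!\left(\lambda^2/2\right)
        \;\Longrightarrow\;
        p(\beta_j \mid \lambda)
        = \int_0^\infty \mathcal{N}(\beta_j \mid 0, \tau^2)\,
          \frac{\lambda^2}{2}\, e^{-\lambda^2 \tau^2/2}\, d\tau^2
        = \frac{\lambda}{2}\, e^{-\lambda |\beta_j|} .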