
    Phenotypic evolution studied by layered stochastic differential equations

    Time series of cell size evolution in unicellular marine algae (division Haptophyta; Coccolithus lineage), covering 57 million years, are studied by a system of linear stochastic differential equations with hierarchical structure. The data consist of size measurements of fossilized calcite platelets (coccoliths) that cover the living cell, found in deep-sea sediment cores from six sites in the world oceans and dated to irregular points in time. To accommodate biological theory of populations tracking their fitness optima, and to allow potentially interpretable correlations in time and space, the model framework allows for an upper layer of partially observed site-specific population means, a layer of site-specific theoretical fitness optima, and a bottom layer representing environmental and ecological processes. While the modeled process has many components, it is Gaussian and analytically tractable. A total of 710 model specifications within this framework are compared and inference is drawn with respect to model structure, evolutionary speed and the effect of global temperature. Comment: Published at http://dx.doi.org/10.1214/12-AOAS559 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
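    A layered system of linear SDEs like the one described can be pictured as a chain of Ornstein–Uhlenbeck-type processes, each layer pulled toward the layer below it. The following is a minimal simulation sketch, not the authors' model: all parameter values and the three-layer wiring (environment, fitness optimum, population mean) are illustrative assumptions, discretized with Euler–Maruyama.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps = 0.01, 5000
a_env, a_opt, a_pop = 0.5, 1.0, 2.0   # mean-reversion / tracking rates (assumed)
s_env, s_opt, s_pop = 0.3, 0.2, 0.1   # diffusion scales (assumed)

env = np.zeros(n_steps)  # bottom layer: environmental/ecological process (OU around 0)
opt = np.zeros(n_steps)  # middle layer: theoretical fitness optimum tracking env
pop = np.zeros(n_steps)  # top layer: population mean tracking the optimum

for t in range(1, n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=3)  # independent Brownian increments
    env[t] = env[t-1] - a_env * env[t-1] * dt + s_env * dW[0]
    opt[t] = opt[t-1] + a_opt * (env[t-1] - opt[t-1]) * dt + s_opt * dW[1]
    pop[t] = pop[t-1] + a_pop * (opt[t-1] - pop[t-1]) * dt + s_pop * dW[2]
```

    Because every layer is linear with Gaussian noise, the joint process stays Gaussian, which is what makes the full hierarchical likelihood analytically tractable despite the many components.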

    Analysis of the Gibbs sampler for hierarchical inverse problems

    Many inverse problems arising in applications come from continuum models where the unknown parameter is a field. In practice the unknown field is discretized, resulting in a problem in $\mathbb{R}^N$, with the understanding that refining the discretization, that is, increasing $N$, will often be desirable. In the context of Bayesian inversion this situation raises two issues: (i) defining hyper-parameters in such a way that they are interpretable in the continuum limit $N \to \infty$ and so that their values may be compared between different discretization levels; (ii) understanding the efficiency of algorithms for probing the posterior distribution as a function of large $N$. Here we address these two issues in the context of linear inverse problems subject to additive Gaussian noise, within a hierarchical modelling framework based on a Gaussian prior for the unknown field and an inverse-gamma prior for a hyper-parameter, namely the amplitude of the prior variance. The structure of the model is such that the Gibbs sampler can be easily implemented for probing the posterior distribution. Subscribing to the dogma that one should think infinite-dimensionally before implementing in finite dimensions, we present function-space intuition and provide rigorous theory showing that as $N$ increases, the component of the Gibbs sampler for sampling the amplitude of the prior variance becomes increasingly slow. We discuss a reparametrization of the prior variance that is robust with respect to the increase in dimension; we give numerical experiments which show that this reparametrization prevents the slowing down. Our intuition on the behaviour of the prior hyper-parameter, with and without reparametrization, is sufficiently general to include a broad class of nonlinear inverse problems as well as other families of hyper-priors. Comment: to appear, SIAM/ASA Journal on Uncertainty Quantification

    New important developments in small area estimation

    The purpose of this paper is to review and discuss some of the important new developments in small area estimation (SAE) methods. Rao (2003) wrote a very comprehensive book covering all the main developments in this topic up to that time, so the focus of this review is on new developments in the last 7 years. However, to make the review more self-contained, I also briefly revisit some of the older developments. The review covers both design-based and model-dependent methods, with emphasis on the prediction of the area target quantities and the assessment of the prediction error. The style of the paper is similar to that of my previous review on SAE published in 2002: explaining the new problems investigated and describing the proposed solutions, but without dwelling on theoretical details, which can be found in the original articles. I hope that this paper will be useful both to researchers who would like to learn more about the research carried out in SAE and to practitioners who might be interested in applying the new methods.

    The Hachemeister Regression Model

    In this article we obtain, just as in the classical credibility model, a credibility solution in the form of a linear combination of the individual estimate (based on the data of a particular state) and the collective estimate (based on aggregate USA data). Mathematics Subject Classification: 62P05. Keywords: linearized regression credibility premium, structural parameters, unbiased estimators.
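    The linear combination described is the standard credibility-weighted form: a weight $Z$ on the individual estimate and $1-Z$ on the collective one. A minimal Bühlmann-style sketch, not the paper's regression-model estimator; the credibility constant k and the example numbers are illustrative assumptions:

```python
import numpy as np

def credibility_premium(individual_mean, collective_mean, n, k):
    """Credibility weighting: Z = n / (n + k) grows with the volume n of
    individual data; k is the (assumed) ratio of within-risk to
    between-risk variance."""
    Z = n / (n + k)
    return Z * individual_mean + (1 - Z) * collective_mean

# Example: a state with 20 observations averaging 120, collective mean 100, k = 5
premium = credibility_premium(120.0, 100.0, n=20, k=5.0)  # Z = 0.8, premium = 116.0
```

    With no individual data (n = 0) the premium falls back to the collective estimate; as n grows, Z approaches 1 and the individual estimate dominates. The Hachemeister model generalizes this by replacing the means with regression estimates.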

    Selection of Credibility Regression Models

    We derive decision rules to select the best predictive regression models in a credibility context, that is, in a "random effects" linear regression model with replicates. In contrast to the usual model selection techniques on a collective level, our proposal allows one to detect individual structures, even if they disappear in the collective. We give exact, non-asymptotic results for the expected squared error loss of a predictor based on credibility estimation in different models. This involves correct accounting of random model parameters and the study of expected loss for shrinkage estimation. We support the theoretical properties of the new model selectors by a small simulation experiment.
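    The paper's selection rules are exact and non-asymptotic; as a rough empirical analogue, one can compare candidate per-risk regression models by held-out squared prediction error in a random-effects setup. The following sketch is an assumed toy illustration, not the authors' rule: each risk has its own slope drawn around a collective value, and a constant-mean model is compared against a per-risk linear trend.

```python
import numpy as np

rng = np.random.default_rng(2)
n_risks, n_obs = 30, 8
t = np.arange(n_obs, dtype=float)

b = rng.normal(0.5, 0.2, n_risks)                       # individual slopes (random effects)
X = b[:, None] * t[None, :] + rng.normal(0, 0.3, (n_risks, n_obs))

errs = {"constant": 0.0, "linear": 0.0}
for i in range(n_risks):
    train, target = X[i, :-1], X[i, -1]                 # hold out the last observation
    # Candidate 1: constant mean per risk
    errs["constant"] += (target - train.mean()) ** 2
    # Candidate 2: least-squares linear trend in time per risk
    coef = np.polyfit(t[:-1], train, 1)
    errs["linear"] += (target - np.polyval(coef, t[-1])) ** 2

best = min(errs, key=errs.get)  # model with the smallest aggregate prediction error
```

    Here the individual trends persist in the collective, so the linear model wins; the point of the paper's individual-level selectors is precisely to handle cases where such structure would vanish when risks are pooled.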
