
    Restricted best linear unbiased prediction using canonical transformation


    BLUP for Merino breeding

    Best Linear Unbiased Prediction (BLUP) experiences of farmers into their genetic breeding programs

    Nearest neighbor adjusted best linear unbiased prediction in field experiments

    In field experiments with large numbers of treatments, inference can be affected by (1) local variation and (2) the method of analysis. The standard approach to local, or spatial, variation in the design of experiments is blocking. While the randomized complete block design is obviously unsuitable for experiments with large numbers of treatments, incomplete block designs, even apparently well-chosen ones, may be only partial solutions. Various nearest neighbor adjustment procedures are an alternative approach to spatial variation. Treatment effects are usually estimated using standard linear model methods: linear unbiased estimates are obtained using ordinary least squares or, for example when nearest neighbor adjustments are used, generalized least squares. This follows from regarding treatment as a fixed effect. However, when there are large numbers of treatments, regarding treatment as a random effect and obtaining best linear unbiased predictors (BLUP) can improve precision. Nearest neighbor methods and BLUP have had largely parallel development. The purpose of this paper is to put them together.
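The shrinkage that BLUP applies to random treatment effects can be sketched with Henderson's mixed-model equations. The design (50 treatments, 4 replicates) and variance components below are illustrative assumptions, not taken from the paper; the example shows how treating treatment as random shrinks raw treatment deviations toward the overall mean by a factor determined by the variance ratio.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical balanced trial: 50 treatments, 4 replicates each.
n_trt, n_rep = 50, 4
Z = np.kron(np.eye(n_trt), np.ones((n_rep, 1)))    # treatment incidence matrix (200 x 50)
u_true = rng.normal(0, 1.0, n_trt)                  # true treatment effects, var = 1
y = 10.0 + Z @ u_true + rng.normal(0, 2.0, n_trt * n_rep)  # residual var = 4

# Henderson's mixed-model equations with known variance ratio
# lambda = sigma_e^2 / sigma_u^2.
lam = 4.0 / 1.0
X = np.ones((n_trt * n_rep, 1))                     # intercept only
C = np.block([[X.T @ X, X.T @ Z],
              [Z.T @ X, Z.T @ Z + lam * np.eye(n_trt)]])
rhs = np.concatenate([X.T @ y, Z.T @ y])
sol = np.linalg.solve(C, rhs)
beta_hat, u_blup = sol[0], sol[1:]

# In a balanced design, the BLUPs are the raw treatment deviations
# shrunk by n_rep / (n_rep + lambda).
raw_dev = Z.T @ (y - beta_hat) / n_rep
shrink = n_rep / (n_rep + lam)                      # = 0.5 here
print(np.allclose(u_blup, shrink * raw_dev))        # True
```

The shrinkage factor follows directly from the second block of the mixed-model equations: (Z'Z + lambda*I)u = Z'(y - X*beta), which in a balanced design reduces to (n_rep + lambda)*u_i = n_rep * (mean_i - beta).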

    Adding gene transcripts into genomic prediction improves accuracy and reveals sampling time dependence.

    Recent developments have allowed the generation of multiple high-quality 'omics' data that could increase the predictive performance of genomic prediction for phenotypes and genetic merit in animals and plants. Here, we assessed the performance of parametric and nonparametric models that leverage transcriptomics in genomic prediction for 13 complex traits recorded in 478 animals from an outbred mouse population. Parametric models were implemented using best linear unbiased prediction, while nonparametric models were implemented using the gradient boosting machine algorithm. We also propose a new model named GTCBLUP that aims to remove between-omics-layer covariance from predictors, whereas its counterpart GTBLUP does not. While gradient boosting machine models captured more phenotypic variation, their predictive performance did not exceed that of the best linear unbiased prediction models for most traits. Models leveraging gene transcripts captured higher proportions of the phenotypic variance for almost all traits when these were measured closer to the moment of measuring gene transcripts in the liver. In most cases, the combination of layers was not able to outperform the best single-omics models for predicting phenotypes. Using only gene transcripts, the gradient boosting machine model was able to outperform best linear unbiased prediction for most traits except body weight, but the same pattern was not observed when using both single nucleotide polymorphism genotypes and gene transcripts. Although the GTCBLUP model was not able to produce the most accurate phenotypic predictions, it showed the highest accuracies for breeding values for 9 out of 13 traits. We recommend using the GTBLUP model for prediction of phenotypes and the GTCBLUP model for prediction of breeding values.
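A multi-layer BLUP of the kind the abstract describes can be sketched as a multi-kernel model: one relationship matrix built from SNP genotypes and one from gene transcripts, combined to predict phenotypes. This is a generic sketch under equal kernel weights and a single variance ratio, not the paper's GTBLUP or GTCBLUP specification (the abstract does not give the covariance-removal step); all dimensions and data are simulated placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 100 animals, 500 SNPs, 200 gene transcripts.
n, m_snp, m_tr = 100, 500, 200
M = rng.choice([0.0, 1.0, 2.0], size=(n, m_snp))    # SNP genotype codes
T = rng.normal(size=(n, m_tr))                       # transcript abundances

def rel_matrix(W):
    """Linear relationship (kernel) matrix from a column-centered layer."""
    Wc = W - W.mean(axis=0)
    return Wc @ Wc.T / W.shape[1]

G = rel_matrix(M)        # genomic kernel
Tk = rel_matrix(T)       # transcriptomic kernel
y = rng.normal(size=n)   # phenotype (placeholder)

# Two-layer kernel BLUP: y = mu + g + t + e.  With equal weights the
# combined kernel is G + Tk; solve the BLUP system for the fitted values.
lam = 1.0                                            # assumed variance ratio
K = G + Tk
mu = y.mean()
alpha = np.linalg.solve(K + lam * np.eye(n), y - mu)
y_hat = mu + K @ alpha                               # shrunken fitted values
```

In practice the two kernels would carry separate, estimated variance components; collapsing them into one kernel with equal weights is only for compactness here.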

    Predictive ability of genome-assisted statistical models under various forms of gene action

    Get PDF
    Recent work has suggested that the performance of prediction models for complex traits may depend on the architecture of the target traits. Here we compared several prediction models with respect to their ability to predict phenotypes under various statistical architectures of gene action: (1) purely additive, (2) additive and dominance, (3) additive, dominance, and two-locus epistasis, and (4) purely epistatic settings. Simulation and a real chicken dataset were used. Fourteen prediction models were compared: BayesA, BayesB, BayesC, Bayesian LASSO, Bayesian ridge regression, elastic net, genomic best linear unbiased prediction, a Gaussian process, LASSO, random forests, reproducing kernel Hilbert spaces regression, ridge regression (best linear unbiased prediction), relevance vector machines, and support vector machines. When the trait was under additive gene action, the parametric prediction models outperformed non-parametric ones. Conversely, when the trait was under epistatic gene action, the non-parametric prediction models provided more accurate predictions. Thus, prediction models must be selected according to the most probable underlying architecture of the traits. In the chicken dataset examined, most models had similar prediction performance. Our results corroborate the view that there is no universally best prediction model, and that the development of robust prediction models is an important research objective.
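Two of the models listed, ridge regression (best linear unbiased prediction) on marker effects and genomic BLUP on a marker-derived kernel, are algebraically equivalent, which is worth seeing concretely. The simulated marker data and regularization value below are illustrative assumptions; the identity X(X'X + lambda*I)^{-1}X'y = XX'(XX' + lambda*I)^{-1}y itself is standard linear algebra.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated additive trait: 200 individuals, 1000 markers, 50 QTL.
n, m = 200, 1000
X = rng.choice([0.0, 1.0, 2.0], size=(n, m))
X -= X.mean(axis=0)                      # center marker codes
b_true = np.zeros(m)
qtl = rng.choice(m, 50, replace=False)
b_true[qtl] = rng.normal(0, 0.5, 50)
y = X @ b_true + rng.normal(0, 1.0, n)

# Ridge-regression BLUP of marker effects: b_hat = (X'X + lambda*I)^{-1} X'y.
lam = 10.0
b_hat = np.linalg.solve(X.T @ X + lam * np.eye(m), X.T @ y)

# Equivalent GBLUP form: same genetic values via the n x n kernel XX'.
K = X @ X.T
g_gblup = K @ np.linalg.solve(K + lam * np.eye(n), y)
g_ridge = X @ b_hat

print(np.allclose(g_gblup, g_ridge))     # True
```

The kernel form solves an n x n system instead of an m x m one, which is why GBLUP is preferred when markers greatly outnumber individuals.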

    Parametric bootstrap approximation to the distribution of EBLUP and related prediction intervals in linear mixed models

    The empirical best linear unbiased prediction (EBLUP) method uses a linear mixed model to combine information from different sources. This method is particularly useful in small area problems. The variability of an EBLUP is traditionally measured by the mean squared prediction error (MSPE), and interval estimates are generally constructed using estimates of the MSPE. Such methods have shortcomings such as under-coverage or over-coverage, excessive length, and lack of interpretability. We propose a parametric bootstrap approach to estimate the entire distribution of a suitably centered and scaled EBLUP. The bootstrap histogram is highly accurate, and differs from the true EBLUP distribution by only O(d^3 n^{-3/2}), where d is the number of parameters and n the number of observations. This result is used to obtain highly accurate prediction intervals. Simulation results demonstrate the superiority of this method over existing techniques for constructing prediction intervals in linear mixed models. Published in the Annals of Statistics (http://dx.doi.org/10.1214/07-AOS512) by the Institute of Mathematical Statistics (http://www.imstat.org).
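The bootstrap scheme described above can be sketched on the simplest small-area setting: an area-level model y_i = theta_i + e_i with theta_i ~ N(mu, A) and known sampling variance D. The moment estimator of A, the number of areas, and the variance values are illustrative assumptions, not the paper's exact estimator; the point is the mechanics of resampling from the fitted model, re-estimating, and collecting centered EBLUP prediction errors to form an interval.

```python
import numpy as np

rng = np.random.default_rng(3)

# Area-level model: y_i = theta_i + e_i, theta_i ~ N(mu, A), e_i ~ N(0, D).
m = 30
A_true, D = 1.0, 0.5                       # D is the known sampling variance
theta = rng.normal(5.0, np.sqrt(A_true), m)
y = theta + rng.normal(0, np.sqrt(D), m)

def eblup(y, D):
    """Moment estimates and EBLUP for the simple area-level model."""
    mu_hat = y.mean()
    A_hat = max(y.var(ddof=1) - D, 0.0)    # method-of-moments estimate of A
    gamma = A_hat / (A_hat + D)
    return mu_hat, A_hat, gamma * y + (1 - gamma) * mu_hat

mu_hat, A_hat, pred = eblup(y, D)

# Parametric bootstrap: draw data from the fitted model, re-estimate the
# parameters, and track the centered prediction error for one area.
B, errs = 2000, []
for _ in range(B):
    th_b = rng.normal(mu_hat, np.sqrt(A_hat), m)
    y_b = th_b + rng.normal(0, np.sqrt(D), m)
    _, _, pred_b = eblup(y_b, D)
    errs.append(pred_b[0] - th_b[0])       # error for area 1

# Quantiles of the bootstrap error distribution give a prediction interval
# that reflects the extra variability from estimating A and mu.
lo, hi = np.quantile(errs, [0.025, 0.975])
interval = (pred[0] + lo, pred[0] + hi)
```

Unlike MSPE-based normal intervals, this interval inherits any asymmetry in the EBLUP error distribution, which is the interpretability gain the abstract alludes to.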