
    Prioritizing animals for dense genotyping in order to impute missing genotypes of sparsely genotyped animals

    Background: Genotyping accounts for a substantial part of the cost of genomic selection (GS). Using both dense and sparse SNP chips, together with imputation of missing genotypes, can reduce these costs. The aim of this study was to identify the set of candidates that are most important for dense genotyping when they are used to impute the genotypes of sparsely genotyped animals. In a real pig pedigree, the 2500 most recently born pigs of the last generation, i.e. the target animals, were sparsely genotyped. Their missing genotypes were imputed using either Beagle or LDMIP from T densely genotyped candidates chosen from the whole pedigree. A new optimization method was derived to identify the best animals for dense genotyping by minimizing the conditional genetic variance of the target animals, using either the pedigree-based relationship matrix (MCA) or a genotypic relationship matrix based on the sparse marker genotypes (MCG). These and five other methods for selecting the T animals were compared, with T = 100 or 200 animals, SNP genotypes obtained assuming Ne = 100 or 200, and MAF thresholds of D = 0.01, 0.05 or 0.10. The performance of the methods was compared using the following criteria: call rate of true genotypes, accuracy of genotype prediction, and accuracy of genomic evaluations based on the imputed genotypes. Results: For all criteria, MCA and MCG performed better than the other selection methods, significantly so for all methods other than selection of the sires with the largest numbers of offspring. Methods that choose animals with the closest average relationship or contribution to the target population gave the lowest imputation accuracy, in some cases worse than random selection, and should be avoided in practice. Conclusion: Minimizing the conditional variance of the genotypes of the target animals provided an effective optimization procedure for prioritizing animals for genotyping or sequencing.
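    As a concrete illustration of the conditional-variance criterion, the sketch below greedily picks candidates so as to minimize the trace of Var(g_target | g_selected) = A_TT - A_TS A_SS^-1 A_ST for a given relationship matrix A. This is a hypothetical re-implementation of the idea only; the function name, interface, and search strategy are not taken from the paper, whose optimizer may differ.

```python
import numpy as np

def greedy_select(A, target_idx, candidate_idx, n_select):
    """Greedy sketch of the conditional-variance criterion: pick the
    candidates whose dense genotyping minimizes the summed conditional
    genetic variance of the target animals, given a relationship matrix A
    (pedigree-based for MCA, sparse-marker-based for MCG).
    Hypothetical re-implementation; the paper's optimizer may differ."""
    target_idx = list(target_idx)
    selected, remaining = [], list(candidate_idx)
    for _ in range(n_select):
        best, best_score = None, np.inf
        for c in remaining:
            s = selected + [c]
            A_ss = A[np.ix_(s, s)]
            A_ts = A[np.ix_(target_idx, s)]
            # trace of Var(g_T | g_S) = A_TT - A_TS A_SS^-1 A_ST
            score = np.trace(A[np.ix_(target_idx, target_idx)]
                             - A_ts @ np.linalg.solve(A_ss, A_ts.T))
            if score < best_score:
                best, best_score = c, score
        selected.append(best)
        remaining.remove(best)
    return selected
```

    The same routine covers both MCA and MCG, simply by passing either the pedigree-based relationship matrix or the sparse-marker relationship matrix as A.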

    Using the Pareto principle in genome-wide breeding value estimation

    Genome-wide breeding value (GWEBV) estimation methods can be classified by the prior distribution they assume for marker effects. Genome-wide BLUP methods assume a normal prior distribution with a constant variance for all markers and are computationally fast. Bayesian methods apply more flexible prior distributions for SNP effects, allowing a few very large effects while most are small or even zero, but they are computationally demanding because they rely on Markov chain Monte Carlo sampling. In this study, we adopted the Pareto principle to weight the available marker loci, i.e. we consider that x% of the loci explain (100 - x)% of the total genetic variance. Under this principle it is also possible to define the variances of the prior distributions of the 'big' and 'small' SNPs: the relatively few big SNPs explain a large proportion of the genetic variance, while the majority of the SNPs have small effects and explain a minor proportion of it. We name this method MixP, because the prior distribution is a mixture of two normal distributions, one with a big variance and one with a small variance. Simulation results using a real Norwegian Red cattle pedigree show that MixP is at least as accurate as the other methods in all studied cases. The method also reduces the number of hyper-parameters of the prior distribution from two (the proportion and the variance of SNPs with big effects) to one (the proportion of SNPs with big effects), assuming the overall genetic variance is known. The mixture-of-normals prior makes it possible to solve the equations iteratively, which reduced the computational load by two orders of magnitude. With marker densities reaching into the millions and whole-genome sequence data becoming available, MixP provides a computationally feasible Bayesian method of analysis.
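    To make the Pareto split concrete, the per-SNP variances of the two mixture components follow directly from the proportion of 'big' SNPs and the overall genetic variance. The sketch below is a minimal illustration under these assumptions; the function name and parameterisation are hypothetical, not taken from MixP itself.

```python
def mixp_prior_variances(sigma2_g, n_snp, p_big=0.2):
    """Pareto-style split: a fraction p_big of the SNPs ('big' effects) is
    assumed to explain (1 - p_big) of the total genetic variance sigma2_g,
    and the remaining SNPs explain the rest.  Returns the per-SNP variances
    of the two normal mixture components.  Illustrative only; the exact
    parameterisation in MixP may differ."""
    var_big = (1.0 - p_big) * sigma2_g / (p_big * n_snp)
    var_small = p_big * sigma2_g / ((1.0 - p_big) * n_snp)
    return var_big, var_small

# Example: 50,000 SNPs, total genetic variance 1.0, classic 20/80 split
print(mixp_prior_variances(1.0, 50_000, 0.20))  # (8.0e-05, 5.0e-06)
```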

    Within- and across-breed genomic prediction using whole-genome sequence and single nucleotide polymorphism panels

    Background: Currently, genomic prediction in cattle is largely based on panels of about 54k single nucleotide polymorphisms (SNPs). However, with the decreasing costs of and current advances in next-generation sequencing technologies, whole-genome sequence (WGS) data on large numbers of individuals are within reach. Availability of such data provides new opportunities for genomic selection, which need to be explored. Methods: This simulation study investigated how much predictive ability is gained by using WGS data under scenarios with QTL (quantitative trait loci) densities ranging from 45 to 132 QTL/Morgan and heritabilities ranging from 0.07 to 0.30, compared to different SNP densities, with emphasis on divergent dairy cattle breeds with small populations. The relative performance of best linear unbiased prediction (SNP-BLUP) and of a variable selection method with a mixture of two normal distributions (MixP) was also evaluated. Genomic predictions were based on within-population, across-population, and multi-breed reference populations. Results: The use of WGS data for within-population predictions resulted in small to large increases in accuracy for low to moderately heritable traits. Depending on the heritability of the trait and on SNP and QTL densities, accuracy increased by up to 31%. The advantage of WGS data was more pronounced (7 to 92% increase in accuracy, depending on trait heritability, SNP and QTL densities, and time of divergence between populations) with a combined reference population and when using MixP. While MixP outperformed SNP-BLUP at 45 QTL/Morgan, SNP-BLUP was as good as MixP when QTL density increased to 132 QTL/Morgan. Conclusions: Our results show that genomic predictions in numerically small cattle populations would benefit from a combination of WGS data, a multi-breed reference population, and a variable selection method.
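    For reference, SNP-BLUP amounts to ridge regression of phenotypes on centred genotypes with a single shrinkage parameter derived from the heritability. The sketch below is a minimal, illustrative implementation under that equivalence, not the software used in the study; the function name and interface are hypothetical.

```python
import numpy as np

def snp_blup(X, y, h2):
    """Minimal SNP-BLUP sketch: ridge regression of phenotypes y on centred
    genotypes X (n animals x m SNPs coded 0/1/2), with a single shrinkage
    parameter derived from the heritability h2.  Illustrative only, not the
    software used in the study."""
    p = X.mean(axis=0) / 2.0                    # allele frequencies
    Xc = X - 2.0 * p                            # centre the genotypes
    sum2pq = 2.0 * np.sum(p * (1.0 - p))
    lam = (1.0 - h2) * sum2pq / h2              # sigma2_e / sigma2_snp
    m = X.shape[1]
    beta = np.linalg.solve(Xc.T @ Xc + lam * np.eye(m), Xc.T @ (y - y.mean()))
    return Xc @ beta                            # genomic breeding values
```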

    Error entropy model-based determination of minimum detectable deformation magnitude of terrestrial laser scanning

    Deformation is typically estimated by comparing terrestrial laser scanner (TLS) scans of the same area acquired at different times. However, such a comparison only captures the difference between two successive surveys, which may be caused by instrumental and registration errors rather than real deformation, affecting the reliability of deformation monitoring. To improve the reliability of TLS-based deformation monitoring, it is necessary to estimate the errors in TLS measurements and determine the precursory displacements that can be detected. In this paper, an error entropy model is used to derive the threshold value for deformation monitoring, i.e. the minimum detectable deformation magnitude of a TLS. The experimental results demonstrate that deformation larger than the threshold calculated by the proposed error entropy-based method can be reliably detected.
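    As a rough illustration of an entropy-based threshold, under a Gaussian-error assumption the differential entropy of the scan-to-scan residuals over a stable area can be converted into an entropy-equivalent error half-width and used as the minimum detectable deformation. This is a sketch of the general idea only, not the paper's exact model; the function name is hypothetical.

```python
import numpy as np

def entropy_error_threshold(residuals):
    """Minimum detectable deformation from the error entropy of residuals
    between two scans of a nominally stable area.  Assumes roughly Gaussian
    errors: differential entropy H = ln(sigma * sqrt(2*pi*e)), and the
    entropy-equivalent error half-width is exp(H) / 2 (about 2.07 * sigma).
    A rough sketch, not the paper's exact formulation."""
    sigma = np.std(residuals, ddof=1)
    H = np.log(sigma * np.sqrt(2.0 * np.pi * np.e))
    return np.exp(H) / 2.0

# Displacements exceeding this threshold are treated as real deformation
# rather than instrumental or registration error.
```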

    Nanomaterials for oncotherapies targeting the hallmarks of cancer

    An increasing amount of evidence has demonstrated the diverse functionalities of nanomaterials in oncotherapies, such as drug delivery, imaging, and killing cancer cells. This review aims to offer an authoritative guide for the development of nanomaterial-based oncotherapies and to shed light on emerging yet understudied hallmarks of cancer where nanoparticles can help improve cancer control. With this aim, three nanomaterials, i.e. those based on gold, graphene, and liposomes, were selected to represent metallic inorganic, non-metallic inorganic, and organic nanomaterials, and four oncotherapies, i.e. phototherapies, immunotherapies, cancer stem cell therapies, and metabolic therapies, were characterized based on the different hallmarks of cancer that they target. We also view physical plasma as a cocktail of reactive species and a carrier of nanomaterials, and focus on its role in targeting the hallmarks of cancer, given its unique traits and its ability to selectively induce epigenetic and genetic modulations in cancer cells that halt tumor initiation and progression. This review provides a clear understanding of how the physico-chemical features of particles at the nanoscale contribute, alone or in synergy with current treatment modalities, to combating each of the hallmarks of cancer, ultimately leading to the desired therapeutic outcomes and shaping the toolbox for cancer control.

    Theoretical and Empirical Power of Regression and Maximum-Likelihood Methods to Map Quantitative Trait Loci in General Pedigrees

    Both theoretical calculations and simulation studies have been used to compare and contrast the statistical power of methods for mapping quantitative trait loci (QTLs) in simple and complex pedigrees. A widely used approach in such studies is to derive or simulate the expected mean test statistic under the alternative hypothesis of a segregating QTL and to equate a larger mean test statistic with larger power. In the present study, we show that, even when the test statistic under the null hypothesis of no linkage follows a known asymptotic distribution (the standard being χ²), it cannot be assumed that the distribution under the alternative hypothesis is noncentral χ². Hence, mean test statistics cannot be used to indicate power differences, and a comparison between methods that is based on simulated average test statistics may lead to the wrong conclusion. We illustrate this important finding, through simulations and analytical derivations, for a recently proposed regression method for the analysis of general pedigrees to map quantitative trait loci. We show that this regression method is not necessarily more powerful or computationally more efficient than a maximum-likelihood variance-component approach. We advocate the use of empirical power to compare trait-mapping methods.
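    The advocated comparison is by empirical power rather than by average test statistics: simulate data under the alternative hypothesis and count how often the test statistic exceeds the critical value taken from the null distribution. A minimal sketch, in which sim_test_stat is a hypothetical user-supplied simulator:

```python
import numpy as np
from scipy.stats import chi2

def empirical_power(sim_test_stat, n_reps=1000, alpha=0.05, df=1, seed=0):
    """Empirical power of a QTL test: simulate data under the alternative
    (a segregating QTL), compute the test statistic for each replicate, and
    count how often it exceeds the null critical value.  sim_test_stat(rng)
    is a hypothetical user-supplied function that simulates one data set and
    returns its test statistic."""
    rng = np.random.default_rng(seed)
    crit = chi2.ppf(1.0 - alpha, df)     # valid if the null really is chi2(df)
    stats = np.array([sim_test_stat(rng) for _ in range(n_reps)])
    return float(np.mean(stats > crit))  # power = exceedance rate, not mean statistic
```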

    State-of-the-art in carbides/carbon composites for electromagnetic wave absorption

    Electromagnetic wave absorbing materials (EWAMs) have made great progress in the past decades and are playing an increasingly important role in radiation protection and anti-radar detection owing to their attenuation of incident EM waves. With the flourishing of nanotechnology, the design of high-performance EWAMs no longer depends only on the intrinsic characteristics of a single-component medium, but increasingly exploits synergistic effects between different components to generate rich loss mechanisms. Among the various candidates, carbides and carbon materials are usually characterized by chemical stability, low density, tunable dielectric properties, and diverse morphologies/microstructures, and thus combining carbides and carbon materials is a promising way to obtain new EWAMs with good prospects for practical application. In this review, we introduce the EM loss mechanisms relevant to dielectric composites and then highlight state-of-the-art progress in carbides/carbon composites as high-performance EWAMs, including silicon carbide/carbon, MXene/carbon, and molybdenum carbide/carbon, as well as some uncommon carbides/carbon composites and multicomponent composites. Critical information regarding composition optimization, structural engineering, performance reinforcement, and structure-function relationships is discussed in detail. In addition, after comparing the performance of some representative composites, some challenges and perspectives for the development of carbides/carbon composites are proposed.
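    For context, the performance of a single-layer EWAM is conventionally benchmarked with the transmission-line reflection-loss formula computed from its measured complex permittivity and permeability; this is the standard metric in the field rather than something taken from the review itself. A minimal sketch:

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def reflection_loss_db(eps_r, mu_r, freq_hz, thickness_m):
    """Transmission-line reflection loss (dB) of a single-layer absorber
    backed by a perfect conductor, from complex relative permittivity eps_r
    and permeability mu_r (convention: eps' - j*eps'')."""
    z_in = np.sqrt(mu_r / eps_r) * np.tanh(
        1j * 2.0 * np.pi * freq_hz * thickness_m / C * np.sqrt(mu_r * eps_r))
    return 20.0 * np.log10(np.abs((z_in - 1.0) / (z_in + 1.0)))

# Example: eps_r = 10 - 3j, mu_r = 1, 2 mm thick layer at 10 GHz
print(reflection_loss_db(10 - 3j, 1.0 + 0j, 10e9, 0.002))
```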