
    A Primer on High-Throughput Computing for Genomic Selection

    High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of delivering high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using techniques ranging from simple batch processing to pipelining in distributed computer clusters. Scripting languages such as shell, Perl, and R are also very useful for devising pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data-processing pipeline residing on central processors, performing general-purpose computation on a graphics processing unit provides a new-generation approach to massively parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built at many institutions, such as the University of Wisconsin–Madison, and can be leveraged for genomic selection in terms of central processing unit capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general-purpose computing environments will further expand our capability to meet the increasing computing demands posed by the unprecedented genomic data available today.
We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of marker panels to realized genetic gain). Eventually, HTC may change our view of data analysis as well as decision-making in the post-genomic era of selection programs in animals and plants, or in the study of complex diseases in humans.
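As a hedged sketch of the pipelining idea described above, the snippet below distributes independent per-trait model fits across a worker pool so that total wall time approaches the longest single job rather than the sum of all jobs. The trait names and the `fit_trait()` placeholder are illustrative assumptions, not code from the paper.

```python
from concurrent.futures import ThreadPoolExecutor


def fit_trait(trait):
    # Placeholder for a computationally demanding model fit for one trait;
    # returns a (trait, solution) pair. The busy-work sum stands in for the
    # real numerical solver.
    return trait, sum(i * i for i in range(10_000))


def run_pipeline(traits, workers=4):
    # Evaluate all traits concurrently instead of sequentially, mimicking
    # simple HTC-style batch parallelism on a single node.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(fit_trait, traits))


results = run_pipeline(["milk_yield", "fat_pct", "fertility"])
```

On a real cluster the same structure would be expressed with a batch scheduler (one job per trait) rather than an in-process pool.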

    Accuracy of Genome-Enabled Prediction in a Dairy Cattle Population using Different Cross-Validation Layouts

    The impact of the extent of genetic relatedness on the accuracy of genome-enabled predictions was assessed using a dairy cattle population, and alternative cross-validation (CV) strategies were compared. The CV layouts consisted of training and testing sets obtained from either random allocation of individuals (RAN) or from a kernel-based clustering of individuals using the additive relationship matrix, to obtain two subsets that were as unrelated as possible (UNREL), as well as a layout based on stratification by generation (GEN). The UNREL layout decreased the average genetic relationships between training and testing animals but produced accuracies similar to the RAN design, which were about 15% higher than in the GEN setting. Results indicate that the CV structure can have an important effect on the accuracy of whole-genome predictions. However, the connection between average genetic relationships across training and testing sets and the estimated predictive ability is not straightforward, and may also depend on the kind of relatedness that exists between the two subsets and on the heritability of the trait. For high heritability traits, close relatives such as parents and full-sibs make the greatest contributions to accuracy, which can be compensated by half-sibs or grandsires when close relatives are lacking. However, for low heritability traits the inclusion of close relatives is crucial, and including more relatives of various types in the training set tends to lead to greater accuracy. In practice, CV designs should resemble the intended use of the predictive models, e.g., within- or between-family predictions, or within- or across-generation predictions, such that the estimation of predictive ability is consistent with the actual application to be considered.
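Two of the CV layouts described above can be sketched as simple splitting helpers. This is an illustrative assumption of how RAN (random allocation) and GEN (stratification by generation) splits might look; the function names and data structures are not from the study, and the kernel-based UNREL clustering is omitted.

```python
import random


def ran_split(ids, test_frac=0.2, seed=1):
    # RAN layout: random allocation of individuals to training/testing sets.
    rng = random.Random(seed)
    shuffled = ids[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_frac)
    return shuffled[n_test:], shuffled[:n_test]


def gen_split(generation_of, test_generation):
    # GEN layout: stratify by generation, e.g. predict one generation from
    # all earlier ones (across-generation validation).
    train = [i for i, g in generation_of.items() if g < test_generation]
    test = [i for i, g in generation_of.items() if g == test_generation]
    return train, test
```

The GEN split mimics the practical use case of predicting young selection candidates from proven ancestors, which is why its estimated accuracy is the more relevant figure for that application.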

    Predictive ability of genome-assisted statistical models under various forms of gene action

    Recent work has suggested that the performance of prediction models for complex traits may depend on the architecture of the target traits. Here we compared several prediction models with respect to their ability to predict phenotypes under various statistical architectures of gene action: (1) purely additive, (2) additive and dominance, (3) additive, dominance, and two-locus epistasis, and (4) purely epistatic settings. Simulation and a real chicken dataset were used. Fourteen prediction models were compared: BayesA, BayesB, BayesC, Bayesian LASSO, Bayesian ridge regression, elastic net, genomic best linear unbiased prediction, a Gaussian process, LASSO, random forests, reproducing kernel Hilbert spaces regression, ridge regression (best linear unbiased prediction), relevance vector machines, and support vector machines. When the trait was under additive gene action, the parametric prediction models outperformed non-parametric ones. Conversely, when the trait was under epistatic gene action, the non-parametric prediction models provided more accurate predictions. Thus, prediction models must be selected according to the most probable underlying architecture of the trait. In the chicken dataset examined, most models had similar prediction performance. Our results corroborate the view that there is no universally best prediction model, and that the development of robust prediction models is an important research objective.
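As a minimal sketch of one parametric model from the comparison, the snippet below fits ridge regression (marker-based BLUP) in closed form on simulated 0/1/2 SNP genotypes with purely additive effects, the setting where the abstract reports parametric models doing best. The simulation parameters and `lambda_` value are illustrative assumptions.

```python
import numpy as np


def ridge_coefficients(X, y, lambda_=1.0):
    # Solve the ridge/BLUP normal equations: (X'X + lambda I) b = X'y.
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lambda_ * np.eye(p), X.T @ y)


rng = np.random.default_rng(0)
X = rng.choice([0.0, 1.0, 2.0], size=(200, 50))  # SNP genotypes coded 0/1/2
b_true = rng.normal(0.0, 0.3, size=50)           # purely additive marker effects
y = X @ b_true + rng.normal(0.0, 1.0, size=200)  # phenotypes with noise
b_hat = ridge_coefficients(X, y, lambda_=10.0)
```

Under an epistatic architecture the linear term `X @ b` no longer captures the genotype-to-phenotype map, which is the intuition behind the non-parametric (kernel-based) models winning in that setting.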

    Reassessing Design and Analysis of Two-Colour Microarray Experiments Using Mixed Effects Models

    Gene expression microarray studies have led to interesting experimental design and statistical analysis challenges. The comparison of expression profiles across populations is one of the most common objectives of microarray experiments. In this manuscript we review some issues regarding design and statistical analysis for two-colour microarray platforms using mixed linear models, with special attention directed towards the different hierarchical levels of replication and the consequent effect on the use of appropriate error terms for comparing experimental groups. We examine the traditional analysis of variance (ANOVA) models proposed for microarray data and their extensions to hierarchically replicated experiments. In addition, we discuss a mixed model methodology for power and efficiency calculations of different microarray experimental designs.
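The error-term issue above can be illustrated with a hedged sketch: when technical replicates (e.g., duplicate spots) are nested within biological replicates, a group comparison should be tested against variation among biological-replicate means, not among individual spot measurements. The helper names and data are assumptions for illustration, not code from the manuscript.

```python
from statistics import mean, stdev


def bio_replicate_means(expr, bio_id):
    # Collapse technical replicates: one mean per biological replicate, so
    # the subsequent test uses the correct hierarchical error term.
    groups = {}
    for e, b in zip(expr, bio_id):
        groups.setdefault(b, []).append(e)
    return {b: mean(vals) for b, vals in groups.items()}


def two_group_t(expr, bio_id, group_of_bio):
    # Welch-style t statistic computed on biological-replicate means.
    m = bio_replicate_means(expr, bio_id)
    g1 = [v for b, v in m.items() if group_of_bio[b] == 1]
    g2 = [v for b, v in m.items() if group_of_bio[b] == 2]
    se = ((stdev(g1) ** 2) / len(g1) + (stdev(g2) ** 2) / len(g2)) ** 0.5
    return (mean(g1) - mean(g2)) / se
```

Testing at the spot level instead would inflate the apparent degrees of freedom and understate the standard error, which is exactly the pseudo-replication pitfall the mixed-model framework guards against.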

    Genome-Wide Linkage Analysis of Global Gene Expression in Loin Muscle Tissue Identifies Candidate Genes in Pigs

    BACKGROUND: Nearly 6,000 QTL have been reported for 588 different traits in pigs, more than in any other livestock species. However, this effort has translated into only a few confirmed causative variants. A powerful strategy for revealing candidate genes involves expression QTL (eQTL) mapping, where the mRNA abundance of a set of transcripts is used as the response variable for a QTL scan. METHODOLOGY/PRINCIPAL FINDINGS: We utilized a whole genome expression microarray and an F2 pig resource population to conduct a global eQTL analysis in loin muscle tissue, and compared results to previously inferred phenotypic QTL (pQTL) from the same experimental cross. We found 62 unique eQTL (FDR <10%) and identified 3 gene networks enriched with genes subject to genetic control involved in lipid metabolism, DNA replication, and cell cycle regulation. We observed strong evidence of local regulation (40 out of 59 eQTL with known genomic position) and compared these eQTL to pQTL to help identify potential candidate genes. Among the interesting associations, we found aldo-keto reductase 7A2 (AKR7A2) and thioredoxin domain containing 12 (TXNDC12) eQTL that are part of a network associated with lipid metabolism and in turn overlap with pQTL regions for marbling, % intramuscular fat (% fat) and loin muscle area on Sus scrofa (SSC) chromosome 6. Additionally, we report 13 genomic regions with overlapping eQTL and pQTL involving 14 local eQTL. CONCLUSIONS/SIGNIFICANCE: Results of this analysis provide novel candidate genes for important complex pig phenotypes.
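The core of an eQTL scan is a regression of transcript abundance on genotype at each marker. The toy sketch below scores every marker by the proportion of expression variance it explains; the simulated data, marker count, and helper name are illustrative assumptions, not the study's actual microarray pipeline.

```python
import numpy as np


def eqtl_scan(expression, genotypes):
    # genotypes: (n_individuals, n_markers) coded 0/1/2; expression: (n,).
    # Returns, per marker, the squared correlation with expression, i.e. the
    # proportion of transcript variance explained by a single-marker model.
    g = genotypes - genotypes.mean(axis=0)
    e = expression - expression.mean()
    num = (g * e[:, None]).sum(axis=0) ** 2
    den = (g ** 2).sum(axis=0) * (e ** 2).sum()
    return num / den


rng = np.random.default_rng(2)
G = rng.integers(0, 3, size=(100, 20)).astype(float)
expr = 0.8 * G[:, 7] + rng.normal(0.0, 1.0, size=100)  # marker 7 is the true eQTL
r2 = eqtl_scan(expr, G)
```

A "local" eQTL, as in the abstract, is one where the best-scoring marker lies near the transcript's own genomic position; overlap of such peaks with pQTL regions is what nominates candidate genes.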

    Zero-inflated Poisson regression models for QTL mapping applied to tick-resistance in a Gyr × Holstein F2 population

    Nowadays, an important and interesting alternative in the control of tick infestation in cattle is to select resistant animals, and to identify the respective quantitative trait loci (QTLs) and DNA markers for posterior use in breeding programs. The number of ticks per animal is a discrete count trait, which could potentially follow a Poisson distribution. However, in the case of an excess of zeros, due to the occurrence of several non-infected animals, the zero-inflated Poisson (ZIP) and generalized zero-inflated Poisson (GZIP) distributions may provide a better description of the data. Thus, the objective here was to compare, through simulation, Poisson and ZIP models (simple and generalized) with classical approaches for QTL mapping with count phenotypes under different scenarios, and to apply these approaches to a QTL study of tick resistance in an F2 cattle (Gyr × Holstein) population. It was concluded that, when working with zero-inflated data, it is advisable to use the simple or generalized ZIP model for analysis. On the other hand, when working with data that contain zeros but are not zero-inflated, the Poisson model or a data-transformation approach, such as a square-root or Box-Cox transformation, is applicable.
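The ZIP model above mixes a point mass at zero with a Poisson component: with probability pi an animal is a structural zero (never infested), otherwise its tick count follows Poisson(lam). A minimal sketch of the resulting log-likelihood, with illustrative parameter values (not estimates from the study):

```python
import math


def zip_log_likelihood(counts, pi, lam):
    # log P(y=0) = log(pi + (1-pi) * exp(-lam)): a zero can come from the
    # structural-zero class or from the Poisson component.
    # For y > 0, only the Poisson component contributes.
    ll = 0.0
    for y in counts:
        if y == 0:
            ll += math.log(pi + (1.0 - pi) * math.exp(-lam))
        else:
            ll += (math.log(1.0 - pi) - lam
                   + y * math.log(lam) - math.lgamma(y + 1))
    return ll
```

Setting pi = 0 recovers the ordinary Poisson log-likelihood, which is why the ZIP model nests the Poisson model used as the comparison baseline.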