
    Novel Methods for Multivariate Ordinal Data applied to Genetic Diplotypes, Genomic Pathways, Risk Profiles, and Pattern Similarity

    Introduction: Conventional statistical methods for multivariate data (e.g., discriminant or regression analysis) are based on the (generalized) linear model, i.e., the data are interpreted as points in a Euclidean space of independent dimensions. The dimensionality of the data is then reduced by assuming the components to be related by a specific function of known type (linear, exponential, etc.), which allows the distance of each point from a hyperspace to be determined. While mathematically elegant, these approaches may have shortcomings in real-world applications, where the relative importance, the functional relationship, and the correlation among the variables tend to be unknown. Still, in many applications, each variable can be assumed to have at least an “orientation”, i.e., it can reasonably be assumed that, if all other conditions are held constant, an increase in this variable is either “good” or “bad”. The direction of this orientation can be known or unknown. In genetics, for instance, having more “abnormal” alleles may increase the risk (or magnitude) of a disease phenotype. In genomics, the expression of several related genes may indicate disease activity. When screening for security risks, more indicators of atypical behavior may raise more concern; in face or voice recognition, more similar indicators may increase the likelihood of a person being identified. Methods: In 1998, we developed a nonparametric method for analyzing multivariate ordinal data to assess the overall risk of HIV infection based on different types of behavior, or the overall protective effect of barrier methods against HIV infection. By using u-statistics, rather than the marginal likelihood, we were able to increase the computational efficiency of this approach by several orders of magnitude. Results: We applied this approach to assessing immunogenicity of a vaccination strategy in cancer patients.
While discussing the pitfalls of the conventional methods for linking quantitative traits to haplotypes, we realized that this approach could easily be modified into a statistically valid alternative to previously proposed approaches. We have now begun to use the same methodology to correlate activity of anti-inflammatory drugs along genomic pathways with disease severity of psoriasis, based on several clinical and histological characteristics. Conclusion: Multivariate ordinal data are frequently observed to assess semiquantitative characteristics, such as risk profiles (genetic, genomic, or security) or similarity of patterns (faces, voices, behaviors). The conventional methods require empirical validation, because the functions and weights chosen cannot be justified on theoretical grounds. The proposed statistical method for analyzing profiles of ordinal variables is intrinsically valid. Since no additional assumptions need to be made, the often time-consuming empirical validation can be skipped. Keywords: ranking; nonparametric; robust; scoring; multivariate
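The u-statistic scoring of multivariate ordinal profiles can be sketched as follows. This is a minimal illustration under the assumption that all variables share the same orientation; the `dominates` helper, function names, and example profiles are hypothetical, not the authors' implementation:

```python
from itertools import combinations

def dominates(a, b):
    """Componentwise partial order: a dominates b if a >= b in every
    coordinate and a > b in at least one (assumes all ordinal variables
    share the same 'good'/'bad' orientation)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def u_scores(profiles):
    """Score each profile by pairwise wins minus losses under the
    partial order -- a u-statistic-style ranking that needs no
    weights or functional form for the variables."""
    n = len(profiles)
    scores = [0] * n
    for i, j in combinations(range(n), 2):
        if dominates(profiles[i], profiles[j]):
            scores[i] += 1
            scores[j] -= 1
        elif dominates(profiles[j], profiles[i]):
            scores[j] += 1
            scores[i] -= 1
    return scores

# Three risk profiles over two ordinal variables
profiles = [(2, 3), (1, 1), (2, 1)]
print(u_scores(profiles))  # → [2, -2, 0]
```

Profiles that are incomparable (each larger in a different coordinate) simply contribute nothing to either score, which is what makes the ranking valid without choosing weights.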

    An Empirical Bayes Approach for Multiple Tissue eQTL Analysis

    Expression quantitative trait loci (eQTL) analyses, which identify genetic markers associated with the expression of a gene, are an important tool in the understanding of diseases in human and other populations. While most eQTL studies to date consider the connection between genetic variation and expression in a single tissue, complex, multi-tissue data sets are now being generated by the GTEx initiative. These data sets have the potential to improve the findings of single tissue analyses by borrowing strength across tissues, and the potential to elucidate the genotypic basis of differences between tissues. In this paper we introduce and study a multivariate hierarchical Bayesian model (MT-eQTL) for multi-tissue eQTL analysis. MT-eQTL directly models the vector of correlations between expression and genotype across tissues. It explicitly captures patterns of variation in the presence or absence of eQTLs, as well as the heterogeneity of effect sizes across tissues. Moreover, the model is applicable to complex designs in which the set of donors can (i) vary from tissue to tissue, and (ii) exhibit incomplete overlap between tissues. The MT-eQTL model is marginally consistent, in the sense that the model for a subset of tissues can be obtained from the full model via marginalization. Fitting of the MT-eQTL model is carried out via empirical Bayes, using an approximate EM algorithm. Inferences concerning eQTL detection and the configuration of eQTLs across tissues are derived from adaptive thresholding of local false discovery rates, and maximum a posteriori estimation, respectively. We investigate the MT-eQTL model through a simulation study, and rigorously establish the FDR control of the local FDR testing procedure under mild assumptions appropriate for dependent data. Comment: accepted by Biostatistics.
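Adaptive thresholding of local false discovery rates can be sketched in a few lines: reject the tests with the smallest lfdr values for as long as the running mean of the rejected lfdrs, an estimate of the FDR over the rejection set, stays below the target. This is a simplified stand-alone illustration, not the MT-eQTL code; how the lfdr values themselves are obtained (from the fitted hierarchical model) is outside the sketch:

```python
import numpy as np

def lfdr_threshold(lfdr, fdr_target=0.05):
    """Adaptive thresholding of local false discovery rates.

    Sort the lfdr values, and reject the smallest ones while their
    running mean (an estimate of the FDR among the rejections) does
    not exceed fdr_target.  Returns a boolean rejection mask."""
    lfdr = np.asarray(lfdr)
    order = np.argsort(lfdr)
    running_mean = np.cumsum(lfdr[order]) / np.arange(1, lfdr.size + 1)
    n_reject = int(np.sum(running_mean <= fdr_target))
    rejected = np.zeros(lfdr.size, dtype=bool)
    rejected[order[:n_reject]] = True
    return rejected

lfdr = np.array([0.01, 0.90, 0.02, 0.30, 0.04])
print(lfdr_threshold(lfdr).tolist())  # → [True, False, True, False, True]
```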

    Multiple testing procedures for complex structured hypotheses and directional decisions

    Several multiple testing procedures are developed based on the inherent structure of the tested hypotheses and specific needs of data analysis. Incorporating the inherent structure of the hypotheses results in the development of more powerful and situation-specific multiple testing procedures than existing ones. The focus of this dissertation is on developing multiple testing procedures that utilize the information on this structure of the hypotheses and aim at answering research questions while controlling appropriate error rates. In the first part of the thesis, a mixed directional false discovery rate (mdFDR) controlling procedure is developed in the context of uterine fibroid gene expression data (Davis et al., 2013). The main question of interest that arises in this research is to discover genes associated with various stages of tumor progression, such as tumor onset, growth and development of tumors, and large-size tumors. To answer such questions, a three-step testing strategy is introduced and a general procedure is proposed that can be used with any mixed directional familywise error rate (mdFWER) controlling procedure for each gene, while controlling the mdFDR as the overall error rate. The procedure is proved to control mdFDR when the underlying test statistics are independent across the genes. A specific methodology, based on the Dunnett procedure, is developed and applied to the uterine fibroid gene expression data of Davis et al. (2013). Several important genes and pathways are identified that play important roles in fibroid formation and growth. In the second part, the problem of simultaneously testing many two-sided hypotheses is considered when rejections of null hypotheses are accompanied by claims on the direction of the alternative. The fundamental goal is to construct methods that control the mdFWER, which is the probability of making a Type I or Type III (directional) error.
In particular, attention is focused on cases where the hypotheses are ordered as H1, ... , Hn, so that Hi+1 is tested only if H1, ... , Hi have all been previously rejected. This research proves that the conventional fixed sequence procedure, which tests each hypothesis at level α, when augmented with directional decisions, can control mdFWER under independence and positive regression dependence of the test statistics. Another more conservative directional procedure is also developed that strongly controls mdFWER under arbitrary dependence of test statistics. Finally, in the third part, multiple testing procedures are developed for making real-time decisions while testing a sequence of a-priori ordered hypotheses. In large scale multiple testing problems in applications such as stream data, statistical process control, etc., the underlying process is regularly monitored and it is desired to control the False Discovery Rate (FDR) while making real-time decisions about the process being out of control or not. The existing stepwise FDR controlling procedures, such as the Benjamini-Hochberg procedure, are not applicable here because of the implicit assumption that all the p-values are available for applying the testing procedure. In this part of the thesis, powerful Fallback-type procedures are developed under various dependencies for controlling FDR that allocate the critical constants upon rejection of a hypothesis. These procedures overcome the drawback of the conventional FDR controlling procedures by making real-time decisions based on partial information available when a hypothesis is tested and allowing testing of each a-priori ordered hypothesis. Simulation studies demonstrate the effectiveness of these procedures in terms of FDR control and average power.
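The fixed sequence procedure with directional decisions can be sketched as follows. This is a generic illustration, not the dissertation's code; the use of z-statistics with a normal critical value is an assumption:

```python
def fixed_sequence_directional(stats, crit=1.96):
    """Fixed sequence procedure with directional decisions.

    Test H1, H2, ... in the given a-priori order, each at the full
    two-sided level (crit = 1.96 corresponds to alpha = 0.05 for
    normal test statistics, an assumption here).  Stop at the first
    non-rejection; later hypotheses are not tested.
    Returns +1 / -1 for a directional rejection, 0 for no decision."""
    decisions = []
    for z in stats:
        if abs(z) > crit:
            decisions.append(1 if z > 0 else -1)
        else:
            decisions.append(0)
            break
    # hypotheses after the first acceptance remain untested
    decisions += [0] * (len(stats) - len(decisions))
    return decisions

print(fixed_sequence_directional([3.1, -2.4, 0.5, 4.0]))  # → [1, -1, 0, 0]
```

Note that H4 is never tested even though its statistic is large, which is exactly the ordering constraint the procedure enforces.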

    Stability and aggregation of ranked gene lists

    Ranked gene lists are highly unstable in the sense that similar measures of differential gene expression may yield very different rankings, and that a small change of the data set usually affects the obtained gene list considerably. Stability issues have long been under-considered in the literature, but they have grown into a hot topic in the last few years, perhaps as a consequence of the increasing skepticism about the reproducibility and clinical applicability of molecular research findings. In this article, we review existing approaches for the assessment of stability of ranked gene lists and the related problem of aggregation, give some practical recommendations, and warn against potential misuse of these methods. This overview is illustrated through an application to a recent leukemia data set using the freely available Bioconductor package GeneSelector.
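Two of the simplest building blocks in this area, a top-k overlap stability measure and mean-rank (Borda) aggregation, can be sketched as follows. This is an illustrative sketch, not the GeneSelector implementation; the function names and toy rankings are mine:

```python
def topk_overlap(list_a, list_b, k):
    """Proportion of genes shared between the top-k of two rankings --
    a simple stability measure for ranked gene lists (1.0 means the
    top-k sets coincide, regardless of their internal order)."""
    return len(set(list_a[:k]) & set(list_b[:k])) / k

def borda_aggregate(rankings):
    """Aggregate several rankings of the same genes by mean rank
    (Borda count): genes with the smallest average position come first."""
    genes = rankings[0]
    mean_rank = {g: sum(r.index(g) for r in rankings) / len(rankings)
                 for g in genes}
    return sorted(genes, key=mean_rank.get)

r1 = ["g1", "g2", "g3", "g4"]
r2 = ["g2", "g1", "g4", "g3"]
print(topk_overlap(r1, r2, 2))        # → 1.0 (same top-2 set, different order)
print(borda_aggregate([r1, r2]))      # → ['g1', 'g2', 'g3', 'g4']
```

Computing `topk_overlap` across many subsampled versions of the data, and inspecting its distribution, is the basic recipe behind most stability assessments of ranked lists.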

    A hierarchical Bayesian model for inference of copy number variants and their association to gene expression

    A number of statistical models have been successfully developed for the analysis of high-throughput data from a single source, but few methods are available for integrating data from different sources. Here we focus on integrating gene expression levels with comparative genomic hybridization (CGH) array measurements collected on the same subjects. We specify a measurement error model that relates the gene expression levels to latent copy number states which, in turn, are related to the observed surrogate CGH measurements via a hidden Markov model. We employ selection priors that exploit the dependencies across adjacent copy number states and investigate MCMC stochastic search techniques for posterior inference. Our approach results in a unified modeling framework for simultaneously inferring copy number variants (CNV) and identifying their significant associations with mRNA transcript abundance. We show performance on simulated data and illustrate an application to data from a genomic study on human cancer cell lines. Comment: Published at http://dx.doi.org/10.1214/13-AOAS705 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
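The hidden Markov layer that links latent copy number states to observed CGH measurements is commonly decoded with dynamic programming. The authors use MCMC posterior inference rather than maximization, so the following generic log-space Viterbi sketch is an alternative illustration of the state-decoding idea, with all array shapes and the toy three-state example (loss / neutral / gain) assumed by me:

```python
import numpy as np

def viterbi(log_emission, log_trans, log_init):
    """Most likely sequence of latent states in an HMM.

    log_emission: (T, K) per-probe log likelihoods of the observed
                  CGH measurement under each of the K copy number states
    log_trans:    (K, K) log transition probabilities between adjacent probes
    log_init:     (K,)   log initial state probabilities"""
    T, K = log_emission.shape
    delta = np.zeros((T, K))           # best log score ending in each state
    back = np.zeros((T, K), dtype=int) # backpointers
    delta[0] = log_init + log_emission[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans  # (prev state, cur state)
        back[t] = np.argmax(scores, axis=0)
        delta[t] = np.max(scores, axis=0) + log_emission[t]
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):      # trace backpointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy example: states 0/1/2 = loss / neutral / gain, uniform dynamics
emission = np.log(np.array([[0.1, 0.8, 0.1],
                            [0.1, 0.8, 0.1],
                            [0.1, 0.1, 0.8]]))
trans = np.log(np.full((3, 3), 1 / 3))
init = np.log(np.full(3, 1 / 3))
print(viterbi(emission, trans, init))  # → [1, 1, 2]
```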

    Mining SOM expression portraits: Feature selection and integrating concepts of molecular function

    Background: 
Self-organizing maps (SOM) enable the straightforward portrayal of high-dimensional data from large sample collections in terms of sample-specific images. The analysis of their texture provides so-called spot clusters of co-expressed genes, which require subsequent significance filtering and functional interpretation. We address feature selection in terms of the gene-ranking problem and the interpretation of the obtained spot-related lists using concepts of molecular function.

Results: 
Different expression scores based either on simple fold-change measures or on regularized Student's t-statistics are applied to spot-related gene lists and compared, with special emphasis on the error characteristics of microarray expression data. The spot clusters are analyzed using different methods of gene set enrichment analysis, with a focus on overexpression and/or overrepresentation of predefined sets of genes. Metagene-related overrepresentation of selected gene sets was mapped into the SOM images to assign gene function to different regions. Alternatively, we estimated set-related overexpression profiles over all samples studied using a gene set enrichment score. It was also applied to the spot clusters to generate lists of enriched gene sets. We used the tissue body index data set, a collection of expression data of human tissues, as an illustrative example. We found that tissue-related spots typically contain enriched populations of gene sets corresponding well to molecular processes in the respective tissues. In addition, we display special sets of housekeeping genes and of consistently weakly and highly expressed genes using SOM data filtering.

Conclusions:
The presented methods allow the comprehensive downstream analysis of SOM-transformed expression data in terms of cluster-related gene lists and enriched gene sets for functional interpretation. SOM clustering makes it possible either to define new gene sets using selected SOM spots or to verify and/or amend existing ones.
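Overrepresentation of a predefined gene set within a spot cluster is commonly tested with a one-sided hypergeometric (Fisher-type) test. The following self-contained sketch illustrates that standard computation; the function name and example counts are mine, not from the paper:

```python
from math import comb

def overrep_pvalue(n_universe, n_set, n_cluster, n_overlap):
    """One-sided hypergeometric p-value for overrepresentation:
    probability of observing at least n_overlap gene-set members among
    n_cluster genes drawn from a universe of n_universe genes, of
    which n_set belong to the set."""
    denom = comb(n_universe, n_cluster)
    p = 0.0
    for k in range(n_overlap, min(n_set, n_cluster) + 1):
        p += comb(n_set, k) * comb(n_universe - n_set, n_cluster - k) / denom
    return p

# 20 of 50 spot-cluster genes fall in a 100-gene set, out of 10,000 genes
print(overrep_pvalue(10_000, 100, 50, 20))  # tiny p-value: strong enrichment
```

Applying this test once per (spot cluster, gene set) pair, followed by multiple-testing correction, is the basic overrepresentation analysis referred to in the abstract.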