    GaGa: A parsimonious and flexible model for differential expression analysis

    Hierarchical models are a powerful tool for high-throughput data with a small to moderate number of replicates, as they allow sharing information across units such as genes. We propose two such models and show their increased sensitivity in microarray differential expression applications. We build on the gamma-gamma hierarchical model introduced by Kendziorski et al. [Statist. Med. 22 (2003) 3899-3914] and Newton et al. [Biostatistics 5 (2004) 155-176], addressing important limitations that may have hampered its performance and its more widespread use. The models parsimoniously describe the expression of thousands of genes with a small number of hyperparameters, which makes them easy to interpret and analytically tractable. The first model is a simple extension that improves the fit substantially with almost no increase in complexity. The second uses a mixture of gamma distributions to further improve the fit, at the expense of an increased computational burden; we derive several approximations that significantly reduce the computational cost. We find that our models outperform the original formulation, as well as some other popular methods for differential expression analysis. The improved performance is especially noticeable for the small sample sizes commonly encountered in high-throughput experiments. Our methods are implemented in the freely available Bioconductor gaga package. Published in the Annals of Applied Statistics (http://dx.doi.org/10.1214/09-AOAS244, http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
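    As a rough illustration of the gamma-gamma hierarchy the paper builds on (not the gaga package itself, which is an R/Bioconductor implementation), the sketch below integrates the gene-level rate out analytically and scores differential expression as a posterior probability. It assumes the rate parameterization x_j | lambda ~ Gamma(alpha, lambda) with a conjugate Gamma(alpha0, nu) prior on lambda; the function names, the prior_de weight and the hyperparameter values are illustrative assumptions, not the paper's defaults.

```python
import numpy as np
from scipy.special import gammaln

def log_marginal(x, alpha, alpha0, nu):
    """Log marginal likelihood of replicates x under the gamma-gamma model
    x_j | lam ~ Gamma(alpha, rate=lam), lam ~ Gamma(alpha0, rate=nu),
    with the rate lam integrated out analytically (Gamma-Gamma conjugacy)."""
    x = np.asarray(x, dtype=float)
    n, s = x.size, x.sum()
    return ((alpha - 1.0) * np.log(x).sum() - n * gammaln(alpha)
            + alpha0 * np.log(nu) - gammaln(alpha0)
            + gammaln(n * alpha + alpha0)
            - (n * alpha + alpha0) * np.log(s + nu))

def prob_de(x, y, alpha, alpha0, nu, prior_de=0.1):
    """Posterior probability that the two conditions have different rates:
    under H1 each condition draws its own rate from the shared prior,
    under H0 both conditions share a single rate."""
    log_h1 = log_marginal(x, alpha, alpha0, nu) + log_marginal(y, alpha, alpha0, nu)
    log_h0 = log_marginal(np.concatenate([x, y]), alpha, alpha0, nu)
    log_odds = np.log(prior_de) - np.log1p(-prior_de) + log_h1 - log_h0
    return 1.0 / (1.0 + np.exp(-log_odds))

# Toy usage: three replicates per condition for a single gene.
print(prob_de([2.1, 1.8, 2.4], [5.0, 6.2, 4.7], alpha=10.0, alpha0=1.0, nu=1.0))
```

    In practice the hyperparameters alpha, alpha0 and nu would be estimated by sharing information across all genes, which is precisely where the hierarchical formulation earns its keep.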

    The educational effectiveness of bilingual education

    Bilingual education is the use of the native tongue to instruct limited English-speaking children. The authors read studies of bilingual education from the earliest period of this literature to the most recent. Of the 300 program evaluations read, only 72 (25%) were methodologically acceptable; that is, they had a treatment and a control group and, where groups were not randomly assigned, a statistical control for pre-treatment differences. Virtually all of the studies in the United States were of elementary or junior high school students and Spanish speakers; the few studies conducted outside the United States were almost all in Canada. The research evidence indicates that, on standardized achievement tests, transitional bilingual education (TBE) is better than regular classroom instruction in only 22% of the methodologically acceptable studies when the outcome is reading, 7% when the outcome is language, and 9% when the outcome is math. TBE is never better than structured immersion, a special program for limited English proficient children in which the children are in a self-contained classroom composed solely of English learners but the instruction is in English, at a pace they can understand. Thus, the research evidence does not support transitional bilingual education as a superior form of instruction for limited English proficient children.

    On choosing mixture components via non-local priors

    Choosing the number of mixture components remains an elusive challenge. Model selection criteria can be either overly liberal or overly conservative, and can return poorly separated components of limited practical use. We formalize non-local priors (NLPs) for mixtures and show how they lead to well-separated components with non-negligible weight, interpretable as distinct subpopulations. We also propose an estimator for posterior model probabilities under local and non-local priors, showing that Bayes factors are ratios of posterior to prior empty-cluster probabilities. The estimator is widely applicable and helps set thresholds for dropping unoccupied components in overfitted mixtures. We suggest default prior parameters based on multi-modality for Normal/T mixtures and on minimal informativeness for categorical outcomes. We characterise the NLP-induced sparsity theoretically and derive tractable expressions and algorithms. We fully develop Normal, Binomial and product Binomial mixtures, but the theory, computation and principles hold more generally. We observed a serious lack of sensitivity in the Bayesian information criterion (BIC), insufficient parsimony in the AIC and in a local prior, and mixed behavior in the singular BIC. We also considered overfitted mixtures; their performance was competitive but depended on tuning parameters. Under our default prior elicitation, NLPs offered a good compromise between sparsity and power to detect meaningfully separated components.
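    The empty-cluster estimator lends itself to a Monte Carlo reading. Below is a minimal sketch, assuming posterior allocation draws from an overfitted k-component mixture and a symmetric Dirichlet(q) prior on the weights; the event used ("at least one empty component"), the function names and the prior simulation are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

def empty_cluster_prob(allocations, k):
    """Fraction of posterior draws in which at least one of the k mixture
    components receives no observations. `allocations` is an (n_draws,
    n_obs) integer array of component labels in {0, ..., k-1}."""
    z = np.asarray(allocations)
    occupied = np.stack([(z == j).any(axis=1) for j in range(k)], axis=1)
    return float((~occupied.all(axis=1)).mean())

def prior_empty_cluster_prob(n_obs, k, q, n_sims=20_000, seed=0):
    """Monte Carlo estimate of the same event under the prior: weights
    w ~ Dirichlet(q, ..., q) and labels drawn i.i.d. from w."""
    rng = np.random.default_rng(seed)
    weights = rng.dirichlet([q] * k, size=n_sims)
    counts = np.stack([rng.multinomial(n_obs, w) for w in weights])
    return float((counts == 0).any(axis=1).mean())

# Toy usage: 500 fake posterior draws for 100 observations where an
# overfitted 4-component mixture never occupies the last component.
rng = np.random.default_rng(1)
post = rng.integers(0, 3, size=(500, 100))
ratio = empty_cluster_prob(post, k=4) / prior_empty_cluster_prob(100, k=4, q=1.0)
print(ratio)
```

    In the spirit of the abstract's identity, this posterior-to-prior ratio plays the role of a Bayes factor favouring the smaller model: when emptying a component is far more likely a posteriori than a priori, the extra component is not supported by the data.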

    Emotional processes in understanding and treating psychosis
