
    Simulation-based Estimation of Mean and Standard Deviation for Meta-analysis via Approximate Bayesian Computation (ABC)

    Background: When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. Methods: We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). Results: In the estimation of the standard deviation, our ABC method performs best in skewed or heavy-tailed distributions. The average relative error (ARE) approaches zero as sample size increases. In the normal distribution, our ABC method performs well; however, the Wan et al. method is best since it is based on the normal distribution assumption. When the distribution is skewed or heavy-tailed, the ARE of Wan et al. moves away from zero even as sample size increases. In the estimation of the mean, our ABC method is best since the AREs converge to zero. Conclusion: ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics, such as the posterior mean and 95% credible interval when Bayesian analysis has been employed.
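    The idea behind rejection ABC in this setting can be sketched as follows: propose (mu, sigma) pairs from wide priors, simulate a sample of the reported size, and keep the proposals whose simulated quartiles best match the reported ones. This is a minimal illustration assuming a normal data-generating model and hypothetical reported summaries; the paper's method handles other summary sets and distributions.

```python
import numpy as np

rng = np.random.default_rng(42)

def abc_estimate(q1, med, q3, n, n_sims=20000, keep_frac=0.01):
    """Rejection-ABC sketch: draw (mu, sigma) from wide uniform priors,
    simulate a sample of size n from the assumed model (here: normal),
    and keep the draws whose simulated quartiles are closest to the
    reported ones. The accepted draws approximate the posterior."""
    iqr = q3 - q1
    mus = rng.uniform(med - 3 * iqr, med + 3 * iqr, n_sims)
    sigmas = rng.uniform(1e-3, 3 * iqr, n_sims)
    target = np.array([q1, med, q3])
    dists = np.empty(n_sims)
    for i in range(n_sims):
        sim = np.percentile(rng.normal(mus[i], sigmas[i], n), [25, 50, 75])
        dists[i] = np.linalg.norm(sim - target)
    keep = np.argsort(dists)[: max(1, int(n_sims * keep_frac))]
    return mus[keep].mean(), sigmas[keep].mean()

# Hypothetical "reported" summaries from a study whose raw data were N(10, 2)
raw = rng.normal(10, 2, 100)
q1, med, q3 = np.percentile(raw, [25, 50, 75])
mu_hat, sd_hat = abc_estimate(q1, med, q3, n=100)
```

    Tightening `keep_frac` (or using a fixed distance tolerance) trades acceptance rate against posterior accuracy, which is the usual ABC tuning knob.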

    Phase diagram of a 2D Ising model within a nonextensive approach

    In this work we report Monte Carlo simulations of a 2D Ising model in which the statistics of the Metropolis algorithm is replaced by the nonextensive one. We compute the magnetization and show that phase transitions are present for q ≠ 1. A q-phase diagram (critical temperature vs. the entropic parameter q) is built and exhibits some interesting features, such as phases which are governed by the value of the entropic index q. It is shown that such phases favor some energy levels of the magnetization states. It is also shown that the contribution of the Tsallis cutoff is essential to the existence of phase transitions.
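    The replacement described above amounts to swapping the Boltzmann factor in the Metropolis acceptance rule for the Tsallis q-exponential, including its cutoff. The following is an illustrative sketch, not the authors' code; lattice size, temperature, and q are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def q_exp(x, q):
    """Tsallis q-exponential: exp_q(x) = [1 + (1 - q) x]^(1/(1 - q)),
    with the Tsallis cutoff exp_q(x) = 0 when the bracket is negative.
    It reduces to the ordinary exponential as q -> 1."""
    if q == 1.0:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0

def metropolis_sweep(spins, beta, q):
    """One lattice sweep with the Metropolis acceptance exp(-beta * dE)
    replaced by the q-exponential (periodic boundary conditions)."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nb  # energy cost of flipping spin (i, j)
        if dE <= 0.0 or rng.random() < q_exp(-beta * dE, q):
            spins[i, j] = -spins[i, j]

# Illustrative run: small lattice, low temperature, q slightly above 1
spins = np.ones((16, 16), dtype=int)
for _ in range(20):
    metropolis_sweep(spins, beta=1.0, q=1.1)
m = abs(spins.mean())  # magnetization per spin stays near 1 at low T
```

    Because the cutoff sets the acceptance probability to exactly zero for sufficiently large energy increases, some uphill moves allowed under Boltzmann statistics become forbidden, which is the mechanism the abstract credits for the existence of the q-dependent phases.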

    Unexpected correlations between gene expression and codon usage bias from microarray data for the whole Escherichia coli K-12 genome

    Escherichia coli has long been regarded as a model organism in the study of codon usage bias (CUB). However, most studies in this organism regarding this topic have been computational or, when experimental, restricted to small datasets; particularly poor attention has been given to genes with low CUB. In this work, correspondence analysis on codon usage is used to classify E.coli genes into three groups, and the relationship between them and expression levels from microarray experiments is studied. These groups are: group 1, highly biased genes; group 2, moderately biased genes; and group 3, AT-rich genes with low CUB. It is shown that, surprisingly, there is a negative correlation between codon bias and expression levels for group 3 genes, i.e. genes with extremely low codon adaptation index (CAI) values are highly expressed, while group 2 genes show the lowest average expression levels and group 1 genes show the usual expected positive correlation between CAI and expression. This trend is maintained over all functional gene groups, seeming to contradict the E.coli–yeast paradigm on CUB. It is argued that these findings are still compatible with the mutation–selection balance hypothesis of codon usage and that E.coli genes form a dynamic system shaped by these factors.
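    The CAI mentioned above is the geometric mean, over a gene's codons, of each codon's relative adaptiveness w (its frequency in a reference set of highly expressed genes, divided by the frequency of the most-used synonym). A minimal sketch with a toy two-amino-acid codon table (the counts are invented, not real E.coli frequencies):

```python
import math

# Toy reference counts for synonymous codons of leucine (L) and lysine (K);
# these numbers are illustrative only, not measured E. coli frequencies.
ref_counts = {
    "CTG": 50, "CTC": 10, "CTT": 5, "CTA": 2, "TTG": 8, "TTA": 4,  # Leu
    "AAA": 40, "AAG": 12,                                          # Lys
}
synonyms = {
    "L": ["CTG", "CTC", "CTT", "CTA", "TTG", "TTA"],
    "K": ["AAA", "AAG"],
}

# Relative adaptiveness: w(codon) = count(codon) / count(best synonym)
w = {}
for aa, codons in synonyms.items():
    top = max(ref_counts[c] for c in codons)
    for c in codons:
        w[c] = ref_counts[c] / top

def cai(codon_seq):
    """CAI = geometric mean of w over the gene's codons."""
    logs = [math.log(w[c]) for c in codon_seq]
    return math.exp(sum(logs) / len(logs))

high_bias = cai(["CTG", "CTG", "AAA"])  # only preferred codons -> CAI = 1.0
low_bias = cai(["CTA", "TTA", "AAG"])   # rare codons -> CAI well below 1
```

    A gene built entirely from each amino acid's preferred codon reaches the maximum CAI of 1; the surprising group 3 result above concerns genes whose CAI computed this way is extremely low yet whose measured expression is high.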

    Solving the riddle of codon usage preferences: a test for translational selection

    Translational selection is responsible for the unequal usage of synonymous codons in protein coding genes in a wide variety of organisms. It is one of the most subtle and pervasive forces of molecular evolution, yet establishing the underlying causes for its idiosyncratic behaviour across living kingdoms has proven elusive to researchers over the past 20 years. In this study, a statistical model for measuring translational selection in any given genome is developed, and the test is applied to 126 fully sequenced genomes, ranging from archaea to eukaryotes. It is shown that tRNA gene redundancy and genome size are interacting forces that ultimately determine the action of translational selection, and that an optimal genome size exists for which this kind of selection is maximal. Accordingly, genome size also presents upper and lower boundaries beyond which selection on codon usage is not possible. We propose a model where the coevolution of genome size and tRNA genes explains the observed patterns in translational selection in all living organisms. This model finally unifies our understanding of codon usage across prokaryotes and eukaryotes. Helicobacter pylori, Saccharomyces cerevisiae and Homo sapiens are codon usage paradigms that can be better understood under the proposed model.