95 research outputs found

    A Comparison of Marginal Likelihood Computation Methods

    In a Bayesian analysis, different models can be compared on the basis of the expected or marginal likelihood they attain. Many methods have been devised to compute the marginal likelihood, but simplicity is not the strongest point of most methods. At the same time, the precision of methods is often questionable. In this paper several methods are presented in a common framework. The explanation of the differences is followed by an application, in which the precision of the methods is tested on a simple regression model where a comparison with analytical results is possible.
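The comparison the abstract describes can be illustrated with a toy version of the idea: for a conjugate Gaussian model the marginal likelihood has a closed form, so a simple Monte Carlo estimator (averaging the likelihood over prior draws, one of the crudest methods) can be checked against the analytical answer. This is a minimal sketch, not the paper's framework; the model, prior, and data below are invented for illustration.

```python
import math, random

def log_lik(y, theta, sigma):
    """Gaussian log-likelihood of data y given mean theta and sd sigma."""
    return sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - (yi - theta)**2 / (2 * sigma**2) for yi in y)

def log_marginal_exact(y, sigma, tau):
    """Closed-form log marginal likelihood for y_i ~ N(theta, sigma^2),
    theta ~ N(0, tau^2): jointly, y ~ N(0, sigma^2 I + tau^2 J)."""
    n, s1, s2 = len(y), sum(y), sum(yi**2 for yi in y)
    logdet = (n - 1) * math.log(sigma**2) + math.log(sigma**2 + n * tau**2)
    quad = s2 / sigma**2 - tau**2 * s1**2 / (sigma**2 * (sigma**2 + n * tau**2))
    return -0.5 * (n * math.log(2 * math.pi) + logdet + quad)

def log_marginal_mc(y, sigma, tau, draws=200_000, seed=0):
    """Monte Carlo estimate: average the likelihood over prior draws,
    combined on the log scale with a log-sum-exp for stability."""
    rng = random.Random(seed)
    logs = [log_lik(y, rng.gauss(0.0, tau), sigma) for _ in range(draws)]
    m = max(logs)
    return m + math.log(sum(math.exp(l - m) for l in logs) / draws)

y = [0.3, -0.1, 0.4, 0.2, 0.0]          # illustrative data
exact = log_marginal_exact(y, sigma=1.0, tau=1.0)
mc = log_marginal_mc(y, sigma=1.0, tau=1.0)
```

With enough prior draws the two numbers agree closely here, though prior sampling degrades quickly as the posterior concentrates, which is exactly the kind of precision issue the paper examines.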

    Editors’ Perspective on the Use of Visual Displays in Qualitative Studies

    Research indicates that visual displays in qualitative research are under-utilized and under-developed. This study aimed to reach a clearer understanding of this fact by learning from the perspectives of seven editors of qualitative journals. Using a qualitative descriptive design, the study explored what constitutes an appropriate and helpful use of visual displays, including examples from current practice and recommendations for the use and creation of visual displays. The paper presents insights from experts in qualitative research whose views favor the inclusion of visuals in qualitative studies and acknowledge the need to enhance qualitative research curricula to include teaching about, and practice with, alternative representations of data analysis, including the use of visuals. The paper concludes with a new classification of visual displays based on their occurrence within a research report, and a list of the main criteria editors use to assess the validity of visuals in qualitative research articles. Additionally, we include implications for qualitative researchers and educators interested in increasing the use of visuals in qualitative articles.

    Modelling the Evolution of Distributions: An Application to Major League Baseball

    We develop Bayesian techniques for modelling the evolution of entire distributions over time and apply them to the distribution of team performance in Major League Baseball for the period 1901-2000. Such models offer insight into many key issues (e.g. competitive balance) in a way that regression-based models cannot. The models involve discretizing the distribution and then modelling the evolution of the bins over time through transition probability matrices. We allow for these matrices to vary over time and across teams. We find that, with one exception, the transition probability matrices (and, hence, competitive balance) have been remarkably constant across time and over teams. The one exception is the Yankees, who have outperformed all other teams.
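The discretize-then-estimate construction can be sketched as follows. The win fractions, bin edges, and the optional Dirichlet pseudo-count (a crude stand-in for the paper's fully Bayesian treatment of the transition matrices) are all illustrative assumptions, not the paper's data or code.

```python
def transition_matrix(series, edges, alpha=0.0):
    """Row-normalised transition counts for a discretised series.
    alpha > 0 adds Dirichlet pseudo-counts: the result is then the
    posterior-mean matrix under a symmetric Dirichlet prior on each row."""
    def bin_of(x):
        return sum(x > e for e in edges)   # index of the bin containing x
    k = len(edges) + 1
    counts = [[alpha] * k for _ in range(k)]
    for prev, curr in zip(series, series[1:]):
        counts[bin_of(prev)][bin_of(curr)] += 1
    # normalise each row to probabilities (uniform if a bin was never visited)
    return [[c / sum(row) for c in row] if sum(row) else [1.0 / k] * k
            for row in counts]

# win fractions for one hypothetical team; interior bin edges at .45 / .55
P = transition_matrix([0.40, 0.50, 0.60, 0.58, 0.44, 0.50],
                      edges=[0.45, 0.55])
```

Row i of `P` is the estimated distribution of next season's bin given bin i this season; comparing such matrices across eras or teams is the paper's route to statements about competitive balance.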

    A New Calibrated Bayesian Internal Goodness-of-Fit Method: Sampled Posterior p-Values as Simple and General p-Values That Allow Double Use of the Data

    Background: Recent approaches mixing frequentist principles with Bayesian inference propose internal goodness-of-fit (GOF) p-values that might be valuable for critical analysis of Bayesian statistical models. However, GOF p-values developed to date only have known probability distributions under restrictive conditions. As a result, no known GOF p-value has a known probability distribution for any discrepancy function. Methodology/Principal Findings: We show mathematically that a new GOF p-value, called the sampled posterior p-value (SPP), asymptotically has a uniform probability distribution whatever the discrepancy function. In a moderate finite sample context, simulations also showed that the SPP appears stable to relatively uninformative misspecifications of the prior distribution. Conclusions/Significance: These reasons, together with its numerical simplicity, make the SPP a better canonical GOF p-value than existing GOF p-values.
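The SPP's recipe is simple: draw a single parameter value from the posterior, simulate replicate data sets at that one draw, and rank the observed discrepancy among the replicated ones. The sketch below applies it to a conjugate normal toy model; the model, discrepancy function, and data are illustrative assumptions, not the paper's examples.

```python
import math, random

def sampled_posterior_pvalue(y, discrepancy, n_rep=5000, seed=1):
    """SPP sketch for y_i ~ N(theta, 1) with prior theta ~ N(0, 1):
    one theta is drawn from the conjugate posterior N(sum(y)/(n+1), 1/(n+1)),
    replicates are simulated at that single draw, and the observed
    discrepancy is ranked among the replicated ones."""
    rng = random.Random(seed)
    n = len(y)
    theta = rng.gauss(sum(y) / (n + 1), math.sqrt(1.0 / (n + 1)))
    d_obs = discrepancy(y, theta)
    exceed = sum(
        discrepancy([rng.gauss(theta, 1.0) for _ in range(n)], theta) >= d_obs
        for _ in range(n_rep))
    return exceed / n_rep

def disc(data, theta):
    """Chi-square-like discrepancy: sum of squared residuals."""
    return sum((x - theta) ** 2 for x in data)

rng = random.Random(0)
y = [rng.gauss(0.2, 1.0) for _ in range(30)]   # well-specified data
p = sampled_posterior_pvalue(y, disc)
```

For well-specified data `p` behaves like an ordinary p-value, while grossly over-dispersed data (e.g. the same values scaled up) drive it toward zero; the paper's contribution is proving the asymptotic uniformity that makes this calibration trustworthy for any discrepancy.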

    Bayesian Inference for Structural Vector Autoregressions Identified by Markov-Switching Heteroskedasticity

    In this study, Bayesian inference is developed for structural vector autoregressive models in which the structural parameters are identified via Markov-switching heteroskedasticity. In such a model, restrictions that are just-identifying in the homoskedastic case become over-identifying and can be tested. A set of parametric restrictions is derived under which the structural matrix is globally or partially identified, and a Savage-Dickey density ratio is used to assess the validity of the identification conditions. The latter is facilitated by analytical derivations that make the computations fast and numerical standard errors small. As an empirical example, monetary models are compared using heteroskedasticity as an additional device for identification. The empirical results support models with money in the interest rate reaction function. Keywords: Identification Through Heteroskedasticity, Bayesian Hypotheses Assessment, Markov-switching Models, Mixture Models, Regime Change
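The Savage-Dickey device used here evaluates a point restriction by taking the ratio of the posterior to the prior density at the restricted value. Its simplest conjugate-normal form can be written out exactly; this is a generic illustration of the density ratio, not the paper's SVAR computation, and all numbers are invented.

```python
import math

def normal_pdf(x, mean, var):
    """Density of N(mean, var) at x."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def savage_dickey_bf01(y, sigma2=1.0, tau2=1.0, theta0=0.0):
    """Savage-Dickey Bayes factor for the point restriction theta = theta0
    in y_i ~ N(theta, sigma2) with prior theta ~ N(0, tau2): the ratio of
    posterior to prior density at theta0, computable in closed form here."""
    n = len(y)
    post_var = 1.0 / (n / sigma2 + 1.0 / tau2)
    post_mean = post_var * sum(y) / sigma2
    return normal_pdf(theta0, post_mean, post_var) / normal_pdf(theta0, 0.0, tau2)

bf_null = savage_dickey_bf01([0.1, -0.2, 0.05, 0.0])   # data near the restriction
bf_far = savage_dickey_bf01([2.0] * 10)                # data far from it
```

A ratio above one supports the restriction, a ratio near zero argues against it; because both densities are available analytically, no separate estimation of the restricted model is needed, which is what keeps computations fast and numerical standard errors small.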

    Evidence on a Real Business Cycle Model with Neutral and Investment-Specific Technology Shocks Using Bayesian Model Averaging

    The empirical support for a real business cycle model with two technology shocks is evaluated using a Bayesian model averaging procedure. This procedure makes use of a finite mixture of many models within the class of vector autoregressive (VAR) processes. The linear VAR model is extended to permit cointegration, a range of deterministic processes, equilibrium restrictions and restrictions on long-run responses to technology shocks. We find support for a number of the features implied by the real business cycle model. For example, restricting long-run responses to identify technology shocks has reasonable support and important implications for the short-run responses to these shocks. Further, there is evidence that savings and investment ratios form stable relationships, but technology shocks do not account for all stochastic trends in our system. There is uncertainty as to the most appropriate model for our data, with thirteen models receiving similar support, and the model or model set used has significant implications for the results obtained.
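The averaging step itself is mechanical once each model's marginal likelihood is in hand: posterior model probabilities are proportional to prior probability times marginal likelihood, and any quantity of interest is averaged under those weights. A minimal sketch, with invented log marginal likelihoods standing in for the thirteen-plus models of the paper:

```python
import math

def bma_weights(log_marginals, log_priors=None):
    """Posterior model probabilities from log marginal likelihoods,
    assuming equal prior model probabilities unless log_priors is given.
    Computed via log-sum-exp to avoid underflow."""
    if log_priors is None:
        log_priors = [0.0] * len(log_marginals)
    logs = [lm + lp for lm, lp in zip(log_marginals, log_priors)]
    m = max(logs)
    w = [math.exp(l - m) for l in logs]
    s = sum(w)
    return [x / s for x in w]

def bma_prediction(predictions, weights):
    """Model-averaged point prediction: a weighted mix, so no single
    model's restrictions dominate when several models fit comparably."""
    return sum(p * w for p, w in zip(predictions, weights))

weights = bma_weights([-10.0, -10.0, -12.0])    # two equally good models, one worse
pred = bma_prediction([1.0, 1.0, 3.0], weights)
```

When several models receive similar support, as in the paper, the averaged answer can differ materially from any single model's, which is the point of reporting model-averaged rather than model-specific results.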

    BRIE: transcriptome-wide splicing quantification in single cells

    Single-cell RNA-seq (scRNA-seq) provides a comprehensive measurement of stochasticity in transcription, but the limitations of the technology have prevented its application to dissect variability in RNA processing events such as splicing. Here, we present BRIE (Bayesian regression for isoform estimation), a Bayesian hierarchical model that resolves these problems by learning an informative prior distribution from sequence features. We show that BRIE yields reproducible estimates of exon inclusion ratios in single cells and provides an effective tool for differential isoform quantification between scRNA-seq data sets. BRIE, therefore, expands the scope of scRNA-seq experiments to probe the stochasticity of RNA processing.
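BRIE's actual model is a Bayesian hierarchical regression; as a loose illustration of the underlying idea only, the sketch below shows how a feature-informed prior can stabilise an inclusion-ratio estimate when a cell yields few reads. The feature vector, weights, Beta-prior parameterisation, and function names are all hypothetical, not BRIE's model or API.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def informative_prior_psi(features, weights, concentration=10.0):
    """Beta(a, b) prior on an exon inclusion ratio psi whose mean is
    predicted from sequence features by a (hypothetical) regression w.x;
    `concentration` controls how strongly the prior pulls the estimate."""
    mu = sigmoid(sum(w * x for w, x in zip(weights, features)))
    return mu * concentration, (1.0 - mu) * concentration

def posterior_mean_psi(a, b, incl_reads, excl_reads):
    """Conjugate Beta update with isoform-supporting read counts:
    posterior mean of psi given the prior and the observed reads."""
    return (a + incl_reads) / (a + b + incl_reads + excl_reads)

a, b = informative_prior_psi([0.8, 0.4], [1.0, -0.5])
sparse_cell = posterior_mean_psi(a, b, 1, 0)     # one read: prior dominates
deep_cell = posterior_mean_psi(a, b, 90, 10)     # many reads: data dominate
```

The qualitative behaviour mirrors the abstract's claim: with sparse coverage the estimate stays near the sequence-informed prior mean, and with deep coverage it is driven by the reads, which is what makes single-cell estimates reproducible.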

    Efficient Posterior Probability Mapping Using Savage-Dickey Ratios

    Statistical Parametric Mapping (SPM) is the dominant paradigm for mass-univariate analysis of neuroimaging data. More recently, a Bayesian approach termed Posterior Probability Mapping (PPM) has been proposed as an alternative. PPM offers two advantages: (i) inferences can be made about effect size, thus lending a precise physiological meaning to activated regions; (ii) regions can be declared inactive. This latter facility is most parsimoniously provided by PPMs based on Bayesian model comparisons. To date these comparisons have been implemented by an Independent Model Optimization (IMO) procedure which separately fits null and alternative models. This paper proposes a more computationally efficient procedure based on Savage-Dickey approximations to the Bayes factor, and Taylor-series approximations to the voxel-wise posterior covariance matrices. Simulations show the accuracy of this Savage-Dickey-Taylor (SDT) method to be comparable to that of IMO. Results on fMRI data show excellent agreement between SDT and IMO for second-level models, and reasonable agreement for first-level models. This Savage-Dickey test is a Bayesian analogue of the classical SPM-F and allows users to implement model comparison in a truly interactive manner.
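Why this is fast is easy to see in a stripped-down form: once each voxel has a Gaussian posterior approximation for its effect, the Savage-Dickey log Bayes factor against a zero effect is just a difference of two log densities, with no second model fit. The sketch below assumes scalar per-voxel effects with given posterior means and variances; it is an illustration of the ratio, not the SPM/PPM implementation.

```python
import math

def log_normal_pdf(x, mean, var):
    """Log density of N(mean, var) at x."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def savage_dickey_map(post_means, post_vars, prior_var=1.0):
    """Voxel-wise log Bayes factors in favour of a nonzero effect:
    log BF10 = log prior density at 0 minus log posterior density at 0,
    assuming a N(0, prior_var) prior and per-voxel Gaussian posterior
    approximations (here taken as given)."""
    return [log_normal_pdf(0.0, 0.0, prior_var) - log_normal_pdf(0.0, m, v)
            for m, v in zip(post_means, post_vars)]

# three hypothetical voxels: a null effect and two activations
logbf = savage_dickey_map([0.05, 1.2, 2.0], [0.04, 0.04, 0.04])
```

Negative values favour the null model (the voxel can be declared inactive), large positive values favour activation, and because the whole map is a closed-form pass over precomputed moments, thresholds can be changed interactively.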