    Test for submodel in Gibbs-Markov binary random sequence


    Multilevel Joint Analysis of Longitudinal and Binary Outcomes

    Joint modeling has become a topic of great interest in recent years. In a joint model, the two components are analyzed simultaneously through a shared random effect that is common to both. While existing methods are useful when time-to-event data are available, in many cases the outcome of interest is binary and a logistic regression model is used. We propose a joint model in which a logistic regression model is used for the binary outcome and a hierarchical mixed effects model is used for the longitudinal outcome. We link the two sub-models through both subject- and cluster-level random effects and compare this specification with models using only one level of random effects. We use the Gaussian quadrature technique implemented in the software package aML (Multiprocess Multilevel Modeling software). Simulation studies are presented to illustrate the properties of the proposed model. We also applied our model to repeated measures of mid-arm muscle circumference (MAMC) and mortality for patients within 75 units from 15 centers in a randomized study of hemodialysis (HEMO) and found that the model performs well. We further extend this work by developing methods for calculating individualized predictions based on our proposed joint model. We take a Bayesian approach to obtain these predictions and implement the method in the software package WinBUGS. The proposed method provides a mechanism for understanding the relationship between a longitudinal measure and a given binary outcome, so it can be used to address several types of public health problems. First, it can be used to understand how changes in a biomarker or other longitudinal measure are related to changes in a subject's status. Second, it can be used to predict a subject's outcome based on the trajectory of the longitudinal measure, providing information that can be used in a personalized medicine setting. This allows researchers to identify potentially harmful patterns and intervene at an earlier stage.
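    The quadrature step described in the abstract can be sketched with a toy marginal likelihood. This is a minimal stdlib sketch, not the aML or WinBUGS implementation: a single subject-level normal random intercept is integrated out of a Bernoulli/logistic sub-model using a fixed three-point Gauss-Hermite rule; the intercept `beta0` and the node count are illustrative assumptions.

    ```python
    import math

    # Three-point Gauss-Hermite rule for the weight function exp(-x^2):
    # nodes 0 and +/- sqrt(3/2), weights 2*sqrt(pi)/3 and sqrt(pi)/6.
    GH_NODES = [(-math.sqrt(1.5), math.sqrt(math.pi) / 6),
                (0.0, 2 * math.sqrt(math.pi) / 3),
                (math.sqrt(1.5), math.sqrt(math.pi) / 6)]

    def logistic(x):
        return 1.0 / (1.0 + math.exp(-x))

    def marginal_lik(y, beta0, sigma):
        """Marginal likelihood of one subject's binary responses y, integrating
        a N(0, sigma^2) random intercept out of a logistic model with fixed
        intercept beta0 (both parameters are hypothetical, for illustration)."""
        total = 0.0
        for node, weight in GH_NODES:
            b = math.sqrt(2.0) * sigma * node  # change of variables to N(0, sigma^2)
            p = logistic(beta0 + b)            # success probability given the random effect
            lik = 1.0
            for yi in y:
                lik *= p if yi == 1 else 1.0 - p
            total += (weight / math.sqrt(math.pi)) * lik
        return total
    ```

    With `sigma = 0` the random effect vanishes and the quadrature reduces exactly to the ordinary logistic likelihood; in practice more nodes would be used, and the adaptive placement is what packages like aML automate.
    
    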

    On computational tools for Bayesian data analysis

    While Robert and Rousseau (2010) addressed the foundational aspects of Bayesian analysis, the current chapter details its practical aspects through a review of the computational methods available for approximating Bayesian procedures. Recent innovations such as Markov chain Monte Carlo, sequential Monte Carlo methods and, more recently, approximate Bayesian computation techniques have considerably increased the potential for Bayesian applications, and they have also opened new avenues for Bayesian inference, first and foremost Bayesian model choice. Comment: This is a chapter for the book "Bayesian Methods and Expert Elicitation" edited by Klaus Bocker, 23 pages, 9 figures.
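    As a concrete instance of one technique the chapter reviews, a rejection-sampling version of approximate Bayesian computation fits in a few lines. This is a generic illustration, not code from the chapter; the Bernoulli model, uniform prior, and success-count summary statistic are assumptions chosen for simplicity.

    ```python
    import random

    def abc_rejection(data, n_draws=2000, tol=0, seed=0):
        """Rejection ABC for the success probability of binary data: draw p
        from a uniform(0, 1) prior, simulate a data set of the same size,
        and keep p when the simulated success count matches the observed
        count within tol.  Returns the accepted draws (posterior sample)."""
        rng = random.Random(seed)
        n, s_obs = len(data), sum(data)
        accepted = []
        for _ in range(n_draws):
            p = rng.random()                               # prior draw
            sim = sum(rng.random() < p for _ in range(n))  # simulated summary
            if abs(sim - s_obs) <= tol:
                accepted.append(p)
        return accepted
    ```

    Because the summary statistic is sufficient here and `tol=0`, the accepted draws are an exact posterior sample; loosening `tol` trades accuracy for acceptance rate, which is the central tuning issue in ABC.
    
    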

    A Bayesian information criterion for singular models

    We consider approximate Bayesian model choice for model selection problems that involve models whose Fisher-information matrices may fail to be invertible along other competing submodels. Such singular models do not obey the regularity conditions underlying the derivation of Schwarz's Bayesian information criterion (BIC), and the penalty structure in BIC generally does not reflect the frequentist large-sample behavior of their marginal likelihood. While large-sample theory for the marginal likelihood of singular models has been developed recently, the resulting approximations depend on the true parameter value and lead to a paradox of circular reasoning. Guided by examples such as determining the number of components of mixture models, the number of factors in latent factor models or the rank in reduced-rank regression, we propose a resolution to this paradox and give a practical extension of BIC for singular model selection problems.
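    For reference, the regular-model criterion whose penalty the abstract says breaks down in the singular case is Schwarz's BIC, -2 log L-hat + d log n, with d the number of free parameters. A minimal sketch for an i.i.d. Gaussian model (a regular model, chosen purely for illustration and not taken from the paper):

    ```python
    import math

    def gaussian_bic(data):
        """Schwarz's BIC, -2*loglik + d*log(n), for an i.i.d. Gaussian model
        with both mean and variance estimated, so d = 2 free parameters.
        Assumes at least two distinct values (the variance MLE must be > 0)."""
        n = len(data)
        mu = sum(data) / n
        var = sum((x - mu) ** 2 for x in data) / n          # MLE of the variance
        loglik = -0.5 * n * (math.log(2 * math.pi * var) + 1)  # Gaussian log-likelihood at the MLE
        d = 2
        return -2.0 * loglik + d * math.log(n)
    ```

    In a singular model, such as a mixture with an unidentifiable extra component, the fixed penalty `d * log(n)` no longer matches the asymptotics of the marginal likelihood, which is the gap the paper's extension of BIC addresses.
    
    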

    Leveraging the Exact Likelihood of Deep Latent Variable Models

    Deep latent variable models (DLVMs) combine the approximation abilities of deep neural networks and the statistical foundations of generative models. Variational methods are commonly used for inference; however, the exact likelihood of these models has been largely overlooked. The purpose of this work is to study the general properties of this quantity and to show how they can be leveraged in practice. We focus on important inferential problems that rely on the likelihood: estimation and missing data imputation. First, we investigate maximum likelihood estimation for DLVMs: in particular, we show that most unconstrained models used for continuous data have an unbounded likelihood function. This problematic behaviour is demonstrated to be a source of mode collapse. We also show how to ensure the existence of maximum likelihood estimates, and draw useful connections with nonparametric mixture models. Finally, we describe an algorithm for missing data imputation using the exact conditional likelihood of a deep latent variable model. On several data sets, our algorithm consistently and significantly outperforms the usual imputation scheme used for DLVMs.
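    The unboundedness phenomenon has a classical one-dimensional analogue (a simplification for illustration; the paper works with deep generative models, not a single Gaussian): centre a Gaussian density's mean exactly on an observation and shrink its variance, and the likelihood contribution of that observation grows without bound.

    ```python
    import math

    def gaussian_density(x, mu, sigma):
        """Density of N(mu, sigma^2) evaluated at x."""
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    # Centre the component exactly on a data point and shrink its scale:
    # the density at that point is 1 / (sigma * sqrt(2*pi)) and diverges.
    x0 = 1.3
    densities = [gaussian_density(x0, x0, s) for s in (1.0, 0.1, 0.001)]
    ```

    This is the same degeneracy that makes the likelihood of unconstrained Gaussian mixtures unbounded, which is the connection to nonparametric mixtures the abstract mentions.
    
    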

    Toric Statistical Models: Ising and Markov

    This is a review of current research in Markov chains as toric statistical models. Its content is a mixture of background information, results from the relevant recent literature, new results, and work in progress. Comment: 26 pages. Submitted Oct 10, 201
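    A toric statistical model is one cut out by binomial (monomial-equals-monomial) equations in the probabilities. As a small illustration, simpler than the Ising and Markov chain models the review treats, the 2x2 independence model is toric: a probability table lies on it exactly when one cross-product difference vanishes.

    ```python
    def is_toric_independence(table, tol=1e-9):
        """Check whether a 2x2 probability table lies on the independence
        model, i.e. satisfies the binomial equation p11*p22 = p12*p21
        (the defining equation of this toric model)."""
        (p11, p12), (p21, p22) = table
        return abs(p11 * p22 - p12 * p21) <= tol
    ```

    For larger state spaces and for the Markov chain models in the review, the defining binomial equations are found from the integer matrix of sufficient statistics, typically with computer algebra rather than by hand.
    
    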