186 research outputs found

    Finite-dimensional nonparametric priors: theory and applications

    The investigation of flexible classes of discrete priors has been an active research line in Bayesian statistics. Several contributions were devoted to the study of nonparametric priors, including the Dirichlet process, the Pitman–Yor process and normalized random measures with independent increments (NRMIs). In contrast, only a few finite-dimensional discrete priors are known, and even fewer come with sufficient theoretical guarantees. In this thesis we aim at filling this gap by presenting several novel general classes of parametric priors closely connected to well-known infinite-dimensional processes, which are recovered as limiting cases. A priori and a posteriori properties are extensively studied. For instance, we determine explicit expressions for the induced random partition, the associated urn schemes and the posterior distributions. Furthermore, we exploit finite-dimensional approximations to facilitate posterior computations in complex models beyond the exchangeability framework. Our theoretical and computational findings are employed in a variety of real statistical problems, covering toxicological, sociological, and marketing applications.
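    As a stylized sketch (an assumed illustration, not taken from the thesis), the way a finite-dimensional prior can approximate an infinite-dimensional one may be shown with a finite symmetric Dirichlet prior, which recovers the Dirichlet process as the number of components H grows; all names and numbers below are assumptions:

```python
import numpy as np

# Hypothetical illustration: a finite symmetric Dirichlet prior with H
# components approximates a Dirichlet process with concentration alpha;
# the DP is recovered as the limiting case H -> infinity.
rng = np.random.default_rng(0)

def sample_partition(n, alpha, H):
    """Sample a random partition of n items under the finite Dirichlet prior."""
    weights = rng.dirichlet(np.full(H, alpha / H))   # mixture weights
    labels = rng.choice(H, size=n, p=weights)        # cluster assignments
    return len(np.unique(labels))                    # number of occupied clusters

n, alpha = 100, 1.0
for H in (5, 50, 500):
    k = np.mean([sample_partition(n, alpha, H) for _ in range(200)])
    print(f"H={H:4d}: mean number of clusters ~ {k:.2f}")
```

    As H grows, the induced number of clusters stabilizes toward what a DP(alpha) prior would give, which is the sense in which the finite prior serves as a computationally convenient approximation.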

    Advances in Bayesian Inference for Binary and Categorical Data

    Bayesian binary probit regression and its extensions to time-dependent observations and multi-class responses are popular tools in binary and categorical data regression due to their high interpretability and non-restrictive assumptions. Although the theory is well established in the frequentist literature, such models are still the object of active research in the Bayesian framework. This is mostly because state-of-the-art methods for Bayesian inference in such settings are either computationally impractical or inaccurate in high dimensions, and in many cases a closed-form expression for the posterior distribution of the model parameters is apparently lacking. The development of improved computational methods and theoretical results to perform inference with this vast class of models is then of utmost importance. To overcome the above-mentioned computational issues, we develop a novel variational approximation for the posterior of the coefficients in high-dimensional probit regression with binary responses and Gaussian priors, resulting in a unified skew-normal (SUN) approximating distribution that converges to the exact posterior as the number of predictors p increases. Moreover, we show that closed-form expressions are actually available for posterior distributions arising from models that account for correlated binary time series and multi-class responses. In the former case, we prove that the filtering, predictive and smoothing distributions in dynamic probit models with Gaussian state variables are, in fact, available and belong to a class of SUN distributions whose parameters can be updated recursively in time via analytical expressions, allowing us to develop an i.i.d. sampler together with an optimal sequential Monte Carlo procedure. As for the latter case, i.e. multi-class probit models, we show that many different formulations developed in the literature in separate ways admit a unified view and a closed-form SUN posterior distribution under a SUN prior distribution (thus including the Gaussian case). This allows us to implement computational methods which outperform state-of-the-art routines in high-dimensional settings by leveraging SUN properties and the variational methods introduced for the binary probit. Finally, motivated also by the possible linkage of some of the above-mentioned models to the Bayesian nonparametrics literature, a novel species-sampling model for partially exchangeable observations is introduced, with the double goal of both predicting the class (or species) of future observations and testing for homogeneity among the different available populations. Such a model arises from a combination of Pitman–Yor processes and leverages the appealing features of both hierarchical and nested structures developed in the Bayesian nonparametrics literature. Posterior inference is feasible thanks to the implementation of a marginal Gibbs sampler, whose pseudo-code is given in full detail.
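    For context, a minimal data-augmentation Gibbs sampler for probit regression with a Gaussian prior (in the spirit of Albert and Chib's classical scheme) illustrates the kind of sampling-based routine that closed-form SUN results and variational approximations aim to improve upon; the toy data and all settings below are illustrative assumptions, not the thesis's method:

```python
import numpy as np
from scipy import stats

# Sketch of a classical data-augmentation Gibbs sampler for Bayesian probit
# regression with a N(0, tau^2 I) prior. Toy data and hyperparameters are
# assumptions chosen for illustration only.
rng = np.random.default_rng(1)
n, p, tau2 = 200, 3, 10.0
X = rng.normal(size=(n, p))
y = (X @ np.array([1.0, -0.5, 0.25]) + rng.normal(size=n) > 0).astype(int)

V = np.linalg.inv(X.T @ X + np.eye(p) / tau2)   # posterior covariance given z
beta, draws = np.zeros(p), []
for _ in range(500):
    # 1) sample latent utilities z_i from normals truncated by the sign of y_i
    mu = X @ beta
    u = rng.uniform(size=n)
    lo = stats.norm.cdf(-mu)                     # P(z_i < 0 | mu_i)
    q = np.where(y == 1, lo + u * (1 - lo), u * lo)
    z = mu + stats.norm.ppf(np.clip(q, 1e-10, 1 - 1e-10))
    # 2) sample beta | z from its Gaussian full conditional
    beta = rng.multivariate_normal(V @ X.T @ z, V)
    draws.append(beta)
print("posterior mean:", np.mean(draws[100:], axis=0))
```

    Each iteration requires sampling n truncated normals; the closed-form SUN machinery described above sidesteps this iterative scheme entirely.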

    Some models are useful, but how do we know which ones? Towards a unified Bayesian model taxonomy

    Probabilistic (Bayesian) modeling has experienced a surge of applications in almost all quantitative sciences and industrial areas. This development is driven by a combination of several factors, including better probabilistic estimation algorithms, flexible software, increased computing power, and a growing awareness of the benefits of probabilistic learning. However, a principled Bayesian model building workflow is far from complete and many challenges remain. To aid future research and applications of a principled Bayesian workflow, we ask and provide answers for what we perceive as two fundamental questions of Bayesian modeling, namely (a) "What actually is a Bayesian model?" and (b) "What makes a good Bayesian model?". As an answer to the first question, we propose the PAD model taxonomy that defines four basic kinds of Bayesian models, each representing some combination of the assumed joint distribution of all (known or unknown) variables (P), a posterior approximator (A), and training data (D). As an answer to the second question, we propose ten utility dimensions according to which we can evaluate Bayesian models holistically, namely, (1) causal consistency, (2) parameter recoverability, (3) predictive performance, (4) fairness, (5) structural faithfulness, (6) parsimony, (7) interpretability, (8) convergence, (9) estimation speed, and (10) robustness. Further, we propose two example utility decision trees that describe hierarchies and trade-offs between utilities depending on the inferential goals that drive model building and testing.

    Auxiliary Likelihood-Based Approximate Bayesian Computation in State Space Models

    A computationally simple approach to inference in state space models is proposed, using approximate Bayesian computation (ABC). ABC avoids evaluation of an intractable likelihood by matching summary statistics for the observed data with statistics computed from data simulated from the true process, based on parameter draws from the prior. Draws that produce a 'match' between observed and simulated summaries are retained, and used to estimate the inaccessible posterior. With no reduction to a low-dimensional set of sufficient statistics being possible in the state space setting, we define the summaries as the maximizer of an auxiliary likelihood function, and thereby exploit the asymptotic sufficiency of this estimator for the auxiliary parameter vector. We derive conditions under which this approach - including a computationally efficient version based on the auxiliary score - achieves Bayesian consistency. To reduce the well-documented inaccuracy of ABC in multi-parameter settings, we propose the separate treatment of each parameter dimension using an integrated likelihood technique. Three stochastic volatility models for which exact Bayesian inference is either computationally challenging, or infeasible, are used for illustration. We demonstrate that our approach compares favorably against an extensive set of approximate and exact comparators. An empirical illustration completes the paper. (Comment: This paper is forthcoming at the Journal of Computational and Graphical Statistics. It also supersedes the earlier arXiv paper "Approximate Bayesian Computation in State Space Models", arXiv:1409.8363.)
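    A stylized toy version of the idea (an i.i.d. Gaussian model rather than the paper's state space setting; all names, priors, and tolerances are assumptions) can be sketched as follows: the summary statistic is the auxiliary MLE, and prior draws whose simulated summaries fall close to the observed ones are retained:

```python
import numpy as np

# Toy sketch of auxiliary-likelihood ABC: the summary is the MLE of an
# auxiliary Gaussian model, and a rejection step keeps prior draws whose
# simulated summaries match the observed ones within a tolerance.
# All numbers are illustrative assumptions.
rng = np.random.default_rng(2)

def aux_mle(x):
    """MLE of the auxiliary Gaussian model: (mean, log sd)."""
    return np.array([x.mean(), np.log(x.std())])

n, theta_true = 200, 1.5
y_obs = rng.normal(theta_true, 1.0, size=n)
s_obs = aux_mle(y_obs)

kept = []
for _ in range(5000):
    theta = rng.uniform(-5, 5)                  # draw from a flat prior
    s_sim = aux_mle(rng.normal(theta, 1.0, size=n))
    if np.linalg.norm(s_sim - s_obs) < 0.2:     # match the auxiliary MLEs
        kept.append(theta)
print(f"ABC posterior mean: {np.mean(kept):.2f} ({len(kept)} draws kept)")
```

    In the paper's setting the auxiliary model is a tractable stand-in for an intractable state space likelihood; here the two coincide, which is what makes the toy so simple.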

    Variational Bayesian Learning and its Applications

    This dissertation is devoted to studying a fast and analytic approximation method, called the variational Bayesian (VB) method, and aims to give insight into its general applicability and usefulness, and explore its applications to various real-world problems. This work has three main foci: 1) the general applicability and properties; 2) diagnostics for VB approximations; 3) variational applications. Generally, variational inference has been developed in the context of the exponential family, which is open to further development. First, it usually considers cases in the context of the conjugate exponential family. Second, variational inferences are developed only with respect to natural parameters, which are often not the parameters of immediate interest. Moreover, the full factorization, which assumes all terms to be independent of one another, is the most commonly used scheme in most variational applications. We show that VB inferences can be extended to a more general situation. We propose a special parameterization for a parametric family, and also propose a factorization scheme with a more general dependency structure than is traditional in VB. Based on these new frameworks, we develop a variational formalism in which VB has a fast implementation and is not limited to the conjugate exponential setting. We also investigate its local convergence property, the effects of choosing different priors, and the effects of choosing different factorization schemes. The essence of the VB method relies on making simplifying assumptions about the posterior dependence of a problem. By definition, the general posterior dependence structure is distorted. In addition, in the various applications, we observe that the posterior variances are often underestimated. We aim to develop diagnostic tests to assess VB approximations, and these methods are expected to be quick and easy to use, and to require no sophisticated tuning expertise.
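    The variance underestimation mentioned above can be made concrete with a standard textbook example (an assumed illustration, not taken from the dissertation): for a bivariate Gaussian target, the optimal mean-field Gaussian factors have precision equal to the corresponding diagonal entry of the target's precision matrix, so their variances never exceed the true marginal variances:

```python
import numpy as np

# Classic illustration of mean-field variance underestimation: for a
# bivariate Gaussian target with correlation rho, the optimal factor
# variance is 1/Lambda_ii = 1 - rho^2, below the true marginal variance 1.
rho = 0.9
Sigma = np.array([[1.0, rho], [rho, 1.0]])   # target covariance
Lambda = np.linalg.inv(Sigma)                # target precision
true_var = Sigma[0, 0]                       # true marginal variance: 1.0
vb_var = 1.0 / Lambda[0, 0]                  # mean-field variance: 1 - rho^2
print(f"true marginal variance: {true_var:.2f}, mean-field VB variance: {vb_var:.2f}")
```

    The stronger the posterior correlation, the more severe the shrinkage, which is precisely what the diagnostics below are designed to detect and correct.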
We propose three methods to compute the actual posterior covariance matrix by only using the knowledge obtained from VB approximations: 1) to look at the joint posterior distribution and attempt to find an optimal affine transformation that links the VB and true posteriors; 2) based on a marginal posterior density approximation, to work in specific low-dimensional directions to estimate true posterior variances and correlations; 3) based on a stepwise conditional approach, to construct and solve a system of equations that leads to estimates of the true posterior variances and correlations. A key computation in the above methods is to calculate a univariate marginal or conditional variance. We propose a novel way, called the VB Adjusted Independent Metropolis-Hastings (VBAIMH) method, to compute these quantities. It uses an independent Metropolis-Hastings (IMH) algorithm with proposal distributions configured by VB approximations. The variance of the target distribution is obtained by monitoring the acceptance rate of the generated chain. One major question associated with the VB method is how well the approximations can work. We particularly study the mean structure approximations, and show how it is possible using VB approximations to approach model selection tasks such as determining the dimensionality of a model, or variable selection. We also consider the variational application in Bayesian nonparametric modeling, especially for the Dirichlet process (DP). The posterior inference for DP has been extensively studied in the context of MCMC methods. This work presents a full variational solution for DP with non-conjugate settings. Our solution uses a truncated stick-breaking representation. We propose an empirical method to determine the number of distinct components in a finite-dimensional DP. The posterior predictive distribution for DP is often not available in a closed form. We show how to use the variational techniques to approximate this quantity.
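    The IMH ingredient of the VBAIMH idea can be sketched in a toy form (the precise mapping from acceptance rate to target variance is developed in the dissertation; the target, the proposal mismatch, and all numbers below are illustrative assumptions):

```python
import numpy as np

# Sketch of an independent Metropolis-Hastings chain whose proposal is a
# VB-style Gaussian approximation that is too narrow. The chain's behaviour
# (acceptance rate, sample variance) carries information about the mismatch
# between the VB proposal and the true posterior. Toy numbers only.
rng = np.random.default_rng(3)

def log_target(x):               # "true posterior": N(0, 2^2)
    return -0.5 * x ** 2 / 4.0

vb_sd = 1.0                      # VB proposal underestimates the sd (truth: 2)
def log_prop(x):
    return -0.5 * x ** 2 / vb_sd ** 2

x, accepts, samples = 0.0, 0, []
for _ in range(20000):
    x_new = rng.normal(0.0, vb_sd)   # independent proposal from the VB fit
    log_a = (log_target(x_new) - log_prop(x_new)) - (log_target(x) - log_prop(x))
    if np.log(rng.uniform()) < log_a:
        x, accepts = x_new, accepts + 1
    samples.append(x)
print(f"acceptance rate: {accepts / 20000:.2f}, chain variance: {np.var(samples):.2f}")
```

    Despite the narrow proposal, the chain still targets the true posterior, so its sample variance drifts up toward the true value of 4 rather than the proposal's 1; that gap is the signal the diagnostic exploits.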
    As a concrete application study, we work through the VB method on regime-switching lognormal models and present solutions to quantify both the uncertainty in the parameters and the model specification. Through a series of numerical comparison studies with likelihood-based methods and MCMC methods on simulated and real data sets, we show that the VB method can recover the model structure exactly, gives reasonable point estimates, and is very computationally efficient.