    EMMIXcskew: an R Package for the Fitting of a Mixture of Canonical Fundamental Skew t-Distributions

    This paper presents the R package EMMIXcskew for the fitting of the canonical fundamental skew t-distribution (CFUST) and finite mixtures of this distribution (FM-CFUST) via maximum likelihood (ML). The CFUST distribution provides a flexible family of models for handling non-normal data, with parameters for capturing skewness and heavy tails. It formally encompasses the normal, t, and skew-normal distributions as special and/or limiting cases. Several other versions of the skew t-distribution are also nested within the CFUST distribution. An Expectation-Maximization (EM) algorithm is described for computing the ML estimates of the parameters of the FM-CFUST model, and different strategies for initializing the algorithm are discussed and illustrated. The methodology is implemented in the EMMIXcskew package, and examples are presented using two real datasets. The package contains functions to fit the FM-CFUST model, including procedures for generating different initial values. Additional features include random sample generation and contour visualization in 2D and 3D.
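
    As a rough illustration of the workflow the abstract describes, the R snippet below fits a two-component FM-CFUST model and overlays fitted density contours on the data. The function names (fmcfust, fmcfust.contour.2d) follow the package paper, but the argument lists shown here are assumptions and should be checked against the package documentation.

        ## Minimal usage sketch, assuming the fmcfust() and
        ## fmcfust.contour.2d() interfaces described in the package paper;
        ## exact argument names and defaults may differ across versions.
        library(EMMIXcskew)

        set.seed(1)
        dat <- as.matrix(iris[, 1:2])   # small bivariate example dataset

        ## Fit a g = 2 component FM-CFUST mixture by ML via the EM algorithm;
        ## by default the package generates its own initial values.
        fit <- fmcfust(g = 2, dat)

        ## Inspect the ML estimates (location, scale, skewness,
        ## degrees of freedom, mixing proportions) and the log-likelihood.
        print(fit)

        ## 2D contour plot of the fitted mixture density over the data.
        fmcfust.contour.2d(dat, fit)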

    Exchangeable random measures

    Let $A$ be a standard Borel space, and consider the space $A^{\mathbb{N}^{(k)}}$ of $A$-valued arrays indexed by the size-$k$ subsets of $\mathbb{N}$. This paper concerns random measures on such a space whose laws are invariant under the natural action of permutations of $\mathbb{N}$. The main result is a representation theorem for such 'exchangeable' random measures, obtained using the classical representation theorems for exchangeable arrays due to de Finetti, Hoover, Aldous and Kallenberg. After proving this representation, two applications of exchangeable random measures are given. The first is a short new proof of the Dovbysh-Sudakov representation theorem for exchangeable PSD matrices. The second is the formulation of a natural class of limit objects for dilute mean-field spin glass models, retaining more information than just the limiting Gram-de Finetti matrix used in the study of the Sherrington-Kirkpatrick model.
    Comment: 24 pages. Published version available at http://projecteuclid.org/euclid.aihp/143575923
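
    To fix notation (a sketch; the paper's precise statement may differ in details), write $\mathbb{N}^{(k)}$ for the collection of size-$k$ subsets of $\mathbb{N}$; a finitary permutation $\sigma$ of $\mathbb{N}$ permutes index sets and hence acts on arrays coordinate-wise, and exchangeability of a random measure $\mu$ on $A^{\mathbb{N}^{(k)}}$ means invariance in law under this action:

        % Exchangeability of a random measure \mu on A^{N^{(k)}} (sketch).
        % \sigma acts on arrays by (\sigma x)_e = x_{\sigma^{-1}(e)};
        % \sigma_* denotes the induced pushforward of measures.
        \sigma_{*}\mu \overset{d}{=} \mu
        \qquad \text{for every finitary permutation } \sigma \text{ of } \mathbb{N}.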

    Four moments theorems on Markov chaos

    We obtain quantitative Four Moments Theorems establishing convergence of the laws of elements of a Markov chaos to a Pearson distribution, where the only assumption we make on the Pearson distribution is that it admits four moments. While in general one cannot use moments to establish convergence to heavy-tailed distributions, we provide a context in which the first four moments suffice. These results are obtained by proving a general carré du champ bound on the distance between laws of random variables in the domain of a Markov diffusion generator and invariant measures of diffusions. For elements of a Markov chaos, this bound can be reduced to just the first four moments.
    Comment: 24 pages
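
    For reference, the carré du champ operator associated with a Markov generator $L$ is the standard bilinear form below (this definition is standard and not specific to the paper):

        % Carre du champ operator of a Markov generator L (standard definition).
        \Gamma(f,g) \;=\; \tfrac{1}{2}\bigl( L(fg) - f\,Lg - g\,Lf \bigr),
        \qquad \Gamma(f) := \Gamma(f,f).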

    Conjugate Bayes for probit regression via unified skew-normal distributions

    Regression models for dichotomous data are ubiquitous in statistics. Besides being useful for inference on binary responses, these methods also serve as building blocks in more complex formulations, such as density regression, nonparametric classification and graphical models. Within the Bayesian framework, inference proceeds by updating the priors for the coefficients, typically set to be Gaussians, with the likelihood induced by probit or logit regressions for the responses. In this updating, the apparent absence of a tractable posterior has motivated a variety of computational methods, including Markov Chain Monte Carlo routines and algorithms which approximate the posterior. Despite being routinely implemented, Markov Chain Monte Carlo strategies face mixing or time-inefficiency issues in large p and small n studies, whereas approximate routines fail to capture the skewness typically observed in the posterior. This article proves that the posterior distribution for the probit coefficients has a unified skew-normal kernel, under Gaussian priors. This novel result allows efficient Bayesian inference for a wide class of applications, especially in large p and small-to-moderate n studies where state-of-the-art computational methods face notable issues. These advances are outlined in a genetic study, and further motivate the development of a wider class of conjugate priors for probit models along with methods to obtain independent and identically distributed samples from the unified skew-normal posterior.
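
    In symbols, the conjugacy result can be sketched as follows (the exact parameterization of the unified skew-normal (SUN) posterior is given in the paper; the notation here is an assumption for illustration):

        % Probit likelihood with a Gaussian prior yields a SUN posterior;
        % the SUN parameters are omitted here (see the paper for details).
        (y_i \mid \beta) \sim \mathrm{Bern}\{\Phi(x_i^{\top}\beta)\}, \quad i = 1,\dots,n,
        \qquad \beta \sim \mathrm{N}_p(\xi, \Omega)
        \;\Longrightarrow\;
        (\beta \mid y_1,\dots,y_n) \sim \mathrm{SUN}_{p,n}.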