13,983 research outputs found

    Estimating the SUR Tobit model when errors are Gaussian scale mixtures: with an application to high-frequency financial data

    This paper examines a multivariate Tobit system with scale-mixture disturbances. Three estimation methods, namely maximum simulated likelihood, the expectation-maximization algorithm, and Bayesian MCMC simulators, are proposed and compared in experiments with generated data. The chief finding is that the Bayesian approach outperforms the others in terms of accuracy, speed, and stability. The proposed model is also applied to a real data set to study high-frequency price and trading volume dynamics. The empirical results confirm the information content of historical prices, lending support to the usefulness of technical analysis. In addition, the scale-mixture model is extended to a sample-selection SUR Tobit and to finite Gaussian regime mixtures.
    Keywords: Tobit; Gaussian mixtures; Bayesian
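
    As a generic reference point (the notation below is illustrative, not taken from the paper), a SUR Tobit system with Gaussian scale-mixture errors can be written as

        y*_i = X_i beta + eps_i,    eps_i | lambda_i ~ N(0, lambda_i * Sigma),    lambda_i ~ G,
        y_ij = max(0, y*_ij),       j = 1, ..., m,

    where mixing over the scale lambda_i produces heavier-than-Gaussian tails; for example, an inverse-gamma mixing distribution G yields multivariate Student-t errors.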

    From here to infinity - sparse finite versus Dirichlet process mixtures in model-based clustering

    In model-based clustering, mixture models are used to group data points into clusters. A useful concept, introduced for Gaussian mixtures by Malsiner Walli et al. (2016), is that of sparse finite mixtures, where the prior on the weight distribution of a mixture with K components is chosen such that, a priori, the number of clusters in the data is random and is allowed to be smaller than K with high probability. The number of clusters is then inferred a posteriori from the data. The present paper makes the following contributions in the context of sparse finite mixture modelling. First, it is illustrated that the concept of sparse finite mixtures is very generic and easily extended to cluster various types of non-Gaussian data, in particular discrete data and continuous multivariate data arising from non-Gaussian clusters. Second, sparse finite mixtures are compared to Dirichlet process mixtures with respect to their ability to identify the number of clusters. For both model classes, a random hyperprior is considered for the parameters determining the weight distribution. By suitably matching these priors, it is shown that the choice of this hyperprior is far more influential on the cluster solution than the choice between a sparse finite mixture and a Dirichlet process mixture. Comment: Accepted version
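
    A minimal numpy sketch (not the authors' code) of the idea behind the sparse finite mixture prior: with a symmetric Dirichlet(e0) prior on the weights and e0 small, most of the K components stay empty a priori, so the number of filled components, i.e. the number of clusters, can be inferred from the data.

        import numpy as np

        rng = np.random.default_rng(1)
        K, e0, n = 10, 0.05, 1000   # max components, sparse concentration, sample size

        # Sparse symmetric Dirichlet prior on the mixture weights.
        weights = rng.dirichlet(np.full(K, e0))

        # Allocate observations to components; only a few components get filled.
        z = rng.choice(K, size=n, p=weights)
        print("non-empty components:", np.unique(z).size)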

    Identifying Mixtures of Mixtures Using Bayesian Estimation

    The use of a finite mixture of normal distributions in model-based clustering makes it possible to capture non-Gaussian data clusters. However, identifying the clusters from the normal components is challenging and is in general achieved either by imposing constraints on the model or by using post-processing procedures. Within the Bayesian framework, we propose a different approach based on sparse finite mixtures to achieve identifiability. We specify a hierarchical prior whose hyperparameters are carefully selected so that they reflect the cluster structure aimed at. In addition, this prior allows the model to be estimated using standard MCMC sampling methods. In combination with a post-processing approach that resolves the label-switching issue and results in an identified model, our approach makes it possible to simultaneously (1) determine the number of clusters, (2) flexibly approximate the cluster distributions in a semi-parametric way using finite mixtures of normals, and (3) identify cluster-specific parameters and classify observations. The proposed approach is illustrated in two simulation studies and on benchmark data sets. Comment: 49 pages
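
    A hedged sketch of the two-level structure described above: each cluster is itself a finite mixture of normals, so the marginal density is a mixture of mixtures. All weights and component parameters below are invented for illustration.

        import numpy as np
        from scipy.stats import norm

        eta = np.array([0.6, 0.4])                            # cluster weights
        w   = [np.array([0.5, 0.5]), np.array([0.3, 0.7])]    # within-cluster weights
        mu  = [np.array([-2.0, -0.5]), np.array([2.0, 3.5])]  # component means
        sd  = [np.array([0.5, 1.0]), np.array([0.6, 0.8])]    # component std devs

        def density(x):
            # Sum over clusters of: cluster weight x within-cluster normal mixture.
            return sum(eta[k] * np.sum(w[k] * norm.pdf(x[:, None], mu[k], sd[k]), axis=1)
                       for k in range(len(eta)))

        print(density(np.linspace(-5.0, 7.0, 5)))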

    A Tutorial on Bayesian Nonparametric Models

    A key problem in statistical modeling is model selection: how to choose a model at an appropriate level of complexity. This problem appears in many settings, most prominently in choosing the number of clusters in mixture models or the number of factors in factor analysis. In this tutorial we describe Bayesian nonparametric methods, a class of methods that sidesteps this issue by allowing the data to determine the complexity of the model. This tutorial is a high-level introduction to Bayesian nonparametric methods and contains several examples of their application. Comment: 28 pages, 8 figures
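
    As one concrete example of the data-determined-complexity idea, here is a minimal (truncated) stick-breaking sketch of a Dirichlet process's mixture weights; the truncation level and concentration parameter are arbitrary choices for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        alpha, T, n = 1.0, 100, 500   # concentration, truncation level, sample size

        # Stick-breaking: v_k ~ Beta(1, alpha), w_k = v_k * prod_{j<k} (1 - v_j).
        v = rng.beta(1.0, alpha, size=T)
        w = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))

        # Assign observations to clusters; the number occupied adapts to the data.
        z = rng.choice(T, size=n, p=w / w.sum())
        print("occupied clusters:", np.unique(z).size)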

    Bayesian adaptation

    Given the need for low-assumption inferential methods in infinite-dimensional settings, Bayesian adaptive estimation via a prior distribution that depends neither on the regularity of the function to be estimated nor on the sample size is valuable. We elucidate the relationships among the main approaches followed to design priors for minimax-optimal rate-adaptive estimation, while shedding light on the underlying ideas. Comment: 20 pages, Propositions 3 and 5 added
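
    One standard construction behind such priors (sketched here in generic notation, not necessarily the paper's) is to place a hyperprior λ on the unknown regularity index α and mix:

        Π = ∫ Π_α dλ(α),

    so that the posterior can concentrate at the minimax rate for the true smoothness even though the prior itself uses neither the smoothness nor the sample size.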