    An approximate Bayesian marginal likelihood approach for estimating finite mixtures

    Estimation of finite mixture models when the mixing distribution support is unknown is an important problem. This paper gives a new approach based on a marginal likelihood for the unknown support. Motivated by a Bayesian Dirichlet prior model, a computationally efficient stochastic approximation version of the marginal likelihood is proposed and large-sample theory is presented. By restricting the support to a finite grid, a simulated annealing method is employed to maximize the marginal likelihood and estimate the support. Real and simulated data examples show that this novel stochastic approximation–simulated annealing procedure compares favorably to existing methods. Comment: 16 pages, 1 figure, 3 tables.
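
    As a rough illustration of the kind of procedure this abstract describes, the sketch below runs simulated annealing over subsets of a fixed grid of candidate support points. The objective, function names, kernel, and cooling schedule are all illustrative assumptions; in particular, the placeholder objective is an equal-weight mixture log-likelihood, not the paper's stochastic-approximation marginal likelihood.

```python
# A minimal sketch (not the authors' implementation): simulated annealing over
# subsets of a fixed grid of candidate support points. The objective below is a
# placeholder equal-weight normal-mixture log-likelihood, standing in for the
# paper's stochastic-approximation marginal likelihood.
import numpy as np

rng = np.random.default_rng(0)

def approx_marginal_loglik(x, support, sigma=1.0):
    """Placeholder objective: equal-weight normal-kernel mixture log-likelihood
    on the candidate support (stand-in for the approximate marginal likelihood)."""
    if len(support) == 0:
        return -np.inf
    dens = np.exp(-0.5 * ((x[:, None] - support[None, :]) / sigma) ** 2)
    dens /= sigma * np.sqrt(2.0 * np.pi)
    return np.log(dens.mean(axis=1)).sum()

def anneal_support(x, grid, n_iter=2000, t0=1.0, cooling=0.995):
    """Simulated annealing over subsets of `grid`: each move toggles one
    candidate point in or out of the estimated support."""
    include = rng.random(len(grid)) < 0.5              # random initial subset
    current = best = approx_marginal_loglik(x, grid[include])
    best_include = include.copy()
    temp = t0
    for _ in range(n_iter):
        j = rng.integers(len(grid))                    # propose toggling one point
        include[j] = ~include[j]
        proposal = approx_marginal_loglik(x, grid[include])
        if proposal >= current or rng.random() < np.exp((proposal - current) / temp):
            current = proposal                         # accept the move
            if current > best:
                best, best_include = current, include.copy()
        else:
            include[j] = ~include[j]                   # reject: undo the toggle
        temp *= cooling                                # cool the temperature
    return grid[best_include], best

# usage: two well-separated components, candidate support on a coarse grid
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])
support, score = anneal_support(x, np.linspace(-6, 6, 25))
print(support, score)
```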

    On approximating copulas by finite mixtures

    Copulas are now frequently used to approximate or estimate multivariate distributions because of their ability to take into account the multivariate dependence of the variables while controlling the approximation properties of the marginal densities. Copula-based multivariate models can often also be more parsimonious than fitting a flexible multivariate model, such as a mixture of normals model, directly to the data. However, to be effective, it is imperative that the family of copula models considered is sufficiently flexible. Although finite mixtures of copulas have been used to construct flexible families of copulas, their approximation properties are not well understood and we show that natural candidates such as mixtures of elliptical copulas and mixtures of Archimedean copulas cannot approximate a general copula arbitrarily well. Our article develops fundamental tools for approximating a general copula arbitrarily well by a mixture and proposes a family of finite mixtures that can do so. We illustrate empirically on a financial data set that our approach for estimating a copula can be much more parsimonious and results in a better fit than approximating the copula by a mixture of normal copulas. Comment: 26 pages, 1 figure, 2 tables.
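
    The benchmark family mentioned at the end of the abstract, a finite mixture of normal (Gaussian) copulas, is straightforward to evaluate; the sketch below shows the bivariate case. The function names, the fixed weights and correlations, and the uniformly simulated pseudo-observations are illustrative assumptions, not the paper's data, proposed family, or estimation procedure.

```python
# A minimal sketch, not code from the paper: the density of a finite mixture of
# bivariate Gaussian (normal) copulas, the benchmark family the abstract
# compares against. Weights, correlations, and data are illustrative.
import numpy as np
from scipy.stats import norm

def gaussian_copula_density(u, v, rho):
    """Bivariate Gaussian copula density with correlation rho, evaluated at
    pseudo-observations (u, v) in (0, 1)."""
    x, y = norm.ppf(u), norm.ppf(v)
    q = (2.0 * rho * x * y - rho**2 * (x**2 + y**2)) / (2.0 * (1.0 - rho**2))
    return np.exp(q) / np.sqrt(1.0 - rho**2)

def mixture_copula_density(u, v, weights, rhos):
    """Finite mixture of Gaussian copulas: sum_k w_k * c(u, v; rho_k)."""
    return sum(w * gaussian_copula_density(u, v, r) for w, r in zip(weights, rhos))

# usage: mixture log-likelihood on simulated pseudo-observations
rng = np.random.default_rng(1)
u, v = rng.uniform(size=500), rng.uniform(size=500)
loglik = np.log(mixture_copula_density(u, v, weights=[0.6, 0.4], rhos=[0.2, 0.8])).sum()
print(loglik)
```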

    Semiparametric inference in mixture models with predictive recursion marginal likelihood

    Predictive recursion is an accurate and computationally efficient algorithm for nonparametric estimation of mixing densities in mixture models. In semiparametric mixture models, however, the algorithm fails to account for any uncertainty in the additional unknown structural parameter. As an alternative to existing profile likelihood methods, we treat predictive recursion as a filter approximation to fitting a fully Bayes model, whereby an approximate marginal likelihood of the structural parameter emerges and can be used for inference. We call this the predictive recursion marginal likelihood. Convergence properties of predictive recursion under model mis-specification also lead to an attractive construction of this new procedure. We show pointwise convergence of a normalized version of this marginal likelihood function. Simulations compare the performance of this new marginal likelihood approach with that of existing profile likelihood methods as well as Dirichlet process mixtures in density estimation. Mixed-effects models and an empirical Bayes multiple testing application in time series analysis are also considered.
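
    A minimal sketch of the underlying recursion may help fix ideas: on a fixed grid, each observation updates the current mixing-density estimate, and the running product of the predictive normalizing constants gives a marginal likelihood that can be profiled over a structural parameter. The normal kernel, the weight sequence (i+1)^(-0.67), and the grid below are common default choices assumed for illustration, not necessarily those used in the paper.

```python
# A minimal sketch of predictive recursion on a fixed grid. The running sum of
# log normalizing constants is the log marginal likelihood described in the
# abstract. Kernel, weights, and grid are assumed defaults, not the paper's.
import numpy as np
from scipy.stats import norm

def predictive_recursion(x, grid, kernel_sd=1.0):
    """Return the estimated mixing density on `grid` and the log marginal
    likelihood accumulated over the data sequence."""
    f = np.full(len(grid), 1.0 / (grid[-1] - grid[0]))   # uniform initial guess
    du = np.gradient(grid)                                # quadrature weights
    log_ml = 0.0
    for i, xi in enumerate(x, start=1):
        w = (i + 1.0) ** (-0.67)                          # decaying weight sequence
        k = norm.pdf(xi, loc=grid, scale=kernel_sd)       # kernel k(x_i | u)
        m = np.sum(k * f * du)                            # predictive density m_i
        f = (1.0 - w) * f + w * k * f / m                 # recursion update
        log_ml += np.log(m)                               # accumulate marginal likelihood
    return f, log_ml

# usage: the log marginal likelihood can be profiled over a structural
# parameter, here the kernel standard deviation
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-1, 1, 150), rng.normal(2, 1, 150)])
grid = np.linspace(-6, 6, 200)
for sd in (0.5, 1.0, 2.0):
    _, log_ml = predictive_recursion(rng.permutation(x), grid, kernel_sd=sd)
    print(sd, round(log_ml, 2))
```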

    Importance sampling schemes for evidence approximation in mixture models

    The marginal likelihood is a central tool for drawing Bayesian inference about the number of components in mixture models. It is often approximated since the exact form is unavailable. A bias in the approximation may be due to an incomplete exploration by a simulated Markov chain (e.g., a Gibbs sequence) of the collection of posterior modes, a phenomenon also known as lack of label switching, as all possible label permutations must be simulated by the chain in order for it to converge and hence overcome the bias. In an importance sampling approach, imposing label switching on the importance function results in an exponential increase of the computational cost with the number of components. In this paper, two importance sampling schemes are proposed through choices for the importance function: an MLE proposal and a Rao-Blackwellised importance function. The second scheme is called dual importance sampling. We demonstrate that this dual importance sampling is a valid estimator of the evidence and moreover show that the statistical efficiency of the estimates increases. To reduce the induced computational demand, the original importance function is approximated; a suitable approximation can produce an estimate with the same precision at a reduced computational workload. Comment: 24 pages, 5 figures.
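
    To make the evidence-approximation setting concrete, the sketch below computes a plain importance sampling estimate of the evidence for a two-component normal mixture with unknown means, using an independent normal proposal centred near rough component estimates. This is only a crude stand-in for the MLE-type proposal mentioned in the abstract; the Rao-Blackwellised and dual schemes are not reproduced, and all priors, proposal scales, and names are illustrative assumptions.

```python
# A minimal sketch, not the authors' estimators: basic importance sampling for
# the evidence Z = ∫ p(x | θ) p(θ) dθ of a 0.5/0.5 mixture of N(mu1, 1) and
# N(mu2, 1), with an independent normal proposal centred near point estimates.
import numpy as np
from scipy.stats import norm
from scipy.special import logsumexp

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-2, 1, 100), rng.normal(2, 1, 100)])

def log_lik(x, mu1, mu2):
    """Log-likelihood of an equal-weight mixture of N(mu1, 1) and N(mu2, 1)."""
    comp = np.stack([norm.logpdf(x, mu1, 1), norm.logpdf(x, mu2, 1)])
    return logsumexp(comp, axis=0).sum() + len(x) * np.log(0.5)

def log_prior(mu1, mu2):
    """Independent N(0, 5^2) priors on the component means (illustrative)."""
    return norm.logpdf(mu1, 0, 5) + norm.logpdf(mu2, 0, 5)

# proposal: independent normals centred at rough component-mean estimates
centres = np.array([x[x < 0].mean(), x[x >= 0].mean()])
S = 5000
mu = rng.normal(loc=centres, scale=0.3, size=(S, 2))         # draws from q
log_q = norm.logpdf(mu, loc=centres, scale=0.3).sum(axis=1)  # proposal log-density

log_w = np.array([log_lik(x, m1, m2) + log_prior(m1, m2) for m1, m2 in mu]) - log_q
log_evidence = logsumexp(log_w) - np.log(S)   # log of the importance sampling average
print(log_evidence)
```

    Note that this proposal covers only one labelling of the two symmetric posterior modes, so under an exchangeable prior it recovers roughly half of the full evidence when the modes are well separated; the label switching discussion in the abstract is about exactly this issue.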