
    Multidimensional Membership Mixture Models

    We present the multidimensional membership mixture (M3) models, where every dimension of the membership represents an independent mixture model and each data point is generated jointly from the selected mixture components. This is helpful when the data has a certain shared structure. For example, three unique means and three unique variances can effectively form a Gaussian mixture model with nine components, while requiring only six parameters to fully describe it. In this paper, we present three instantiations of M3 models (together with the learning and inference algorithms): infinite, finite, and hybrid, depending on whether the number of mixtures is fixed or not. They are built upon Dirichlet process mixture models, latent Dirichlet allocation, and a combination of the two, respectively. We then consider two applications: topic modeling and learning 3D object arrangements. Our experiments show that our M3 models achieve better performance using fewer topics than many classic topic models. We also observe that topics from the different dimensions of M3 models are meaningful and orthogonal to each other. Comment: 9 pages, 7 figures
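    The parameter-sharing example in the abstract (three means and three variances jointly defining a nine-component Gaussian mixture from six parameters) can be sketched as follows; the numerical values are invented for illustration and this is not the paper's learning algorithm:

    ```python
    import numpy as np

    # Two independent membership dimensions: one picks a mean, the other a
    # variance. Their Cartesian product yields 3 x 3 = 9 effective Gaussian
    # components, yet only 3 + 3 = 6 parameters describe the whole mixture.
    means = np.array([-4.0, 0.0, 4.0])      # dimension 1: 3 shared means
    variances = np.array([0.25, 1.0, 4.0])  # dimension 2: 3 shared variances

    rng = np.random.default_rng(0)

    def sample_m3(n):
        """Draw n points by selecting a mean and a variance independently."""
        i = rng.integers(0, len(means), size=n)      # membership, dimension 1
        j = rng.integers(0, len(variances), size=n)  # membership, dimension 2
        return rng.normal(means[i], np.sqrt(variances[j]))

    x = sample_m3(10_000)
    n_components = len(means) * len(variances)   # 9 effective components
    n_parameters = len(means) + len(variances)   # 6 free parameters
    ```

    A flat Gaussian mixture with the same nine components would need all nine (mean, variance) pairs as separate parameters; the M3 factorization is what makes the shared structure cheap.
    
    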

    Generative linear mixture modelling.

    For multivariate data with a low-dimensional latent structure, a novel approach to linear dimension reduction based on Gaussian mixture models is proposed. A generative model is assumed for the data, where the mixture centres (or 'mass points') are positioned along lines or planes spanned through the data cloud. All involved parameters are estimated simultaneously through the EM algorithm, requiring an additional iteration within each M-step. Data points can be projected onto the low-dimensional space by taking the posterior mean over the estimated mass points. The compressed data can then be used for further processing, for instance as a low-dimensional predictor in a multivariate regression problem.
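    A minimal sketch of the projection step, assuming mass points have already been placed along a line through 2-D data (the paper estimates all parameters jointly via EM; everything below is invented for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Mass points along a line through the data cloud
    direction = np.array([1.0, 0.5]) / np.linalg.norm([1.0, 0.5])
    positions = np.linspace(-3, 3, 7)             # latent 1-D positions
    mass_points = positions[:, None] * direction  # centres on the line
    sigma2 = 0.3                                  # isotropic noise variance
    weights = np.full(len(positions), 1 / len(positions))

    # Generate data from this generative model
    z = rng.integers(0, len(positions), size=200)
    X = mass_points[z] + rng.normal(0, np.sqrt(sigma2), size=(200, 2))

    def project(X):
        """Posterior mean of the latent position given each data point."""
        d2 = ((X[:, None, :] - mass_points[None, :, :]) ** 2).sum(-1)
        log_resp = np.log(weights) - d2 / (2 * sigma2)
        resp = np.exp(log_resp - log_resp.max(axis=1, keepdims=True))
        resp /= resp.sum(axis=1, keepdims=True)     # posterior over centres
        return resp @ positions                     # compressed 1-D scores

    scores = project(X)
    ```

    The posterior mean gives a soft projection: points between two mass points land between their latent positions, rather than snapping to the nearest centre.
    
    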

    Deep Gaussian Mixture Models

    Deep learning is a hierarchical inference method that stacks multiple layers of learning to describe complex relationships more efficiently. In this work, Deep Gaussian Mixture Models are introduced and discussed. A Deep Gaussian Mixture Model (DGMM) is a network of multiple layers of latent variables, where, at each layer, the variables follow a mixture of Gaussian distributions. Thus, the deep mixture model consists of a set of nested mixtures of linear models, which globally provide a nonlinear model able to describe the data in a very flexible way. In order to avoid overparameterized solutions, dimension reduction by factor models can be applied at each layer of the architecture, resulting in deep mixtures of factor analysers. Comment: 19 pages, 4 figures
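    The nested-mixture construction can be illustrated by ancestral sampling from a hypothetical two-layer DGMM; the architecture and all parameter values below are invented for the sketch:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Top layer: a 1-D latent variable z following a two-component mixture
    top_means, top_stds = np.array([-2.0, 2.0]), np.array([0.5, 0.5])

    # Bottom layer: per-component linear maps z -> x in 2-D, plus noise.
    # Composing the two mixtures gives 2 x 2 = 4 nested linear models,
    # which together form a globally nonlinear density.
    A = np.array([[[1.0], [0.5]], [[-0.5], [1.0]]])   # shape (2, 2, 1)
    b = np.array([[0.0, 0.0], [3.0, -1.0]])
    noise_std = 0.1

    def sample(n):
        k_top = rng.integers(0, 2, size=n)            # top-layer component
        z = rng.normal(top_means[k_top], top_stds[k_top])[:, None]
        k_bot = rng.integers(0, 2, size=n)            # bottom-layer component
        x = np.einsum('nij,nj->ni', A[k_bot], z) + b[k_bot]
        return x + rng.normal(0, noise_std, size=x.shape)

    X = sample(500)
    ```

    Each path through the layers is a plain linear-Gaussian model; flexibility comes from mixing over the paths, which is the sense in which the nested mixtures are "globally nonlinear".
    
    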

    Multivariate normal mixture GARCH

    We present a multivariate generalization of the mixed normal GARCH model proposed in Haas, Mittnik, and Paolella (2004a). Issues of parametrization and estimation are discussed. We derive conditions for covariance stationarity and the existence of the fourth moment, and provide expressions for the dynamic correlation structure of the process. These results are also applicable to the single-component multivariate GARCH(p, q) model and simplify the results existing in the literature. In an application to stock returns, we show that the disaggregation of the conditional (co)variance process generated by our model provides substantial intuition, and we highlight a number of findings with potential significance for portfolio selection and further financial applications, such as regime-dependent correlation structures and leverage effects.

    JEL Classification: C32, C51, G10, G11

    This paper is devoted to a multivariate generalization of the so-called normal mixture GARCH model, whose univariate variant was proposed by Haas, Mittnik, and Paolella (2004a; see also CFS Working Paper 2002/10). This model differs from traditional GARCH approaches in particular by explicitly accounting for the dependence of risk dynamics on (typically unobservable) market states. This is motivated by the observation that the widely used standard GARCH model fails to provide an adequate description of volatility dynamics even when the normal distribution is replaced by more flexible conditional distributions. State-dependent volatility processes may be explained economically by, for example, the varying dominance of heterogeneous market participants or by shifting market sentiment. Applications of the normal mixture GARCH model to numerous stock and exchange rate series (see, e.g., Alexander and Lazar, 2004, 2005; and Haas, Mittnik, and Paolella, 2004a,b) have shown that it is very well suited for modelling and forecasting the volatility process of such asset returns. So far, however, these analyses have been limited to univariate time series. Many problems in finance necessarily require multivariate modelling, i.e., a description of the dependence structure between the returns of different assets, and it is precisely for such analyses that the mixture approach proves particularly promising. In portfolio management, for instance, the correlations between individual asset returns play a central role: their strength determines the extent to which the risk of an efficient portfolio can be reduced through diversification. There is empirical evidence that correlations between stocks are stronger in periods characterized by large market swings and generally falling prices than in calmer periods. This means that the benefits of diversification are smallest precisely in those periods when they would be most valuable. Models that ignore the existence of different market regimes will therefore tend to underestimate correlations in adverse market states, which can lead to substantial misjudgments of the actual risk during such periods. These and further implications of the mixture approach in the context of multivariate GARCH models are discussed in this paper, and their relevance is documented in an empirical application. Questions of parametrization and estimation of the model are also addressed, and several relevant theoretical properties are derived.
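    As a toy illustration of the regime-dependent volatility idea (not the paper's multivariate model), a univariate two-component normal mixture GARCH(1,1) in the spirit of Haas, Mittnik, and Paolella (2004a) can be simulated as follows; all parameter values are invented for the sketch:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    lam = np.array([0.8, 0.2])        # mixing weights (calm vs. turbulent)
    mu = np.array([0.05, -0.2])       # component means (mixture mean zero)
    omega = np.array([0.01, 0.05])    # per-component GARCH(1,1) parameters
    alpha = np.array([0.05, 0.20])
    beta = np.array([0.90, 0.70])

    def simulate(T):
        # Start each component's variance at its own long-run level
        sigma2 = omega / (1 - alpha - beta)
        eps = np.empty(T)
        for t in range(T):
            k = rng.choice(2, p=lam)  # latent market regime at time t
            eps[t] = mu[k] + np.sqrt(sigma2[k]) * rng.standard_normal()
            # Every component's variance reacts to the same realized shock
            sigma2 = omega + alpha * eps[t] ** 2 + beta * sigma2
        return eps

    r = simulate(5000)
    ```

    The defining feature is that each mixture component carries its own GARCH variance process, all driven by the common shock, so the second, low-probability component can model a high-volatility regime with faster-reacting dynamics.
    
    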

    Generative Mixture of Networks

    A generative model based on training deep architectures is proposed. The model consists of K networks that are trained together to learn the underlying distribution of a given data set. The process starts with dividing the input data into K clusters and feeding each of them into a separate network. After a few iterations of training the networks separately, we use an EM-like algorithm to train the networks together and update the clusters of the data. We call this model Mixture of Networks. The model is a platform that can be used with any deep structure and trained with any conventional objective function for distribution modeling. As the components of the model are neural networks, it has a high capacity for characterizing complicated data distributions as well as for clustering data. We apply the algorithm to the MNIST hand-written digits and Yale face datasets. We also demonstrate the clustering ability of the model using some real-world and toy examples. Comment: 9 pages
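    The training loop described above can be sketched structurally as follows, with each network replaced by a toy Gaussian density model so the example stays self-contained; in the paper the components are deep networks trained with a conventional distribution-modeling objective:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    class ToyDensityModel:
        """Stand-in for one network: fits and scores a diagonal Gaussian."""
        def fit(self, X):
            self.mean, self.var = X.mean(0), X.var(0) + 1e-6
        def log_prob(self, X):
            return -0.5 * (((X - self.mean) ** 2 / self.var)
                           + np.log(2 * np.pi * self.var)).sum(1)

    def train_mixture(X, K=2, iters=10):
        # Step 1: divide the data into K clusters (random initialization here)
        labels = rng.integers(0, K, size=len(X))
        models = [ToyDensityModel() for _ in range(K)]
        for _ in range(iters):
            # Train each component on its current cluster
            for k in range(K):
                if np.any(labels == k):
                    models[k].fit(X[labels == k])
            # EM-like step: reassign points to the best-scoring component
            scores = np.stack([m.log_prob(X) for m in models], axis=1)
            labels = scores.argmax(1)
        return models, labels

    X = np.vstack([rng.normal(-3, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
    models, labels = train_mixture(X)
    ```

    The alternation (fit each component on its cluster, then reassign points by likelihood) is the EM-like structure; swapping the toy Gaussian for a deep density network recovers the shape of the proposed algorithm.
    
    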
    • 

    corecore