
    Infinite factorization of multiple non-parametric views

    Combined analysis of multiple data sources is of increasing interest, in particular for distinguishing shared and source-specific aspects. We extend this rationale of classical canonical correlation analysis into a flexible, generative and non-parametric clustering setting by introducing a novel non-parametric hierarchical mixture model. The lower level of the model describes each source with a flexible non-parametric mixture, and the top level combines these to describe commonalities of the sources. The lower-level clusters arise from hierarchical Dirichlet processes, inducing an infinite-dimensional contingency table between the views. The commonalities between the sources are modeled by an infinite block model of the contingency table, interpretable as a non-negative factorization of infinite matrices, or as a prior for infinite contingency tables. With Gaussian mixture components plugged in for continuous measurements, the model is applied to two views of genes, mRNA expression and abundance of the produced proteins, to expose groups of genes that are co-regulated in either or both of the views. Cluster analysis of co-expression is a standard, simple way of screening for co-regulation, and the two-view analysis extends the approach to distinguishing between pre- and post-translational regulation.
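
    As a concrete (and heavily simplified) illustration, the sketch below simulates a finite truncation of such a two-view model: each view has its own Gaussian clusters, and the joint cluster-assignment table factorizes through a small set of top-level blocks, standing in for the infinite contingency-table prior. All names and sizes are hypothetical and this is only a generative toy; in the paper the clusters come from hierarchical Dirichlet processes and the block structure is itself non-parametric.

        # Toy finite truncation (not the authors' model or code): view-specific
        # Gaussian clusters whose joint assignment table factorizes through
        # latent blocks -- a finite stand-in for the infinite block model.
        import numpy as np

        rng = np.random.default_rng(0)
        K1, K2, B, N, D = 6, 5, 3, 500, 2   # clusters per view, blocks, samples, dims

        block_w = rng.dirichlet(np.ones(B))                  # top-level block weights
        a = rng.dirichlet(np.ones(K1), size=B)               # P(view-1 cluster | block)
        b = rng.dirichlet(np.ones(K2), size=B)               # P(view-2 cluster | block)
        table = np.einsum('b,bi,bj->ij', block_w, a, b)      # joint contingency table

        mu1 = rng.normal(scale=4, size=(K1, D))              # Gaussian means, view 1
        mu2 = rng.normal(scale=4, size=(K2, D))              # Gaussian means, view 2

        flat = rng.choice(K1 * K2, size=N, p=table.ravel())  # joint cluster pairs
        z1, z2 = np.unravel_index(flat, (K1, K2))
        x1 = mu1[z1] + rng.normal(size=(N, D))               # view 1 (e.g. mRNA expression)
        x2 = mu2[z2] + rng.normal(size=(N, D))               # view 2 (e.g. protein abundance)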

    Nonparametric Estimation of Multi-View Latent Variable Models

    Spectral methods have greatly advanced the estimation of latent variable models, generating a sequence of novel and efficient algorithms with strong theoretical guarantees. However, current spectral algorithms are largely restricted to mixtures of discrete or Gaussian distributions. In this paper, we propose a kernel method for learning multi-view latent variable models, allowing each mixture component to be nonparametric. The key idea of the method is to embed the joint distribution of a multi-view latent variable model into a reproducing kernel Hilbert space, and then recover the latent parameters using a robust tensor power method. We establish that the sample complexity of the proposed method is quadratic in the number of latent components and a low-order polynomial in the other relevant parameters. Thus, our non-parametric tensor approach to learning latent variable models enjoys good sample and computational efficiency. Moreover, the non-parametric tensor power method compares favorably to the EM algorithm and other existing spectral algorithms in our experiments.
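
    The recovery step can be illustrated with a plain finite-dimensional sketch of the standard robust tensor power method: power iteration with random restarts on a symmetric third-order moment tensor, deflating one component per round. This shows only the tensor step and assumes the moment tensor is already given; the paper's method forms it from kernel embeddings in an RKHS, and the function below is an illustrative assumption, not the authors' code.

        # Robust tensor power method on a symmetric K x K x K tensor T:
        # random restarts guard against bad initializations; deflation
        # removes each recovered rank-one component before the next round.
        import numpy as np

        def tensor_power_method(T, n_components, n_restarts=10, n_iter=100, rng=None):
            if rng is None:
                rng = np.random.default_rng(0)
            K = T.shape[0]
            lambdas, vecs = [], []
            for _ in range(n_components):
                best = None
                for _ in range(n_restarts):
                    v = rng.normal(size=K)
                    v /= np.linalg.norm(v)
                    for _ in range(n_iter):                     # v <- T(I, v, v), normalized
                        v = np.einsum('ijk,j,k->i', T, v, v)
                        v /= np.linalg.norm(v)
                    lam = np.einsum('ijk,i,j,k->', T, v, v, v)  # eigenvalue estimate
                    if best is None or lam > best[0]:
                        best = (lam, v)
                lam, v = best
                lambdas.append(lam); vecs.append(v)
                T = T - lam * np.einsum('i,j,k->ijk', v, v, v)  # deflate
            return np.array(lambdas), np.array(vecs)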

    Bondi-Metzner-Sachs symmetry, holography on null-surfaces and area proportionality of "light-slice" entropy

    It is shown that certain kinds of behavior which hitherto were expected to be characteristic of classical gravity and quantum field theory in curved spacetime, such as the infinite-dimensional Bondi-Metzner-Sachs symmetry, holography on event horizons and an area proportionality of entropy, have in fact an unnoticed presence in Minkowski QFT. This casts new light on the fundamental question whether the volume proportionality of heat bath entropy and the (logarithmically corrected) dimensionless area law obeyed by localization-induced thermal behavior are different geometric parametrizations which share a common primordial algebraic origin. Strong arguments are presented that these two different thermal manifestations can be directly related; establishing this is in fact the main aim of this paper. It will be demonstrated that QFT beyond the Lagrangian quantization setting receives crucial new impulses from holography onto horizons. The present paper is part of a project aimed at elucidating the enormous physical range of "modular localization". The latter not only extends from standard Hamiltonian heat bath thermal states to the thermal aspects of causal or event horizons addressed in this paper; it also includes the recent understanding of the crossing property of form factors, whose intriguing similarity with thermal properties was, although sometimes noticed, only sufficiently understood in the modular localization setting.
    Comment: 42 pages; changes, addition of new results and new references; in this form the paper will appear in Foundations of Physics

    ACCAMS: Additive Co-Clustering to Approximate Matrices Succinctly

    Matrix completion and approximation are popular tools to capture a user's preferences for recommendation and to approximate missing data. Instead of using low-rank factorization we take a drastically different approach, based on the simple insight that an additive model of co-clusterings allows one to approximate matrices efficiently. This allows us to build a concise model that, per bit of model learned, significantly beats all factorization approaches to matrix approximation. Even more surprisingly, we find that summing over small co-clusterings is more effective in modeling matrices than classic co-clustering, which uses just one large partitioning of the matrix. Occam's razor suggests that the simple structure induced by our model better captures the latent preferences and decision-making processes present in the real world than classic co-clustering or matrix factorization. We provide an iterative minimization algorithm, a collapsed Gibbs sampler, theoretical guarantees for matrix approximation, and excellent empirical evidence for the efficacy of our approach. We achieve state-of-the-art results on the Netflix problem with a fraction of the model complexity.
    Comment: 22 pages, under review for conference publication
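
    The core idea, summing small co-clusterings fit to successive residuals, can be sketched in a few lines. The backfitting loop below is only a rough stand-in for intuition (ACCAMS itself is a Bayesian model learned with a collapsed Gibbs sampler); the use of k-means and all names are illustrative assumptions.

        # Greedy additive co-clustering sketch: each "stencil" clusters the
        # rows and columns of the current residual and contributes its block
        # means; later stencils model what earlier ones left unexplained.
        import numpy as np
        from sklearn.cluster import KMeans

        def additive_coclustering(X, n_stencils=3, k_rows=4, k_cols=4, seed=0):
            R = X.astype(float).copy()
            approx = np.zeros_like(R)
            for s in range(n_stencils):
                rows = KMeans(k_rows, n_init=10, random_state=seed + s).fit_predict(R)
                cols = KMeans(k_cols, n_init=10, random_state=seed + s).fit_predict(R.T)
                block = np.zeros((k_rows, k_cols))
                for i in range(k_rows):
                    for j in range(k_cols):
                        cell = R[np.ix_(rows == i, cols == j)]
                        if cell.size:
                            block[i, j] = cell.mean()
                stencil = block[rows][:, cols]   # broadcast block means to full matrix
                approx += stencil
                R -= stencil
            return approx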

    Finite precision measurement nullifies the Kochen-Specker theorem

    Only finite precision measurements are experimentally reasonable, and they cannot distinguish a dense subset from its closure. We show that the rational vectors, which are dense in S^2, can be colored so that the contradiction with hidden variable theories provided by Kochen-Specker constructions does not obtain. Thus, in contrast to violation of the Bell inequalities, no quantum-over-classical advantage for information processing can be derived from the Kochen-Specker theorem alone.
    Comment: 7 pages, plain TeX; minor corrections, interpretation clarified, references updated

    Factorization of Z-homogeneous polynomials in the First (q)-Weyl Algebra

    We present algorithms to factorize weighted homogeneous elements in the first polynomial Weyl algebra and q-Weyl algebra, which are both viewed as Z-graded rings. We show that factorization of homogeneous polynomials can be almost completely reduced to commutative univariate factorization over the same base field, with some additional uncomplicated combinatorial steps. This allows us to deduce the complexity of our algorithms in detail. Furthermore, we show for homogeneous polynomials that irreducibility in the polynomial first Weyl algebra also implies irreducibility in the rational one, which is of interest for practical reasons. We report on our implementation in the computer algebra system Singular. For homogeneous polynomials it outperforms currently available implementations dealing with factorization in the first Weyl algebra, both in speed and in the elegance of the results.
    Comment: 26 pages, Singular implementation, 2 algorithms, 1 figure, 2 tables
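
    The flavor of the graded reduction can be seen in a toy weight-zero case: a homogeneous element of Z-degree zero in the first Weyl algebra is a polynomial in the Euler operator theta = x*d (with d = d/dx), so its factorization reduces to commutative univariate factorization of that polynomial, plus rearrangement bookkeeping driven by the relation theta*x = x*(theta + 1). The snippet below is an assumption-laden sketch using sympy for the commutative step, not the authors' Singular implementation.

        # Weight-zero toy case: factor p(theta) commutatively; each factor
        # is read back as an operator via theta = x*d. For example,
        #   theta*(theta - 1) = x d (x d - 1) = x (x d + 1) d - x d = x^2 d^2,
        # so factoring theta^2 - theta recovers a factorization of x^2 d^2.
        import sympy as sp

        theta = sp.symbols('theta')

        def factor_weight_zero(p):
            """Commutative factorization of a weight-zero element p(theta)."""
            return sp.factor_list(p)

        print(factor_weight_zero(theta**2 - theta))
        # -> (1, [(theta, 1), (theta - 1, 1)]), i.e. x^2 d^2 = (x d)(x d - 1)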