
    Adaptive Graph via Multiple Kernel Learning for Nonnegative Matrix Factorization

    Nonnegative Matrix Factorization (NMF) has been continuously evolving in several areas such as pattern recognition and information retrieval. It factorizes a matrix into a product of two low-rank nonnegative matrices that define a parts-based, linear representation of nonnegative data. Recently, Graph-regularized NMF (GrNMF) was proposed to find a compact representation which uncovers the hidden semantics and simultaneously respects the intrinsic geometric structure. In GrNMF, an affinity graph is constructed from the original data space to encode the geometrical information. In this paper, we propose a novel idea which engages a Multiple Kernel Learning approach to refine the graph structure so that it reflects the factorization of the matrix and the new data space. GrNMF is improved by utilizing the graph refined by kernel learning, and a novel kernel learning method is thus introduced under the GrNMF framework. Our approach shows encouraging results in comparison to state-of-the-art clustering algorithms such as NMF, GrNMF, and SVD. Comment: This paper has been withdrawn by the author due to the terrible writing
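    The graph-regularized factorization described above can be sketched with the standard GrNMF multiplicative updates; this is a minimal illustration, not the paper's algorithm, and the affinity matrix `W_graph` stands in for the kernel-refined graph (a hypothetical interface):

```python
import numpy as np

def grnmf(X, W_graph, k, lam=0.1, n_iter=200, seed=0):
    """Graph-regularized NMF: X ~ U @ V.T with a Laplacian penalty
    lam * tr(V.T @ L @ V) on the coefficient matrix V, where
    L = D - W_graph.  W_graph is a symmetric nonnegative affinity
    matrix over the n columns (samples) of X."""
    m, n = X.shape
    rng = np.random.default_rng(seed)
    U = rng.random((m, k))
    V = rng.random((n, k))
    D = np.diag(W_graph.sum(axis=1))  # degree matrix of the graph
    eps = 1e-9                        # guards against division by zero
    for _ in range(n_iter):
        # classic multiplicative updates; nonnegativity is preserved
        U *= (X @ V) / (U @ V.T @ V + eps)
        V *= (X.T @ U + lam * W_graph @ V) / (V @ (U.T @ U) + lam * D @ V + eps)
    return U, V
```

    In the paper's setting, `W_graph` would be rebuilt from the learned kernel combination rather than fixed from the original data space.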

    Baxter operators for arbitrary spin

    We construct Baxter operators for the homogeneous closed XXX spin chain with the quantum space carrying infinite- or finite-dimensional $s\ell_2$ representations. All algebraic relations of Baxter operators and transfer matrices are deduced uniformly from Yang-Baxter relations of the local building blocks of these operators. This results in a systematic and very transparent approach in which the cases of finite- and infinite-dimensional representations are treated in analogy. Simple relations between the Baxter operators of the two cases are obtained. We represent the quantum spaces by polynomials and build the operators from elementary differentiation and multiplication operators. We present compact explicit formulae for the action of Baxter operators on polynomials. Comment: 37 pages LaTeX, 7 figures; version for publication
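    The polynomial representation mentioned above can be illustrated as follows; the particular normalization below is one common convention, not taken from the paper. For spin $s$, the $s\ell_2$ generators act on polynomials in $z$ through elementary differentiation and multiplication operators:

```latex
S^- = \partial_z, \qquad
S^0 = z\,\partial_z - s, \qquad
S^+ = z^2\,\partial_z - 2sz .
```

    For $2s \in \mathbb{Z}_{\ge 0}$ one has $S^+ z^{2s} = 0$, so the action closes on polynomials of degree at most $2s$: this is the $(2s{+}1)$-dimensional representation, while generic $s$ gives the infinite-dimensional one, which is what allows both cases to be treated in analogy.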

    Mass Dependence of Higgs Production at Large Transverse Momentum

    The transverse momentum distribution of the Higgs at large $P_T$ is complicated by its dependence on three important energy scales: $P_T$, the top quark mass $m_t$, and the Higgs mass $m_H$. A strategy for simplifying the calculation of the cross section at large $P_T$ is to calculate only the leading terms in its expansion in $m_t^2/P_T^2$ and/or $m_H^2/P_T^2$. The expansion of the cross section in inverse powers of $P_T$ is complicated by logarithms of $P_T$ and by mass singularities. In this paper, we consider the top-quark loop contribution to the subprocess $q\bar{q} \to H + g$ at leading order in $\alpha_s$. We show that the leading power of $1/P_T^2$ can be expressed in the form of a factorization formula that separates the large scale $P_T$ from the scale of the masses. All the dependence on $m_t$ and $m_H$ can be factorized into a distribution amplitude for $t\bar t$ in the Higgs, a distribution amplitude for $t\bar t$ in a real gluon, and an endpoint contribution. The factorization formula can be used to simplify calculations of the $P_T$ distribution at large $P_T$ to next-to-leading order in $\alpha_s$. Comment: 49 pages, 8 figures
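    Schematically, a factorization formula of the kind described above takes the following form; the symbols $H_i$, $\phi_i$, and $\mu$ are generic placeholders for illustration, and the precise convolution structure and definitions are those of the paper:

```latex
\left.\frac{d\hat\sigma}{dP_T^2}\right|_{\rm LP}
  \;=\; \sum_i \int_0^1 dx\; H_i(P_T,\mu)\,\phi_i(x;\, m_t, m_H, \mu)
  \;+\; \text{(endpoint contribution)} ,
```

    where the hard functions $H_i$ depend only on the large scale $P_T$, the distribution amplitudes $\phi_i$ (for $t\bar t$ in the Higgs and for $t\bar t$ in a real gluon) carry all the dependence on the masses $m_t$ and $m_H$, and $\mu$ is the factorization scale separating the two regimes.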

    Is Simple Better? Revisiting Non-linear Matrix Factorization for Learning Incomplete Ratings

    Matrix factorization techniques have been widely used for collaborative filtering in recommender systems. In recent times, different variants of deep learning algorithms have been explored in this setting to improve the task of making personalized recommendations from user-item interaction data. The idea that the mapping between the latent user or item factors and the original features is highly nonlinear suggests that classical matrix factorization techniques are no longer sufficient. In this paper, we propose a multilayer nonlinear semi-nonnegative matrix factorization method, with the motivation that user-item interactions can be modeled more accurately using a linear combination of nonlinear item features. Firstly, we learn latent factors for representations of users and items with the designed multilayer nonlinear Semi-NMF approach using explicit ratings. Secondly, the resulting architecture is compared with deep learning algorithms such as the Restricted Boltzmann Machine and state-of-the-art deep matrix factorization techniques. Using both a supervised rating prediction task and unsupervised clustering in the latent item space, we demonstrate that our proposed approach achieves better generalization ability in prediction as well as representation ability comparable to deep matrix factorization in the clustering task. Comment: version
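    The building block of the approach above, Semi-NMF, can be sketched with the well-known Ding et al. alternating updates; this single-layer version is only an illustration, since the paper stacks such factorizations with a nonlinearity between layers:

```python
import numpy as np

def semi_nmf(X, k, n_iter=200, seed=0):
    """Semi-NMF: X ~ F @ G.T with F unconstrained (mixed signs,
    like ratings data) and G nonnegative.  A multilayer variant,
    as in the paper, would factor G further and insert a
    nonlinearity between layers (sketch only)."""
    pos = lambda A: (np.abs(A) + A) / 2   # elementwise positive part
    neg = lambda A: (np.abs(A) - A) / 2   # elementwise negative part
    rng = np.random.default_rng(seed)
    G = rng.random((X.shape[1], k))
    eps = 1e-9
    for _ in range(n_iter):
        # F: unconstrained least-squares solution given G
        F = X @ G @ np.linalg.pinv(G.T @ G)
        # G: multiplicative update that keeps G nonnegative
        XtF, FtF = X.T @ F, F.T @ F
        G *= np.sqrt((pos(XtF) + G @ neg(FtF) + eps)
                     / (neg(XtF) + G @ pos(FtF) + eps))
    return F, G
```

    Unlike plain NMF, the input matrix here may contain mixed-sign entries, which is why Semi-NMF is a natural fit for centered or normalized rating data.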