Adaptive Graph via Multiple Kernel Learning for Nonnegative Matrix Factorization
Nonnegative Matrix Factorization (NMF) has been continuously evolving in
several areas, such as pattern recognition and information retrieval. It
factorizes a matrix into a product of two low-rank nonnegative matrices that
define a parts-based, linear representation of nonnegative data.
Recently, Graph-regularized NMF (GrNMF) was proposed to find a compact
representation which uncovers the hidden semantics and simultaneously respects
the intrinsic geometric structure. In GrNMF, an affinity graph is constructed
from the original data space to encode the geometrical information. In this
paper, we propose a novel idea that engages a Multiple Kernel Learning
approach to refine the graph structure so that it reflects the factorization of
the matrix and the new data space. GrNMF is improved by utilizing the graph
refined by the kernel learning, and a novel kernel learning method is then
introduced under the GrNMF framework. Our experiments show encouraging results
for the proposed algorithm in comparison with state-of-the-art clustering
algorithms such as NMF, GrNMF, and SVD. Comment: This paper has been withdrawn
by the author due to the terrible writing.
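As a concrete illustration of the graph-regularized factorization that this work builds on, here is a minimal multiplicative-update sketch of GrNMF in NumPy. The function and parameter names (`graph_regularized_nmf`, `lam`) are our own, and the affinity graph is taken as a given input rather than refined by kernel learning as the paper proposes:

```python
import numpy as np

def graph_regularized_nmf(X, W_adj, rank, n_iter=200, lam=0.1, seed=0):
    """Toy multiplicative-update sketch of graph-regularized NMF (GrNMF).

    X      : (m, n) nonnegative data matrix
    W_adj  : (n, n) symmetric nonnegative affinity graph over the n samples
    lam    : graph-regularization weight (hypothetical parameter name)
    Returns U (m, rank), V (n, rank) with X ~= U @ V.T.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.random((m, rank))
    V = rng.random((n, rank))
    D = np.diag(W_adj.sum(axis=1))  # degree matrix of the affinity graph
    eps = 1e-12                     # guard against division by zero
    for _ in range(n_iter):
        # standard NMF multiplicative update for the basis matrix U
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        # graph-regularized update for the coefficients V: the W_adj term
        # pulls neighboring samples toward similar low-rank codes
        V *= (X.T @ U + lam * (W_adj @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
    return U, V
```

The updates keep both factors nonnegative by construction; a larger `lam` trades reconstruction accuracy for smoothness of the codes over the graph.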
Baxter operators for arbitrary spin
We construct Baxter operators for the homogeneous closed spin
chain with the quantum space carrying infinite- or finite-dimensional
representations. All algebraic relations of Baxter operators and transfer
matrices are deduced uniformly from Yang-Baxter relations of the local building
blocks of these operators. This results in a systematic and very transparent
approach in which the cases of finite- and infinite-dimensional representations
are treated analogously. Simple relations between the Baxter operators of the
two cases are obtained. We represent the quantum spaces by polynomials and
build the operators from elementary differentiation and multiplication
operators. We present compact explicit formulae for the action of Baxter
operators on polynomials. Comment: 37 pages LaTeX, 7 figures, version for
publication.
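To make the last point concrete, the toy below represents states as polynomials in one variable and builds the two elementary building blocks, differentiation and multiplication, with SymPy. It only illustrates the polynomial representation, not the authors' actual Baxter operator construction; on polynomials these building blocks satisfy the canonical commutation relation [d/dz, z·] = 1:

```python
import sympy as sp

z = sp.symbols('z')

def d_op(p):
    """Elementary differentiation operator d/dz acting on a polynomial in z."""
    return sp.diff(p, z)

def z_op(p):
    """Elementary multiplication operator z· acting on a polynomial in z."""
    return sp.expand(z * p)

# On any polynomial p, the commutator [d/dz, z·] acts as the identity:
# d/dz (z p) - z dp/dz = p. This is the basic algebraic relation from
# which composite operators on the polynomial space can be built.
p = z**3 + 2*z
comm = sp.expand(d_op(z_op(p)) - z_op(d_op(p)))
assert sp.simplify(comm - p) == 0
```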
Mass Dependence of Higgs Production at Large Transverse Momentum
The transverse momentum distribution of the Higgs at large $p_T$ is
complicated by its dependence on three important energy scales: $p_T$, the top
quark mass $m_t$, and the Higgs mass $m_H$. A strategy for simplifying the
calculation of the cross section at large $p_T$ is to calculate only the
leading terms in its expansion in $m_t^2/p_T^2$ and/or $m_H^2/p_T^2$. The
expansion of the cross section in inverse powers of $p_T$ is complicated by
logarithms of $p_T$ and by mass singularities. In this paper, we consider the
top-quark loop contribution to the subprocess at leading
order in $\alpha_s$. We show that the leading power can be
expressed in the form of a factorization formula that separates the large scale
$p_T$ from the scale of the masses. All the dependence on $m_t$ and $m_H$ can
be factorized into a distribution amplitude for $t\bar{t}$ in the Higgs, a
distribution amplitude for $t\bar{t}$ in a real gluon, and an endpoint
contribution. The factorization formula can be used to simplify calculations of
the $p_T$ distribution at large $p_T$ to next-to-leading order in $\alpha_s$.
Comment: 49 pages, 8 figures.
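The factorization described in the abstract can be written schematically as below. The symbols (hard kernel $T$, distribution amplitudes $\phi$, endpoint term $\Delta$, scale $\mu$) are illustrative placeholders for the three-term structure stated above, not the paper's actual notation:

```latex
% schematic structure only: a hard kernel at the scale p_T convolved with
% two distribution amplitudes carrying all m_t and m_H dependence,
% plus an endpoint contribution
\frac{d\hat{\sigma}}{dp_T} \;\sim\;
  \int_0^1 \! dx \int_0^1 \! dy\;
  T(p_T,\mu)\,
  \phi_{t\bar{t}/H}(x,\mu)\,
  \phi_{t\bar{t}/g}(y,\mu)
  \;+\; \Delta_{\mathrm{endpoint}}(p_T, m_t, m_H)
```

The point of such a formula is that only $T$ depends on the large scale $p_T$, while all mass dependence sits in the distribution amplitudes and the endpoint term.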
Is Simple Better? Revisiting Non-linear Matrix Factorization for Learning Incomplete Ratings
Matrix factorization techniques have been widely used for collaborative
filtering in recommender systems. In recent times, different variants of deep
learning algorithms have been explored in this setting to improve the task of
making personalized recommendations from user-item interaction data. The idea
that the mapping between the latent user or item factors and the original
features is highly nonlinear suggests that classical matrix factorization
techniques are no longer sufficient. In this paper, we propose a multilayer
nonlinear semi-nonnegative matrix factorization method, motivated by the
observation that user-item interactions can be modeled more accurately using a
linear combination of nonlinear item features. First, we learn latent
representations of users and items with the proposed multilayer nonlinear
Semi-NMF approach using explicit ratings. Second, we compare the resulting
architecture with deep learning algorithms such as the Restricted Boltzmann
Machine and state-of-the-art deep matrix factorization techniques. Using both a
supervised rating-prediction task and unsupervised clustering in the latent
item space, we demonstrate that our proposed approach achieves better
generalization ability in prediction as well as representation ability
comparable to deep matrix factorization in the clustering task.
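For intuition, here is a minimal layer-wise sketch of a multilayer Semi-NMF in NumPy. The inner update is the standard Ding et al.-style semi-NMF alternation; the layer stacking, function names, and choice of nonlinearity are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

def seminmf_layer(X, rank, n_iter=100, seed=0):
    """One semi-NMF layer: X ~= Z @ H with H >= 0, Z unconstrained.

    Alternates a least-squares update for Z with a multiplicative update
    for H that preserves nonnegativity (Ding et al.-style semi-NMF).
    """
    rng = np.random.default_rng(seed)
    H = rng.random((rank, X.shape[1]))
    eps = 1e-12
    pos = lambda M: (np.abs(M) + M) / 2  # elementwise positive part
    neg = lambda M: (np.abs(M) - M) / 2  # elementwise negative part
    for _ in range(n_iter):
        Z = X @ np.linalg.pinv(H)        # least-squares update for Z
        ZtX, ZtZ = Z.T @ X, Z.T @ Z
        # multiplicative update keeps H nonnegative even though X, Z may
        # have mixed signs
        H *= np.sqrt((pos(ZtX) + neg(ZtZ) @ H) /
                     (neg(ZtX) + pos(ZtZ) @ H + eps))
    return Z, H

def multilayer_seminmf(X, ranks):
    """Stack semi-NMF layers: X ~= Z1 g(Z2 g(... H_L)), where the
    nonlinearity g (here a ReLU-like clipping, an illustrative choice)
    is applied to each inner factor before the next decomposition."""
    Zs, H = [], X
    for r in ranks:
        Z, H = seminmf_layer(H, r)
        H = np.maximum(H, 0)  # nonlinearity on the inner factor
        Zs.append(Z)
    return Zs, H
```

Each successive layer factors the previous layer's nonnegative code, so deeper factors capture an increasingly abstract item representation, which is the intuition behind the multilayer design.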