Adaptive Graph via Multiple Kernel Learning for Nonnegative Matrix Factorization
Nonnegative Matrix Factorization (NMF) has been continuously evolving in
several areas such as pattern recognition and information retrieval. It
factorizes a matrix into the product of two low-rank nonnegative matrices
that define a parts-based, linear representation of nonnegative data.
Recently, Graph-regularized NMF (GrNMF) was proposed to find a compact
representation, which uncovers the hidden semantics and simultaneously respects
the intrinsic geometric structure. In GrNMF, an affinity graph is constructed
from the original data space to encode the geometrical information. In this
paper, we propose a novel idea that engages a Multiple Kernel Learning
approach to refine the graph structure so that it reflects the factorization of
the matrix and the new data space. GrNMF is improved by utilizing the graph
refined by kernel learning, and then a novel kernel learning method is
introduced under the GrNMF framework. Our approach shows encouraging results
in comparison to state-of-the-art clustering algorithms such as NMF, GrNMF,
and SVD.
Comment: This paper has been withdrawn by the author due to the terrible writing
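The graph-regularized objective described above can be sketched with the standard multiplicative updates (a minimal illustration, assuming a precomputed symmetric affinity matrix `A` and graph Laplacian L = D − A; this shows plain GrNMF, not the kernel-refined graph construction proposed in the paper):

```python
import numpy as np

def grnmf(X, k, A, lam=0.1, n_iter=200, seed=0):
    """Graph-regularized NMF: minimize ||X - WH||_F^2 + lam * tr(H L H^T).

    X: (m, n) nonnegative data matrix, A: (n, n) symmetric affinity matrix,
    k: number of latent factors. Returns nonnegative factors W (m, k), H (k, n).
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k)) + 1e-3
    H = rng.random((k, n)) + 1e-3
    D = np.diag(A.sum(axis=1))  # degree matrix; the Laplacian is L = D - A
    eps = 1e-9                  # guards against division by zero
    for _ in range(n_iter):
        # Multiplicative update for the basis matrix W
        W *= (X @ H.T) / (W @ H @ H.T + eps)
        # Update for H; the lam*H@A / lam*H@D terms carry the graph regularizer
        H *= (W.T @ X + lam * H @ A) / (W.T @ W @ H + lam * H @ D + eps)
    return W, H
```

Because every factor in the update ratios is nonnegative, the factors stay nonnegative throughout, and the objective is non-increasing under these updates.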
On Sampling Strategies for Neural Network-based Collaborative Filtering
Recent advances in neural networks have inspired people to design hybrid
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
incur extensive computational costs, making them challenging to scale and improve
upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while major computation burdens are located at nodes. We
call this type of loss function a "graph-based" loss function, for which
different mini-batch sampling strategies can have very different computational
costs. Based on this insight, we propose three novel sampling strategies that
can significantly improve the training efficiency of the proposed framework
(up to … times speedup in our experiments), as well as the recommendation
performance. Theoretical analysis is also provided for both the computational
cost and the convergence. We believe the study of sampling strategies has
further implications for general graph-based loss functions, and will also
enable more research under the neural network-based recommendation framework.
Comment: This is a longer version (with supplementary attached) of the KDD'17 paper
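The link-versus-node distinction above can be illustrated with a minimal mini-batch sampler over the user-item bipartite graph: each loss term lives on an observed link (u, pos), paired with a negative item drawn from the user's non-links (a hedged sketch; the function and parameter names are illustrative and not the paper's actual strategies):

```python
import random

def sample_triples(interactions, n_items, batch_size, rng=random):
    """Sample (user, pos_item, neg_item) triples for a graph-based loss.

    interactions: dict mapping user -> set of items the user interacted with.
    Each triple corresponds to one observed edge of the bipartite graph plus
    a negative item rejection-sampled from the user's non-edges.
    """
    users = list(interactions)
    batch = []
    while len(batch) < batch_size:
        u = rng.choice(users)
        pos = rng.choice(tuple(interactions[u]))
        # Rejection-sample a negative: any item with no link to u
        neg = rng.randrange(n_items)
        while neg in interactions[u]:
            neg = rng.randrange(n_items)
        batch.append((u, pos, neg))
    return batch
```

How triples are grouped into mini-batches matters because per-node computation (e.g. computing a user's or item's representation) can be shared across all sampled links touching that node, which is where the efficiency gains discussed above come from.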
Neural-Brane: Neural Bayesian Personalized Ranking for Attributed Network Embedding
Network embedding methods, which learn a distributed vector representation for each vertex in a network, have attracted considerable interest in recent years. Existing works have demonstrated that vertex representations learned through an embedding method provide superior performance in many real-world applications, such as node classification, link prediction, and community detection. However, most existing network embedding methods utilize only the topological information of a vertex, ignoring the rich set of nodal attributes (such as user profiles in an online social network, or textual contents in a citation network) that is abundant in all real-life networks. A joint network embedding that takes into account both attributional and relational information captures the complete network information and can further enrich the learned vector representations. In this work, we present Neural-Brane, a novel Neural Bayesian Personalized Ranking based Attributed Network Embedding. For a given network, Neural-Brane extracts a latent feature representation of its vertices using a designed neural network model that unifies network topological information and nodal attributes. In addition, it utilizes a Bayesian personalized ranking objective, which exploits the proximity ordering between a similar node pair and a dissimilar node pair. We evaluate the quality of vertex embeddings produced by Neural-Brane by solving node classification and clustering tasks on four real-world datasets. Experimental results demonstrate the superiority of our proposed method over state-of-the-art existing methods.
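The Bayesian personalized ranking objective mentioned above can be written down concretely: for an anchor vertex u with a similar node pos and a dissimilar node neg, BPR minimizes −log σ(s(u, pos) − s(u, neg)). A minimal numpy sketch over plain embedding vectors with dot-product scores (illustrative only; Neural-Brane's actual scores come from its neural model over topology and attributes):

```python
import numpy as np

def bpr_loss(emb, u, pos, neg):
    """BPR loss for one triple: -log sigmoid(score(u,pos) - score(u,neg)).

    emb: (n_nodes, d) embedding matrix; scores are dot products.
    """
    diff = emb[u] @ emb[pos] - emb[u] @ emb[neg]
    return -np.log(1.0 / (1.0 + np.exp(-diff)))

def bpr_step(emb, u, pos, neg, lr=0.1):
    """One SGD step on the BPR loss, updating the three embeddings in place."""
    eu, ep, en = emb[u].copy(), emb[pos].copy(), emb[neg].copy()
    diff = eu @ ep - eu @ en
    g = 1.0 / (1.0 + np.exp(diff))  # = sigmoid(-diff), common gradient scale
    emb[u]   += lr * g * (ep - en)  # push u toward pos, away from neg
    emb[pos] += lr * g * eu
    emb[neg] -= lr * g * eu
```

Repeated steps on a triple drive s(u, pos) above s(u, neg), which is exactly the proximity ordering the objective encodes.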