A deep matrix factorization method for learning attribute representations
Semi-Non-negative Matrix Factorization is a technique that learns a
low-dimensional representation of a dataset that lends itself to a clustering
interpretation. The mapping between this new representation and the original
data matrix, however, may contain rather complex hierarchical information with
implicit lower-level hidden attributes that classical one-level clustering
methodologies cannot interpret. In this work we propose a novel model, Deep
Semi-NMF, that learns such hidden representations, which lend themselves to a
clustering interpretation according to different, unknown attributes of a given
dataset. We also present a semi-supervised version of the algorithm, named Deep
WSF, which incorporates (partial) prior information for each known attribute of
a dataset, allowing the model to be used on datasets with mixed attribute
knowledge. Finally, we show that our models learn low-dimensional
representations that are better suited not only for clustering but also for
classification, outperforming Semi-Non-negative Matrix Factorization as well as
other state-of-the-art variants.
Comment: Submitted to TPAMI (16-Mar-2015)
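To make the construction concrete, here is a minimal NumPy sketch of greedy layer-wise pretraining for a Deep Semi-NMF, factorizing X ≈ Z1 Z2 ... Zm Hm with a nonnegative final representation. The multiplicative update is the standard Semi-NMF rule of Ding et al.; function names, initialization, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def semi_nmf(X, k, n_iter=200, eps=1e-9, seed=0):
    """One Semi-NMF layer: X ~= Z @ H with H >= 0 (Z unconstrained)."""
    rng = np.random.default_rng(seed)
    H = np.abs(rng.standard_normal((k, X.shape[1])))
    for _ in range(n_iter):
        Z = X @ H.T @ np.linalg.pinv(H @ H.T)  # least-squares update for Z
        A, B = Z.T @ X, Z.T @ Z
        Ap, An = (np.abs(A) + A) / 2, (np.abs(A) - A) / 2
        Bp, Bn = (np.abs(B) + B) / 2, (np.abs(B) - B) / 2
        H = H * np.sqrt((Ap + Bn @ H) / (An + Bp @ H + eps))  # keeps H >= 0
    return Z, H

def deep_semi_nmf_pretrain(X, layer_sizes):
    """Greedy pretraining of X ~= Z1 @ Z2 @ ... @ Zm @ Hm with Hm >= 0."""
    Zs, H = [], X
    for k in layer_sizes:          # factor the previous H one layer deeper
        Z, H = semi_nmf(H, k)
        Zs.append(Z)
    return Zs, H
```

Each intermediate H then plays the role of a hidden representation at a different level of the attribute hierarchy, e.g. `Zs, H = deep_semi_nmf_pretrain(X, [100, 50])`.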
Is Simple Better? Revisiting Non-linear Matrix Factorization for Learning Incomplete Ratings
Matrix factorization techniques have been widely used as a method for
collaborative filtering for recommender systems. In recent times, different
variants of deep learning algorithms have been explored in this setting to
improve the task of making a personalized recommendation with user-item
interaction data. The idea that the mapping between the latent user or item
factors and the original features is highly nonlinear suggests that classical
matrix factorization techniques are no longer sufficient. In this paper, we
propose a multilayer nonlinear semi-nonnegative matrix factorization method,
with the motivation that user-item interactions can be modeled more accurately
using a linear combination of non-linear item features. Firstly, we learn
latent factors for representations of users and items from the designed
multilayer nonlinear Semi-NMF approach using explicit ratings. Secondly, the
resulting architecture is compared with deep-learning algorithms such as the
Restricted Boltzmann Machine and with state-of-the-art deep matrix
factorization techniques. Using both a supervised rating-prediction task and
unsupervised clustering in the latent item space, we demonstrate that our
proposed approach achieves better generalization ability in prediction as well
as representation ability comparable to deep matrix factorization in the
clustering task.
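As a rough illustration of the modeling idea, the following is a hypothetical two-layer nonlinear factorization of an explicit rating matrix, trained by gradient descent on the observed entries. The sigmoid nonlinearity, layer sizes, and the projection that keeps the item factors nonnegative are all assumptions of this sketch; the paper's multilayer nonlinear Semi-NMF updates may differ.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nonlinear_mf(R, mask, k1=32, k2=8, lr=1e-3, n_iter=3000, seed=0):
    """Fit R ~= U @ sigmoid(V @ H) with H >= 0 on observed entries only.

    R: (users x items) rating matrix; mask: 1 where a rating is observed.
    """
    rng = np.random.default_rng(seed)
    m, n = R.shape
    U = 0.1 * rng.standard_normal((m, k1))
    V = 0.1 * rng.standard_normal((k1, k2))
    H = np.abs(0.1 * rng.standard_normal((k2, n)))  # nonnegative item factors
    for _ in range(n_iter):
        S = sigmoid(V @ H)              # nonlinear item representation
        E = mask * (R - U @ S)          # residual on observed ratings only
        dA = (U.T @ E) * S * (1.0 - S)  # backprop through the sigmoid
        U += lr * (E @ S.T)
        V += lr * (dA @ H.T)
        H = np.maximum(H + lr * (V.T @ dA), 0.0)  # projected step keeps H >= 0
    return U, V, H
```

Missing ratings are then predicted as `(U @ sigmoid(V @ H))[i, j]`, and the columns of `sigmoid(V @ H)` give a latent item space for clustering.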
Group invariance principles for causal generative models
The postulate of independence of cause and mechanism (ICM) has recently led
to several new causal discovery algorithms. The interpretation of independence
and the way it is utilized, however, varies across these methods. Our aim in
this paper is to propose a group theoretic framework for ICM to unify and
generalize these approaches. In our setting, the cause-mechanism relationship
is assessed by comparing it against a null hypothesis through the application
of random generic group transformations. We show that the group theoretic view
provides a very general tool to study the structure of data generating
mechanisms with direct applications to machine learning.
Comment: 16 pages, 6 figures
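One concrete instance of this recipe, sketched below under our own modeling choices, is a trace-based ICM statistic for a linear mechanism A applied to a cause with covariance Sigma: the observed statistic is compared against a null distribution generated by Haar-random rotations of the mechanism. The statistic, the choice of SO(d) as the transformation group, and all function names here are illustrative.

```python
import numpy as np

def random_rotation(d, rng):
    """Haar-random element of SO(d) via QR of a Gaussian matrix."""
    Q, R = np.linalg.qr(rng.standard_normal((d, d)))
    Q = Q * np.sign(np.diag(R))  # sign fix gives the Haar measure on O(d)
    if np.linalg.det(Q) < 0:
        Q[:, 0] = -Q[:, 0]       # flip one column to land in SO(d)
    return Q

def icm_statistic(A, Sigma):
    """Trace ratio; close to 1 when A and Sigma are 'independent' (ICM)."""
    d = Sigma.shape[0]
    return d * np.trace(A @ Sigma @ A.T) / (np.trace(A @ A.T) * np.trace(Sigma))

def rotation_null_pvalue(A, Sigma, n_null=1000, seed=0):
    """Fraction of random rotations whose ICM deviation is at least as large."""
    rng = np.random.default_rng(seed)
    d = Sigma.shape[0]
    obs = abs(np.log(icm_statistic(A, Sigma)))
    null = [abs(np.log(icm_statistic(A @ random_rotation(d, rng), Sigma)))
            for _ in range(n_null)]
    return float(np.mean(np.array(null) >= obs))
```

A small p-value indicates that the cause-mechanism pair is atypical under the group null, i.e., the independence postulate is violated in the assumed causal direction.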
Node Embedding over Temporal Graphs
In this work, we present a method for node embedding in temporal graphs. We
propose an algorithm that learns the evolution of a temporal graph's nodes and
edges over time and incorporates these dynamics into a temporal node embedding
framework for different graph prediction tasks. We present a joint loss
function that creates a temporal embedding of a node by learning to combine its
historical temporal embeddings, such that the combination is optimized for a given task (e.g.,
link prediction). The algorithm is initialized using static node embeddings,
which are then aligned over the representations of a node at different time
points, and eventually adapted for the given task in a joint optimization. We
evaluate the effectiveness of our approach over a variety of temporal graphs
for the two fundamental tasks of temporal link prediction and multi-label node
classification, comparing to competitive baselines and algorithmic
alternatives. Our algorithm shows performance improvements across many of the
datasets and baselines and is found particularly effective for graphs that are
less cohesive, with a lower clustering coefficient.
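The alignment-then-combination step can be pictured with a small sketch: static per-snapshot embeddings are rotated onto a common space with orthogonal Procrustes and then averaged with time-decay weights. The fixed exponential weights stand in for the paper's learned, task-driven combination, and all names are illustrative.

```python
import numpy as np

def procrustes_align(E_ref, E_t):
    """Rotate E_t onto E_ref's space (orthogonal Procrustes, SVD solution)."""
    U, _, Vt = np.linalg.svd(E_t.T @ E_ref)
    return E_t @ (U @ Vt)

def temporal_embedding(snapshots, decay=0.7):
    """Decay-weighted combination of aligned per-snapshot node embeddings.

    snapshots: list of (n_nodes, dim) static embeddings, oldest first.
    """
    aligned = [snapshots[0]]
    for E in snapshots[1:]:                 # align each step to the previous one
        aligned.append(procrustes_align(aligned[-1], E))
    w = decay ** np.arange(len(aligned) - 1, -1, -1)  # newest weighted most
    w = w / w.sum()
    return sum(wi * Ei for wi, Ei in zip(w, aligned))
```

In the paper the combination weights are trained jointly with the downstream objective (e.g., link prediction) rather than fixed as here.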
Finding the global semantic representation in GAN through Fréchet Mean
An ideally disentangled latent space in a GAN admits a global representation of
the latent space with semantic attribute coordinates. In other words, if this
disentangled latent space is a vector space, there exists a global semantic
basis in which each basis component describes one attribute of the generated
images. In this paper, we propose an unsupervised method
for finding this global semantic basis in the intermediate latent space in
GANs. This semantic basis represents sample-independent meaningful
perturbations that change the same semantic attribute of an image over the
entire latent space. The proposed global basis, called the Fréchet basis, is
derived by introducing the Fréchet mean to the local semantic perturbations in
a latent space. The Fréchet basis is discovered in two stages. First, the
global semantic subspace is discovered by the Fréchet mean in the Grassmannian
manifold of the local semantic subspaces. Second, the Fréchet basis is found by
optimizing a basis of the semantic subspace via the Fréchet mean in the Special
Orthogonal Group. Experimental results demonstrate that the Fréchet basis
provides better semantic factorization and robustness compared to the previous
methods.
Moreover, we suggest a basis refinement scheme for the previous methods.
Quantitative experiments show that the refined basis achieves better semantic
factorization while constrained to the same semantic subspace given by the
previous method.
Comment: 25 pages, 21 figures
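The second stage can be illustrated with a generic Karcher/Fréchet-mean iteration on the special orthogonal group: it minimizes the mean squared geodesic distance to a set of rotations by alternating Riemannian log and exp maps. This is a textbook sketch under our own assumptions, not the paper's optimizer.

```python
import numpy as np
from scipy.linalg import expm, logm

def frechet_mean_so_n(rotations, n_iter=100, tol=1e-10):
    """Fréchet (Karcher) mean of a list of n x n rotation matrices."""
    M = rotations[0]                   # initialize at one of the samples
    for _ in range(n_iter):
        # Average tangent vector at M: skew-symmetric logs of relative rotations.
        T = sum(np.real(logm(M.T @ R)) for R in rotations) / len(rotations)
        if np.linalg.norm(T) < tol:    # stationary point of the Karcher objective
            break
        M = M @ expm(T)                # exponential-map step along the mean direction
    return M
```

The Grassmannian stage is analogous, with subspaces in place of rotations and principal angles defining the geodesic distance.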