Tag-Aware Recommender Systems: A State-of-the-art Survey
In the past decade, Social Tagging Systems have attracted increasing attention from both the physical and computer science communities. Besides the underlying structure and dynamics of tagging systems, many efforts have been devoted to unifying tagging information to reveal user behaviors and preferences, extract latent semantic relations among items, make recommendations, and so on. Specifically, this article summarizes recent progress on tag-aware recommender systems, emphasizing the contributions from three mainstream perspectives and approaches: network-based methods, tensor-based methods, and topic-based methods. Finally, we outline some other tag-related work and future challenges of tag-aware recommendation algorithms. Comment: 19 pages, 3 figures
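To make the network-based flavor of tag-aware recommendation concrete, here is a minimal, illustrative sketch (not any specific method from the survey): users and items are represented by tag-frequency profiles, and unseen items are ranked by the cosine similarity of their tag profile to the user's. All names and the toy data are hypothetical.

```python
from collections import Counter
import math

def tag_profile(tag_lists):
    """Aggregate several tag lists into one tag-frequency profile."""
    profile = Counter()
    for tags in tag_lists:
        profile.update(tags)
    return profile

def cosine(p, q):
    """Cosine similarity between two sparse tag-frequency profiles."""
    dot = sum(p[t] * q[t] for t in p if t in q)
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

def recommend(user_tag_lists, item_tags, seen, k=3):
    """Rank unseen items by tag-profile similarity to the user's tags."""
    user_p = tag_profile(user_tag_lists)
    scores = {item: cosine(user_p, Counter(tags))
              for item, tags in item_tags.items() if item not in seen}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

A user who repeatedly applies the tag "jazz" would thus be steered toward items whose own tag profiles are dominated by "jazz"; real network-based methods refine this idea with diffusion over the user–item–tag tripartite graph.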
Hypergraph Neural Networks
In this paper, we present a hypergraph neural network (HGNN) framework for data representation learning, which can encode high-order data correlation in a hypergraph structure. Confronting the challenges of learning representations for complex data in practice, we propose to incorporate such data structure in a hypergraph, which is more flexible for data modeling, especially when dealing with complex data. In this method, a hyperedge convolution operation is designed to handle data correlation during representation learning. In this way, the traditional hypergraph learning procedure can be conducted efficiently using hyperedge convolution operations. HGNN learns hidden-layer representations that take the high-order data structure into account, yielding a general framework for complex data correlations. We have conducted experiments on citation network classification and visual object recognition tasks and compared HGNN with graph convolutional networks and other traditional methods. Experimental results demonstrate that the proposed HGNN method outperforms recent state-of-the-art methods. The results also reveal that the proposed HGNN is superior to existing methods when dealing with multi-modal data. Comment: Accepted at AAAI 2019
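The hyperedge convolution described above can be sketched in NumPy. This follows the standard spectral form X' = act(Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} X Θ), where H is the node-hyperedge incidence matrix; the toy inputs below are illustrative, not from the paper's experiments.

```python
import numpy as np

def hgnn_conv(X, H, Theta, edge_w=None, act=np.tanh):
    """One hyperedge convolution layer in the spectral form
    X' = act( Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} X Theta ).
    H: (n_nodes, n_edges) incidence matrix; X: node features."""
    n, m = H.shape
    w = np.ones(m) if edge_w is None else edge_w
    dv = H @ w                        # weighted vertex degrees
    de = H.sum(axis=0)                # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
    A = Dv_inv_sqrt @ H @ np.diag(w) @ np.diag(1.0 / de) @ H.T @ Dv_inv_sqrt
    return act(A @ X @ Theta)
```

Each node first aggregates features over the hyperedges it belongs to, then each hyperedge redistributes that aggregate to its member nodes, so information flows through whole groups rather than pairs.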
HyperLearn: A Distributed Approach for Representation Learning in Datasets With Many Modalities
Multimodal datasets contain an enormous amount of relational information,
which grows exponentially with the introduction of new modalities. Learning
representations in such a scenario is inherently complex due to the presence of
multiple heterogeneous information channels. These channels can encode both (a)
inter-relations between the items of different modalities and (b)
intra-relations between the items of the same modality. Encoding multimedia
items into a continuous low-dimensional semantic space such that both types of
relations are captured and preserved is extremely challenging, especially if
the goal is a unified end-to-end learning framework. The two key challenges
that need to be addressed are: 1) the framework must be able to merge complex intra- and inter-relations without losing any valuable information, and 2) the learning model should be invariant to the addition of new and potentially very different modalities. In this paper, we propose a flexible framework which can scale to data streams from many modalities. To that end, we introduce a hypergraph-based model for data representation and deploy Graph Convolutional Networks to fuse relational information within and across modalities. Our approach provides an efficient solution for distributing otherwise extremely computationally expensive or even infeasible training processes across multiple GPUs, without any sacrifice in accuracy. Moreover, adding a new modality to our model requires only an additional GPU unit while keeping the computational time unchanged, which brings representation learning to truly multimodal datasets. We demonstrate the feasibility of our approach in experiments on multimedia datasets featuring second-, third- and fourth-order relations.
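One structural reason a hypergraph model accommodates new modalities cheaply can be sketched as follows: every modality contributes its own hyperedges over a shared set of item nodes, so adding a modality only appends columns to the incidence matrix. This is an assumption-laden illustration, not the paper's actual data pipeline.

```python
import numpy as np

def build_incidence(n_items, modality_edges):
    """Stack hyperedges from several modalities into one incidence matrix.
    modality_edges: {modality_name: list of hyperedges (sets of item ids)}.
    Adding a modality only appends columns; existing entries are untouched."""
    edges = [e for edge_list in modality_edges.values() for e in edge_list]
    H = np.zeros((n_items, len(edges)))
    for j, edge in enumerate(edges):
        for v in edge:
            H[v, j] = 1.0
    return H
```

Because each modality's hyperedges occupy disjoint columns, the per-modality blocks can in principle be processed on separate devices, which is the intuition behind the one-GPU-per-modality scaling claim.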
Socializing the Semantic Gap: A Comparative Survey on Image Tag Assignment, Refinement and Retrieval
Where previous reviews on content-based image retrieval emphasize what can be seen in an image to bridge the semantic gap, this survey considers what people tag about an image. A comprehensive treatise of three closely linked problems, i.e., image tag assignment, refinement, and tag-based image retrieval, is presented. While existing works vary in terms of their targeted tasks and methodology, they rely on the key functionality of tag relevance, i.e., estimating the relevance of a specific tag with respect to the visual content of a given image and its social context. By analyzing what information a specific method exploits to construct its tag relevance function and how such information is exploited, this paper introduces a taxonomy to structure the growing literature, understand the ingredients of the main works, clarify their connections and differences, and recognize their merits and limitations. For a head-to-head comparison among the state-of-the-art methods, a new experimental protocol is presented, with training sets containing 10k, 100k and 1M images and an evaluation on three test sets contributed by various research groups. Eleven representative works are implemented and evaluated. Putting all this together, the survey aims to provide an overview of the past and foster progress for the near future. Comment: To appear in ACM Computing Surveys
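One classic instantiation of such a tag relevance function is neighbor voting: count how often a tag occurs among an image's k visually nearest neighbors and subtract the votes the tag's prior frequency alone would predict. The sketch below is a simplified illustration under that assumption; the function name and toy features are hypothetical.

```python
import numpy as np

def tag_relevance(query_feat, neighbor_feats, neighbor_tags, tag, k=3):
    """Neighbor-voting tag relevance: votes from the k visually nearest
    images minus the votes expected from the tag's prior alone."""
    dists = np.linalg.norm(neighbor_feats - query_feat, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = sum(tag in neighbor_tags[i] for i in nearest)
    prior = k * sum(tag in tags for tags in neighbor_tags) / len(neighbor_tags)
    return votes - prior
```

A tag that clusters among visually similar images scores above zero, while a tag applied uniformly at random scores near zero, which is why the prior subtraction matters.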
Structural Deep Embedding for Hyper-Networks
Network embedding has recently attracted considerable attention in data mining. Existing network embedding methods mainly focus on networks with pairwise relationships. In the real world, however, relationships among data points can go beyond pairwise, i.e., three or more objects may be involved in each relationship, represented by a hyperedge, thus forming hyper-networks. These hyper-networks pose great challenges to existing network embedding methods when the hyperedges are indecomposable, that is to say, any subset of nodes in a hyperedge cannot form another hyperedge. Such indecomposable hyperedges are especially common in heterogeneous networks. In this paper, we propose a novel Deep Hyper-Network Embedding (DHNE) model to embed hyper-networks with indecomposable hyperedges. More specifically, we theoretically prove that any linear similarity metric in embedding space, as commonly used in existing methods, cannot maintain the indecomposability property of hyper-networks, and thus propose a new deep model to realize a non-linear tuplewise similarity function while preserving both local and global proximities in the resulting embedding space. We conduct extensive experiments on four different types of hyper-networks, including a GPS network, an online social network, a drug network and a semantic network. The empirical results demonstrate that our method can significantly and consistently outperform the state-of-the-art algorithms. Comment: Accepted by AAAI 2018
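The non-linear tuplewise similarity the abstract argues for can be sketched as a small MLP that scores whether a node tuple forms a hyperedge, mapping the concatenated node embeddings through a non-linearity to a probability. The weights below are random placeholders, not the trained DHNE model.

```python
import numpy as np

def tuplewise_score(embs, W1, b1, w2, b2):
    """Score a candidate hyperedge: a non-linear (MLP) function of the
    concatenated node embeddings, mapped to (0, 1) by a sigmoid."""
    z = np.tanh(W1 @ np.concatenate(embs) + b1)    # non-linear first layer
    return 1.0 / (1.0 + np.exp(-(w2 @ z + b2)))    # hyperedge probability

rng = np.random.default_rng(0)
d, hidden = 4, 8
W1 = rng.normal(size=(hidden, 3 * d))  # tuples of 3 nodes
b1 = rng.normal(size=hidden)
w2 = rng.normal(size=hidden)
b2 = 0.0
```

The tanh layer is the essential ingredient: with it, the score of a full tuple is not forced to decompose into pairwise terms, which is exactly what a purely linear similarity cannot avoid.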
Learning View-Model Joint Relevance for 3D Object Retrieval
3D object retrieval has attracted extensive research efforts and become an important task in recent years. However, how to measure the relevance between 3D objects remains a difficult issue. Most existing methods employ just model-based or view-based approaches, which may lead to incomplete information for 3D object representation. In this paper, we propose to jointly learn the view-model relevance among 3D objects for retrieval, in which the 3D objects are formulated in different graph structures. With the view information, the multiple views of 3D objects are employed to formulate the 3D object relationship in an object hypergraph structure. With the model data, model-based features are extracted to construct an object graph describing the relationship among the 3D objects. Learning on the two graphs is conducted to estimate the relevance among the 3D objects, and the view/model graph weights can also be optimized in the learning process. This is the first work to jointly explore the view-based and model-based relevance among 3D objects in a graph-based framework. The proposed method has been evaluated on three datasets. The experimental results and comparison with state-of-the-art methods demonstrate the effectiveness, in terms of retrieval accuracy, of the proposed 3D object retrieval method.
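Graph-based relevance learning of this kind is commonly posed in the regularization framework min_f f^T L f + λ‖f − y‖², where L is a graph Laplacian and y encodes the query. A minimal sketch with a fixed weighted combination of a view Laplacian and a model Laplacian (the paper additionally optimizes these weights) is:

```python
import numpy as np

def fused_relevance(L_view, L_model, y, w=(0.5, 0.5), lam=1.0):
    """Closed-form minimizer of f^T L f + lam * ||f - y||^2 with
    L = w0 * L_view + w1 * L_model:  f = (I + L / lam)^{-1} y."""
    L = w[0] * L_view + w[1] * L_model
    return np.linalg.solve(np.eye(L.shape[0]) + L / lam, y)
```

The smoothness term f^T L f pulls the relevance scores of objects connected in either graph toward each other, so an object similar to the query under *either* the view hypergraph or the model graph receives a high fused score.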