Hypergraph Learning with Line Expansion
Previous hypergraph expansions are solely carried out on either vertex level
or hyperedge level, thereby missing the symmetric nature of data co-occurrence,
and resulting in information loss. To address the problem, this paper treats
vertices and hyperedges equally and proposes a new hypergraph formulation named
the \emph{line expansion (LE)} for hypergraph learning. The new expansion
bijectively induces a homogeneous structure from the hypergraph by treating
vertex-hyperedge pairs as "line nodes". By reducing the hypergraph to a simple
graph, the proposed \emph{line expansion} makes existing graph learning
algorithms compatible with the higher-order structure and has been proven as a
unifying framework for various hypergraph expansions. We evaluate the proposed
line expansion on five hypergraph datasets; the results show that our method
beats SOTA baselines by a significant margin.
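The core construction, turning vertex-hyperedge incidence pairs into "line nodes" of a simple graph, can be sketched as follows. This is a minimal illustration on a hypothetical toy hypergraph, not the paper's implementation:

```python
import numpy as np

# Hypothetical toy hypergraph: 4 vertices, 2 hyperedges (names illustrative).
hyperedges = {"e1": {0, 1, 2}, "e2": {2, 3}}

# Line nodes are vertex-hyperedge incidence pairs (v, e).
line_nodes = [(v, e) for e, vs in hyperedges.items() for v in sorted(vs)]

# Two line nodes are adjacent when they share the vertex OR the hyperedge,
# treating vertices and hyperedges symmetrically.
n = len(line_nodes)
A = np.zeros((n, n), dtype=int)
for i, (vi, ei) in enumerate(line_nodes):
    for j, (vj, ej) in enumerate(line_nodes):
        if i != j and (vi == vj or ei == ej):
            A[i, j] = 1

print(len(line_nodes))  # 5 incidence pairs -> 5 line nodes
print(A.sum())          # 10 directed adjacencies in the induced simple graph
```

Any standard graph learning algorithm (e.g. a GCN) could then run on `A`, with line-node representations aggregated back to vertices or hyperedges.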
On The Effect of Hyperedge Weights On Hypergraph Learning
Hypergraphs are a powerful representation in several computer vision, machine
learning and pattern recognition problems. In the last decade, many researchers
have been keen to develop different hypergraph models. In contrast, little
attention has been paid to the design of hyperedge weights. However, many
studies on pairwise graphs show that the choice of edge weight can
significantly influence the performance of such graph algorithms. We argue
that this also applies to hypergraphs. In this paper, we empirically study the
influence of hyperedge weights on hypergraph learning by proposing three novel
hyperedge weights from the perspectives of geometry, multivariate statistical
analysis and linear regression. Extensive experiments on ORL, COIL20, JAFFE,
Sheffield, Scene15 and Caltech256 databases verify our hypothesis. Similar to
graph learning, several representative hyperedge weighting schemes can be
identified from our experimental studies. Moreover, the experiments also
demonstrate that combining such weighting schemes with conventional
hypergraph models achieves very promising classification and clustering
performance in comparison with some recent state-of-the-art algorithms.
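As a concrete example of a geometry-derived hyperedge weight, here is a common heat-kernel-style weighting. It is a widely used baseline, not necessarily one of the paper's three proposed schemes, and `sigma` is an illustrative bandwidth:

```python
import numpy as np

def heat_kernel_weight(features, sigma=1.0):
    """Heat-kernel hyperedge weight (a common baseline, not the paper's
    proposed schemes): a smaller geometric spread among the hyperedge's
    member features yields a larger weight."""
    # Mean pairwise squared distance among member feature vectors.
    diffs = features[:, None, :] - features[None, :, :]
    d2 = (diffs ** 2).sum(-1)
    mean_d2 = d2.sum() / (len(features) * (len(features) - 1))
    return float(np.exp(-mean_d2 / (2 * sigma ** 2)))

# A tight cluster of member features should be weighted above a loose one.
tight = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]])
loose = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
print(heat_kernel_weight(tight) > heat_kernel_weight(loose))  # True
```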
Hypergraph Neural Networks
In this paper, we present a hypergraph neural network (HGNN) framework for
data representation learning, which can encode high-order data correlation in a
hypergraph structure. Confronting the challenge of learning representations for
complex data in practice, we propose to encode such data structure in a
hypergraph, which is more flexible for data modeling, especially when dealing
with complex data. In this method, a hyperedge convolution operation is
designed to handle the data correlation during representation learning. In this
way, the traditional hypergraph learning procedure can be conducted efficiently
using hyperedge convolution operations. HGNN learns hidden-layer
representations that account for the high-order data structure, providing a
general framework for modeling complex data correlations. We have conducted
experiments on citation network classification and visual object recognition
tasks and compared HGNN with graph convolutional networks and other traditional
methods. Experimental results demonstrate that the proposed HGNN method
outperforms recent state-of-the-art methods. The results also reveal that the
proposed HGNN is superior to existing methods when dealing with multi-modal
data.
Comment: Accepted in AAAI'2019
Structural Deep Embedding for Hyper-Networks
Network embedding has recently attracted much attention in data mining.
Existing network embedding methods mainly focus on networks with pairwise
relationships. In the real world, however, the relationships among data points
can go beyond pairwise, i.e., three or more objects are involved in each
relationship represented by a hyperedge, thus forming hyper-networks. These
hyper-networks pose great challenges to existing network embedding methods when
the hyperedges are indecomposable, that is to say, any subset of nodes in a
hyperedge cannot form another hyperedge. These indecomposable hyperedges are
especially common in heterogeneous networks. In this paper, we propose a novel
Deep Hyper-Network Embedding (DHNE) model to embed hyper-networks with
indecomposable hyperedges. More specifically, we theoretically prove that any
linear similarity metric in embedding space commonly used in existing methods
cannot maintain the indecomposability property in hyper-networks, and thus
propose a new deep model to realize a non-linear tuplewise similarity function
while preserving both local and global proximities in the formed embedding
space. We conduct extensive experiments on four different types of
hyper-networks, including a GPS network, an online social network, a drug
network and a semantic network. The empirical results demonstrate that our
method can significantly and consistently outperform the state-of-the-art
algorithms.
Comment: Accepted by AAAI 2018
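The non-linear tuplewise similarity can be sketched as a small MLP over the concatenated embeddings of a candidate hyperedge's nodes. The weights below are random, untrained placeholders, so this is a shape-level illustration rather than the trained DHNE model:

```python
import numpy as np

rng = np.random.default_rng(1)

def tuplewise_similarity(embs, W1, b1, w2, b2):
    """Non-linear tuplewise score in the spirit of DHNE: concatenate the
    tuple's embeddings and pass them through a small MLP ending in a
    sigmoid, so the score cannot decompose into pairwise linear terms."""
    x = np.concatenate(embs)
    h = np.tanh(W1 @ x + b1)
    return 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))

d = 4                                 # embedding dim per node (illustrative)
W1 = rng.standard_normal((8, 3 * d))  # hidden layer for a 3-node tuple
b1 = rng.standard_normal(8)
w2 = rng.standard_normal(8)
b2 = 0.0

tuple_embs = [rng.standard_normal(d) for _ in range(3)]
score = tuplewise_similarity(tuple_embs, W1, b1, w2, b2)
print(0.0 < score < 1.0)  # True: a probability-like hyperedge score
```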
Learning Hypergraph-regularized Attribute Predictors
We present a novel attribute learning framework named Hypergraph-based
Attribute Predictor (HAP). In HAP, a hypergraph is leveraged to depict the
attribute relations in the data. The attribute prediction problem is then
cast as a regularized hypergraph cut problem in which HAP jointly learns a
collection of attribute projections from the feature space to a hypergraph
embedding space aligned with the attribute space. The learned projections
directly act as attribute classifiers (linear and kernelized). This formulation
leads to a very efficient approach. By considering our model as a multi-graph
cut task, our framework can flexibly incorporate other available information,
in particular class labels. We apply our approach to attribute prediction,
zero-shot and N-shot learning tasks. The results on AWA, USAA and CUB
databases demonstrate the value of our methods in comparison with the
state-of-the-art approaches.
Comment: This is an attribute learning paper accepted by CVPR 2015
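A regularized hypergraph cut typically rests on the normalized hypergraph Laplacian in the style of Zhou et al. Below is a minimal sketch of the smoothness penalty it induces, on a hypothetical toy incidence matrix rather than the paper's datasets:

```python
import numpy as np

# Toy incidence matrix (5 vertices x 2 hyperedges); purely illustrative.
H = np.array([[1, 0],
              [1, 0],
              [1, 1],
              [0, 1],
              [0, 1]], dtype=float)
W = np.eye(2)                    # unit hyperedge weights

Dv = (H @ W).sum(1)              # vertex degrees
De = H.sum(0)                    # hyperedge degrees
S = np.diag(Dv ** -0.5) @ H @ W @ np.diag(1.0 / De) @ H.T @ np.diag(Dv ** -0.5)
L = np.eye(len(Dv)) - S          # normalized hypergraph Laplacian (Zhou et al.)

# Smoothness penalty trace(F^T L F): lower when vertices sharing a
# hyperedge carry similar scores, which is what a regularized
# hypergraph cut minimizes for the learned attribute projections F.
F_smooth = np.ones((5, 1))
F_rough = np.array([[1.0], [-1.0], [1.0], [-1.0], [1.0]])
print(np.trace(F_smooth.T @ L @ F_smooth)
      < np.trace(F_rough.T @ L @ F_rough))  # True
```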