Hypergraph Learning with Line Expansion
Previous hypergraph expansions are solely carried out on either vertex level
or hyperedge level, thereby missing the symmetric nature of data co-occurrence,
and resulting in information loss. To address the problem, this paper treats
vertices and hyperedges equally and proposes a new hypergraph formulation named
the \emph{line expansion (LE)} for hypergraph learning. The new expansion
bijectively induces a homogeneous structure from the hypergraph by treating
vertex-hyperedge pairs as "line nodes". By reducing the hypergraph to a simple
graph, the proposed \emph{line expansion} makes existing graph learning
algorithms compatible with the higher-order structure and has been proven as a
unifying framework for various hypergraph expansions. We evaluate the proposed
line expansion on five hypergraph datasets, the results show that our method
beats SOTA baselines by a significant margin
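The core construction described above can be sketched in a few lines. This is an illustrative reading of the abstract, not the paper's reference implementation: each incident (vertex, hyperedge) pair becomes a "line node", and two line nodes are adjacent when they share either the vertex or the hyperedge.

```python
from itertools import combinations

def line_expansion(hyperedges):
    """Build the line expansion of a hypergraph.

    Each (vertex, hyperedge) incidence pair becomes a "line node";
    two line nodes are adjacent if they share either the vertex or
    the hyperedge.  `hyperedges` maps hyperedge ids to vertex sets.
    """
    line_nodes = [(v, e) for e, verts in hyperedges.items() for v in verts]
    edges = set()
    for a, b in combinations(line_nodes, 2):
        # Shared vertex OR shared hyperedge -> adjacent in the expansion.
        if a[0] == b[0] or a[1] == b[1]:
            edges.add((a, b))
    return line_nodes, edges

# Toy hypergraph: two hyperedges sharing vertex 2.
H = {"e1": {1, 2}, "e2": {2, 3}}
nodes, edges = line_expansion(H)
```

The resulting simple graph can then be handed to any off-the-shelf graph learning algorithm, which is the compatibility claim the abstract makes.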
Topological Deep Learning: Going Beyond Graph Data
Topological deep learning is a rapidly growing field that pertains to the
development of deep learning models for data supported on topological domains
such as simplicial complexes, cell complexes, and hypergraphs, which generalize
many domains encountered in scientific computations. In this paper, we present
a unifying deep learning framework built upon a richer data structure that
includes widely adopted topological domains.
Specifically, we first introduce combinatorial complexes, a novel type of
topological domain. Combinatorial complexes can be seen as generalizations of
graphs that maintain certain desirable properties. Similar to hypergraphs,
combinatorial complexes impose no constraints on the set of relations. In
addition, combinatorial complexes permit the construction of hierarchical
higher-order relations, analogous to those found in simplicial and cell
complexes. Thus, combinatorial complexes generalize and combine useful traits
of both hypergraphs and cell complexes, which have emerged as two promising
abstractions that facilitate the generalization of graph neural networks to
topological spaces.
Second, building upon combinatorial complexes and their rich combinatorial
and algebraic structure, we develop a general class of message-passing
combinatorial complex neural networks (CCNNs), focusing primarily on
attention-based CCNNs. We characterize permutation and orientation
equivariances of CCNNs, and discuss pooling and unpooling operations within
CCNNs in detail.
Third, we evaluate the performance of CCNNs on tasks related to mesh shape
analysis and graph learning. Our experiments demonstrate that CCNNs achieve
performance competitive with state-of-the-art deep learning models
specifically tailored to the same tasks. Our findings illustrate the
advantages of incorporating higher-order relations into deep learning models
across different applications.
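The defining ingredient of a combinatorial complex, as the abstract describes it, is a rank function on cells that is monotone under inclusion but otherwise unconstrained. A minimal sketch of that data structure (an assumption-laden illustration, not the paper's API) might look like:

```python
class CombinatorialComplex:
    """Minimal sketch of a combinatorial complex: a set of cells
    (vertex subsets) with a rank function that must be monotone
    under inclusion, but is otherwise unconstrained -- unlike a
    simplicial complex, subsets of a cell need not themselves be cells.
    """

    def __init__(self):
        self.cells = {}  # frozenset of vertices -> rank

    def add_cell(self, vertices, rank):
        cell = frozenset(vertices)
        # Enforce rank monotonicity against every existing cell:
        # a cell contained in another may not have a higher rank.
        for other, r in self.cells.items():
            if other < cell and r > rank:
                raise ValueError("rank must be monotone under inclusion")
            if cell < other and rank > r:
                raise ValueError("rank must be monotone under inclusion")
        self.cells[cell] = rank

cc = CombinatorialComplex()
cc.add_cell({1}, 0)
cc.add_cell({2}, 0)
cc.add_cell({1, 2}, 1)        # a hyperedge-like relation
cc.add_cell({1, 2, 3}, 2)     # a hierarchical higher-order relation
```

The hierarchy of ranks is what lets message passing distinguish "levels" of relations, which plain hypergraphs (where all hyperedges sit at one level) cannot express.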
Prototype-Enhanced Hypergraph Learning for Heterogeneous Information Networks
The variety and complexity of relations in multimedia data lead to
Heterogeneous Information Networks (HINs). Capturing the semantics from such
networks requires approaches capable of utilizing the full richness of the
HINs. Existing methods for modeling HINs employ techniques originally designed
for graph neural networks, together with HIN decomposition analysis such as
manually predefined metapaths. In this paper, we introduce a novel prototype-enhanced
hypergraph learning approach for node classification in HINs. Using hypergraphs
instead of graphs, our method captures higher-order relationships among nodes
and extracts semantic information without relying on metapaths. Our method
leverages the power of prototypes to improve the robustness of the hypergraph
learning process and creates the potential to provide human-interpretable
insights into the underlying network structure. Extensive experiments on three
real-world HINs demonstrate the effectiveness of our method.
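The abstract does not spell out how prototypes are used, so the following is only a generic illustration of prototype-based classification (class prototypes as mean embeddings, nearest-prototype assignment), not the paper's exact mechanism:

```python
import numpy as np

def nearest_prototype_classify(embeddings, labels, queries):
    """Illustrative sketch: form one prototype per class as the mean
    of that class's node embeddings, then assign each query node to
    the class of its nearest prototype.  Prototypes of this kind are
    also what makes the decision human-interpretable: each class is
    summarized by a single representative point.
    """
    labels = np.asarray(labels)
    classes = sorted(set(labels.tolist()))
    protos = np.stack([embeddings[labels == c].mean(axis=0) for c in classes])
    # Euclidean distance from every query to every prototype.
    dists = np.linalg.norm(queries[:, None, :] - protos[None, :, :], axis=-1)
    return [classes[i] for i in dists.argmin(axis=1)]

emb = np.array([[0.0, 0.0], [0.2, 0.0], [1.0, 1.0], [1.2, 1.0]])
labs = [0, 0, 1, 1]
preds = nearest_prototype_classify(emb, labs, np.array([[0.1, 0.1], [1.1, 0.9]]))
```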
DATA-DRIVEN APPROACH TO IMAGE CLASSIFICATION
Image classification has been a core topic in the computer vision community. Its recent success with convolutional neural network (CNN) algorithms has led to various real-world applications such as large-scale management of photos/videos on cloud/social media, image-based search for online retailers, self-driving cars, building robots and healthcare. Image classification can be broadly categorized into binary, multi-class and multi-label classification problems. Binary classification involves assigning one of two class labels to an instance. In a multi-class classification problem, an instance should be categorized into one of more than two classes. Multi-label classification is a generalized version of the multi-class classification problem where each image is assigned multiple labels as opposed to a single label.
In this work, we first present various methods that take advantage of deep representations (the fully connected layer of a CNN pre-trained on the ImageNet dataset) and yield better performance on multi-label classification when compared to methods that use over a dozen conventional visual features. Following the success of deep representations, we intend to build a generic end-to-end deep learning framework to address all three problem categories of image classification. However, there are still no well-established guidelines (in terms of choosing the number of layers to go deeper, the number of kernels and their size, the type of regularizer, the choice of non-linear function, etc.) to build an efficient deep neural network, and network architecture design is often specific to a problem/dataset. Hence, we present some initial efforts in building a computational framework called Deep Decision Network (DDN) which is completely data-driven. DDN is a tree-like structure built stage-wise. During the learning phase, starting from the root network node, DDN automatically builds a network that splits the data into disjoint clusters of classes which are then handled by subsequent expert networks. This results in a tree-like structured network driven by the data. The proposed approach provides an insight into the data by identifying the groups of classes that are hard to classify and require more attention when compared to others. This feature is crucial for people trying to solve the problem with little or no domain knowledge, especially for applications in the medical domain. Initially, we evaluate DDN on a binary classification problem and later extend it to the more challenging multi-class and multi-label classification problems. The extension of DDN to multi-class and multi-label involves some changes, but the networks still operate under the same underlying principle.
In all three cases, the proposed approach is tested for its recognition performance and scalability on publicly available datasets, with comparisons to other methods.
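The stage-wise, data-driven split at the heart of DDN can be sketched as follows. The grouping criterion here (connected components of a "frequently confused" graph built from the root network's confusion matrix) is an illustrative choice consistent with the description above, not necessarily the exact clustering the dissertation uses:

```python
import numpy as np

def split_classes(confusion, threshold=0.1):
    """Sketch of DDN's data-driven split: from a row-normalized
    confusion matrix of the root network, group classes that are
    frequently confused with each other; each group is then handed
    to a subsequent expert network.
    """
    n = len(confusion)
    sym = np.maximum(confusion, confusion.T)          # symmetrize confusion
    adj = (sym >= threshold) & ~np.eye(n, dtype=bool)  # "confusable" graph
    seen, groups = set(), []
    for start in range(n):                             # connected components
        if start in seen:
            continue
        stack, comp = [start], []
        while stack:
            c = stack.pop()
            if c in seen:
                continue
            seen.add(c)
            comp.append(c)
            stack.extend(np.flatnonzero(adj[c]).tolist())
        groups.append(sorted(comp))
    return groups

# Classes 0 and 1 are hard to tell apart; class 2 is easy.
C = np.array([[0.70, 0.30, 0.00],
              [0.25, 0.75, 0.00],
              [0.00, 0.05, 0.95]])
groups = split_classes(C)
```

Each returned group would receive its own expert network, which is also where the interpretability claim comes from: the groups directly name the classes that the root network finds hard to separate.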
UniG-Encoder: A Universal Feature Encoder for Graph and Hypergraph Node Classification
Graph and hypergraph representation learning has attracted increasing
attention from various research fields. Despite the decent performance and
fruitful applications of Graph Neural Networks (GNNs), Hypergraph Neural
Networks (HGNNs), and their well-designed variants, on some commonly used
benchmark graphs and hypergraphs, they are outperformed by even a simple
Multi-Layer Perceptron. This observation motivates a reexamination of the
design paradigm of the current GNNs and HGNNs and poses challenges of
extracting graph features effectively. In this work, a universal feature
encoder for both graph and hypergraph representation learning is designed,
called UniG-Encoder. The architecture starts with a forward transformation of
the topological relationships of connected nodes into edge or hyperedge
features via a normalized projection matrix. The resulting edge/hyperedge
features, together with the original node features, are fed into a neural
network. The encoded node embeddings are then derived from the reversed
transformation, described by the transpose of the projection matrix, of the
network's output, which can be further used for tasks such as node
classification. The proposed architecture, in contrast to the traditional
spectral-based and/or message passing approaches, simultaneously and
comprehensively exploits the node features and graph/hypergraph topologies in
an efficient and unified manner, covering both heterophilic and homophilic
graphs. The designed projection matrix, encoding the graph features, is
intuitive and interpretable. Extensive experiments are conducted and
demonstrate the superior performance of the proposed framework on twelve
representative hypergraph datasets and six real-world graph datasets, compared
to the state-of-the-art methods. Our implementation is available online at
https://github.com/MinhZou/UniG-Encoder
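The forward/reverse transformation pipeline described above can be sketched numerically. The normalization of the projection matrix and the two-layer MLP below are simplifying assumptions for illustration; the paper's actual design choices live in the linked repository:

```python
import numpy as np

def unig_encode(X, B, W1, W2):
    """Sketch of the UniG-Encoder flow:
      1. forward transform: hyperedge features E = P @ X, with P a
         row-normalized version of the incidence matrix B;
      2. a small MLP encodes the stacked node + hyperedge features;
      3. reverse transform: P.T maps the hyperedge part of the output
         back to nodes, yielding the encoded node embeddings.
    X: (N, d) node features; B: (M, N) binary incidence matrix.
    """
    P = B / B.sum(axis=1, keepdims=True)   # normalized projection matrix
    E = P @ X                              # forward transformation
    Z = np.concatenate([X, E], axis=0)     # nodes and hyperedges share the MLP
    H = np.maximum(Z @ W1, 0) @ W2         # two-layer MLP with ReLU
    H_nodes, H_edges = H[: X.shape[0]], H[X.shape[0]:]
    return H_nodes + P.T @ H_edges         # reversed transformation

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))                # 4 nodes, 3-dim features
B = np.array([[1, 1, 0, 0],                # two hyperedges over those nodes
              [0, 1, 1, 1]], dtype=float)
emb = unig_encode(X, B, rng.normal(size=(3, 8)), rng.normal(size=(8, 5)))
```

Note that the same code covers ordinary graphs: an edge is simply a hyperedge of size two, so B has exactly two ones per row.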
Search Behavior Prediction: A Hypergraph Perspective
Although bipartite shopping graphs are a straightforward way to model search
behavior, they suffer from two challenges: 1) The majority of items are
sporadically searched and hence have noisy/sparse query associations, leading
to a \textit{long-tail} distribution. 2) Infrequent queries are more likely to
link to popular items, leading to another hurdle known as
\textit{disassortative mixing}. To address these two challenges, we go beyond
the bipartite graph to take a hypergraph perspective, introducing a new
paradigm that leverages \underline{auxiliary} information from anonymized
customer engagement sessions to assist the \underline{main task} of query-item
link prediction. This auxiliary information is available at web scale in the
form of search logs. We treat all items appearing in the same customer session
as a single hyperedge. The hypothesis is that items in a customer session are
unified by a common shopping interest. With these hyperedges, we augment the
original bipartite graph into a new \textit{hypergraph}. We develop a
\textit{\textbf{D}ual-\textbf{C}hannel \textbf{A}ttention-Based
\textbf{H}ypergraph Neural Network} (\textbf{DCAH}), which synergizes
information from two potentially noisy sources (original query-item edges and
item-item hyperedges). In this way, items on the tail are better connected due
to the extra hyperedges, thereby enhancing their link prediction performance.
We further integrate DCAH with self-supervised graph pre-training and/or
DropEdge training, both of which effectively alleviate disassortative mixing.
Extensive experiments on three proprietary E-Commerce datasets show that DCAH
yields significant improvements of up to \textbf{24.6\% in mean reciprocal rank
(MRR)} and \textbf{48.3\% in recall} compared to GNN-based baselines. Our
source code is available at
\url{https://github.com/amazon-science/dual-channel-hypergraph-neural-network}.
Comment: WSDM 202
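The hypergraph-augmentation step described above (the dual-channel attention network itself is out of scope here) reduces to turning each customer session into one hyperedge. A minimal sketch, with the session format and the minimum-size filter as assumptions:

```python
def build_session_hyperedges(sessions, min_size=2):
    """Sketch of the augmentation step: every set of items co-occurring
    in one anonymized customer session becomes a single item-item
    hyperedge, supplementing the sparse query-item edges so that
    long-tail items gain extra connectivity.
    """
    hyperedges = []
    for items in sessions:
        edge = frozenset(items)
        if len(edge) >= min_size:  # singleton sessions add no connectivity
            hyperedges.append(edge)
    return hyperedges

# Toy sessions; "lamp" appears alone and contributes no hyperedge.
sessions = [["shoe_a", "shoe_b", "socks"], ["lamp"], ["shoe_a", "insole"]]
edges = build_session_hyperedges(sessions)
```

The hypothesis stated in the abstract is exactly what this encodes: items in one session share a common shopping interest, so the session is a meaningful higher-order relation.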