Multi-graph learning
University of Technology Sydney. Faculty of Engineering and Information Technology. Multi-instance learning (MIL) is a special learning task where labels are only available for a bag of instances. Although MIL has been used for many applications, existing MIL algorithms cannot handle complex data objects, and all require that the instances inside each bag are represented as feature vectors (i.e., in an instance-feature format). In reality, many real-world objects are inherently complicated, and an object can be represented as multiple instances with dependency structures (i.e., graphs). Such dependencies allow relationships between objects to play important roles, which, unfortunately, remain unaddressed in traditional instance-feature representations. Motivated by these challenges, this thesis formulates a new multi-graph learning paradigm for representing and classifying complicated objects. With the proposed multi-graph representation, the thesis systematically addresses several key learning tasks, including:
Multi-Graph Learning: A graph bag contains one or multiple graphs, and each bag is labeled as either positive or negative. The aim of multi-graph learning is to build a learning model from a number of labeled training bags to predict previously unseen bags with maximum accuracy. To solve the problem, we propose two types of approaches: 1) a Multi-Graph Feature based Learning (gMGFL) algorithm that explores and selects an optimal set of subgraphs as features to transform each bag into a single instance for further learning (a toy sketch of this transformation follows the abstract); and 2) a Boosting based Multi-Graph Classification framework (bMGC), which employs dynamic weight adjustment, at both the graph and bag levels, to select one subgraph in each iteration and form a set of weak graph classifiers.
Multi-Instance Multi-Graph Learning: A bag contains a number of instances and graphs in pairs, and the learning objective is to derive classification models from labeled bags containing both instances and graphs, to predict previously unseen bags with maximum accuracy. In the thesis, we propose a Dual Embedding Multi-Instance Multi-Graph Learning (DE-MIMG) algorithm, which employs a dual embedding learning approach to (1) embed instance distributions into the informative subgraph discovery process, and (2) embed discovered subgraphs into the instance feature selection process.
Positive and Unlabeled Multi-Graph Learning: The training set only contains positive and unlabeled bags, where labels are only available for bags but not for the individual graphs inside each bag. This problem setting raises significant challenges because the bag-of-graphs setting does not have features available to directly represent graph data, and no negative bags exist for deriving discriminative classification models. To address these challenges, we propose a puMGL learning framework which relies on two iteratively combined processes: (1) deriving features to represent graphs for learning; and (2) deriving discriminative models with only positive and unlabeled graph bags.
Multi-Graph-View Learning: A multi-graph-view model utilizes graphs constructed from multiple graph-views to represent an object. In our research, we formulate a new multi-graph-view learning task for graph classification, where each object to be classified is represented by graphs under multiple graph-views. To solve the problem, we propose a Cross Graph-View Subgraph Feature based Learning (gCGVFL) algorithm that explores an optimal set of subgraph features across multiple graph-views. In addition, a bag based multi-graph model is further used to relax the labeling by only requiring one label for each graph bag, which corresponds to one object. For learning classification models, we propose a multi-graph-view bag learning algorithm (MGVBL) that explores subgraphs from multiple graph-views for learning.
Experiments on real-world data demonstrate the performance of the proposed methods for classifying complicated objects using multi-graph learning.
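As a toy illustration of the bag-to-instance transformation referenced above (not the thesis implementation; the edge-set data layout and the containment test are simplified assumptions), once a set of discriminative subgraphs has been selected, each bag maps to the OR of its member graphs' subgraph indicators:

```python
# Minimal sketch: turning a multi-graph bag into a single feature vector
# via selected subgraph features (gMGFL-style representation, simplified).
from typing import FrozenSet, List, Tuple

Edge = Tuple[str, str]      # illustrative: a graph as a set of labeled edges
Graph = FrozenSet[Edge]

def contains(graph: Graph, pattern: Graph) -> bool:
    """Crude containment test; a real system uses subgraph isomorphism."""
    return pattern <= graph

def bag_to_instance(bag: List[Graph], subgraphs: List[Graph]) -> List[int]:
    """A bag exhibits a subgraph feature if any member graph contains it."""
    return [int(any(contains(g, sg) for g in bag)) for sg in subgraphs]

g1 = frozenset({("A", "B"), ("B", "C")})
g2 = frozenset({("A", "C")})
features = [frozenset({("A", "B")}), frozenset({("A", "C")})]
print(bag_to_instance([g1, g2], features))  # [1, 1]
```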
Multi-graph-view learning for complicated object classification
In this paper, we propose to represent and classify complicated objects. To represent the objects, we propose a multi-graph-view model which uses graphs constructed from multiple graph-views to represent an object. In addition, a bag based multi-graph model is further used to relax labeling by only requiring one label for a bag of graphs, which represents one object. To learn classification models, we propose a multi-graph-view bag learning algorithm (MGVBL), which aims to explore subgraph features from multiple graph-views for learning. By enabling a joint regularization across multiple graph-views, and enforcing labeling constraints at the bag and graph levels, MGVBL is able to discover the most effective subgraph features across all graph-views for learning. Experiments on real-world learning tasks demonstrate the performance of MGVBL for complicated object classification.
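A minimal sketch of the bag-level labeling constraint, assuming the standard multi-instance rule that a bag is positive if at least one of its graphs scores positive; the linear scorer is an illustrative stand-in, not MGVBL's actual model:

```python
# Illustrative only: a bag's score is the max over its graphs' scores,
# so one bag label can supervise many graphs without graph-level labels.
import numpy as np

def graph_score(x: np.ndarray, w: np.ndarray) -> float:
    # stand-in linear scorer over subgraph-feature vectors
    return float(x @ w)

def bag_score(bag_feats: np.ndarray, w: np.ndarray) -> float:
    return max(graph_score(x, w) for x in bag_feats)

rng = np.random.default_rng(0)
bag = rng.integers(0, 2, size=(3, 4)).astype(float)  # 3 graphs, 4 features
w = rng.normal(size=4)
print(bag_score(bag, w))
```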
Multi-graph-view subgraph mining for graph classification
In this paper, we formulate a new multi-graph-view learning task, where each object to be classified contains graphs from multiple graph-views. This problem setting is essentially different from traditional single-graph-view graph classification, where graphs are collected from one single feature view. To solve the problem, we propose a cross graph-view subgraph feature-based learning algorithm that explores an optimal set of subgraphs, across multiple graph-views, as features to represent graphs. Specifically, we derive an evaluation criterion to estimate the discriminative power and redundancy of subgraph features across all views, with a branch-and-bound algorithm being proposed to prune the subgraph search space. Because graph-views may complement each other and play different roles in a learning task, we assign each view a weight value indicating its importance to the learning task, and further use an optimization process to find optimal weight values for each graph-view. The iteration between cross graph-view subgraph scoring and graph-view weight updating forms a closed loop to find optimal subgraphs to represent graphs for multi-graph-view learning. Experiments and comparisons on real-world tasks demonstrate the algorithm's superior performance.
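A toy rendering of that closed loop, under assumed stand-ins: a simple label-correlation score replaces the paper's discrimination/redundancy criterion, and a softmax update replaces its weight optimization:

```python
# Toy loop: score candidate subgraphs per view, aggregate with view
# weights, then shift weight toward views whose selected subgraphs
# discriminate the labels better. Criterion and update are placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_views, n_subs, n_graphs = 3, 5, 40
X = rng.integers(0, 2, size=(n_views, n_subs, n_graphs)).astype(float)
y = rng.choice([-1.0, 1.0], size=n_graphs)
w = np.full(n_views, 1.0 / n_views)

for _ in range(10):
    per_view = np.abs(X @ y) / n_graphs        # (views, subs) discrimination proxy
    combined = w @ per_view                    # cross-view subgraph scores
    best = np.argsort(combined)[-2:]           # keep the top subgraphs
    quality = per_view[:, best].mean(axis=1)   # how well each view supports them
    w = np.exp(quality) / np.exp(quality).sum()  # softmax weight update

print("view weights:", w.round(3), "selected subgraphs:", best)
```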
Multi-view Graph Convolutional Networks with Differentiable Node Selection
Multi-view data containing complementary and consensus information can facilitate representation learning by exploiting the intact integration of multi-view features. Because most objects in the real world have underlying connections, organizing multi-view data as heterogeneous graphs is beneficial to extracting latent information among different objects. Owing to the powerful capability of gathering information from neighborhood nodes, in this paper we apply Graph Convolutional Networks (GCNs) to cope with heterogeneous-graph data originating from multi-view data, which is still under-explored in the field of GCN. To improve the quality of the network topology and alleviate the interference of noise introduced by graph fusion, some methods undertake sorting operations before the graph convolution procedure. These GCN-based methods generally sort and select the most confident neighborhood nodes for each vertex, such as picking the top-k nodes according to pre-defined confidence values. Nonetheless, this is problematic due to the non-differentiable sorting operators and inflexible graph embedding learning, which may result in blocked gradient computations and undesired performance. To cope with these issues, we propose a joint framework dubbed Multi-view Graph Convolutional Network with Differentiable Node Selection (MGCN-DNS), which consists of an adaptive graph fusion layer, a graph learning module, and a differentiable node selection schema. MGCN-DNS accepts multi-channel graph-structural data as inputs and aims to learn more robust graph fusion through a differentiable neural network. The effectiveness of the proposed method is verified by rigorous comparisons with considerable state-of-the-art approaches in terms of multi-view semi-supervised classification tasks.
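A minimal PyTorch sketch of the generic idea motivating MGCN-DNS: replacing hard top-k neighbor selection with a differentiable relaxation. This is a standard softmax-based soft selection, not the paper's actual node selection schema:

```python
import torch

def hard_topk_mask(scores: torch.Tensor, k: int) -> torch.Tensor:
    """Non-differentiable baseline: keep the k most confident neighbors."""
    idx = scores.topk(k, dim=-1).indices
    return torch.zeros_like(scores).scatter(-1, idx, 1.0)

def soft_selection(scores: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Differentiable relaxation: a low-temperature softmax concentrates
    mass on confident neighbors while keeping gradients w.r.t. the scores."""
    return torch.softmax(scores / tau, dim=-1)

adj_scores = torch.randn(4, 4, requires_grad=True)  # fused-graph confidences
mask = soft_selection(adj_scores)
loss = (mask ** 2).sum()                # any downstream objective
loss.backward()                         # gradients flow through the selection
print(adj_scores.grad.abs().sum() > 0)  # tensor(True)
```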
Multi-view multi-instance learning based on joint sparse representation and multi-view dictionary learning
In multi-instance learning (MIL), the relations among instances in a bag convey important contextual information in many applications. Previous studies on MIL either ignore such relations or simply model them with a fixed graph structure, so the overall performance inevitably degrades in complex environments. To address this problem, this paper proposes a novel multi-view multi-instance learning algorithm (M2IL) that combines multiple context structures in a bag into a unified framework. The novel aspects are: (i) we propose a sparse ε-graph model that can generate different graphs with different parameters to represent various context relations in a bag; (ii) we propose a multi-view joint sparse representation that integrates these graphs into a unified framework for bag classification; and (iii) we propose a multi-view dictionary learning algorithm to obtain a multi-view graph dictionary that considers cues from all views simultaneously to improve the discrimination of M2IL. Experiments and analyses in many practical applications demonstrate the effectiveness of M2IL.
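A hedged sketch of one common reading of an ε-graph, where instances within distance ε are connected and sweeping ε yields a family of context graphs; the sparse-representation and dictionary-learning machinery of M2IL is omitted:

```python
import numpy as np

def epsilon_graph(instances: np.ndarray, eps: float) -> np.ndarray:
    """Adjacency over a bag's instances: edge iff pairwise distance < eps.
    Different eps values yield different context graphs (one per 'view')."""
    d = np.linalg.norm(instances[:, None, :] - instances[None, :, :], axis=-1)
    adj = (d < eps).astype(float)
    np.fill_diagonal(adj, 0.0)
    return adj

bag = np.random.default_rng(2).normal(size=(5, 3))  # 5 instances, 3-d features
views = [epsilon_graph(bag, e) for e in (0.5, 1.0, 2.0)]
print([v.sum() for v in views])  # graphs get denser as eps grows
```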
Towards Long-Tailed Recognition for Graph Classification via Collaborative Experts
Graph classification, which aims at learning graph-level representations for effective class assignment, has achieved outstanding results, but these heavily rely on high-quality datasets with balanced class distributions. In fact, most real-world graph data naturally presents a long-tailed form, where head classes occupy many more samples than tail classes; it is thus essential to study graph-level classification over long-tailed data, a problem that remains largely unexplored. However, most existing long-tailed learning methods in vision fail to jointly optimize representation learning and classifier training, and neglect the mining of hard-to-classify classes. Directly applying existing methods to graphs may lead to sub-optimal performance, since a model trained on graphs would be more sensitive to the long-tailed distribution due to complex topological characteristics. Hence, in this paper, we propose a novel long-tailed graph-level classification framework via Collaborative Multi-expert Learning (CoMe) to tackle the problem. To equilibrate the contributions of head and tail classes, we first develop balanced contrastive learning from the view of representation learning, and then design individual-expert classifier training based on hard class mining. In addition, we execute gated fusion and disentangled knowledge distillation among the multiple experts to promote collaboration in a multi-expert framework. Comprehensive experiments are performed on seven widely used benchmark datasets to demonstrate the superiority of our method CoMe over state-of-the-art baselines.
Comment: Accepted by IEEE Transactions on Big Data (TBD 2024).
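A compact, assumed-form sketch of gated fusion among multiple experts (the names, gate design, and expert architecture are illustrative, not CoMe's exact design):

```python
import torch
import torch.nn as nn

class GatedExperts(nn.Module):
    """Illustrative gated fusion: a gate produces per-sample weights that
    mix expert logits, so experts specialized on head vs. tail classes
    can collaborate on the final prediction."""
    def __init__(self, dim: int, n_classes: int, n_experts: int = 3):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Linear(dim, n_classes) for _ in range(n_experts)])
        self.gate = nn.Linear(dim, n_experts)

    def forward(self, h: torch.Tensor) -> torch.Tensor:  # h: graph embeddings
        logits = torch.stack([e(h) for e in self.experts], dim=1)  # (B, E, C)
        weights = torch.softmax(self.gate(h), dim=-1)              # (B, E)
        return (weights.unsqueeze(-1) * logits).sum(dim=1)         # fused (B, C)

model = GatedExperts(dim=16, n_classes=7)
print(model(torch.randn(4, 16)).shape)  # torch.Size([4, 7])
```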