Multi-graph learning
University of Technology Sydney, Faculty of Engineering and Information Technology. Multi-instance learning (MIL) is a special learning task where labels are only available for a bag of instances. Although MIL has been used for many applications, existing MIL algorithms cannot handle complex data objects, and all require that the instances inside each bag are represented as feature vectors (i.e., in an instance-feature format). In reality, many real-world objects are inherently complicated, and an object can be represented as multiple instances with dependency structures (i.e., graphs). Such dependencies allow relationships between objects to play important roles, which, unfortunately, remain unaddressed in traditional instance-feature representations. Motivated by these challenges, this thesis formulates a new multi-graph learning paradigm for representing and classifying complicated objects. With the proposed multi-graph representation, the thesis systematically addresses several key learning tasks, including:
Multi-Graph Learning: A graph bag contains one or multiple graphs, and each bag is labeled as either positive or negative. The aim of multi-graph learning is to build a learning model from a number of labeled training bags to predict previously unseen bags with maximum accuracy. To solve the problem, we propose two types of approaches: 1) a Multi-Graph Feature based Learning (gMGFL) algorithm that explores and selects an optimal set of subgraphs as features to transform each bag into a single instance for further learning; and 2) a Boosting based Multi-Graph Classification framework (bMGC), which employs dynamic weight adjustment, at both the graph and bag levels, to select one subgraph in each iteration to form a set of weak graph classifiers.
Multi-Instance Multi-Graph learning: A bag contains a number of instances and graphs in pairs, and the learning objective is to derive classification models from labeled bags, containing both instances and graphs, to predict previously unseen bags with maximum accuracy. In the thesis, we propose a Dual Embedding Multi-Instance Multi-Graph Learning (DE-MIMG) algorithm, which employs a dual embedding learning approach to (1) embed instance distributions into the informative subgraphs discovery process, and (2) embed discovered subgraphs into the instance feature selection process.
Positive and Unlabeled Multi-Graph Learning: The training set only contains positive and unlabeled bags, where labels are only available for bags but not for the individual graphs inside them. This setting raises significant challenges because the bag-of-graphs setting does not have features available to directly represent graph data, and no negative bags exist for deriving discriminative classification models. To solve the challenge, we propose a puMGL learning framework which relies on two iteratively combined processes: (1) deriving features to represent graphs for learning; and (2) deriving discriminative models with only positive and unlabeled graph bags.
Multi-Graph-View Learning: A multi-graph-view model utilizes graphs constructed from multiple graph-views to represent an object. In our research, we formulate a new multi-graph-view learning task for graph classification, where each object to be classified is represented by graphs under multiple graph-views. To solve the problem, we propose a Cross Graph-View Subgraph Feature based Learning (gCGVFL) algorithm that explores an optimal set of subgraph features across multiple graph-views. In addition, a bag based multi-graph model is further used to relax the labeling by only requiring one label for each graph bag, which corresponds to one object. For learning classification models, we propose a multi-graph-view bag learning algorithm (MGVBL) to explore subgraphs from multiple graph-views for learning.
Experiments on real-world data validate and demonstrate the performance of the proposed methods for classifying complicated objects using multi-graph learning.
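The subgraph-feature mapping at the heart of gMGFL can be sketched in a few lines. This is a minimal illustration under simplifying assumptions: binary subgraph-presence features, and edge-set containment standing in for proper subgraph isomorphism testing; all names are hypothetical and this is not the thesis implementation.

```python
# Sketch: turning a bag of graphs into a single feature vector via
# selected subgraph features (in the spirit of gMGFL).
# Graphs are simplified to lists of edges; subgraph matching is
# approximated by edge-set containment.

def contains_subgraph(graph_edges, subgraph_edges):
    """True if every edge of the candidate subgraph appears in the graph."""
    return set(subgraph_edges) <= set(graph_edges)

def bag_to_feature_vector(bag, subgraph_features):
    """Map a bag (list of graphs) to a binary vector: feature j is 1 if
    ANY graph in the bag contains subgraph j."""
    return [
        1 if any(contains_subgraph(g, sg) for g in bag) else 0
        for sg in subgraph_features
    ]

# Toy example with two selected subgraph features.
features = [[(0, 1)], [(1, 2), (2, 3)]]
bag = [[(0, 1), (1, 2)], [(1, 2), (2, 3), (3, 4)]]
print(bag_to_feature_vector(bag, features))  # [1, 1]
```

Once every bag is embedded this way, any standard single-instance classifier can be trained on the resulting vectors.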
Image Clustering with Contrastive Learning and Multi-scale Graph Convolutional Networks
Deep clustering has recently attracted significant attention. Despite the
remarkable progress, most of the previous deep clustering works still suffer
from two limitations. First, many of them focus on some distribution-based
clustering loss, lacking the ability to exploit sample-wise (or
augmentation-wise) relationships via contrastive learning. Second, they often
neglect the indirect sample-wise structure information, overlooking the rich
possibilities of multi-scale neighborhood structure learning. In view of this,
this paper presents a new deep clustering approach termed Image clustering with
contrastive learning and multi-scale Graph Convolutional Networks (IcicleGCN),
which bridges the gap between convolutional neural network (CNN) and graph
convolutional network (GCN) as well as the gap between contrastive learning and
multi-scale neighborhood structure learning for the image clustering task. The
proposed IcicleGCN framework consists of four main modules, namely, the
CNN-based backbone, the Instance Similarity Module (ISM), the Joint Cluster
Structure Learning and Instance reconstruction Module (JC-SLIM), and the
Multi-scale GCN module (M-GCN). Specifically, with two random augmentations
performed on each image, the backbone network with two weight-sharing views is
utilized to learn the representations for the augmented samples, which are then
fed to ISM and JC-SLIM for instance-level and cluster-level contrastive
learning, respectively. Further, to enforce multi-scale neighborhood structure
learning, two streams of GCNs and an auto-encoder are simultaneously trained
via (i) the layer-wise interaction with representation fusion and (ii) the
joint self-adaptive learning that ensures their last-layer output distributions
to be consistent. Experiments on multiple image datasets demonstrate the
superior clustering performance of IcicleGCN over the state-of-the-art.
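The instance-level contrastive learning over two augmented views described above typically follows the NT-Xent form; below is a minimal NumPy sketch assuming cosine similarity and a temperature of 0.5. The actual IcicleGCN losses and modules (ISM, JC-SLIM, M-GCN) are more involved, so treat this only as an illustration of the contrastive component.

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss for paired embeddings z1[i] <-> z2[i] from two
    augmented views (rows are samples)."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / tau
    n = len(z)
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    # the positive for row i is its counterpart in the other view
    pos = np.roll(np.arange(n), n // 2)
    logits = sim - sim.max(axis=1, keepdims=True)      # numeric stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n), pos].mean()

rng = np.random.default_rng(0)
z1 = rng.normal(size=(4, 8))
loss_random = nt_xent(z1, rng.normal(size=(4, 8)))     # unrelated views
loss_aligned = nt_xent(z1, z1 + 0.01 * rng.normal(size=(4, 8)))  # near-identical views
print(f"aligned={loss_aligned:.3f}  random={loss_random:.3f}")
```

Well-aligned augmented views pull paired embeddings together, so the aligned loss should come out lower than the loss on unrelated embeddings.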
Supervised Learning in Time-dependent Environments with Performance Guarantees
In practical scenarios, it is common to learn from a sequence of related problems (tasks).
Such tasks are usually time-dependent in the sense that consecutive tasks are often
significantly more similar. Time-dependency is common in multiple applications such
as load forecasting, spam mail filtering, and face emotion recognition. For instance, in
the problem of load forecasting, the consumption patterns in consecutive time periods
are significantly more similar since human habits and weather factors change gradually
over time. Learning from a sequence of tasks holds promise to enable accurate performance
even with few samples per task by leveraging information from different tasks. However,
harnessing the benefits of learning from a sequence of tasks is challenging since tasks
are characterized by different underlying distributions.
Most existing techniques are designed for situations where the tasks' similarities
do not depend on their order in the sequence. Existing techniques designed for
time-dependent tasks adapt to changes between consecutive tasks accounting for a
scalar rate of change by using a carefully chosen parameter such as a learning rate
or a weight factor. However, the tasks' changes are commonly multidimensional, i.e.,
the time-dependency often varies across different statistical characteristics
describing the tasks.
For instance, in the problem of load forecasting, the statistical characteristics related
to weather factors often change differently from those related to generation.
In this dissertation, we establish methodologies for supervised learning from a sequence
of time-dependent tasks that effectively exploit information from all tasks,
provide multidimensional adaptation to tasks' changes, and provide computable tight
performance guarantees. We develop methods for supervised learning settings where
tasks arrive over time including techniques for supervised classification under concept
drift (SCD) and techniques for continual learning (CL). In addition, we present techniques
for load forecasting that can adapt to time changes in consumption patterns
and assess intrinsic uncertainties in load demand. The numerical results show that the
proposed methodologies can significantly improve the performance of existing methods
using multiple benchmark datasets. This dissertation makes theoretical contributions
leading to efficient algorithms for multiple machine learning scenarios that provide computable
performance guarantees and superior performance compared with state-of-the-art techniques.
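The idea of multidimensional adaptation to tasks' changes can be illustrated with a simple estimator: an exponentially weighted mean with a separate forgetting rate per statistical characteristic. This is only a hedged sketch; the dissertation's actual methods and performance guarantees are far more sophisticated, and the rates here are assumed rather than learned.

```python
import numpy as np

def update_mean(prev_mean, new_sample, rates):
    """Exponentially weighted mean with a separate forgetting rate per
    coordinate: a rate near 1 adapts quickly, near 0 trusts history."""
    rates = np.asarray(rates)
    return (1 - rates) * prev_mean + rates * new_sample

mean = np.zeros(2)
# coordinate 0 drifts quickly (e.g. weather-related statistics),
# coordinate 1 drifts slowly (e.g. habit-related statistics)
rates = [0.5, 0.05]
for t in range(1, 11):
    sample = np.array([float(t), 1.0])   # coord 0 trends upward, coord 1 constant
    mean = update_mean(mean, sample, rates)
print(mean.round(2))
```

The fast-rate coordinate tracks the drifting statistic closely, while the slow-rate coordinate averages over the whole history, capturing the point that a single scalar rate of change cannot serve all characteristics at once.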
Multiple Instance Learning: A Survey of Problem Characteristics and Applications
Multiple instance learning (MIL) is a form of weakly supervised learning
where training instances are arranged in sets, called bags, and a label is
provided for the entire bag. This formulation is gaining interest because it
naturally fits various problems and allows leveraging weakly labeled data.
Consequently, it has been used in diverse application fields such as computer
vision and document classification. However, learning from bags raises
important challenges that are unique to MIL. This paper provides a
comprehensive survey of the characteristics which define and differentiate the
types of MIL problems. Until now, these problem characteristics have not been
formally identified and described. As a result, the variations in performance
of MIL algorithms from one data set to another are difficult to explain. In
this paper, MIL problem characteristics are grouped into four broad categories:
the composition of the bags, the types of data distribution, the ambiguity of
instance labels, and the task to be performed. Methods specialized to address
each category are reviewed. Then, the extent to which these characteristics
manifest themselves in key MIL application areas is described. Finally,
experiments are conducted to compare the performance of 16 state-of-the-art MIL
methods on selected problem characteristics. This paper provides insight on how
the problem characteristics affect MIL algorithms, recommendations for future
benchmarking, and promising avenues for research.
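One of the problem characteristics the survey covers is the assumption relating instance labels to the bag label. Under the standard MI assumption, a bag is positive iff at least one of its instances is positive, which corresponds to max-pooling instance scores. The sketch below assumes a placeholder instance scorer; real MIL methods learn it from bag-level labels.

```python
# Sketch of bag-level prediction under the standard MI assumption:
# bag score = max over instance scores.

def bag_score(instances, instance_scorer):
    """Bag-level score as the max over instance-level scores."""
    return max(instance_scorer(x) for x in instances)

def predict_bag(instances, instance_scorer, threshold=0.5):
    return 1 if bag_score(instances, instance_scorer) >= threshold else 0

# Toy scorer (hypothetical): an instance is "positive" if its first
# feature exceeds 1.0.
scorer = lambda x: 1.0 if x[0] > 1.0 else 0.0
positive_bag = [[0.2, 3.0], [1.7, 0.1]]   # contains one positive instance
negative_bag = [[0.2, 3.0], [0.9, 0.1]]   # all instances negative
print(predict_bag(positive_bag, scorer), predict_bag(negative_bag, scorer))  # 1 0
```

Other categories in the survey relax this assumption, e.g. by requiring a fraction of positive instances or by modeling the bag's instance distribution directly.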
Learning from Noisy Label Distributions
In this paper, we consider a novel machine learning problem, that is,
learning a classifier from noisy label distributions. In this problem, each
instance with a feature vector belongs to at least one group. Then, instead of
the true label of each instance, we observe the label distribution of the
instances associated with a group, where the label distribution is distorted by
an unknown noise. Our goals are to (1) estimate the true label of each
instance, and (2) learn a classifier that predicts the true label of a new
instance. We propose a probabilistic model that considers true label
distributions of groups and parameters that represent the noise as hidden
variables. The model can be learned based on a variational Bayesian method. In
numerical experiments, we show that the proposed model outperforms existing
methods in terms of the estimation of the true labels of instances. Comment: Accepted in ICANN201
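The generative view in this setting can be illustrated by the forward direction of the model: a group's observed label distribution is its true label distribution pushed through an unknown noise (confusion) matrix. The paper infers the reverse direction with a variational Bayesian method; the sketch below, with assumed numbers, only shows the forward distortion.

```python
import numpy as np

# Forward model sketch: observed distribution = true distribution
# distorted by a row-stochastic noise matrix. All values are illustrative.
true_dist = np.array([0.7, 0.2, 0.1])     # true label distribution of one group
noise = np.array([[0.8, 0.1, 0.1],        # noise[i, j] = P(observe j | true label i)
                  [0.1, 0.8, 0.1],
                  [0.1, 0.1, 0.8]])
observed = true_dist @ noise              # the distorted distribution we observe
print(observed.round(3), observed.sum())
```

Because the noise matrix is row-stochastic, the observed vector is still a valid probability distribution; the learning problem is to recover `true_dist` (and the noise parameters) from such observed distributions across many groups.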