Contrastive and Generative Graph Convolutional Networks for Graph-based Semi-Supervised Learning
Graph-based Semi-Supervised Learning (SSL) aims to transfer the labels of a
handful of labeled data to the remaining massive unlabeled data via a graph. As
one of the most popular graph-based SSL approaches, the recently proposed Graph
Convolutional Networks (GCNs) have made remarkable progress by combining the
expressive power of neural networks with graph structure. Nevertheless, the
existing graph-based methods do not directly address the core problem of SSL,
i.e., the shortage of supervision, so their performance remains very
limited. To address this issue, a novel GCN-based SSL algorithm is
presented in this paper to enrich the supervision signals by utilizing both
data similarities and graph structure. Firstly, by designing a semi-supervised
contrastive loss, improved node representations can be generated via maximizing
the agreement between different views of the same data or the data from the
same class. Therefore, the rich unlabeled data and the scarce yet valuable
labeled data can jointly provide abundant supervision information for learning
discriminative node representations, which helps improve the subsequent
classification result. Secondly, the underlying determinative relationship
between the data features and input graph topology is extracted as
supplementary supervision signals for SSL by using a graph generative loss
related to the input features. Extensive experiments on a variety of
real-world datasets firmly verify the effectiveness of our algorithm compared
with other state-of-the-art methods.
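A minimal sketch of how such a semi-supervised contrastive loss could look, assuming an NT-Xent-style formulation over two views of the node embeddings (the function name, temperature parameter, and the convention of labeling unlabeled nodes with -1 are illustrative assumptions, not the paper's exact loss):

```python
import numpy as np

def semi_supervised_contrastive_loss(z1, z2, labels, tau=0.5):
    """Contrastive loss over two views z1, z2 (each n x d).

    labels[i] >= 0 marks a labeled node; labels[i] == -1 marks an
    unlabeled node.  Positives for node i are its other view and,
    when labeled, every node sharing its class label.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)              # 2n x d
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity
    sim = np.exp(z @ z.T / tau)
    np.fill_diagonal(sim, 0.0)                        # exclude self-pairs

    lab = np.concatenate([labels, labels])
    loss = 0.0
    for i in range(2 * n):
        # the same node's other view is always a positive
        pos = {(i + n) % (2 * n)}
        if lab[i] >= 0:                               # labeled: add same-class positives
            pos |= {j for j in range(2 * n) if j != i and lab[j] == lab[i]}
        denom = sim[i].sum()
        loss += -np.mean([np.log(sim[i, j] / denom) for j in sorted(pos)])
    return loss / (2 * n)
```

For unlabeled nodes this reduces to an ordinary two-view contrastive term, while labeled nodes additionally pull together all embeddings of their class, which is how the scarce labels enrich the supervision signal.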
Regression with Sensor Data Containing Incomplete Observations
This paper addresses a regression problem in which output label values are
the results of sensing the magnitude of a phenomenon. A low value of such
labels can mean either that the actual magnitude of the phenomenon was low or
that the sensor made an incomplete observation. This leads to a bias toward
lower values in labels and its resultant learning because labels may have lower
values due to incomplete observations, even if the actual magnitude of the
phenomenon was high. Moreover, because an incomplete observation does not
provide any tags indicating incompleteness, we cannot eliminate or impute them.
To address this issue, we propose a learning algorithm that explicitly models
incomplete observations corrupted with an asymmetric noise that always has a
negative value. We show that our algorithm is unbiased, as if it were learned
from uncorrupted data without incomplete observations. We demonstrate the
advantages of our algorithm through numerical experiments.
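The downward label bias described above can be illustrated with a toy simulation (the linear model, the 30% incompleteness rate, and the noise range are illustrative assumptions, not taken from the paper): a naive least-squares fit on the corrupted labels is shifted below a fit on the true magnitudes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.uniform(0.0, 1.0, n)
t = 2.0 * x + 1.0                       # true magnitude of the phenomenon

# incomplete observations: asymmetric noise that is always negative,
# with no tag telling us which samples were affected
incomplete = rng.random(n) < 0.3
noise = np.where(incomplete, -rng.uniform(0.5, 1.5, n), 0.0)
y = t + noise                           # observed (corrupted) labels

# naive least squares on the corrupted labels is biased low
slope_naive, intercept_naive = np.polyfit(x, y, 1)
slope_clean, intercept_clean = np.polyfit(x, t, 1)
```

Because the noise here is independent of `x`, the bias shows up mainly in the intercept; a noise-aware algorithm of the kind the paper proposes would aim to recover the clean fit from `y` alone.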