4 research outputs found
Robust Triple-Matrix-Recovery-Based Auto-Weighted Label Propagation for Classification
The graph-based semi-supervised label propagation algorithm has delivered
impressive classification results. However, the estimated soft labels typically
contain mixed signs and noise, which cause inaccurate predictions due to the
lack of suitable constraints. Moreover, available methods typically calculate
the weights and estimate the labels in the original input space, which often
contains noise and corruption. Thus, the encoded similarities and manifold
smoothness may be inaccurate for label estimation. In this paper, we present
effective schemes for resolving these issues and propose a novel and robust
semi-supervised classification algorithm, namely, the
triple-matrix-recovery-based robust auto-weighted label propagation framework
(ALP-TMR). Our ALP-TMR introduces a triple matrix recovery mechanism to remove
noise and mixed signs from the estimated soft labels and to improve robustness
to noise and outliers in the steps of assigning weights and predicting the
labels simultaneously. Our method can jointly recover the underlying clean
data, clean labels and clean weighting spaces by decomposing the original data,
predicted soft labels or weights into a clean part plus an error part that fits
the noise. In addition, ALP-TMR integrates the auto-weighting process by
minimizing reconstruction errors over the recovered clean data and clean soft
labels, which encodes the weights more accurately and improves both data
representation and classification. By classifying samples in the recovered
clean label and weight spaces, one can potentially improve the label prediction
results. The results of extensive experiments demonstrated the satisfactory
performance of our ALP-TMR.
Comment: Accepted by IEEE TNNLS.
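The recovery step described above, splitting a noisy matrix (data, soft labels, or weights) into a clean part plus a sparse error part, can be illustrated with a generic low-rank-plus-sparse decomposition. The sketch below is a minimal alternating-shrinkage routine in that spirit, not the authors' ALP-TMR solver; the function names and threshold defaults are assumptions to be tuned.

```python
import numpy as np

def soft_threshold(X, tau):
    """Element-wise soft-thresholding (shrinkage) operator."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def recover_clean_plus_error(M, tau_lowrank=1.0, tau_sparse=0.1, n_iter=100, tol=1e-6):
    """Split M into a low-rank 'clean' part L and a sparse 'error' part E, M ~= L + E,
    by block-coordinate minimisation of
        0.5 * ||M - L - E||_F^2 + tau_lowrank * ||L||_* + tau_sparse * ||E||_1.
    Generic robust-recovery sketch; not the ALP-TMR algorithm."""
    L = np.zeros_like(M)
    E = np.zeros_like(M)
    for _ in range(n_iter):
        L_prev = L
        # Clean (low-rank) update: singular-value shrinkage of the error-corrected input.
        U, s, Vt = np.linalg.svd(M - E, full_matrices=False)
        L = (U * soft_threshold(s, tau_lowrank)) @ Vt
        # Error (sparse) update: element-wise shrinkage absorbs noise and mixed signs.
        E = soft_threshold(M - L, tau_sparse)
        if np.linalg.norm(L - L_prev) <= tol * (np.linalg.norm(M) + 1e-12):
            break
    return L, E
```

A noisy soft-label matrix F could then be cleaned with `F_clean, F_err = recover_clean_plus_error(F)` before the weighting and propagation steps.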
One-bit Supervision for Image Classification
This paper presents one-bit supervision, a novel setting of learning from
incomplete annotations, in the scenario of image classification. Instead of
training a model upon the accurate label of each sample, our setting requires
the model to query with a predicted label of each sample and learn from the
answer whether the guess is correct. This provides one bit (yes or no) of
information, and more importantly, annotating each sample becomes much easier
than finding the accurate label from many candidate classes. There are two keys
to training a model upon one-bit supervision: improving the guess accuracy and
making use of incorrect guesses. For these purposes, we propose a multi-stage
training paradigm which incorporates negative label suppression into an
off-the-shelf semi-supervised learning algorithm. On three popular image
classification benchmarks, our approach shows higher efficiency in utilizing
the limited amount of annotations.
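As a rough illustration of the setting, the snippet below simulates the one-bit annotator and a negative-label-suppression term: a correct guess becomes a full label, while an incorrect guess only rules out one class. The function names, the simulated oracle, and the exact form of the suppression loss are assumptions for illustration, not the paper's training pipeline.

```python
import torch
import torch.nn.functional as F

def one_bit_oracle(guessed_labels, true_labels):
    """Simulated annotator: reveals only whether each guess is correct (one bit)."""
    return guessed_labels == true_labels

def one_bit_loss(logits, guessed_labels, is_correct):
    """Illustrative loss for one-bit supervision (an assumption, not the paper's exact loss):
    - correctly guessed samples get ordinary cross-entropy on the confirmed label;
    - incorrectly guessed samples only reveal a wrong class, whose probability is suppressed."""
    loss = logits.new_zeros(())
    if is_correct.any():
        loss = loss + F.cross_entropy(logits[is_correct], guessed_labels[is_correct])
    wrong = ~is_correct
    if wrong.any():
        probs = F.softmax(logits[wrong], dim=1)
        p_neg = probs.gather(1, guessed_labels[wrong].unsqueeze(1)).squeeze(1)
        loss = loss + (-torch.log(1.0 - p_neg + 1e-8)).mean()  # negative label suppression
    return loss
```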
Multilayer Collaborative Low-Rank Coding Network for Robust Deep Subspace Discovery
For subspace recovery, most existing low-rank representation (LRR) models
operate in the original space in single-layer mode. As such, deep
hierarchical information cannot be learned, which may result in inaccurate
recoveries for complex real data. In this paper, we explore the deep
multi-subspace recovery problem by designing a multilayer architecture for
latent LRR. Technically, we propose a new Multilayer Collaborative Low-Rank
Representation Network model, termed DeepLRR, to discover deep features and deep
subspaces. In each layer (>2), DeepLRR bilinearly reconstructs the data matrix
by the collaborative representation with low-rank coefficients and projection
matrices from the previous layer. The bilinear low-rank reconstruction of the
previous layer is directly fed into the next layer as the input and low-rank
dictionary for representation learning, and is further decomposed into a deep
principal feature part, a deep salient feature part and a deep sparse error. As
such, the coherence issue can also be resolved due to the low-rank dictionary,
and robustness against noise can also be enhanced in the feature subspace.
To recover the sparse errors in each layer accurately, a dynamic growing strategy
is used, since the noise level becomes smaller as the number of layers increases.
Besides, a neighborhood reconstruction error is also included to encode the
locality of deep salient features by deep coefficients adaptively in each
layer. Extensive results on public databases show that our DeepLRR outperforms
other related models for subspace discovery and clustering.
Comment: Accepted by the 24th European Conference on Artificial Intelligence
(ECAI 2020).
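To make the layer-wise decomposition concrete, the toy routine below splits each layer's input into a low-rank "principal" part, a residual "salient" part, and a sparse error, then passes the cleaned reconstruction to the next layer with a shrinking error threshold (standing in for the dynamic growing strategy). It is a sketch under assumed parameters (rank, thresholds), not the DeepLRR network.

```python
import numpy as np

def layerwise_lowrank_sketch(X, n_layers=3, rank=10, err_tau=0.5, tau_decay=0.5):
    """Toy multilayer low-rank decomposition in the spirit of the description above.
    All parameter values are illustrative assumptions."""
    layers, Z, tau = [], X, err_tau
    for _ in range(n_layers):
        # Deep principal features: best rank-r approximation of the layer input.
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        principal = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
        residual = Z - principal
        # Sparse error: the part of the residual exceeding the threshold (soft-thresholded).
        error = np.sign(residual) * np.maximum(np.abs(residual) - tau, 0.0)
        salient = residual - error          # remaining small-magnitude salient detail
        layers.append({"principal": principal, "salient": salient, "error": error})
        Z = principal + salient             # cleaned reconstruction feeds the next layer
        tau *= tau_decay                    # deeper layers assume a smaller noise level
    return layers
```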
NRGNN: Learning a Label Noise-Resistant Graph Neural Network on Sparsely and Noisily Labeled Graphs
Graph Neural Networks (GNNs) have achieved promising results for
semi-supervised learning tasks on graphs such as node classification. Despite
the great success of GNNs, many real-world graphs are often sparsely and
noisily labeled, which could significantly degrade the performance of GNNs, as
the noisy information could propagate to unlabeled nodes via graph structure.
Thus, it is important to develop a label noise-resistant GNN for
semi-supervised node classification. Though extensive studies have been
conducted to learn neural networks with noisy labels, they mostly focus on
independent and identically distributed data and assume a large number of noisy
labels are available, assumptions that are not directly applicable to GNNs. Thus, we
investigate a novel problem of learning a robust GNN with noisy and limited
labels. To alleviate the negative effects of label noise, we propose to link
the unlabeled nodes with labeled nodes of high feature similarity to bring more
clean label information. Furthermore, accurate pseudo labels could be obtained
by this strategy to provide more supervision and further reduce the effects of
label noise. Our theoretical and empirical analyses verify the effectiveness of
these two strategies under mild conditions. Extensive experiments on real-world
datasets demonstrate the effectiveness of the proposed method in learning a
robust GNN with noisy and limited labels.
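The key edge-augmentation idea, connecting unlabeled nodes to labeled nodes with highly similar features so clean label information can reach them, can be sketched as below. The function name, the cosine-similarity measure, and the k/threshold values are illustrative assumptions, not the NRGNN implementation.

```python
import torch
import torch.nn.functional as F

def add_similarity_edges(features, labeled_idx, unlabeled_idx, k=5, threshold=0.8):
    """Connect each unlabeled node to its k most feature-similar labeled nodes,
    keeping only high-confidence links.  The returned edges would be merged with
    the original graph before training the GNN.  Illustrative sketch only."""
    feats = F.normalize(features, dim=1)
    sim = feats[unlabeled_idx] @ feats[labeled_idx].T        # cosine similarities
    top_vals, top_pos = sim.topk(min(k, len(labeled_idx)), dim=1)
    new_edges = []
    for row, u in enumerate(unlabeled_idx):
        for val, pos in zip(top_vals[row], top_pos[row]):
            if val >= threshold:                             # keep only similar pairs
                new_edges.append((int(u), int(labeled_idx[pos])))
    return new_edges
```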