Robustness and Generalization
We derive generalization bounds for learning algorithms based on their
robustness: the property that if a testing sample is "similar" to a training
sample, then the testing error is close to the training error. This provides a
novel approach, different from the complexity or stability arguments, to study
generalization of learning algorithms. We further show that a weak notion of
robustness is both sufficient and necessary for generalizability, which implies
that robustness is a fundamental property required for learning algorithms to work.
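As a minimal sketch of the robustness notion described above, stated in the usual (K, \epsilon)-form (the symbols s, n, M and the partition \{C_1, \dots, C_K\} are notational assumptions rather than quotations from the abstract): an algorithm \mathcal{A} trained on a sample s of size n is (K, \epsilon(s))-robust if the sample space can be partitioned into K disjoint sets C_1, \dots, C_K such that, whenever a training point s_i and a test point z fall into the same set,
\[
  |\ell(\mathcal{A}_s, s_i) - \ell(\mathcal{A}_s, z)| \le \epsilon(s).
\]
A generalization bound of the kind referred to above then takes the form
\[
  |L(\mathcal{A}_s) - L_{\mathrm{emp}}(\mathcal{A}_s)| \le \epsilon(s) + M \sqrt{\frac{2K\ln 2 + 2\ln(1/\delta)}{n}},
\]
holding with probability at least 1 - \delta when the loss \ell is bounded by M.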
Semi-Supervised Learning by Augmented Distribution Alignment
In this work, we propose a simple yet effective semi-supervised learning
approach called Augmented Distribution Alignment. We reveal that an essential
sampling bias exists in semi-supervised learning due to the limited number of
labeled samples, which often leads to a considerable empirical distribution
mismatch between labeled data and unlabeled data. To address this issue, we propose to
align the empirical distributions of labeled and unlabeled data to alleviate
the bias. On one hand, we adopt an adversarial training strategy to minimize
the distribution distance between labeled and unlabeled data, inspired by work on
domain adaptation. On the other hand, to deal with the small sample size
issue of labeled data, we also propose a simple interpolation strategy to
generate pseudo training samples. Both strategies can be easily
incorporated into existing deep neural networks. We demonstrate the
effectiveness of our proposed approach on the benchmark SVHN and CIFAR10
datasets. Our code is available at https://github.com/qinenergy/adanet.
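A minimal PyTorch-style sketch of the two ingredients described above: a cross-set interpolation that generates pseudo training samples, and an adversarial loss that aligns the labeled and unlabeled feature distributions. The function names, the discriminator interface, and the Beta(alpha, alpha) mixing coefficient are illustrative assumptions; the authors' actual implementation is the one in the linked repository.

import torch
import torch.nn.functional as F

def cross_set_interpolation(x_labeled, x_unlabeled, alpha=1.0):
    # Generate pseudo training samples by interpolating a labeled batch with an
    # unlabeled batch of the same shape; the Beta(alpha, alpha) mixing is an assumption.
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * x_labeled + (1.0 - lam) * x_unlabeled, lam

def alignment_losses(feat_labeled, feat_unlabeled, discriminator):
    # Adversarial distribution alignment: the discriminator separates labeled from
    # unlabeled features, while the feature extractor is trained to fool it.
    logits_l = discriminator(feat_labeled)
    logits_u = discriminator(feat_unlabeled)
    d_loss = F.binary_cross_entropy_with_logits(logits_l, torch.ones_like(logits_l)) \
           + F.binary_cross_entropy_with_logits(logits_u, torch.zeros_like(logits_u))
    # Feature-extractor objective: make unlabeled features indistinguishable from labeled ones.
    g_loss = F.binary_cross_entropy_with_logits(logits_u, torch.ones_like(logits_u))
    return d_loss, g_loss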
A Quasi-Wasserstein Loss for Learning Graph Neural Networks
When learning graph neural networks (GNNs) in node-level prediction tasks,
most existing loss functions are applied to each node independently, even though
node embeddings and their labels are non-i.i.d. because of the underlying graph
structure. To eliminate this inconsistency, in this study we propose a novel
Quasi-Wasserstein (QW) loss with the help of the optimal transport defined on
graphs, leading to new learning and prediction paradigms of GNNs. In
particular, we design a "Quasi-Wasserstein" distance between the observed
multi-dimensional node labels and their estimations, optimizing the label
transport defined on graph edges. The estimations are parameterized by a GNN, and
the optimal label transport can optionally determine the graph edge weights. By
relaxing the strict constraint on the label transport into a Bregman
divergence-based regularizer, we obtain the proposed Quasi-Wasserstein loss,
together with two efficient solvers that learn the GNN and the optimal label
transport jointly. When predicting node labels, our model combines the
output of the GNN with the residual component provided by the optimal label
transport, leading to a new transductive prediction paradigm. Experiments show
that the proposed QW loss applies to various GNNs and helps to improve their
performance in node-level classification and regression tasks.
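The loss and the prediction rule are the technical core here, so the following heavily simplified Python sketch illustrates only their structure: a learnable label transport defined on edges contributes a residual to the GNN output, and the strict transport constraint is replaced by a regularizer. The incidence-matrix formulation, the squared-error data term, and the squared-norm penalty are illustrative stand-ins (assumptions) for the paper's optimal-transport and Bregman-divergence machinery.

import torch

def qw_style_loss(gnn_out, labels, train_mask, incidence, transport, reg_weight=1.0):
    # gnn_out:   (N, C) GNN predictions for all nodes
    # labels:    (N, C) observed node labels (only rows selected by train_mask are used)
    # incidence: (N, E) signed node-edge incidence matrix of the graph
    # transport: (E, C) learnable label transport defined on graph edges
    residual = incidence @ transport      # label mass moved along edges
    prediction = gnn_out + residual       # transductive prediction: GNN output plus residual
    data_term = ((prediction - labels)[train_mask] ** 2).mean()
    # The strict transport constraint is relaxed into a regularizer; a squared-norm
    # penalty is used here purely as an illustrative surrogate for the Bregman term.
    reg_term = (transport ** 2).mean()
    return data_term + reg_weight * reg_term, prediction

At test time the same rule, GNN output plus the residual induced by the learned transport, would be evaluated on the unlabeled nodes, mirroring the transductive prediction paradigm described in the abstract.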