On the Generalization Effects of Linear Transformations in Data Augmentation
Data augmentation is a powerful technique for improving performance on
tasks such as image and text classification. Yet, there is little
rigorous understanding of why and how various augmentations work. In this work,
we consider a family of linear transformations and study their effects on the
ridge estimator in an over-parametrized linear regression setting. First, we
show that transformations which preserve the labels of the data can improve
estimation by enlarging the span of the training data. Second, we show that
transformations which mix data can improve estimation by playing a
regularization effect. Finally, we validate our theoretical insights on MNIST.
Based on these insights, we propose an augmentation scheme that searches over the
space of transformations according to how uncertain the model is about the transformed
data. We validate our proposed scheme on image and text datasets. For example,
our method outperforms RandAugment by 1.24% on CIFAR-100 using
Wide-ResNet-28-10. Furthermore, we achieve comparable accuracy to the SoTA
Adversarial AutoAugment on the CIFAR datasets.
Comment: International Conference on Machine Learning (ICML) 2020. Added experimental results on ImageNet.
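The uncertainty-driven search described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `select_uncertain_augmentation`, the use of predictive entropy as the uncertainty measure, and the PyTorch framing are all assumptions for the example.

```python
import torch
import torch.nn.functional as F

def select_uncertain_augmentation(model, x, transformations):
    """Pick the transformation whose output the model is least certain about.

    Hypothetical sketch: `transformations` is a list of callables (e.g.,
    linear transforms such as rotations or horizontal flips) applied to a
    batch of inputs `x`; uncertainty is measured by mean predictive entropy.
    """
    model.eval()
    best_t, best_entropy = None, float("-inf")
    with torch.no_grad():
        for t in transformations:
            probs = F.softmax(model(t(x)), dim=1)
            # Average entropy of the predictive distribution over the batch.
            entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1).mean().item()
            if entropy > best_entropy:
                best_t, best_entropy = t, entropy
    return best_t
```

The transformed batch chosen this way would then be fed to training, so the model keeps seeing the augmentations it handles worst.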
Convolutional Neural Networks with Dynamic Regularization
Regularization is commonly used for alleviating overfitting in machine
learning. For convolutional neural networks (CNNs), regularization methods,
such as DropBlock and Shake-Shake, have been shown to improve
generalization performance. However, these methods lack a self-adaptive ability
throughout training. That is, the regularization strength is fixed to a
predefined schedule, and manual adjustments are required to adapt to various
network architectures. In this paper, we propose a dynamic regularization
method for CNNs. Specifically, we model the regularization strength as a
function of the training loss. Based on the change in the training loss,
our method dynamically adjusts the regularization strength during training,
thereby balancing underfitting and overfitting. With dynamic regularization,
a large model automatically receives strong perturbation, while a smaller
model receives weaker regularization. Experimental results show that the
proposed method can improve the generalization capability on off-the-shelf
network architectures and outperform state-of-the-art regularization methods.
Comment: 7 pages. Accepted for publication in IEEE TNNLS.
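The core mechanism here, regularization strength as a function of the training loss, can be sketched as below. A minimal illustration assuming a simple windowed-loss heuristic; the function name and the exact mapping from loss change to strength are assumptions, not the paper's formula.

```python
def dynamic_reg_strength(loss_history, base_strength=0.1, window=5):
    """Scale regularization strength by the recent change in training loss.

    Hypothetical sketch: while the windowed training loss is still
    decreasing quickly, perturb more strongly; as it plateaus, back off
    toward the base strength to avoid underfitting.
    """
    if len(loss_history) < window + 1:
        return base_strength
    recent = sum(loss_history[-window:])
    previous = sum(loss_history[-window - 1:-1])
    # Relative decrease of the windowed loss (clipped at zero).
    decrease = max(0.0, (previous - recent) / max(previous, 1e-12))
    return base_strength * (1.0 + decrease)
```

The returned value would modulate a stochastic perturbation (e.g., a DropBlock-style drop rate) each epoch, removing the need for a hand-tuned fixed schedule.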