Convolutional Neural Networks with Dynamic Regularization
Regularization is commonly used for alleviating overfitting in machine
learning. For convolutional neural networks (CNNs), regularization methods,
such as DropBlock and Shake-Shake, have been shown to improve generalization
performance. However, these methods lack self-adaptivity during training: the
regularization strength follows a predefined schedule, and manual adjustment is
required to adapt it to different network architectures. In this paper, we
propose a dynamic regularization
method for CNNs. Specifically, we model the regularization strength as a
function of the training loss. According to the change of the training loss,
our method can dynamically adjust the regularization strength in the training
procedure, thereby balancing underfitting and overfitting of CNNs. With
dynamic regularization, a large-scale model is automatically regularized by a
strong perturbation, and a small-scale model by a weak one. Experimental results show that the
proposed method improves the generalization capability of off-the-shelf
network architectures and outperforms state-of-the-art regularization methods.

Comment: 7 pages. Accepted for publication in IEEE TNNLS.
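The coupling between the training loss and the regularization strength described in the abstract can be summarized in a few lines of code. The sketch below is an illustration only: the DynamicRegularizer class, its sensitivity parameter, the linear update rule, the clamping bounds, and the use of plain dropout as the perturbation are assumptions, not the paper's actual formulation (which targets methods such as DropBlock and Shake-Shake).

```python
import torch.nn as nn

class DynamicRegularizer:
    """Minimal sketch: regularization strength driven by the training loss."""

    def __init__(self, init_strength=0.1, sensitivity=0.5,
                 min_strength=0.0, max_strength=0.9):
        self.strength = init_strength
        self.sensitivity = sensitivity
        self.min_strength = min_strength
        self.max_strength = max_strength
        self.prev_loss = None

    def update(self, train_loss):
        # Strengthen the perturbation while the loss keeps falling (the model
        # fits easily and may overfit); weaken it when the loss stagnates or
        # rises (the perturbation is starting to cause underfitting).
        if self.prev_loss is not None:
            delta = self.prev_loss - train_loss  # > 0 when the loss decreases
            self.strength = min(self.max_strength,
                                max(self.min_strength,
                                    self.strength + self.sensitivity * delta))
        self.prev_loss = train_loss
        return self.strength


# Illustrative use with plain dropout as the perturbation.
reg = DynamicRegularizer()
dropout = nn.Dropout(p=reg.strength)
for epoch_loss in [2.30, 1.80, 1.50, 1.45, 1.46]:  # stand-in epoch losses
    dropout.p = reg.update(epoch_loss)
    print(f"loss={epoch_loss:.2f} -> drop prob={dropout.p:.3f}")
```

In this toy setup the drop probability rises while the loss falls quickly and eases off once the loss plateaus, which mirrors the self-adaptive behavior the abstract describes without reproducing the paper's exact schedule.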