Self-Orthogonality Module: A Network Architecture Plug-in for Learning Orthogonal Filters
In this paper, we investigate the empirical impact of orthogonality
regularization (OR) in deep learning, both on its own and in combination with
other techniques. Recent works on OR have reported promising accuracy gains. In our ablation
study, however, we do not observe such significant improvement from existing OR
techniques compared with the conventional training based on weight decay,
dropout, and batch normalization. To identify the real gain from OR, inspired
by the locality sensitive hashing (LSH) in angle estimation, we propose to
introduce an implicit self-regularization into OR that simultaneously pushes the mean and variance
of the pairwise filter angles in a network towards 90° and 0, respectively, to achieve (near)
orthogonality among the filters, without using any other explicit
regularization. Our regularization can be implemented as an architectural
plug-in and integrated with an arbitrary network. We reveal that OR helps
stabilize the training process and leads to faster convergence and better
generalization.

Comment: This version fixed the controversial expression in Section 2.
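The core idea of driving the mean of pairwise filter angles to 90° and their variance to 0 can be sketched as an explicit penalty; this is a hypothetical illustration of the objective, not the paper's actual plug-in, which achieves it implicitly via LSH-inspired angle estimation. The function name `angle_regularizer` and the NumPy formulation are assumptions for illustration only.

```python
import numpy as np

def angle_regularizer(W):
    """Hypothetical sketch: penalize deviation of the pairwise filter
    angles from 90 degrees (mean) and penalize their spread (variance).

    W: array of shape (num_filters, fan_in), each row a flattened filter.
    """
    # Normalize each filter to unit length so dot products are cosines.
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    # Pairwise cosine similarities; clip for numerical safety of arccos.
    cos = np.clip(Wn @ Wn.T, -1.0, 1.0)
    # Take only the upper triangle (distinct filter pairs, i < j).
    iu = np.triu_indices(W.shape[0], k=1)
    angles = np.degrees(np.arccos(cos[iu]))
    # Push mean angle toward 90 degrees and variance toward 0.
    return (angles.mean() - 90.0) ** 2 + angles.var()
```

For a set of mutually orthogonal filters (e.g. rows of an identity matrix), every pairwise angle is exactly 90°, so both terms vanish and the penalty is zero.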