
Dynamic Group Convolution for Accelerating Convolutional Neural Networks

By Zhuo Su, Linpu Fang, Wenxiong Kang, Dewen Hu, Matti Pietikäinen and Li Liu

Abstract

Replacing normal convolutions with group convolutions can significantly increase the computational efficiency of modern deep convolutional networks, and this has been widely adopted in compact network architecture designs. However, existing group convolutions undermine the original network structure by cutting off some connections permanently, resulting in significant accuracy degradation. In this paper, we propose dynamic group convolution (DGC), which adaptively selects which part of the input channels to connect within each group for individual samples on the fly. Specifically, we equip each group with a small feature selector that automatically selects the most important input channels conditioned on the input images. Multiple groups can adaptively capture abundant and complementary visual/semantic features for each input image. DGC preserves the original network structure while achieving computational efficiency similar to that of conventional group convolution. Extensive experiments on multiple image classification benchmarks, including CIFAR-10, CIFAR-100 and ImageNet, demonstrate its superiority over existing group convolution techniques and dynamic execution methods. The code is available at https://github.com/zhuogege1943/dgc.

Comments: 21 pages, 10 figures
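The per-group channel selection the abstract describes can be illustrated with a minimal NumPy sketch. Note this is an assumption-laden stand-in, not the paper's implementation: the selector here is simply global average pooling followed by a per-group weight vector and a top-k cut, whereas the actual DGC gating head (see the linked repository) may differ in architecture and in how the selection is made differentiable.

```python
import numpy as np

def dynamic_group_select(x, num_groups, keep_ratio, gate_weights):
    """Per-sample dynamic channel selection, one tiny selector per group.

    Illustrative sketch only (not the paper's exact gating head):
      x            : (C, H, W) feature map for a single input sample
      gate_weights : (num_groups, C) weights of each group's selector
      keep_ratio   : fraction of input channels each group keeps

    Returns a list of length num_groups; entry g holds the sorted
    indices of the channels group g connects to for this sample.
    """
    C = x.shape[0]
    k = max(1, int(C * keep_ratio))       # channels kept per group
    pooled = x.mean(axis=(1, 2))          # global average pooling -> (C,)
    selections = []
    for g in range(num_groups):
        scores = gate_weights[g] * pooled  # importance score per input channel
        top = np.argsort(scores)[-k:]      # keep the k highest-scoring channels
        selections.append(np.sort(top))
    return selections
```

Because the scores depend on the pooled input, different images can route different channel subsets into each group, while the number of connections per group (and hence the cost) stays fixed, matching the abstract's claim of conventional-group-convolution-like efficiency.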

Topics: Computer Science - Computer Vision and Pattern Recognition
Year: 2020
OAI identifier: oai:arXiv.org:2007.04242
