Weight Pruning via Adaptive Sparsity Loss
Pruning neural networks has regained interest in recent years as a means to
compress state-of-the-art deep neural networks and enable their deployment on
resource-constrained devices. In this paper, we propose a robust compressive
learning framework that efficiently prunes network parameters during training
with minimal computational overhead. We incorporate fast mechanisms to prune
individual layers and build upon these to automatically prune the entire
network under a user-defined budget constraint. Key to our end-to-end network
pruning approach is the formulation of an intuitive and easy-to-implement
adaptive sparsity loss that is used to explicitly control sparsity during
training, enabling efficient budget-aware optimization. Extensive experiments
demonstrate the effectiveness of the proposed framework for image
classification on the CIFAR and ImageNet datasets using different
architectures, including AlexNet, ResNets, and Wide ResNets.
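
To make the idea of a sparsity loss concrete, the sketch below shows one plausible way such a term could enter training: a differentiable estimate of the fraction of near-zero weights is pushed toward a user-defined budget. This is an illustrative assumption, not the paper's exact formulation; the function names (`soft_sparsity`, `adaptive_sparsity_loss`), the magnitude threshold, the sigmoid temperature, and the loss weight are all hypothetical choices.

```python
import torch
import torch.nn as nn

def soft_sparsity(weight: torch.Tensor, threshold: float,
                  temperature: float = 100.0) -> torch.Tensor:
    # Differentiable proxy for the fraction of weights whose magnitude
    # falls below `threshold`, i.e. the weights a pruner would remove.
    # (Hypothetical formulation for illustration.)
    return torch.sigmoid(temperature * (threshold - weight.abs())).mean()

def adaptive_sparsity_loss(model: nn.Module, target_sparsity: float,
                           threshold: float = 1e-2) -> torch.Tensor:
    # Penalize the squared gap between the network's estimated sparsity
    # and the user-defined budget, so sparsity is steered during training.
    estimates = [soft_sparsity(p, threshold)
                 for _, p in model.named_parameters() if p.dim() > 1]
    current = torch.stack(estimates).mean()
    return (current - target_sparsity) ** 2

# Usage: add the sparsity term to the task loss with a weighting factor.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
task_loss = nn.CrossEntropyLoss()(model(x), y)
loss = task_loss + 1.0 * adaptive_sparsity_loss(model, target_sparsity=0.9)
loss.backward()
```

Because the penalty is differentiable, the budget constraint can be optimized jointly with the task loss by standard gradient descent, which is what makes this style of budget-aware training cheap compared to iterative prune-and-retrain schemes.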