Joint Optimization of Quantization and Structured Sparsity for Compressed Deep Neural Networks
abstract: Deep neural networks (DNN) have shown tremendous success in various cognitive tasks, such as image classification and speech recognition. However, their use on resource-constrained edge devices has been limited by their high computation and memory requirements.
To overcome these challenges, recent works have extensively investigated model compression techniques such as element-wise sparsity, structured sparsity, and quantization. While most of these works apply the techniques in isolation, there have been very few studies on applying quantization and structured sparsity together to a DNN model.
This thesis co-optimizes structured sparsity and quantization constraints on DNN models during training. Specifically, it obtains an optimal setting of 2-bit weights and 2-bit activations coupled with 4X structured compression by jointly exploring quantization and structured-compression settings. The optimal DNN model achieves a 50X weight-memory reduction compared to a floating-point uncompressed DNN. This saving is significant because applying only structured sparsity constraints achieves 2X memory savings, and applying only quantization constraints achieves 16X. The algorithm has been validated on both high- and low-capacity DNNs and on wide-sparse and deep-sparse DNN models. Experiments demonstrate that deep-sparse DNNs outperform shallow-dense DNNs, with varying levels of memory savings depending on DNN precision and sparsity levels. The work further proposes a Pareto-optimal approach to systematically extract optimal DNN models from a huge set of sparse and dense candidates. The resulting 11 optimal designs were then evaluated on overall DNN memory, which includes activation memory as well as weight memory. For the optimal designs corresponding to low-sparsity DNNs there is only a small change in memory footprint; for high-sparsity DNNs, however, activation memory cannot be ignored.
Dissertation/Thesis; Masters Thesis, Computer Engineering, 201
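The memory arithmetic in the abstract can be sketched as follows. This is a minimal illustration, not code from the thesis: the layer size is an invented example, and the naive combined figure of 64X is an ideal upper bound, whereas the thesis reports 50X in practice.

```python
# Sketch of the weight-memory arithmetic in the abstract above.
# The weight count `n` is a hypothetical example; real savings depend
# on per-layer overheads, which is one reason the thesis reports ~50X
# rather than the naive 64X computed here.

def weight_memory_bits(num_weights, bits_per_weight, structured_compression=1.0):
    """Bits needed for a weight tensor after quantizing each weight to
    `bits_per_weight` bits and keeping 1/structured_compression of the
    weights via structured sparsity."""
    return num_weights / structured_compression * bits_per_weight

n = 1_000_000                                # hypothetical weight count
baseline   = weight_memory_bits(n, 32)       # fp32, dense
quant_only = weight_memory_bits(n, 2)        # 2-bit weights -> 16X
combined   = weight_memory_bits(n, 2, 4.0)   # 2-bit + 4X structured -> 64X ideal

print(baseline / quant_only)  # 16.0
print(baseline / combined)    # 64.0
```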
Fine-Pruning: Joint Fine-Tuning and Compression of a Convolutional Network with Bayesian Optimization
When approaching a novel visual recognition problem in a specialized image
domain, a common strategy is to start with a pre-trained deep neural network
and fine-tune it to the specialized domain. If the target domain covers a
smaller visual space than the source domain used for pre-training (e.g.
ImageNet), the fine-tuned network is likely to be over-parameterized. However,
applying network pruning as a post-processing step to reduce the memory
requirements has drawbacks: fine-tuning and pruning are performed
independently; pruning parameters are set once and cannot adapt over time; and
the highly parameterized nature of state-of-the-art pruning methods makes it
prohibitive to manually search the pruning parameter space for deep networks,
leading to coarse approximations. We propose a principled method for jointly
fine-tuning and compressing a pre-trained convolutional network that overcomes
these limitations. Experiments on two specialized image domains (remote sensing
images and describable textures) demonstrate the validity of the proposed
approach.
Comment: BMVC 2017 oral
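As a rough illustration of the joint idea, the sketch below interleaves a gradient step (fine-tuning) with simple magnitude-based pruning on a toy quadratic objective. This is a stand-in, not the paper's method: the paper's contribution is to adapt the pruning parameters over time via Bayesian optimization, whereas this sketch fixes the sparsity level up front.

```python
# Interleaved fine-tuning and pruning (illustrative stand-in; the paper
# tunes the pruning parameters per iteration with Bayesian optimization
# rather than fixing a sparsity level as done here).
import numpy as np

def prune_by_magnitude(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of weights."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    return weights * (np.abs(weights) > threshold)

def fine_prune(weights, grad_fn, lr=0.1, sparsity=0.5, steps=5):
    """Alternate a gradient step with a pruning step, so the surviving
    weights can adapt to compensate for the ones that were removed."""
    for _ in range(steps):
        weights = weights - lr * grad_fn(weights)        # fine-tune
        weights = prune_by_magnitude(weights, sparsity)  # prune
    return weights

rng = np.random.default_rng(0)
w = fine_prune(rng.normal(size=100), grad_fn=lambda w: w)  # toy quadratic loss
print(int((w == 0).sum()))  # 50 -- half the weights pruned to exactly zero
```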
Improved Bayesian Compression
Compression of Neural Networks (NN) has become a highly studied topic in
recent years. The main reason for this is the demand for industrial scale usage
of NNs such as deploying them on mobile devices, storing them efficiently,
transmitting them via band-limited channels and most importantly doing
inference at scale. In this work, we propose to combine the Soft-Weight Sharing
and Variational Dropout approaches, both of which show strong individual
results, to define a new state of the art in model compression.
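As a much-simplified stand-in for the weight-sharing half of this idea (the paper's soft-weight sharing fits a Gaussian mixture prior jointly with variational dropout during training, which is not done here), clustering already-trained weights onto a small codebook shows how sharing shrinks the number of distinct values that need to be stored:

```python
# Hard weight sharing via 1-D k-means: every weight is replaced by one
# of k codebook values. A simplified illustration only -- soft-weight
# sharing instead learns a mixture prior jointly with the weights.
import numpy as np

def kmeans_1d(values, k, iters=20, seed=0):
    """Cluster scalar values into k centers with plain Lloyd iterations."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False)
    for _ in range(iters):
        # Assign each value to its nearest center, then recompute means.
        assign = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = values[assign == j].mean()
    return centers, assign

rng = np.random.default_rng(1)
weights = rng.normal(size=1000)
centers, assign = kmeans_1d(weights, k=16)
shared = centers[assign]  # 1000 weights, but at most 16 distinct values
```

Storing `assign` (4 bits per weight for k=16) plus the 16 codebook floats is what makes this form of sharing compress well.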
JALAD: Joint Accuracy- and Latency-Aware Deep Structure Decoupling for Edge-Cloud Execution
Recent years have witnessed a rapid growth of deep-network based services and
applications. A practical and critical problem thus has emerged: how to
effectively deploy the deep neural network models such that they can be
executed efficiently. Conventional cloud-based approaches usually run the deep
models in data center servers, causing large latency because a significant
amount of data has to be transferred from the edge of network to the data
center. In this paper, we propose JALAD, a joint accuracy- and latency-aware
execution framework, which decouples a deep neural network so that a part of it
will run at edge devices and the other part inside the conventional cloud,
while only a minimum amount of data has to be transferred between them. Though
the idea seems straightforward, we are facing challenges including i) how to
find the best partition of a deep structure; ii) how to deploy the component at
an edge device that only has limited computation power; and iii) how to
minimize the overall execution latency. Our answers to these questions are a
set of strategies in JALAD, including 1) A normalization based in-layer data
compression strategy by jointly considering compression rate and model
accuracy; 2) A latency-aware deep decoupling strategy to minimize the overall
execution latency; and 3) An edge-cloud structure adaptation strategy that
dynamically changes the decoupling for different network conditions.
Experiments demonstrate that our solution can significantly reduce the
execution latency: it speeds up the overall inference execution with a
guaranteed model accuracy loss.
Comment: conference; copyright transferred to IEEE
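The partition decision (challenge i above) can be sketched as a one-dimensional search over layer boundaries. All cost numbers below are invented for illustration, not JALAD measurements; JALAD additionally compresses the in-layer data before transfer, which would lower the transfer costs used here.

```python
# Toy sketch of the edge-cloud split-point decision: try every layer
# boundary and keep the one with the lowest total latency. The costs
# are made-up numbers, not measurements from the JALAD paper.

def best_split(edge_cost, cloud_cost, transfer_cost):
    """Pick the boundary s minimizing total latency when layers [0, s)
    run on the edge device and layers [s, n) run in the cloud.
    transfer_cost[s] is the latency of shipping the tensor that crosses
    boundary s (s = 0 means sending the raw input)."""
    n = len(edge_cost)
    return min(range(n + 1),
               key=lambda s: sum(edge_cost[:s]) + transfer_cost[s] + sum(cloud_cost[s:]))

# Slow edge, fast cloud, activations shrinking with depth: early layers
# stay on the edge only while the transfer savings outweigh the slower
# edge compute.
edge     = [5, 5, 5, 5]
cloud    = [1, 1, 1, 1]
transfer = [20, 10, 4, 2, 1]
print(best_split(edge, cloud, transfer))  # 2
```

Exhaustive search is cheap here because a chain of n layers has only n + 1 cut points; the real system must also re-evaluate the choice as network conditions change.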