Few Shot Network Compression via Cross Distillation
Model compression has been widely adopted to obtain lightweight deep
neural networks. Most prevalent methods, however, require fine-tuning with
sufficient training data to ensure accuracy, which could be challenged by
privacy and security issues. As a compromise between privacy and performance,
in this paper we investigate few-shot network compression: given only a few samples
per class, how can we effectively compress the network with a negligible
performance drop? The core challenge of few-shot network compression lies in
the high estimation errors of the compressed network with respect to the original
one during inference, since the compressed network easily overfits the few training
instances. These estimation errors propagate and accumulate layer by layer and
eventually deteriorate the network output. To address this problem, we propose cross
distillation, a novel layer-wise knowledge distillation approach. By
interweaving the hidden layers of the teacher and student networks, the layer-wise
accumulation of estimation errors can be effectively reduced. The proposed method
offers a general framework compatible with prevalent network compression
techniques such as pruning. Extensive experiments on benchmark datasets
demonstrate that cross distillation can significantly improve the student
network's accuracy when only a few training instances are available.
Comment: AAAI 2020
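The layer-wise interleaving idea can be made concrete with a short sketch. The PyTorch code below is a minimal, hypothetical illustration (the MSE objective, the mixing weight mu, and the toy layers are assumptions for exposition, not the paper's exact formulation): at every layer the student is supervised against the teacher's activation, both along its own running path and when fed the teacher's previous-layer output, which is what keeps layer-wise estimation errors from accumulating.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def cross_distillation_loss(teacher_layers, student_layers, x, mu=0.6):
    """Layer-wise cross distillation sketch: at each layer the student is
    trained toward the teacher's activation, both on its own running path
    (imitation) and when fed the teacher's previous-layer output (correction),
    so estimation errors cannot silently accumulate across layers."""
    t_act, s_act = x, x
    loss = x.new_zeros(())
    for t_layer, s_layer in zip(teacher_layers, student_layers):
        with torch.no_grad():
            t_next = t_layer(t_act)              # teacher's reference activation
        s_next_corr = s_layer(t_act)             # student fed the teacher's input
        s_next_imit = s_layer(s_act)             # student on its own path
        loss = loss + mu * F.mse_loss(s_next_corr, t_next) \
                    + (1.0 - mu) * F.mse_loss(s_next_imit, t_next)
        t_act, s_act = t_next, s_next_imit
    return loss

# Few-shot usage on toy blocks standing in for a pruned network.
teacher = nn.ModuleList(nn.Sequential(nn.Linear(64, 64), nn.ReLU()) for _ in range(3))
student = nn.ModuleList(nn.Sequential(nn.Linear(64, 64), nn.ReLU()) for _ in range(3))
x = torch.randn(8, 64)                           # only a handful of training samples
cross_distillation_loss(teacher, student, x).backward()
```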
Dual Discriminator Adversarial Distillation for Data-free Model Compression
Knowledge distillation has been widely used to produce portable and efficient
neural networks that can be readily deployed on edge devices for computer vision
tasks. However, almost all top-performing knowledge distillation methods need
to access the original training data, which is usually very large and is
often unavailable. To tackle this problem, we propose a novel data-free
approach in this paper, named Dual Discriminator Adversarial Distillation
(DDAD) to distill a neural network without any training data or meta-data. To
be specific, we use a generator to create samples that mimic the original
training data through dual discriminator adversarial distillation. The
generator not only uses the pre-trained teacher's intrinsic statistics in
existing batch normalization layers but also obtains the maximum discrepancy
from the student model. Then the generated samples are used to train the
compact student network under the supervision of the teacher. The proposed
method obtains an efficient student network which closely approximates its
teacher network, despite using no original training data. Extensive experiments
are conducted to demonstrate the effectiveness of the proposed approach on
CIFAR-10, CIFAR-100 and Caltech101 datasets for classification tasks. Moreover,
we extend our method to semantic segmentation tasks on several public datasets
such as CamVid and NYUv2. All experiments show that our method outperforms all
baselines for data-free knowledge distillation.
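A rough sketch of how the two ingredients in the abstract might fit together is given below. This PyTorch code is an illustrative assumption rather than the authors' implementation (the loss weighting and the name ddad_step are hypothetical): the generator is pushed toward samples whose batch statistics match the running statistics stored in the teacher's BatchNorm layers while maximizing the teacher-student discrepancy, and the student then minimizes that discrepancy on the generated samples.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def bn_matching_loss(teacher, fake):
    """Push the batch statistics induced by generated samples toward the running
    statistics stored in the teacher's BatchNorm layers (teacher frozen, eval mode)."""
    stats, hooks = [], []

    def make_hook(bn):
        def hook(module, inputs, output):
            x = inputs[0]
            dims = [0] + list(range(2, x.dim()))      # all dims except channels
            stats.append((x.mean(dims), x.var(dims, unbiased=False),
                          bn.running_mean, bn.running_var))
        return hook

    for m in teacher.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
            hooks.append(m.register_forward_hook(make_hook(m)))
    teacher(fake)
    for h in hooks:
        h.remove()
    return sum(F.mse_loss(mu, rm) + F.mse_loss(var, rv)
               for mu, var, rm, rv in stats)

def ddad_step(generator, teacher, student, opt_g, opt_s, z_dim=64, batch=32):
    """One adversarial round, assuming the teacher is frozen and in eval mode."""
    # Generator step: BN statistic matching minus the teacher/student discrepancy,
    # so minimizing this loss *maximizes* the discrepancy (the adversarial part).
    fake = generator(torch.randn(batch, z_dim))
    disc = F.kl_div(F.log_softmax(student(fake), dim=1),
                    F.softmax(teacher(fake), dim=1), reduction='batchmean')
    loss_g = bn_matching_loss(teacher, fake) - disc
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

    # Student step: minimize the same discrepancy on freshly generated samples.
    fake = generator(torch.randn(batch, z_dim)).detach()
    loss_s = F.kl_div(F.log_softmax(student(fake), dim=1),
                      F.softmax(teacher(fake), dim=1).detach(),
                      reduction='batchmean')
    opt_s.zero_grad()
    loss_s.backward()
    opt_s.step()
```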
Multi-teacher knowledge distillation as an effective method for compressing ensembles of neural networks
Deep learning has contributed greatly to many successes in artificial
intelligence in recent years. Today, it is possible to train models that have
thousands of layers and hundreds of billions of parameters. Large-scale deep
models have achieved great success, but the enormous computational complexity
and gigantic storage requirements make it extremely difficult to implement them
in real-time applications. On the other hand, the size of the dataset is still
a real problem in many domains. Data are often missing, too expensive, or
impossible to obtain for other reasons. Ensemble learning is a partial
solution to the problem of small datasets and overfitting. However, ensemble
learning in its basic version is associated with a linear increase in
computational complexity. We analyzed the impact of the ensemble
decision-fusion mechanism and examined various methods of combining the
decisions, including voting algorithms. We used a modified knowledge
distillation framework as the decision-fusion mechanism, which additionally
allows the entire ensemble model to be compressed into the weight space of a
single model. We showed
that knowledge distillation can aggregate knowledge from multiple teachers in
only one student model and, with the same computational complexity, obtain a
better-performing model compared to a model trained in the standard manner. We
developed our own method for mimicking the responses of all teachers
simultaneously. We tested these solutions on several benchmark
datasets. Finally, we presented a broad range of applications of the efficient
multi-teacher knowledge distillation framework. In the first example, we used
knowledge distillation to develop models that could automate corrosion
detection on aircraft fuselages. The second example describes the detection of
smoke on observation cameras in order to counteract wildfires in forests.
Comment: Doctoral dissertation in the field of computer science (machine learning). Application of knowledge distillation as aggregation of ensemble models, along with several use cases. 140 pages, 67 figures, 13 tables
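As a concrete reference point for the decision-fusion idea described above, the following PyTorch snippet distills an ensemble into a single student. Averaging the teachers' temperature-scaled soft targets is only one simple fusion rule and merely stands in for the dissertation's own mechanism; the temperature and weighting values are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits_list, labels,
                          T=4.0, alpha=0.7):
    """Distill an ensemble into one student: the soft target is the average of
    all teachers' temperature-scaled predictions, mixed with the usual
    cross-entropy on the hard labels."""
    with torch.no_grad():
        soft_targets = torch.stack(
            [F.softmax(t / T, dim=1) for t in teacher_logits_list]).mean(0)
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  soft_targets, reduction='batchmean') * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Toy usage: three linear "teachers", one student, ten classes.
teachers = [nn.Linear(32, 10) for _ in range(3)]
student = nn.Linear(32, 10)
x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
with torch.no_grad():
    teacher_logits = [t(x) for t in teachers]
multi_teacher_kd_loss(student(x), teacher_logits, y).backward()
```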