One difficult problem in keyword spotting is how to reduce its memory
footprint while maintaining high accuracy. Although convolutional neural
networks have been shown to be effective for the small-footprint keyword
spotting problem, they still require hundreds of thousands of parameters to
achieve good performance.
performance. In this paper, we propose an efficient model based on depthwise
separable convolution layers and squeeze-and-excitation blocks. Specifically,
we replace the standard convolution with the depthwise separable convolution,
which reduces the number of parameters of the standard convolution without
significant performance degradation. We further improve the performance of the
depthwise separable convolution by reweighting the output feature maps of the
first convolution layer with a so-called squeeze-and-excitation block. We
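The parameter savings behind this design can be illustrated with a rough count. The sketch below compares a standard convolution with its depthwise separable counterpart and adds the two fully connected layers of a squeeze-and-excitation block; the channel sizes, kernel size, and SE reduction ratio are illustrative assumptions, not the configuration used in this paper.

```python
def standard_conv_params(c_in, c_out, k):
    # Standard 2-D convolution: one k x k filter per (input, output) channel pair.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # Depthwise step: one k x k filter per input channel,
    # followed by a 1x1 pointwise convolution that mixes channels.
    return c_in * k * k + c_in * c_out

def se_block_params(c, r=4):
    # Squeeze-and-excitation: two fully connected layers (c -> c/r -> c)
    # that produce one reweighting score per feature map.
    return c * (c // r) + (c // r) * c

c_in, c_out, k = 64, 64, 3            # illustrative layer sizes
std = standard_conv_params(c_in, c_out, k)        # 36864
dws = depthwise_separable_params(c_in, c_out, k)  # 4672
se = se_block_params(c_out)                       # 2048
print(std, dws, se)
print(f"reduction: {std / dws:.1f}x")             # roughly 7.9x fewer parameters
```

Even with the SE block's extra parameters included, the separable layer remains several times smaller than the standard convolution it replaces, which is the source of the footprint reduction described above.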
compare the proposed method with five representative models in two
experimental settings on the Google Speech Commands dataset. Experimental
results show that the proposed method achieves state-of-the-art performance.
For example, it achieves a classification error rate of 3.29% with only 72K
parameters in the first experiment, significantly outperforming the comparison
methods of similar model size. It also achieves an error rate of 3.97% with
only 10K parameters, which is slightly better than the state-of-the-art
comparison method of similar model size.