Deep neural network based full-band speech enhancement systems face the
challenges of high computational cost and an imbalanced distribution of
spectral information across frequency bands. In this paper, a lightweight
full-band model is proposed with two dedicated strategies: a learnable
spectral compression mapping for more effective compression of high-band
spectral information, and a multi-head attention mechanism for more effective
modeling of the global spectral pattern. Experiments validate the efficacy of
both strategies and show that the proposed model achieves competitive
performance with only 0.89M parameters.
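The two strategies can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the bin counts (`n_bins`, `n_low`, `n_comp`), head count, and the randomly initialized matrices standing in for trained parameters are all assumptions for the sake of the example. The idea is that low-band bins are kept at full resolution while high-band bins are projected through a learnable mapping, and attention is then applied across frames of the resulting features.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Learnable spectral compression mapping (sizes assumed for illustration) ---
n_bins, n_low, n_comp = 481, 160, 32        # full-band bins; low band kept; compressed high band
spec = rng.standard_normal((10, n_bins))    # (frames, bins) spectral features
W_comp = rng.standard_normal((n_bins - n_low, n_comp)) * 0.05  # would be learned with the network

# Keep the low band, compress the high band, concatenate along frequency.
compressed = np.concatenate([spec[:, :n_low], spec[:, n_low:] @ W_comp], axis=1)

# --- Multi-head attention over the sequence of spectral frames (single layer) ---
def multi_head_attention(x, Wq, Wk, Wv, n_heads):
    """Scaled dot-product attention across the first axis of x: (seq, d_model)."""
    seq, d_model = x.shape
    d_head = d_model // n_heads
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    out = np.empty_like(x)
    for h in range(n_heads):
        s = slice(h * d_head, (h + 1) * d_head)
        scores = q[:, s] @ k[:, s].T / np.sqrt(d_head)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        out[:, s] = weights @ v[:, s]
    return out

d_model, n_heads = compressed.shape[1], 4   # 192 features, 4 heads (assumed)
Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) * 0.05 for _ in range(3))
attended = multi_head_attention(compressed, Wq, Wk, Wv, n_heads)
```

In a real model the compression mapping and the attention projections would be `nn.Linear`-style trained parameters rather than fixed random matrices; the sketch only shows the data flow and shapes.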