The performance of music source separation (MSS) models has been greatly
improved in recent years thanks to the development of novel neural network
architectures and training pipelines. However, recent model designs for MSS
have mainly been motivated by other audio processing tasks or other research
fields, while the intrinsic characteristics and patterns of music signals have
not been fully explored. In this paper, we propose band-split RNN (BSRNN), a
frequency-domain model that explicitly splits the spectrogram of the mixture
into subbands and performs interleaved band-level and sequence-level modeling.
The choices of the bandwidths of the subbands can be determined by a priori
knowledge or expert knowledge on the characteristics of the target source in
order to optimize the performance on a certain type of target musical
instrument. To better make use of unlabeled data, we also describe a
semi-supervised model finetuning pipeline that can further improve the
performance of the model. Experimental results show that BSRNN trained only on
the MUSDB18-HQ dataset significantly outperforms several top-ranking models in
the Music Demixing (MDX) Challenge 2021, and that the semi-supervised finetuning
stage further improves the performance on all four instrument tracks.
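To make the band-split idea concrete, below is a minimal sketch of how a complex spectrogram could be split into predefined subbands and projected to a shared feature dimension before the interleaved band-level and sequence-level modeling. The band edges, feature dimension, and module structure here are illustrative assumptions, not the exact configuration used in the paper.

```python
import torch
import torch.nn as nn

class BandSplit(nn.Module):
    # Hypothetical band-split front-end: per-band normalization + linear projection.
    # Band edges and feature_dim are illustrative, not the paper's exact settings.
    def __init__(self, band_edges, feature_dim=128):
        super().__init__()
        self.band_edges = band_edges
        self.norms = nn.ModuleList(nn.LayerNorm(2 * (e - s)) for s, e in band_edges)
        self.fcs = nn.ModuleList(nn.Linear(2 * (e - s), feature_dim) for s, e in band_edges)

    def forward(self, spec):
        # spec: complex STFT of the mixture, shape (batch, freq_bins, time_frames)
        feats = []
        for (s, e), norm, fc in zip(self.band_edges, self.norms, self.fcs):
            band = spec[:, s:e, :]                        # (batch, band_width, time)
            band = torch.cat([band.real, band.imag], 1)   # real/imag -> (batch, 2*band_width, time)
            band = band.transpose(1, 2)                   # (batch, time, 2*band_width)
            feats.append(fc(norm(band)))                  # (batch, time, feature_dim)
        # band-level and sequence-level RNNs would then alternate over dims 1 and 2
        return torch.stack(feats, 1)                      # (batch, num_bands, time, feature_dim)

# Example: split a 2049-bin spectrogram (4096-point FFT) into four illustrative subbands
edges = [(0, 256), (256, 512), (512, 1024), (1024, 2049)]
spec = torch.randn(2, 2049, 100, dtype=torch.cfloat)
print(BandSplit(edges)(spec).shape)   # torch.Size([2, 4, 100, 128])
```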