This paper applies a new deep learning approach, based on diffusion models, to the task of generating raw audio. Diffusion models are a recent class of deep generative models that have shown outstanding results in image generation and have consequently received a great deal of attention from the computer vision community, while comparatively little attention has been given to other applications such as music generation in the waveform domain.
In this paper, a model for unconditional music generation is implemented: progressive distillation diffusion with a 1D U-Net. A comparison of different diffusion parameters and their contribution to the final result is then presented. One major advantage of the method implemented in this work is that the model supports progressive audio processing and generation, using a transformation from 1-channel 128 x 384 to 3-channel 128 x 128 mel-spectrograms together with looped generation. The empirical comparisons are carried out on several self-collected datasets.
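For context, progressive distillation (Salimans and Ho, 2022) halves the number of sampling steps by training a student network to reproduce two deterministic DDIM steps of a teacher in a single step. The paper's own implementation is not given here, so the following is only a minimal sketch of that idea, assuming an epsilon-prediction parameterization; the helper names (ddim_step, two_step_target) are hypothetical and time conditioning is omitted for brevity.

```python
import torch

@torch.no_grad()
def ddim_step(model, x, alpha_t, sigma_t, alpha_s, sigma_s):
    """One deterministic DDIM step from noise level t to s for an
    epsilon-prediction model (time conditioning omitted)."""
    eps = model(x)                       # predicted noise at level t
    x0 = (x - sigma_t * eps) / alpha_t   # implied clean-signal estimate
    return alpha_s * x0 + sigma_s * eps  # re-noise to level s

@torch.no_grad()
def two_step_target(teacher, x, sched):
    """Progressive-distillation target: two teacher DDIM steps
    (t -> mid -> s) that the student learns to match in one step.
    `sched` holds the (alpha, sigma) pairs for the three levels."""
    (a_t, s_t), (a_m, s_m), (a_s, s_s) = sched
    x_mid = ddim_step(teacher, x, a_t, s_t, a_m, s_m)
    return ddim_step(teacher, x_mid, a_m, s_m, a_s, s_s)
```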
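Likewise, the abstract only states the spectrogram shapes involved in the 1-channel to 3-channel conversion. A plausible minimal sketch, under the assumption that the 384-frame time axis is split into three consecutive 128-frame segments stacked as channels (the paper may use a different mapping):

```python
import numpy as np

def mel_1ch_to_3ch(mel: np.ndarray) -> np.ndarray:
    """Convert a (1, 128, 384) mel-spectrogram into (3, 128, 128) by
    splitting the time axis into three segments stacked as channels.
    Assumption: the mapping is a straight time-axis split."""
    assert mel.shape == (1, 128, 384)
    return np.concatenate(np.split(mel, 3, axis=2), axis=0)

def mel_3ch_to_1ch(mel: np.ndarray) -> np.ndarray:
    """Inverse mapping: re-join the three 128-frame channels in time."""
    assert mel.shape == (3, 128, 128)
    return np.concatenate(np.split(mel, 3, axis=0), axis=2)
```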