We propose a novel approach to time-scale modification of audio
signals. While traditional methods rely on the framing technique, the spectral
approach uses the short-time Fourier transform to preserve frequency content
during temporal stretching. Our neural-network model, TSM-Net, encodes raw
audio into a high-level latent representation, which we call the Neuralgram,
in which one vector represents 1024 audio samples. It is inspired by the
framing technique but avoids its clipping artifacts. Since the Neuralgram is a
real-valued two-dimensional matrix, we can apply existing image resizing
techniques to it and decode the result with our neural decoder to obtain the
time-scaled audio. Both the encoder and the decoder are trained with GANs,
which gives the model fair generalization ability on scaled Neuralgrams. Our
method yields few artifacts and opens a new possibility in the research of
modern time-scale modification. Audio samples can be found at
https://ernestchu.github.io/tsm-net-demo
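The resize-then-decode idea can be sketched in a few lines. The following is a minimal, hypothetical illustration (the trained encoder and decoder are not shown): it linearly interpolates a (channels, time) latent matrix along the time axis, which is one of the simplest image-resizing techniques one could apply to a Neuralgram. The function name `stretch_neuralgram` and the toy latent shape are illustrative assumptions, not part of the released model.

```python
import numpy as np

def stretch_neuralgram(neuralgram: np.ndarray, rate: float) -> np.ndarray:
    """Resize a (channels, time) latent matrix along the time axis.

    rate > 1 produces more latent vectors (slower audio after decoding);
    rate < 1 produces fewer (faster audio). Uses plain linear interpolation
    as a stand-in for any image resizing technique.
    """
    channels, t = neuralgram.shape
    new_t = max(1, int(round(t * rate)))
    old_idx = np.linspace(0.0, t - 1, num=t)
    new_idx = np.linspace(0.0, t - 1, num=new_t)
    # Interpolate each latent channel independently along time.
    return np.stack(
        [np.interp(new_idx, old_idx, neuralgram[c]) for c in range(channels)]
    )

# Toy latent: 4 channels x 100 time steps, stretched by a factor of 1.5.
ng = np.random.randn(4, 100)
stretched = stretch_neuralgram(ng, 1.5)
# Since each latent vector stands for 1024 audio samples, decoding the
# stretched Neuralgram would yield audio roughly 1.5x as long.
```

Because the stretch happens entirely in the latent domain, the decoder, not a phase-reconstruction heuristic, is responsible for producing coherent audio from the resized matrix.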