We propose a novel neural waveform compression method to catalyze emerging
speech semantic communications. By introducing a nonlinear transform and
variational modeling, we effectively capture the dependencies within speech
frames and estimate the probability distribution of the speech features more
accurately, yielding better compression performance. In particular, the
speech signals are analyzed and synthesized by a pair of nonlinear transforms,
yielding latent features. An entropy model with hyperprior is built to capture
the probability distribution of the latent features, followed by quantization
and entropy coding. The proposed waveform codec can be optimized flexibly
toward an arbitrary rate; another appealing feature is that it can be
easily optimized for any differentiable loss function, including the
perceptual losses used in semantic communications. To further improve
fidelity, we
incorporate residual coding to mitigate the degradation arising from
quantization distortion in the latent space. Results indicate that, at the
same quality, the proposed method saves up to 27% of the coding rate compared
with the widely used adaptive multi-rate wideband (AMR-WB) codec and
emerging neural waveform coding methods.
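The rate-estimation step described above (quantize latents, then entropy-code them under a distribution predicted by a hyperprior) can be sketched as a toy in plain NumPy. This is an illustrative sketch, not the paper's actual model: the latents, the `mu`/`sigma` parameters standing in for the hyperprior's predictions, and all dimensions are invented for illustration. It does show why a more accurate distribution estimate lowers the coding rate.

```python
import math
import numpy as np

def box_likelihood(y_hat, mu, sigma):
    """P(y_hat) for rounded latents: Gaussian mass on [y_hat-0.5, y_hat+0.5].

    This is the standard 'box' likelihood for integer-quantized values under
    a Gaussian whose mean/scale a hyperprior would predict.
    """
    cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    upper = (y_hat + 0.5 - mu) / sigma
    lower = (y_hat - 0.5 - mu) / sigma
    return np.array([cdf(u) - cdf(l) for u, l in zip(upper, lower)])

def rate_bits(y, mu, sigma):
    """Quantize latents and return (y_hat, ideal entropy-coding cost in bits)."""
    y_hat = np.round(y)                                   # scalar quantization
    p = np.clip(box_likelihood(y_hat, mu, sigma), 1e-12, 1.0)
    return y_hat, float(-np.log2(p).sum())                # -log2 P = bit cost

# Toy "latent features" with std 2, and two hyperprior scale estimates:
# one matched to the data, one far too wide.
rng = np.random.default_rng(0)
y = rng.normal(0.0, 2.0, size=64)
mu = np.zeros_like(y)

y_hat, bits_matched = rate_bits(y, mu, np.full_like(y, 2.0))
_, bits_mismatched = rate_bits(y, mu, np.full_like(y, 20.0))
# The matched scale yields a lower estimated coding rate, mirroring the
# claim that a better distribution estimate improves compression.
```

The residual-coding step in the abstract would then transmit (a coded version of) `y - y_hat` to mitigate the quantization distortion left in the latent space.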