Neural Fourier Shift for Binaural Speech Rendering
We present a neural network that renders binaural speech from monaural audio given the position and orientation of the source. Most previous work has synthesized binaural speech by conditioning on position and orientation in the feature space of convolutional neural networks. These synthesis approaches estimate the target binaural speech well, even on in-the-wild data, but generalize poorly when rendering audio from out-of-distribution domains. To alleviate this, we propose Neural Fourier Shift (NFS), a novel network architecture that renders binaural speech in the Fourier space. Specifically, starting from the geometric time delay determined by the distance between the source and the receiver, NFS is trained to predict the delays and scales of various early reflections. By design, NFS is memory- and compute-efficient, interpretable, and independent of the source domain. Experimental results show that NFS outperforms previous studies on the benchmark dataset with up to 25 times less memory and 6 times fewer computations.

Comment: Submitted to ICASSP 202
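The architecture's name alludes to the Fourier shift theorem, under which a time delay becomes multiplication by a linear phase ramp in the frequency domain. The sketch below illustrates only that underlying mechanism, not the paper's network; the function name, sample rate, and distance value are illustrative assumptions.

```python
import numpy as np

def fourier_shift(signal: np.ndarray, delay_s: float, sr: int) -> np.ndarray:
    """Apply a (possibly fractional-sample) time delay via the Fourier
    shift theorem: a delay of tau seconds is multiplication by the
    phase ramp exp(-j*2*pi*f*tau) in frequency. The shift is circular."""
    n = len(signal)
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / sr)  # frequencies in Hz
    spectrum *= np.exp(-2j * np.pi * freqs * delay_s)
    return np.fft.irfft(spectrum, n=n)

# Illustrative geometric delay from a source-receiver distance.
sr = 16000
speed_of_sound = 343.0                # m/s
distance = 1.7                        # metres (made-up value)
tau = distance / speed_of_sound      # propagation delay in seconds
mono = np.random.randn(sr)           # stand-in for one second of speech
delayed = fourier_shift(mono, tau, sr)
```

Predicting a delay and a scale per reflection then amounts to choosing a phase ramp and a gain per path, which is why such a parameterization is cheap and interpretable.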
A Unified System for Chord Transcription and Key Extraction Using Hidden Markov Models.
Semi-supervised learning for continuous emotional intensity controllable speech synthesis with disentangled representations
Recent text-to-speech models can generate natural speech close to human speech, but they remain limited in expressiveness. Existing emotional speech synthesis models offer controllability by interpolating features with scaling parameters in an emotional latent space. However, the latent space these models produce makes continuous emotional intensity hard to control, because features such as emotion and speaker identity are entangled. In this paper, we propose a novel method to control the continuous intensity of emotions using semi-supervised learning. The model learns emotions of intermediate intensity from pseudo-labels generated from phoneme-level sequences of speech information. The embedding space built by the proposed model forms a uniform grid geometry with an emotional basis. Experimental results show that the proposed method is superior in controllability and naturalness.

Comment: Accepted by Interspeech 202
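As a rough illustration of the interpolation idea the abstract describes (moving along an emotional basis between a neutral and a full-intensity embedding to obtain intermediate intensities), here is a minimal sketch; the linear interpolation, embedding dimension, and all names are assumptions, not the paper's method.

```python
import numpy as np

def intensity_embedding(neutral: np.ndarray, emotion: np.ndarray,
                        intensity: float) -> np.ndarray:
    """Interpolate between a neutral embedding and a full-intensity emotion
    embedding to get a conditioning vector of intermediate intensity."""
    assert 0.0 <= intensity <= 1.0
    return (1.0 - intensity) * neutral + intensity * emotion

# Illustrative conditioning vectors on a uniform grid of intensities.
rng = np.random.default_rng(0)
neutral_emb = rng.standard_normal(128)   # stand-in embeddings
happy_emb = rng.standard_normal(128)
grid = np.linspace(0.0, 1.0, num=5)      # uniform intensity grid
conditions = [intensity_embedding(neutral_emb, happy_emb, a) for a in grid]
```

A uniform grid geometry of this kind is what makes intensity a single scalar knob: equal steps in the scaling parameter correspond to equal steps in the embedding space.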