Sub-Band Knowledge Distillation Framework for Speech Enhancement
In single-channel speech enhancement, methods based on full-band spectral
features have been widely studied. However, only a few methods pay attention to
non-full-band spectral features. In this paper, we explore a knowledge
distillation framework based on sub-band spectral mapping for single-channel
speech enhancement. Specifically, we divide the full frequency band into
multiple sub-bands and pre-train an elite-level sub-band enhancement model
(teacher model) for each sub-band. These teacher models are dedicated to
processing their own sub-bands. Next, under the teacher models' guidance, we
train a general sub-band enhancement model (student model) that works for all
sub-bands. Without increasing the number of model parameters or the
computational complexity, the student model's performance is further improved.
To evaluate the proposed method, we conducted extensive experiments on an
open-source dataset. The final experimental results show that the guidance
from the elite-level teacher models dramatically improves the student model's
performance, which exceeds that of the full-band model while using fewer parameters.
Comment: Published in Interspeech 202
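The sub-band split and teacher-guided training described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the equal-width band split, and the `alpha` weighting between teacher guidance and the clean target are all assumptions.

```python
# Hypothetical sketch of sub-band knowledge distillation for speech enhancement.
# A full-band spectral frame is divided into equal sub-bands; the student's
# training loss mixes supervision from the clean target with guidance from the
# pre-trained sub-band teacher's output.

def split_subbands(frame, num_bands):
    """Divide one full-band spectral frame into equal-width sub-bands."""
    band_size = len(frame) // num_bands
    return [frame[i * band_size:(i + 1) * band_size] for i in range(num_bands)]

def mse(a, b):
    """Mean squared error between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def distillation_loss(student_out, teacher_out, clean_target, alpha=0.5):
    """Weighted sum of a distillation term (match the teacher) and a
    supervision term (match the clean target); alpha is an assumed weight."""
    return alpha * mse(student_out, teacher_out) + (1 - alpha) * mse(student_out, clean_target)
```

In practice each teacher would be an elite model pre-trained on its own sub-band, and the same student weights would be applied to every sub-band in turn.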
Controllable Accented Text-to-Speech Synthesis
Accented text-to-speech (TTS) synthesis seeks to generate speech with an
accent (L2) as a variant of the standard version (L1). Accented TTS synthesis
is challenging because L2 differs from L1 in both phonetic rendering and
prosody patterns. Furthermore, there is no easy solution to the
control of the accent intensity in an utterance. In this work, we propose a
neural TTS architecture that allows us to control the accent and its intensity
during inference. This is achieved through three novel mechanisms, 1) an accent
variance adaptor to model the complex accent variance with three prosody
controlling factors, namely pitch, energy and duration; 2) an accent intensity
modeling strategy to quantify the accent intensity; 3) a consistency constraint
module to encourage the TTS system to render the expected accent intensity at a
fine level. Experiments show that the proposed system attains superior
performance to the baseline models in terms of accent rendering and intensity
control. To our best knowledge, this is the first study of accented TTS
synthesis with explicit intensity control.
Comment: To be submitted for possible journal publication
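The idea of controlling accent intensity at inference can be pictured as interpolating between native (L1) prosody and the full L2 accent. The sketch below is purely illustrative; the function name, the prosody keys, and the linear interpolation are assumptions, not the proposed accent variance adaptor.

```python
# Illustrative sketch of explicit accent-intensity control: the L2 deviation
# from native (L1) prosody is scaled by an intensity factor in [0, 1] for the
# three prosody-controlling factors named in the abstract.

def apply_accent(l1_prosody, l2_offset, intensity):
    """Interpolate between L1 prosody (intensity=0) and the full L2 accent
    (intensity=1) for pitch, energy and duration."""
    assert 0.0 <= intensity <= 1.0
    return {k: l1_prosody[k] + intensity * l2_offset[k] for k in l1_prosody}
```

A real system would predict these offsets per phoneme with learned predictors rather than take them as fixed inputs.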
FCTalker: Fine and Coarse Grained Context Modeling for Expressive Conversational Speech Synthesis
Conversational Text-to-Speech (TTS) aims to synthesize an utterance with the
right linguistic and affective prosody in a conversational context. The
correlation between the current utterance and the dialogue history at the
utterance level was used to improve the expressiveness of synthesized speech.
However, the fine-grained information in the dialogue history at the word level
also has an important impact on the prosodic expression of an utterance, which
has not been well studied in prior work. Therefore, we propose a novel
expressive conversational TTS model, termed FCTalker, that learns fine- and
coarse-grained context dependencies simultaneously during speech generation.
Specifically, FCTalker includes fine- and coarse-grained encoders to exploit
word- and utterance-level context dependencies. To model
the word-level dependencies between an utterance and its dialogue history, the
fine-grained dialogue encoder is built on top of a dialogue BERT model. The
experimental results show that the proposed method outperforms all baselines
and generates more expressive speech that is contextually appropriate. We
release the source code at: https://github.com/walker-hyf/FCTalker.
Comment: 5 pages, 4 figures, 1 table. Submitted to ICASSP 2023. We release the
source code at: https://github.com/walker-hyf/FCTalker
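The combination of fine- and coarse-grained context could be sketched as pooling word-level dialogue-BERT embeddings and concatenating them with an utterance-level context vector. This is a guess at the general shape, not FCTalker's actual fusion; mean pooling and concatenation are assumed stand-ins.

```python
# Minimal sketch of fusing fine-grained (word-level) and coarse-grained
# (utterance-level) dialogue context before conditioning a TTS decoder.
# Mean pooling and concatenation here are illustrative assumptions.

def mean_pool(word_vectors):
    """Pool word-level embeddings of the dialogue history into one vector."""
    n = len(word_vectors)
    return [sum(v[d] for v in word_vectors) / n for d in range(len(word_vectors[0]))]

def fuse_context(word_vectors, utterance_vector):
    """Concatenate pooled fine-grained context with the coarse utterance context."""
    return mean_pool(word_vectors) + utterance_vector
```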
Accurate emotion strength assessment for seen and unseen speech based on data-driven deep learning
Emotion classification of speech and assessment of the emotion strength are required in applications such as emotional text-to-speech and voice conversion. The emotion attribute ranking function based on Support Vector Machine (SVM) was proposed to predict emotion strength for emotional speech corpus. However, the trained ranking function doesn't generalize to new domains, which limits the scope of applications, especially for out-of-domain or unseen speech. In this paper, we propose a data-driven deep learning model, i.e. StrengthNet, to improve the generalization of emotion strength assessment for seen and unseen speech. This is achieved by the fusion of emotional data from various domains. We follow a multi-task learning network architecture that includes an acoustic encoder, a strength predictor, and an auxiliary emotion predictor. Experiments show that the predicted emotion strength of the proposed StrengthNet is highly correlated with ground truth scores for both seen and unseen speech. We release the source codes at: https://github.com/ttslr/StrengthNet
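The multi-task setup described above pairs a strength regressor with an auxiliary emotion classifier. A minimal sketch of such a joint objective, assuming an MSE strength loss, a cross-entropy emotion loss, and an illustrative weight `lam` (none of which are confirmed by the abstract):

```python
import math

# Hypothetical sketch of a StrengthNet-style multi-task objective:
# strength regression plus an auxiliary emotion-classification loss.

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def multitask_loss(strength_pred, strength_true, emo_logits, emo_label, lam=0.1):
    """MSE on predicted emotion strength plus lam * cross-entropy on the
    auxiliary emotion class; lam is an assumed balancing weight."""
    mse = (strength_pred - strength_true) ** 2
    ce = -math.log(softmax(emo_logits)[emo_label])
    return mse + lam * ce
```

In the paper's architecture, both heads would share the acoustic encoder, so the auxiliary emotion task regularizes the strength predictor.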
Exploiting modality-invariant feature for robust multimodal emotion recognition with missing modalities
Multimodal emotion recognition leverages complementary information across
modalities to gain performance. However, we cannot guarantee that the data of
all modalities are always present in practice. In the studies to predict the
missing data across modalities, the inherent difference between heterogeneous
modalities, namely the modality gap, presents a challenge. To address this, we
propose to use invariant features for a missing modality imagination network
(IF-MMIN) which includes two novel mechanisms: 1) an invariant feature learning
strategy that is based on the central moment discrepancy (CMD) distance under
the full-modality scenario; 2) an invariant feature based imagination module
(IF-IM) to alleviate the modality gap during the missing modalities prediction,
thus improving the robustness of multimodal joint representation. Comprehensive
experiments on the benchmark dataset IEMOCAP demonstrate that the proposed
model outperforms all baselines and consistently improves the overall emotion
recognition performance under uncertain missing-modality conditions. We release
the code at: https://github.com/ZhuoYulang/IF-MMIN.
Comment: 5 pages, 3 figures, 1 table. Submitted to ICASSP 2023. We release the
code at: https://github.com/ZhuoYulang/IF-MMIN
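The central moment discrepancy (CMD) distance used for invariant feature learning compares the means and higher-order central moments of two feature distributions. A minimal sketch of CMD over two sample sets (unnormalized, with an assumed maximum moment order; the real metric typically scales each term by the feature range):

```python
# Minimal sketch of the central moment discrepancy (CMD) between two sets of
# feature vectors: distance between their means plus distances between their
# central moments up to `max_order`. Normalization factors are omitted.

def l2(a, b):
    """Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def central_moment(samples, mean, k):
    """k-th central moment of a set of vectors, computed per dimension."""
    n = len(samples)
    return [sum((s[d] - mean[d]) ** k for s in samples) / n for d in range(len(mean))]

def cmd(x, y, max_order=3):
    """Unnormalized CMD: mean difference plus higher central-moment differences."""
    mx = [sum(col) / len(x) for col in zip(*x)]
    my = [sum(col) / len(y) for col in zip(*y)]
    dist = l2(mx, my)
    for k in range(2, max_order + 1):
        dist += l2(central_moment(x, mx, k), central_moment(y, my, k))
    return dist
```

Minimizing such a distance between modality-specific feature distributions under the full-modality scenario is what pushes them toward a shared, modality-invariant space.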
Explicit Intensity Control for Accented Text-to-speech
Accented text-to-speech (TTS) synthesis seeks to generate speech with an
accent (L2) as a variant of the standard version (L1). Controlling the accent
intensity during TTS is an interesting research direction that has attracted
increasing attention. Recent work designs a speaker-adversarial loss to
disentangle the speaker and accent information, and then adjusts the loss
weight to control the accent intensity. However, such a
control method lacks interpretability, and there is no direct correlation
between the controlling factor and natural accent intensity. To this end, this
paper proposes a new, intuitive and explicit accent intensity control scheme for
accented TTS. Specifically, we first extract a posterior probability, called
the ``goodness of pronunciation (GoP)'', from an L1 speech recognition model to
quantify the phoneme-level accent intensity of accented speech, then design a
FastSpeech2-based TTS model, named Ai-TTS, to take the accent intensity
expression into account during speech generation. Experiments show that our
method outperforms the baseline model in terms of accent rendering and
intensity control.
Comment: 5 pages, 3 figures. Submitted to ICASSP 2023. arXiv admin note: text
overlap with arXiv:2209.1080
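A common variant of the goodness-of-pronunciation score compares the posterior of the canonical phone against the best-scoring phone, averaged over the phone's frames. The sketch below assumes per-frame phone posteriors are already available from an L1 recognizer; it is one standard GoP formulation, not necessarily the exact one used in the paper.

```python
import math

# Sketch of a goodness-of-pronunciation (GoP) score: for each frame, the log
# posterior of the canonical phone minus the log posterior of the best phone,
# averaged over the frames aligned to that phone. Scores are <= 0; values
# near 0 indicate native-like (low accent intensity) pronunciation.

def gop(frame_posteriors, canonical_phone):
    """frame_posteriors: list of {phone: posterior} dicts from an L1 ASR model."""
    scores = []
    for post in frame_posteriors:
        scores.append(math.log(post[canonical_phone]) - math.log(max(post.values())))
    return sum(scores) / len(scores)
```

A lower (more negative) GoP thus maps to a stronger accent on that phoneme, giving the TTS model a direct, interpretable intensity signal.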