2,271 research outputs found
U-Style: Cascading U-nets with Multi-level Speaker and Style Modeling for Zero-Shot Voice Cloning
Zero-shot speaker cloning aims to synthesize speech for any target speaker
unseen during TTS system building, given only a single speech reference of the
speaker at hand. Although more practical in real applications, current
zero-shot methods still produce speech with unsatisfactory naturalness and
speaker similarity. Moreover, endowing the target speaker with arbitrary
speaking styles in the zero-shot setup has not been considered. This is because the
unique challenge of zero-shot speaker and style cloning is to learn the
disentangled speaker and style representations from only short references
representing an arbitrary speaker and an arbitrary style. To address this
challenge, we propose U-Style, which employs Grad-TTS as the backbone,
particularly cascading a speaker-specific encoder and a style-specific encoder
between the text encoder and the diffusion decoder. Thus, leveraging signal
perturbation, U-Style is explicitly decomposed into speaker- and style-specific
modeling parts, achieving better speaker and style disentanglement. To improve
unseen speaker and style modeling ability, the two encoders perform
multi-level speaker and style modeling with skip-connected U-nets that
incorporate both representation extraction and information reconstruction. In
addition, to improve the naturalness of synthetic speech, we adopt mean-based
instance normalization and style-adaptive layer normalization in these encoders
to perform representation extraction and condition adaptation, respectively.
Experiments show that U-Style significantly surpasses the state-of-the-art
methods in unseen speaker cloning regarding naturalness and speaker similarity.
Notably, U-Style can transfer the style of an unseen source speaker to
another unseen target speaker, achieving flexible combinations of desired
speaker timbre and style in zero-shot voice cloning.
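The two normalization operations named in the abstract can be illustrated in isolation. The following is a minimal NumPy sketch, not the authors' implementation: the shapes, function names, and the assumption that the style scale/shift (`gamma`, `beta`) are already predicted from a style embedding are all hypothetical.

```python
import numpy as np

def mean_instance_norm(x):
    """Mean-based instance normalization: remove the per-channel mean
    over time while leaving the variance (and thus residual style cues)
    intact. x: array of shape [channels, frames]."""
    return x - x.mean(axis=1, keepdims=True)

def style_adaptive_layer_norm(h, gamma, beta, eps=1e-5):
    """Style-adaptive layer normalization: standard layer norm over the
    channel axis, followed by a scale and shift that a style encoder
    would predict (gamma, beta assumed given here).
    h: [channels, frames]; gamma, beta: [channels, 1]."""
    mu = h.mean(axis=0, keepdims=True)
    var = h.var(axis=0, keepdims=True)
    return gamma * (h - mu) / np.sqrt(var + eps) + beta
```

Removing only the mean (rather than fully standardizing) is what lets the downstream style encoder still observe variance-related cues, which is the intuition behind using it for representation extraction.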
Expressive-VC: Highly Expressive Voice Conversion with Attention Fusion of Bottleneck and Perturbation Features
Voice conversion for highly expressive speech is challenging. Current
approaches struggle to balance speaker similarity, intelligibility, and
expressiveness. To address this problem, we propose
Expressive-VC, a novel end-to-end voice conversion framework that leverages
advantages from both neural bottleneck feature (BNF) approach and information
perturbation approach. Specifically, we use a BNF encoder and a Perturbed-Wav
encoder to form a content extractor to learn linguistic and para-linguistic
features respectively, where BNFs come from a robust pre-trained ASR model and
the perturbed wave becomes speaker-irrelevant after signal perturbation. We
further fuse the linguistic and para-linguistic features through an attention
mechanism, where speaker-dependent prosody features serve as the attention
query; these are produced by a prosody encoder that takes the target speaker
embedding and the normalized pitch and energy of the source speech as input.
Finally, the decoder consumes the integrated features and the speaker-dependent
prosody features to generate the converted speech. Experiments demonstrate that
Expressive-VC is superior to several state-of-the-art systems, achieving both
high expressiveness captured from the source speech and high speaker similarity
with the target speaker; meanwhile, intelligibility is well maintained.
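The attention fusion described above, with prosody features as the query and the two content streams as keys/values, can be sketched as a per-frame scaled dot-product attention over two candidates. This is a simplified NumPy illustration under assumed shapes (time-aligned `[T, d]` streams), not the Expressive-VC architecture itself.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_fusion(prosody_q, bnf, perturbed):
    """Fuse linguistic (BNF) and para-linguistic (perturbed-wav) features
    with scaled dot-product attention, using prosody features as query.
    prosody_q, bnf, perturbed: [T, d], assumed time-aligned."""
    kv = np.stack([bnf, perturbed], axis=1)                 # [T, 2, d]
    scores = np.einsum('td,tkd->tk', prosody_q, kv)
    scores /= np.sqrt(prosody_q.shape[-1])
    weights = softmax(scores, axis=1)                       # [T, 2] per-frame mix
    return np.einsum('tk,tkd->td', weights, kv)             # [T, d]
```

Because the weights form a convex combination per frame, the fused feature always lies between the two streams elementwise; the prosody query decides, frame by frame, how much linguistic versus para-linguistic information to pass on.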
Improving the Speech Intelligibility By Cochlear Implant Users
In this thesis, we focus on improving the intelligibility of speech for cochlear implant (CI) users. As an auditory prosthetic device, a CI can restore hearing sensations for most patients with profound hearing loss in both ears in a quiet background. However, CI users still have serious problems understanding speech in noisy and reverberant environments. Bandwidth limitation, missing temporal fine structure, and reduced spectral resolution due to a limited number of electrodes are further factors that make hearing in noisy conditions difficult for CI users, regardless of the type of noise. To mitigate these difficulties for CI listeners, we investigate several contributing factors, such as the effect of low harmonics on tone identification in natural and vocoded speech, the contribution of matched envelope dynamic range to binaural benefits, and the contribution of low-frequency harmonics to tone identification in quiet and in a six-talker babble background. These results reveal several promising methods for improving speech intelligibility for CI patients. In addition, we investigate the benefits of voice conversion for improving speech intelligibility for CI users, motivated by an earlier study showing that familiarity with a talker’s voice can improve understanding of a conversation. Research has shown that when adults are familiar with someone’s voice, they can more accurately, and even more quickly, process and understand what the person is saying. This effect, known as the “familiar talker advantage”, motivated us to examine its benefit for CI patients using voice conversion techniques. In the present research, we propose a new method based on multi-channel voice conversion to improve the intelligibility of transformed speech for CI patients.
Dict-TTS: Learning to Pronounce with Prior Dictionary Knowledge for Text-to-Speech
Polyphone disambiguation aims to capture accurate pronunciation knowledge
from natural text sequences for reliable Text-to-speech (TTS) systems. However,
previous approaches require substantial annotated training data and additional
efforts from language experts, making it difficult to extend high-quality
neural TTS systems to out-of-domain daily conversations and countless languages
worldwide. This paper tackles the polyphone disambiguation problem from a
concise and novel perspective: we propose Dict-TTS, a semantic-aware generative
text-to-speech model that uses an online dictionary (prior pronunciation
knowledge readily available for natural language). Specifically, we design a
semantics-to-pronunciation attention (S2PA) module to match the semantic
patterns between the input text sequence and the prior semantics in the
dictionary and obtain the corresponding pronunciations; the S2PA module can be
easily trained with the end-to-end TTS model without any annotated phoneme
labels. Experimental results in three languages show that our model outperforms
several strong baseline models in terms of pronunciation accuracy and improves
the prosody modeling of TTS systems. Further extensive analyses demonstrate
that each design in Dict-TTS is effective. The code is available at
\url{https://github.com/Zain-Jiang/Dict-TTS}.
Comment: Accepted by NeurIPS 202
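The core idea of the S2PA module, matching text positions to dictionary entries by semantic similarity and retrieving a weighted pronunciation, can be sketched as a single attention step. This is a hypothetical NumPy simplification with assumed shapes, not the Dict-TTS code.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def s2pa(text_hidden, dict_semantics, dict_prons):
    """Semantics-to-pronunciation attention sketch: each text position
    attends over N dictionary entries by semantic similarity and returns
    a soft (weighted) pronunciation embedding, so no phoneme labels are
    needed for training.
    text_hidden: [T, d]; dict_semantics: [N, d]; dict_prons: [N, p]."""
    scores = text_hidden @ dict_semantics.T / np.sqrt(text_hidden.shape[-1])
    weights = softmax(scores, axis=-1)   # [T, N] match to dictionary entries
    return weights @ dict_prons          # [T, p]
```

Since the attention weights are differentiable, this lookup can be trained end-to-end with the rest of the TTS model, which is the property the abstract highlights.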
Analysis on Using Synthesized Singing Techniques in Assistive Interfaces for Visually Impaired to Study Music
Tactile and auditory senses are the basic channels through which visually impaired people sense the world. Their interaction with assistive technologies accordingly focuses mainly on tactile and auditory interfaces. This paper discusses the validity of using the most appropriate singing synthesis techniques as a mediator in assistive technologies built specifically to address their music learning needs involving music scores and lyrics. Music scores with notations and lyrics are the main mediators in the musical communication channel between a composer and a performer. Visually impaired music lovers have little opportunity to access this mediator, since most scores exist only in visual format. In a music score, the vocal performer’s melody carries all the pleasant sound producible in the form of singing. Singing best fits a format in the temporal domain, compared with a tactile format in the spatial domain. Therefore, converting the existing visual format to a singing output is the most appropriate lossless transition, as shown by the initial research on an adaptive music score trainer for the visually impaired [1]. To extend that initial research, this study surveys existing singing synthesis techniques and research on auditory interfaces.
Automatic Pronunciation Assessment -- A Review
Pronunciation assessment and its application in computer-aided pronunciation
training (CAPT) have seen impressive progress in recent years. With the rapid
growth in language processing and deep learning over the past few years, there
is a need for an updated review. In this paper, we review methods employed in
pronunciation assessment for both phonemic and prosodic aspects. We categorize
the main challenges observed in prominent research trends, and highlight
existing limitations and available resources. This is followed by a discussion
of the remaining challenges and possible directions for future work.
Comment: 9 pages, accepted to EMNLP Finding
FastGraphTTS: An Ultrafast Syntax-Aware Speech Synthesis Framework
This paper integrates graph-to-sequence into an end-to-end text-to-speech
framework for syntax-aware modelling with syntactic information of input text.
Specifically, the input text is parsed by a dependency parsing module to form a
syntactic graph. The syntactic graph is then encoded by a graph encoder to
extract the syntactic hidden information, which is concatenated with phoneme
embedding and input to the alignment and flow-based decoding modules to
generate the raw audio waveform. The model is evaluated on two languages,
English and Mandarin, using single-speaker, few-shot target-speaker, and
multi-speaker datasets, respectively. Experimental results show better prosodic
consistency between the input text and the generated audio, higher scores in
subjective prosody evaluation, and the ability to perform voice conversion.
Besides, the efficiency of the model is largely boosted through the design of
an AI chip operator with 5x acceleration.
Comment: Accepted by The 35th IEEE International Conference on Tools with
Artificial Intelligence (ICTAI 2023)
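The encode-then-concatenate flow described in this abstract, a graph encoder over the dependency parse whose output is joined with phoneme embeddings, can be sketched with one round of toy message passing. The adjacency-mean aggregation, single layer, and all shapes here are assumptions for illustration; the real graph encoder and the word-to-phoneme alignment are not specified in the abstract.

```python
import numpy as np

def encode_syntax_graph(adj, node_feats, W):
    """One round of simple message passing over the dependency graph:
    average neighbour features via the row-normalized adjacency, then
    apply a linear map with ReLU. A toy stand-in for a graph encoder.
    adj: [N, N]; node_feats: [N, d]; W: [d, h]."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)  # avoid /0
    agg = (adj / deg) @ node_feats                          # neighbour mean
    return np.maximum(agg @ W, 0.0)                         # ReLU

def fuse(phoneme_emb, syntax_hidden):
    """Concatenate phoneme embeddings with aligned syntactic hidden
    states before decoding; word-to-phoneme alignment is assumed done
    upstream so both inputs share the first axis."""
    return np.concatenate([phoneme_emb, syntax_hidden], axis=-1)
```

The concatenated features would then feed the alignment and flow-based decoding modules, giving the decoder access to sentence-level syntactic context alongside phoneme identity.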