Mage - Reactive articulatory feature control of HMM-based parametric speech synthesis
In this paper, we present the integration of articulatory control into MAGE, a framework for real-time and interactive (reactive) parametric speech synthesis using hidden Markov models (HMMs). MAGE is based on the speech synthesis engine from HTS and uses acoustic features (spectrum and f0) to model and synthesize speech. In this work, we replace the standard acoustic models with models combining acoustic and articulatory features, such as tongue, lip and jaw positions. We then use feature-space-switched articulatory-to-acoustic regression matrices to control the spectral acoustic features by manipulating the articulatory features. Combining this synthesis model with MAGE allows us to interactively and intuitively modify phones synthesized in real time, for example transforming one phone into another, by controlling the configuration of the articulators in a visual display. Index Terms: speech synthesis, reactive, articulators
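A minimal sketch of the feature-space-switched regression idea described above, assuming one linear regression matrix per region of the articulatory feature space; all names and dimensions are illustrative, not MAGE's actual API:

    import numpy as np

    rng = np.random.default_rng(0)

    K = 4        # number of feature-space clusters (switching regions)
    D_ART = 6    # articulatory dims (e.g. tongue, lip and jaw positions)
    D_AC = 25    # spectral acoustic dims (e.g. mel-cepstral coefficients)

    # One regression matrix and bias per cluster; in a real system these
    # would be trained on paired articulatory/acoustic data.
    W = rng.standard_normal((K, D_AC, D_ART)) * 0.1
    b = rng.standard_normal((K, D_AC)) * 0.01
    centroids = rng.standard_normal((K, D_ART))   # cluster centres

    def articulatory_to_acoustic(art):
        """Map an articulatory vector to spectral features, switching on
        the nearest cluster in articulatory space."""
        k = np.argmin(np.linalg.norm(centroids - art, axis=1))
        return W[k] @ art + b[k]

    # Nudging one articulatory dimension (say, tongue height) shifts the
    # predicted spectrum, which is how a phone can be morphed in real time.
    art = np.zeros(D_ART)
    art[0] = 0.5
    print(articulatory_to_acoustic(art)[:5])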
End-To-End Speech Synthesis Applied to Brazilian Portuguese
Voice synthesis systems are popular in different applications, such as personal assistants, GPS applications, screen readers and accessibility tools. Voice provides a natural way for human-computer interaction. However, not all languages are at the same level in terms of resources and systems for voice synthesis. This work consists of the creation of publicly available resources for Brazilian Portuguese in the form of a dataset and deep learning models for end-to-end voice synthesis. The dataset has 10.5 hours from a single speaker. We investigated three different architectures to perform end-to-end speech synthesis: Tacotron 1, DCTTS and Mozilla TTS. We also analysed the performance of the models with different vocoders (RTISI-LA, WaveRNN and Universal WaveRNN), phonetic transcription usage, transfer learning (from English) and denoising. In the proposed scenario, a model based on Mozilla TTS and the RTISI-LA vocoder presented the best performance, achieving a 4.03 MOS value. We also verified that transfer learning, phonetic transcriptions and denoising are useful for training models on the presented dataset. The obtained results are comparable to related works covering English, even while using a smaller dataset. Comment: This paper is under consideration at COLING'2020 - The 28th International Conference on Computational Linguistics.
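RTISI-LA, the best-performing vocoder above, is a real-time variant of iterative spectrogram inversion. As a rough offline analogue, here is a hedged sketch using librosa's Griffin-Lim implementation (RTISI-LA's frame-by-frame look-ahead behaviour is not reproduced):

    import numpy as np
    import librosa

    # Load any waveform and take its magnitude spectrogram.
    y, sr = librosa.load(librosa.ex('trumpet'))
    S = np.abs(librosa.stft(y, n_fft=1024, hop_length=256))

    # Iteratively estimate a phase consistent with the magnitudes and
    # invert back to a waveform (offline stand-in for RTISI-LA).
    y_hat = librosa.griffinlim(S, n_iter=60, hop_length=256)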
The Romanian Speech Synthesis (RSS) corpus: building a high quality HMM-based speech synthesis system using a high sampling rate
This paper first introduces a newly-recorded high quality Romanian speech corpus designed for speech synthesis, called "RSS", along with Romanian front-end text processing modules and HMM-based synthetic voices built from the corpus. All of these are now freely available for academic use in order to promote Romanian speech technology research. The RSS corpus comprises 3500 training sentences and 500 test sentences uttered by a female speaker and was recorded using multiple microphones at 96 kHz sampling frequency in a hemi-anechoic chamber. The details of the new Romanian text processor we have developed are also given.
Using the database, we then revisit some basic configuration choices of speech synthesis, such as waveform sampling frequency and auditory frequency warping scale, with the aim of improving speaker similarity, which is an acknowledged weakness of current HMM-based speech synthesisers. As we demonstrate using perceptual tests, these configuration choices can make substantial differences to the quality of the synthetic speech. Contrary to common practice in automatic speech recognition, higher waveform sampling frequencies can offer enhanced feature extraction and improved speaker similarity for HMM-based speech synthesis.
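The auditory frequency warping mentioned above is commonly implemented in HMM synthesis as a first-order all-pass (bilinear) warp, with the warping coefficient chosen per sampling rate so the warp approximates the mel scale; the coefficients in the sketch below follow common HTS conventions and are approximate:

    import numpy as np

    def allpass_warp(omega, alpha):
        """First-order all-pass warp of angular frequency omega (radians),
        as used in mel-cepstral analysis; requires |alpha| < 1."""
        return omega + 2.0 * np.arctan(alpha * np.sin(omega)
                                       / (1.0 - alpha * np.cos(omega)))

    # Higher sampling rates need a stronger warp to approximate the mel
    # scale over the wider frequency range (alpha values are approximate
    # HTS conventions).
    for sr, alpha in [(16000, 0.42), (48000, 0.55)]:
        omega = np.linspace(0.0, np.pi, 5)
        print(sr, np.round(allpass_warp(omega, alpha), 3))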
Building and Designing Expressive Speech Synthesis
We know there is something special about speech. Our voices are not just a means of communicating. They also give a deep impression of who we are and what we might know. They can betray our upbringing, our emotional state, our state of health. They can be used to persuade and convince, to calm and to excite. As speech systems enter the social domain they are required to interact, support and mediate our social relationships with 1) each other, 2) digital information, and, increasingly, 3) AI-based algorithms and processes. Socially Interactive Agents (SIAs) are at the forefront of research and innovation in this area. There is an assumption that in the future "spoken language will provide a natural conversational interface between human beings and so-called intelligent systems." [Moore 2017, p. 283]. A considerable amount of previous research work has tested this assumption with mixed results. However, as has been pointed out, "voice interfaces have become notorious for fostering frustration and failure" [Nass and Brave 2005, p. 6].

It is within this context, between our exceptional and intelligent human use of speech to communicate and interact with other humans, and our desire to leverage this means of communication for artificial systems, that the technology often termed expressive speech synthesis uncomfortably falls. Uncomfortably, because it is often overshadowed by issues in interactivity and the underlying intelligence of the system, which is something that emerges from the interaction of many of the components in a SIA. This is especially true of what we might term conversational speech, where decoupling how things are spoken from when and to whom they are spoken can seem an impossible task. This is an even greater challenge in evaluation and in characterising full systems which have made use of expressive speech. Furthermore, when designing an interaction with a SIA, we must consider not only how SIAs should speak but how much, and whether they should even speak at all.

These considerations cannot be ignored. Any speech synthesis that is used in the context of an artificial agent will have a perceived accent, a vocal style, an underlying emotion and an intonational model. Dimensions like accent and personality (cross-speaker parameters) as well as vocal style, emotion and intonation during an interaction (within-speaker parameters) need to be built into the design of a synthetic voice. Even a default or neutral voice has to consider these same expressive speech synthesis components. Such design parameters have a strong influence on how effectively a system will interact, how it is perceived and its assumed ability to perform a task or function. To ignore these is to blindly accept a set of design decisions that ignores the complex effect speech has on the user's successful interaction with a system. Thus expressive speech synthesis is a key design component in SIAs.

This chapter explores the world of expressive speech synthesis, aiming to act as a starting point for those interested in the design, building and evaluation of such artificial speech. The debates and literature within this topic are vast and fundamentally multidisciplinary in focus, covering a wide range of disciplines such as linguistics, pragmatics, psychology, speech and language technology, robotics and human-computer interaction (HCI), to name a few. It is not our aim to synthesise these areas but to give a scaffold and a starting point for the reader by exploring the critical dimensions and decisions they may need to consider when choosing to use expressive speech. To do this, the chapter explores the building of expressive synthesis, highlighting key decisions and parameters as well as emphasising future challenges in expressive speech research and development. Yet, before these are expanded upon, we must first try to define what we actually mean by expressive speech.
HMM-based synthesis of child speech
The synthesis of child speech presents challenges both in the collection of data and in the building of a synthesiser from that data. Because only limited data can be collected, and the domain of that data is constrained, it is difficult to obtain the type of phonetically-balanced corpus usually used in speech synthesis. As a consequence, building a synthesiser from this data is difficult. Concatenative synthesisers are not robust to corpora with many missing units (as is likely when the corpus content is not carefully designed), so we chose to build a statistical parametric synthesiser using the HMM-based system HTS. This technique has previously been shown to perform well for limited amounts of data, and for data collected under imperfect conditions. We compared six different configurations of the synthesiser, using both speaker-dependent and speaker-adaptive modelling techniques, and using varying amounts of data. The output from these systems was evaluated alongside natural and vocoded speech, in a Blizzard-style listening test.
Vivos Voco: A survey of recent research on voice transformation at IRCAM
IRCAM has a long experience in analysis, synthesis and transformation of voice. Natural voice transformations are of great interest for many applications and can be combined with text-to-speech systems, leading to a powerful creation tool. We present research conducted at IRCAM on voice transformation over the last few years. Transformations can be achieved in a global way by modifying pitch, spectral envelope, durations, etc. While this sacrifices the possibility of attaining a specific target voice, the approach allows the production of new voices of a high degree of naturalness with different gender and age, modified vocal quality, or another speech style. These transformations can be applied in real time using ircamTools TRAX. Transformation can also be done in a more specific way in order to transform a voice towards the voice of a target speaker. Finally, we present some recent research on the transformation of expressivity.
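The global transformations described (pitch, duration) can be approximated with standard open-source tools; a minimal sketch using librosa as a stand-in, since TRAX is a commercial plug-in whose spectral-envelope handling is not reproduced here ('voice.wav' is a hypothetical input file):

    import librosa

    y, sr = librosa.load('voice.wav', sr=None)   # hypothetical input file

    # Global pitch shift (+4 semitones) and duration change (25% slower),
    # applied independently; a rough analogue of TRAX-style controls.
    y_higher = librosa.effects.pitch_shift(y, sr=sr, n_steps=4)
    y_slower = librosa.effects.time_stretch(y, rate=0.8)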
Prosody generation for text-to-speech synthesis
The absence of convincing intonation makes current parametric speech synthesis systems sound dull and lifeless, even when trained on expressive speech data. Typically, these systems use regression techniques to predict the fundamental frequency (F0) frame-by-frame. This approach leads to overly smooth pitch contours and fails to construct an appropriate prosodic structure across the full utterance. In order to capture and reproduce larger-scale pitch patterns, we propose a template-based approach for automatic F0 generation, where per-syllable pitch-contour templates (from a small, automatically learned set) are predicted by a recurrent neural network (RNN). The use of syllable templates mitigates the over-smoothing problem and is able to reproduce pitch patterns observed in the data. The use of an RNN, paired with connectionist temporal classification (CTC), enables the prediction of structure in the pitch contour spanning the entire utterance. This novel F0 prediction system is used alongside separate LSTMs for predicting phone durations and the other acoustic features, to construct a complete text-to-speech system. Later, we investigate the benefits of including long-range dependencies in frame-level duration prediction using uni-directional recurrent neural networks. Since prosody is a supra-segmental property, we consider an alternative approach to intonation generation which exploits long-term dependencies of F0 by effective modelling of linguistic features using recurrent neural networks. For this purpose, we propose a hierarchical encoder-decoder and a multi-resolution parallel encoder, where the encoder takes word and higher-level linguistic features at the input and upsamples them to phone level through a series of hidden layers; this is integrated into a hybrid system which was then submitted to the Blizzard Challenge workshop. We then highlight some of the issues in current approaches, and a plan for future directions of investigation is outlined along with ongoing work.
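A minimal PyTorch sketch of the template-prediction idea: a recurrent network reads frame-level linguistic features and is trained with CTC loss to emit a shorter sequence of per-syllable template IDs. All dimensions, the template inventory size, and the feature layout are illustrative, not the thesis's actual configuration:

    import torch
    import torch.nn as nn

    N_TEMPLATES = 16   # size of the learned pitch-contour template set
    D_LING = 40        # linguistic feature dimension per frame

    rnn = nn.GRU(D_LING, 64, batch_first=True, bidirectional=True)
    head = nn.Linear(128, N_TEMPLATES + 1)      # +1 for the CTC blank
    ctc = nn.CTCLoss(blank=N_TEMPLATES)

    x = torch.randn(2, 100, D_LING)             # 2 utterances, 100 frames
    targets = torch.randint(0, N_TEMPLATES, (2, 12))   # 12 syllables each
    input_lens = torch.tensor([100, 100])
    target_lens = torch.tensor([12, 12])

    h, _ = rnn(x)
    log_probs = head(h).log_softmax(-1).transpose(0, 1)  # (T, N, C) for CTC
    loss = ctc(log_probs, targets, input_lens, target_lens)
    loss.backward()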
Text-Independent Voice Conversion
This thesis deals with text-independent solutions for voice conversion. It first introduces the use of vocal tract length normalization (VTLN) for voice conversion. The presented variants of VTLN allow speaker characteristics to be changed easily by means of a few trainable parameters. Furthermore, it is shown how VTLN can be expressed in the time domain, substantially reducing the computational cost while maintaining high speech quality. The second text-independent voice conversion paradigm is residual prediction. In particular, two proposed techniques, residual smoothing and the application of unit selection, result in substantial improvements in both speech quality and voice similarity. In order to apply the well-studied linear transformation paradigm to text-independent voice conversion, two text-independent speech alignment techniques are introduced. One is based on automatic segmentation and mapping of artificial phonetic classes, and the other is a completely data-driven approach using unit selection. The latter achieves a performance very similar to the conventional text-dependent approach in terms of speech quality and similarity. It is also successfully applied to cross-language voice conversion. The investigations of this thesis are based on several corpora in three different languages, i.e., English, Spanish, and German. Results are also presented from the multilingual voice conversion evaluation in the framework of the international speech-to-speech translation project TC-Star.
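A crude numpy sketch of the text-independent alignment problem the thesis addresses: without parallel recordings of the same text, source frames must be paired with acoustically similar target frames before a linear conversion can be trained. Nearest-neighbour matching stands in here for the thesis's unit-selection alignment (which would also add a concatenation cost); all data is synthetic:

    import numpy as np

    rng = np.random.default_rng(0)
    src = rng.standard_normal((200, 24))   # source-speaker spectral frames
    tgt = rng.standard_normal((180, 24))   # target-speaker frames, different text

    # Pair each source frame with its closest target frame in feature space,
    # yielding pseudo-parallel data for training a conversion function.
    d = np.linalg.norm(src[:, None, :] - tgt[None, :, :], axis=-1)
    pairs = d.argmin(axis=1)
    parallel = np.stack([src, tgt[pairs]], axis=1)   # shape (200, 2, 24)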
Overcoming the limitations of statistical parametric speech synthesis
At the time of beginning this thesis, statistical parametric speech synthesis (SPSS) using hidden Markov models (HMMs) was the dominant synthesis paradigm within the research community. SPSS systems are effective at generalising across the linguistic contexts present in training data to account for inevitable unseen linguistic contexts at synthesis time, making these systems flexible and their performance stable. However, HMM synthesis suffers from a "ceiling effect" in the naturalness achieved, meaning that, despite great progress, the speech output is rarely confused for natural speech. There are many hypotheses in the literature for the causes of reduced synthesis quality in HMM speech synthesis, and for the improvements consequently required. However, until this thesis, these hypothesised causes were rarely tested.

This thesis makes two types of contributions to the field of speech synthesis; each of these appears in a separate part of the thesis. Part I introduces a methodology for testing hypothesised causes of limited quality within HMM speech synthesis systems. This investigation aims to identify what causes these systems to fall short of natural speech. Part II uses the findings from Part I of the thesis to make informed improvements to speech synthesis.

The usual approach taken to improve synthesis systems is to attribute reduced synthesis quality to a hypothesised cause. A new system is then constructed with the aim of removing that hypothesised cause. However, this is typically done without prior testing to verify the hypothesised cause of reduced quality. As such, even if improvements in synthesis quality are observed, it is not known whether a real underlying issue or only a more minor one has been fixed. In contrast, I perform a wide range of perceptual tests in Part I of the thesis to discover what the real underlying causes of reduced quality in HMM synthesis are, and the level to which each contributes.

Using the knowledge gained in Part I of the thesis, Part II then looks to make improvements to synthesis quality. Two well-motivated improvements to standard HMM synthesis are investigated. The first follows from averaging across differing linguistic contexts being identified as a major contributing factor to reduced synthesis quality; this averaging is typically performed during decision-tree regression in HMM synthesis. Therefore a system which removes averaging across differing linguistic contexts, and instead performs averaging only across matching linguistic contexts (called rich-context synthesis), is investigated. The second of the motivated improvements follows the finding that the parametrisation (i.e., vocoding) of speech, standard practice in SPSS, introduces a noticeable drop in quality before any modelling is even performed. Therefore the hybrid synthesis paradigm is investigated. These systems aim to remove the effect of vocoding by using SPSS to inform the selection of units in a unit selection system. Both of the motivated improvements applied in Part II are found to make significant gains in synthesis quality, demonstrating the benefit of performing the style of perceptual testing conducted in this thesis.
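A toy numpy illustration of the averaging problem identified in Part I: standard decision-tree clustering pools frames from differing linguistic contexts into one blurred average, while rich-context synthesis averages only over matching contexts. The numbers and context labels are purely illustrative:

    import numpy as np

    rng = np.random.default_rng(1)

    # Four training tokens of the 'same' phone in differing contexts, each
    # with a 3-dim spectral feature vector near a context-dependent target.
    contexts = np.array(['stressed', 'unstressed', 'stressed', 'phrase_final'])
    frames = (np.array([1.0, 5.0, 1.0, 9.0])[:, None]
              + rng.standard_normal((4, 3)) * 0.1)

    pooled = frames.mean(axis=0)                         # blurred tree-leaf mean
    rich = frames[contexts == 'stressed'].mean(axis=0)   # matching contexts only

    print('pooled:', np.round(pooled, 2))
    print('rich-context:', np.round(rich, 2))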