
    Data-driven Extraction of Intonation Contour Classes

    In this paper we introduce the first steps towards a new data-driven method for extracting intonation events that requires no prior prosodic labelling. Given data segmented at the syllable level, it derives local and global contour classes by stylisation and subsequent clustering of the stylisation parameter vectors. Local contour classes correspond to pitch movements spanning one or more syllables and determine the local f0 shape. Global classes are associated with intonation phrases and determine the f0 register. Local classes are initially derived for syllabic segments, which are then concatenated incrementally by means of statistical language modelling of co-occurrence patterns. Owing to its generality, the method is in principle language-independent and potentially able to handle aspects of prosody other than intonation.
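    The stylise-then-cluster pipeline described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: it assumes polynomial stylisation of each syllable's f0 samples and a plain k-means over the resulting coefficient vectors; all function names and parameter values are illustrative.

    ```python
    import numpy as np

    def stylise(f0_syllable, order=2):
        """Fit a low-order polynomial to one syllable's f0 samples;
        the coefficient vector serves as the stylisation parameter vector."""
        t = np.linspace(0.0, 1.0, len(f0_syllable))  # normalised syllable time
        return np.polyfit(t, f0_syllable, order)

    def kmeans(vectors, k, iters=50, seed=0):
        """Minimal k-means over stylisation vectors; each cluster is a
        candidate local contour class."""
        rng = np.random.default_rng(seed)
        centroids = vectors[rng.choice(len(vectors), k, replace=False)].copy()
        for _ in range(iters):
            dists = np.linalg.norm(vectors[:, None] - centroids[None], axis=2)
            labels = dists.argmin(axis=1)
            for j in range(k):
                if (labels == j).any():
                    centroids[j] = vectors[labels == j].mean(axis=0)
        return labels, centroids

    # Toy data: five rising and five falling syllable contours (Hz)
    t = np.linspace(0.0, 1.0, 20)
    rises = [100 + 30 * t + np.random.default_rng(i).normal(0, 1, 20) for i in range(5)]
    falls = [130 - 30 * t + np.random.default_rng(i).normal(0, 1, 20) for i in range(5)]
    vecs = np.array([stylise(c) for c in rises + falls])
    labels, _ = kmeans(vecs, k=2)
    ```

    On such clearly separated data, the rises and falls end up in different clusters, i.e. two distinct local contour classes.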

    Evaluation of a system for F0 contour prediction for European Portuguese

    This paper presents the evaluation of a system for speech F0 contour prediction for European Portuguese using the Fujisaki model. It is composed of two command-generating sub-systems: the phrase command sub-system and the accent command sub-system. The parameters for evaluating the ability of each sub-system are described. A comparison is made between original and predicted F0 contours. Finally, the results of a perceptual test are discussed.
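    In the Fujisaki model referred to above, the log-F0 contour is the superposition of a baseline, phrase components (impulse responses to phrase commands) and accent components (step responses to accent commands). A minimal sketch of the synthesis side, with illustrative parameter values:

    ```python
    import numpy as np

    def phrase_component(t, alpha=2.0):
        """Phrase control impulse response: Gp(t) = a^2 * t * e^(-a*t), t >= 0."""
        tc = np.clip(t, 0.0, None)
        return np.where(t >= 0, alpha**2 * tc * np.exp(-alpha * tc), 0.0)

    def accent_component(t, beta=20.0, gamma=0.9):
        """Accent control step response: Ga(t) = min(1 - (1 + b*t) e^(-b*t), gamma)."""
        tc = np.clip(t, 0.0, None)
        g = 1.0 - (1.0 + beta * tc) * np.exp(-beta * tc)
        return np.where(t >= 0, np.minimum(g, gamma), 0.0)

    def fujisaki_f0(t, fb, phrases, accents):
        """ln F0(t) = ln Fb + sum Ap*Gp(t - T0) + sum Aa*[Ga(t - T1) - Ga(t - T2)]."""
        ln_f0 = np.full_like(t, np.log(fb))
        for ap, t0 in phrases:                 # (amplitude, onset time)
            ln_f0 += ap * phrase_component(t - t0)
        for aa, t1, t2 in accents:             # (amplitude, onset, offset)
            ln_f0 += aa * (accent_component(t - t1) - accent_component(t - t2))
        return np.exp(ln_f0)

    t = np.linspace(0.0, 2.0, 200)
    f0 = fujisaki_f0(t, fb=80.0, phrases=[(0.5, 0.0)], accents=[(0.4, 0.3, 0.6)])
    ```

    Prediction then amounts to estimating the command amplitudes and timings from text, and evaluation to comparing the resulting contour against the F0 extracted from natural speech.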

    Correlation between phonetic factors and linguistic events regarding a prosodic pattern of European Portuguese: a practical proposal

    In this article, a prosodic model for European Portuguese (henceforth EP) based on a linguistic approach is described. It was developed within the Antigona Project, an electronic-commerce system using a speech interface (Speech-to-Text plus Text-to-Speech, the latter based on a time-concatenation technique) for EP. The purpose of our work is to contribute practical strategies for improving the quality and naturalness of synthetic speech with respect to prosodic processing. It is also our goal to show that syntactic structures strongly determine prosodic patterns in EP. The pragmatic, commercial objective of this system, namely selling a product, is also worth emphasising: such an application deals with a specific vocabulary, deployed in predictable syntactic constructions and sentences, which makes prosodic contours and focus placement predictable. This study was carried out in close articulation between engineering experience and tools and the linguistic approach. We believe this work represents an important achievement for future research on synthetic speech processing, in particular for EP. Moreover, given their syntactic resemblances, it can be applied to other Romance languages.

    Explaining the PENTA model: a reply to Arvaniti and Ladd

    This paper presents an overview of the Parallel Encoding and Target Approximation (PENTA) model of speech prosody, in response to an extensive critique by Arvaniti & Ladd (2009). PENTA is a framework for conceptually and computationally linking communicative meanings to fine-grained prosodic details, based on an articulatory-functional view of speech. Target Approximation simulates the articulatory realisation of underlying pitch targets, the prosodic primitives in the framework. Parallel Encoding provides an operational scheme that enables simultaneous encoding of multiple communicative functions. We also outline how PENTA can be computationally tested with a set of software tools. With the help of one of the tools, we offer a PENTA-based hypothetical account of the Greek intonational patterns reported by Arvaniti & Ladd, showing how it is possible to predict the prosodic shapes of an utterance from the lexical and postlexical meanings it conveys.
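    The Target Approximation mechanism mentioned above can be sketched numerically: within each syllable, f0 asymptotically approaches a linear pitch target under a critically damped third-order response, with the f0 state transferred across syllable boundaries. The formulation below follows the quantitative TA (qTA) equations as I understand them; the parameter values are illustrative, not from the paper.

    ```python
    import numpy as np

    def target_approximation(t, m, b, lam, f0_0, df0_0=0.0, ddf0_0=0.0):
        """Approach a linear pitch target T(t) = m*t + b with a third-order
        critically damped response:
            f0(t) = (m*t + b) + (c1 + c2*t + c3*t^2) * e^(-lam*t),
        where c1..c3 are set by the initial f0 value, velocity and
        acceleration inherited from the previous syllable."""
        c1 = f0_0 - b
        c2 = df0_0 + c1 * lam - m
        c3 = (ddf0_0 + 2.0 * c2 * lam - c1 * lam**2) / 2.0
        return (m * t + b) + (c1 + c2 * t + c3 * t**2) * np.exp(-lam * t)

    # One 300 ms syllable: static 120 Hz target, starting from 100 Hz
    t = np.linspace(0.0, 0.3, 100)
    f0 = target_approximation(t, m=0.0, b=120.0, lam=40.0, f0_0=100.0)
    ```

    The contour starts at the inherited 100 Hz and, by the syllable's end, has nearly reached the 120 Hz target, which is how the model derives surface prosodic shapes from underlying targets.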

    Nonparallel Emotional Speech Conversion

    We propose a nonparallel data-driven emotional speech conversion method. It enables the transfer of emotion-related characteristics of a speech signal while preserving the speaker's identity and linguistic content. Most existing approaches require parallel data and time alignment, which are not available in most real applications. We achieve nonparallel training based on an unsupervised style transfer technique, which learns a translation model between two distributions instead of a deterministic one-to-one mapping between paired examples. The conversion model consists of an encoder and a decoder for each emotion domain. We assume that the speech signal can be decomposed into an emotion-invariant content code and an emotion-related style code in latent space. Emotion conversion is performed by extracting and recombining the content code of the source speech and the style code of the target emotion. We tested our method on a nonparallel corpus with four emotions. Both subjective and objective evaluations show the effectiveness of our approach.

    Comment: Published in INTERSPEECH 2019, 5 pages, 6 figures. Simulation available at http://www.jian-gao.org/emoga
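    The decompose-and-recombine idea above can be illustrated with a toy stand-in for the learned encoder/decoder pair: treat the per-dimension normalised feature sequence as the content code and its (mean, std) statistics as the style code, in the spirit of adaptive instance normalisation. This is only a sketch of the mechanism, not the paper's network; all names and data are illustrative.

    ```python
    import numpy as np

    def encode(x):
        """Split a feature sequence (frames x dims) into a content code
        (the normalised sequence) and a style code (per-dim mean and std)."""
        mu, sigma = x.mean(axis=0), x.std(axis=0) + 1e-8
        return (x - mu) / sigma, (mu, sigma)

    def decode(content, style):
        """Recombine a content code with a style code."""
        mu, sigma = style
        return content * sigma + mu

    rng = np.random.default_rng(0)
    neutral = rng.normal(0.0, 1.0, (100, 4))   # source-utterance features
    angry = rng.normal(2.0, 3.0, (100, 4))     # target-emotion reference

    content, _ = encode(neutral)               # keep the linguistic content
    _, angry_style = encode(angry)             # borrow the emotion style
    converted = decode(content, angry_style)   # source content, target style
    ```

    The converted sequence keeps the frame-by-frame structure of the source while taking on the target emotion's global statistics; in the actual model both codes are learned in latent space rather than computed in closed form.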

    Speech Synthesis Based on Hidden Markov Models
