HMM-Based Emotional Speech Synthesis Using Average Emotion Model

Abstract

This paper presents a technique for synthesizing emotional speech based on an emotion-independent model called the "average emotion" model. The average emotion model is trained on a multi-emotion speech database. Applying an MLLR-based model adaptation method, we can transform the average emotion model to represent a target emotion that is not included in the training data. A multi-emotion speech database covering four emotions, "neutral", "happiness", "sadness", and "anger", is used in our experiment. The results of subjective tests show that the average emotion model can effectively synthesize neutral speech and can be adapted to a target emotion model using very limited training data.
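
The paper itself gives no implementation, but the core step it relies on, MLLR adaptation of HMM Gaussian means, can be sketched briefly. The snippet below is a minimal, hypothetical illustration (the function name, array shapes, and the toy transform are our assumptions, not from the paper): it applies an already-estimated MLLR transform W = [b | A] to the mean vectors of an average-emotion model, giving adapted means mu_hat = A*mu + b for the target emotion.

```python
import numpy as np

def apply_mllr_mean_transform(means: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Adapt HMM Gaussian means with an MLLR transform.

    means: (n_gaussians, d) mean vectors of the average emotion model.
    W:     (d, d+1) transform W = [b | A], estimated from a small amount
           of target-emotion adaptation data.
    Returns the adapted means mu_hat = A @ mu + b, shape (n_gaussians, d).
    """
    # Extended means xi = [1, mu^T]^T, so that mu_hat = W @ xi.
    xi = np.hstack([np.ones((means.shape[0], 1)), means])
    return xi @ W.T

# Toy usage with d = 3 features: identity rotation plus a bias,
# i.e. a pure mean shift toward the target emotion.
d = 3
means = np.zeros((5, d))
W = np.hstack([np.full((d, 1), 0.5), np.eye(d)])  # W = [b | A]
print(apply_mllr_mean_transform(means, W))  # every mean shifted by b = 0.5
```

Because MLLR estimates one shared linear transform instead of re-estimating every Gaussian individually, it needs far less adaptation data, which is consistent with the paper's claim that the target emotion model can be obtained from very limited training data.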
