
    Singing voice correction using canonical time warping

    Expressive singing voice correction is an appealing but challenging problem. A robust time-warping algorithm that synchronizes two singing recordings can provide a promising solution. We therefore propose to address the problem with canonical time warping (CTW), which aligns amateur singing recordings to professional ones. A new pitch contour is generated from the alignment information, and the pitch-corrected singing is synthesized back through the vocoder. The objective evaluation shows that CTW is robust against pitch-shifting and time-stretching effects, and the subjective test demonstrates that CTW outperforms the other methods, including DTW and commercial auto-tuning software. Finally, we demonstrate the applicability of the proposed method in a practical, real-world scenario.
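    As an illustration of the overall pipeline, here is a minimal Python sketch: it aligns the two takes with plain DTW on MFCC features (standing in for CTW, which additionally projects both recordings into a shared subspace before warping), maps the professional F0 contour onto the amateur timing, and resynthesizes with the WORLD vocoder. The file names, frame settings, and the use of WORLD are assumptions, not the paper's exact setup.

```python
# Sketch: alignment-based pitch correction (DTW stands in for CTW).
import numpy as np
import librosa
import pyworld as pw

FS = 16000
HOP = 80  # 5 ms at 16 kHz, matching WORLD's default frame period

amateur, _ = librosa.load("amateur.wav", sr=FS)       # hypothetical files
pro, _ = librosa.load("professional.wav", sr=FS)

# 1. Features for alignment (CTW would first learn a shared subspace via CCA).
mfcc_a = librosa.feature.mfcc(y=amateur, sr=FS, hop_length=HOP)
mfcc_p = librosa.feature.mfcc(y=pro, sr=FS, hop_length=HOP)
_, path = librosa.sequence.dtw(X=mfcc_a, Y=mfcc_p)    # warping path, end-to-start
path = path[::-1]

# 2. WORLD analysis of the amateur take.
x = amateur.astype(np.float64)
f0_a, t_a = pw.harvest(x, FS)
sp = pw.cheaptrick(x, f0_a, t_a, FS)
ap = pw.d4c(x, f0_a, t_a, FS)

# 3. New pitch contour: professional F0 mapped onto the amateur's timing.
y = pro.astype(np.float64)
f0_p, _ = pw.harvest(y, FS)
f0_new = f0_a.copy()
for i, j in path:
    if i < len(f0_new) and j < len(f0_p) and f0_new[i] > 0 and f0_p[j] > 0:
        f0_new[i] = f0_p[j]

# 4. Synthesize the pitch-corrected singing through the vocoder.
corrected = pw.synthesize(f0_new, sp, ap, FS)
```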

    Nonparallel Emotional Speech Conversion

    We propose a nonparallel data-driven emotional speech conversion method. It enables the transfer of emotion-related characteristics of a speech signal while preserving the speaker's identity and linguistic content. Most existing approaches require parallel data and time alignment, which are not available in most real applications. We achieve nonparallel training based on an unsupervised style transfer technique, which learns a translation model between two distributions instead of a deterministic one-to-one mapping between paired examples. The conversion model consists of an encoder and a decoder for each emotion domain. We assume that the speech signal can be decomposed into an emotion-invariant content code and an emotion-related style code in latent space. Emotion conversion is performed by extracting and recombining the content code of the source speech and the style code of the target emotion. We tested our method on a nonparallel corpus with four emotions. Both subjective and objective evaluations show the effectiveness of our approach.
    Comment: Published in INTERSPEECH 2019, 5 pages, 6 figures. Simulation available at http://www.jian-gao.org/emoga
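    The decomposition-and-recombination step can be made concrete with a small PyTorch sketch. The layers and dimensions below are illustrative assumptions, not the paper's architecture; the sketch only shows how a time-varying content code and an utterance-level style code could be extracted and recombined by a decoder.

```python
# Sketch: content/style decomposition for emotion conversion (assumed shapes).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Splits an acoustic feature sequence into a time-varying content code
    and a single style vector per utterance."""
    def __init__(self, feat_dim=36, content_dim=64, style_dim=8):
        super().__init__()
        self.content = nn.Conv1d(feat_dim, content_dim, kernel_size=5, padding=2)
        self.style = nn.Sequential(
            nn.Conv1d(feat_dim, style_dim, kernel_size=5, padding=2),
            nn.AdaptiveAvgPool1d(1),  # collapse time: utterance-level emotion code
        )

    def forward(self, x):             # x: (batch, feat_dim, frames)
        return self.content(x), self.style(x)

class Decoder(nn.Module):
    """Reconstructs features from a content code conditioned on a style code."""
    def __init__(self, feat_dim=36, content_dim=64, style_dim=8):
        super().__init__()
        self.net = nn.Conv1d(content_dim + style_dim, feat_dim,
                             kernel_size=5, padding=2)

    def forward(self, content, style):
        style = style.expand(-1, -1, content.size(-1))  # broadcast over time
        return self.net(torch.cat([content, style], dim=1))

# Conversion: content of the source speech + style of the target emotion.
enc, dec_tgt = Encoder(), Decoder()
x_src = torch.randn(1, 36, 200)       # source utterance (e.g. neutral)
x_tgt = torch.randn(1, 36, 180)       # reference utterance in the target emotion
content_src, _ = enc(x_src)
_, style_tgt = enc(x_tgt)             # in practice, the target domain's encoder
converted = dec_tgt(content_src, style_tgt)
```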

    LSTM based voice conversion for laryngectomees

    This paper describes a voice conversion system designed with the aim of improving the intelligibility and pleasantness of oesophageal voices. Two different systems have been built, one to transform the spectral magnitude and another one for the fundamental frequency, both based on DNNs. Ahocoder has been used to extract the spectral information (mel cepstral coefficients), and a specific pitch extractor has been developed to calculate the fundamental frequency of the oesophageal voices. The cepstral coefficients are converted by means of an LSTM network. The conversion of the intonation curve is implemented through two different LSTM networks, one dedicated to voiced/unvoiced detection and another one for the prediction of F0 from the converted cepstral coefficients. The experiments described here involve conversion from one oesophageal speaker to a specific healthy voice. The intelligibility of the signals has been measured with a Kaldi-based ASR system. A preference test has been implemented to evaluate the subjective preference of the obtained converted voices, comparing them with the original oesophageal voice. The results show that spectral conversion improves ASR, while restoring the intonation is preferred by human listeners.
    This work has been partially funded by the Spanish Ministry of Economy and Competitiveness with FEDER support (RESTORE project, TEC2015-67163-C2-1-R), the Basque Government (BerbaOla project, KK-2018/00014) and the European Union's H2020 research and innovation programme under the Marie Curie European Training Network ENRICH (675324).
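    A minimal PyTorch sketch of the two-stage setup described above: one LSTM maps the oesophageal mel cepstral coefficients to the target healthy voice, and two further LSTMs predict the voiced/unvoiced decision and F0 from the converted cepstra. All dimensions and layer sizes are assumptions (Ahocoder typically yields around 40 mel cepstral coefficients); the training loop is omitted.

```python
# Sketch: LSTM-based spectral and intonation conversion (assumed dimensions).
import torch
import torch.nn as nn

class SpectralLSTM(nn.Module):
    """Maps oesophageal mel-cepstra to healthy-voice mel-cepstra frame by frame."""
    def __init__(self, mcep_dim=40, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(mcep_dim, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, mcep_dim)

    def forward(self, mcep):              # (batch, frames, mcep_dim)
        h, _ = self.lstm(mcep)
        return self.out(h)                # converted mel-cepstra

class ProsodyLSTM(nn.Module):
    """Two LSTMs: one for V/UV detection, one predicting log-F0 from the
    converted cepstral coefficients."""
    def __init__(self, mcep_dim=40, hidden=128):
        super().__init__()
        self.vuv = nn.LSTM(mcep_dim, hidden, batch_first=True)
        self.vuv_out = nn.Linear(hidden, 1)
        self.f0 = nn.LSTM(mcep_dim, hidden, batch_first=True)
        self.f0_out = nn.Linear(hidden, 1)

    def forward(self, mcep):
        v, _ = self.vuv(mcep)
        f, _ = self.f0(mcep)
        return torch.sigmoid(self.vuv_out(v)), self.f0_out(f)

spec_net, pros_net = SpectralLSTM(), ProsodyLSTM()
src = torch.randn(1, 300, 40)             # oesophageal utterance: frames x MCEPs
converted = spec_net(src)
vuv, lf0 = pros_net(converted)
# F0 contour: exponentiate log-F0 in voiced frames, zero elsewhere.
f0 = torch.where(vuv > 0.5, lf0.exp(), torch.zeros_like(lf0))
```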