
    Performance Evaluation of The Speaker-Independent HMM-based Speech Synthesis System "HTS-2007" for the Blizzard Challenge 2007

    This paper describes a speaker-independent/adaptive HMM-based speech synthesis system developed for the Blizzard Challenge 2007. The new system, named HTS-2007, employs speaker adaptation (CSMAPLR+MAP), feature-space adaptive training, mixed-gender modeling, and full-covariance modeling using CSMAPLR transforms, in addition to several other techniques that have proved effective in our previous systems. Subjective evaluation results show that the new system generates significantly better-quality synthetic speech than speaker-dependent approaches built from realistic amounts of speech data, and that it bears comparison with speaker-dependent approaches even when large amounts of speech data are available.
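    The adaptation step named above (CSMAPLR) estimates a constrained affine transform from the target speaker's data and applies it to the average voice model's Gaussian parameters; because the same matrix rotates both means and covariances, a diagonal-covariance model effectively gains full covariances, which is the "full-covariance modeling using CSMAPLR transforms" referred to in the abstract. A minimal Python sketch of applying one such transform, assuming the model-space form mu_hat = A*mu - b, Sigma_hat = A*Sigma*A^T and hypothetical toy values (the structural MAP estimation of the transform itself is not shown):

        import numpy as np

        def apply_constrained_transform(mu, var_diag, A, b):
            """Apply a constrained (CMLLR/CSMAPLR-style) transform W = [A, b] to a
            diagonal-covariance Gaussian: mean -> A @ mu - b, cov -> A @ diag(var) @ A.T."""
            mu_hat = A @ mu - b
            cov_hat = A @ np.diag(var_diag) @ A.T   # full covariance even if the input was diagonal
            return mu_hat, cov_hat

        # Hypothetical 3-dimensional example
        rng = np.random.default_rng(0)
        mu = rng.normal(size=3)
        var_diag = np.ones(3)
        A = np.eye(3) + 0.1 * rng.normal(size=(3, 3))   # near-identity speaker transform
        b = 0.05 * rng.normal(size=3)
        mu_hat, cov_hat = apply_constrained_transform(mu, var_diag, A, b)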

    Unsupervised Cross-lingual Speaker Adaptation for HMM-based Speech Synthesis

    In the EMIME project, we are developing a mobile device that performs personalized speech-to-speech translation, such that a user's spoken input in one language is used to produce spoken output in another language while continuing to sound like the user's voice. We integrate two techniques, unsupervised adaptation for HMM-based TTS using a word-based large-vocabulary continuous speech recognizer and cross-lingual speaker adaptation for HMM-based TTS, into a single architecture. Thus, an unsupervised cross-lingual speaker adaptation system can be developed. Listening tests show very promising results, demonstrating that the adapted voices sound similar to the target speaker and that the differences between supervised and unsupervised cross-lingual speaker adaptation are small.
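    Cross-lingual adaptation requires a correspondence between the HMM states of the input-language and output-language average voice models before speaker transforms can be shared across languages. One common way to build such a correspondence (a sketch of the general technique, not necessarily the exact procedure of this paper) is to pair each output-language state with the input-language state of minimum Kullback-Leibler divergence, assuming diagonal-covariance Gaussian state distributions:

        import numpy as np

        def kl_diag_gaussian(mu_p, var_p, mu_q, var_q):
            """KL(p || q) between two diagonal-covariance Gaussians."""
            return 0.5 * np.sum(np.log(var_q / var_p)
                                + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

        def map_states(means_out, vars_out, means_in, vars_in):
            """Pair each output-language state with its nearest input-language state.
            Arguments are (num_states, dim) arrays of means and variances (hypothetical layout)."""
            mapping = []
            for mu_p, var_p in zip(means_out, vars_out):
                kls = [kl_diag_gaussian(mu_p, var_p, mu_q, var_q)
                       for mu_q, var_q in zip(means_in, vars_in)]
                mapping.append(int(np.argmin(kls)))
            return mapping   # mapping[j] = input-language state matched to output state j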

    The HTS-2008 System: Yet Another Evaluation of the Speaker-Adaptive HMM-based Speech Synthesis System in The 2008 Blizzard Challenge

    For the 2008 Blizzard Challenge, we used the same speaker-adaptive approach to HMM-based speech synthesis that was used in the HTS entry to the 2007 challenge, but an improved system was built in which the multi-accented English average voice model was trained on 41 hours of speech data with high-order mel-cepstral analysis, using an efficient forward-backward algorithm for the HSMM. The listener evaluation scores for the synthetic speech generated from this system were much better than in 2007: the system had the equal best naturalness on the small English data set and the equal best intelligibility on both small and large data sets for English, and had the equal best naturalness on the Mandarin data. In fact, the English system was found to be as intelligible as human speech.
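    In an HSMM (hidden semi-Markov model), the geometric state-duration distribution of a plain HMM is replaced by an explicit one, so the forward pass must sum over candidate segment durations as well as predecessor states. A minimal sketch of that recursion, assuming an explicit-duration HSMM without self-transitions and toy probability arrays (the paper's efficient formulation used for average-voice training is not reproduced here):

        import numpy as np

        def hsmm_forward(pi, A, dur, frame_lik, max_dur):
            """Forward probabilities for an explicit-duration HSMM.
            pi:        (N,) initial state probabilities
            A:         (N, N) transition matrix with zero diagonal (no self-loops)
            dur:       (N, max_dur) duration probabilities, dur[j, d-1] = P(d frames | state j)
            frame_lik: (T, N) per-frame observation likelihoods b_j(o_t)
            Returns alpha with alpha[t, j] = P(o_1..o_t, a segment of state j ends at frame t)."""
            T, N = frame_lik.shape
            alpha = np.zeros((T, N))
            for t in range(T):
                for j in range(N):
                    for d in range(1, min(max_dur, t + 1) + 1):
                        seg = frame_lik[t - d + 1:t + 1, j].prod()   # emission prob of the segment
                        if d == t + 1:                               # segment starts at the first frame
                            alpha[t, j] += pi[j] * dur[j, d - 1] * seg
                        else:                                        # segment follows some earlier state
                            alpha[t, j] += (alpha[t - d] @ A[:, j]) * dur[j, d - 1] * seg
            return alpha   # total likelihood: alpha[T - 1].sum()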

    MR-conditional Robotic Actuation of Concentric Tendon-Driven Cardiac Catheters

    Atrial fibrillation (AF) and ventricular tachycardia (VT) are two of the sustained arrhythmias that significantly affect patients' quality of life. Treatment of AF and VT often requires radiofrequency ablation of heart tissue using an ablation catheter. Recent progress in ablation therapy leverages magnetic resonance imaging (MRI) for higher-contrast visual feedback, and additionally utilizes a guiding sheath with an actively deflectable tip to improve the dexterity of the catheter inside the heart. This paper presents the design and validation of an MR-conditional robotic module for automated actuation of both the ablation catheter and the sheath. The robotic module features a compact design for improved accessibility inside the MR scanner bore and is driven by piezoelectric motors to ensure MR-conditionality. The combined catheter-sheath mechanism is essentially a concentric tendon-driven continuum robot, and its kinematics are modeled with the constant-curvature model for closed-loop position control. Path-following experiments were conducted to validate the actuation module and control scheme, achieving an average tip position error of less than 2 mm.
    Comment: 7 pages, 7 figures, submitted to IEEE ISMR 202
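    Under the constant-curvature assumption, each section of the catheter-sheath pair is treated as a circular arc described by a curvature, a bending-plane angle, and an arc length, and the tip pose follows by chaining the per-section homogeneous transforms. A minimal sketch of that forward kinematics (the standard constant-curvature formulation with hypothetical segment parameters; the paper's tendon-to-curvature mapping and closed-loop controller are not shown):

        import numpy as np

        def rot_z(a):
            """Homogeneous rotation about the z axis."""
            c, s = np.cos(a), np.sin(a)
            return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

        def cc_segment(kappa, phi, length):
            """Base-to-tip transform of one constant-curvature segment."""
            theta = kappa * length
            if abs(kappa) < 1e-9:                        # straight segment
                T_plane = np.eye(4)
                T_plane[2, 3] = length
            else:                                        # arc in the x-z plane: rotate by theta about y
                c, s = np.cos(theta), np.sin(theta)
                T_plane = np.array([[ c, 0, s, (1 - c) / kappa],
                                    [ 0, 1, 0, 0],
                                    [-s, 0, c, s / kappa],
                                    [ 0, 0, 0, 1]])
            return rot_z(phi) @ T_plane @ rot_z(-phi)    # orient the bending plane by phi

        # Hypothetical sheath + catheter sections: (kappa [1/mm], phi [rad], length [mm])
        segments = [(0.02, 0.0, 40.0),                   # deflectable sheath
                    (0.05, np.pi / 4, 25.0)]             # catheter protruding from the sheath
        T = np.eye(4)
        for kappa, phi, length in segments:
            T = T @ cc_segment(kappa, phi, length)
        tip_position = T[:3, 3]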

    Speaker-Independent HMM-based Speech Synthesis System

    This paper describes an HMM-based speech synthesis system developed by the HTS working group for the Blizzard Challenge 2007. To further explore the potential of HMM-based speech synthesis, we incorporate new features into our conventional system which underpin a speaker-independent approach: speaker adaptation techniques; adaptive training for HSMMs; and full-covariance modeling using the CSMAPLR transforms.