
    Technology Pipeline for Large Scale Cross-Lingual Dubbing of Lecture Videos into Multiple Indian Languages

    Cross-lingual dubbing of lecture videos requires transcription of the original audio, correction and removal of disfluencies, domain term discovery, text-to-text translation into the target language, chunking of text using target-language rhythm, and text-to-speech synthesis followed by isochronous lipsyncing to the original video. This task becomes challenging when the source and target languages belong to different language families, resulting in differences in generated audio duration. This is further compounded by the original speaker's rhythm, especially for extempore speech. This paper describes the challenges in regenerating English lecture videos in Indian languages semi-automatically. A prototype is developed for dubbing lectures into 9 Indian languages. A mean opinion score (MOS) is obtained for two languages, Hindi and Tamil, on two different courses. The output video is compared with the original video in terms of MOS (1-5) and lip synchronisation, with scores of 4.09 and 3.74, respectively. Human effort is also reduced by 75%.
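    The staged pipeline this abstract describes can be sketched as composed functions. This is a hypothetical illustration, not the authors' system: the stage names, the filler list, and the simple duration-ratio heuristic for isochrony are all assumptions.

    ```python
    # Hypothetical sketch of a dubbing pipeline: each stage is a plain
    # function, applied in order (transcribe -> clean -> translate -> TTS).
    # Only two illustrative stages are shown here.

    def remove_disfluencies(text, fillers=("um", "uh", "you know")):
        # Drop common filler words from an ASR transcript before translation.
        words = [w for w in text.split() if w.lower().strip(",.") not in fillers]
        return " ".join(words)

    def isochrony_rate_factor(src_duration_s, tgt_duration_s):
        # Speech-rate factor to apply to the synthesised target-language audio
        # so a dubbed segment matches the original segment's duration.
        return tgt_duration_s / src_duration_s

    transcript = "So, um, today we will, uh, discuss sorting"
    clean = remove_disfluencies(transcript)
    # A Hindi rendering 5.0 s long must be sped up by 1.25x to fit a 4.0 s
    # English source segment.
    factor = isochrony_rate_factor(src_duration_s=4.0, tgt_duration_s=5.0)
    ```

    In practice the rate factor would be applied per chunk (after rhythm-aware chunking of the translated text), since language-family differences make segment durations diverge unevenly across a lecture.
    
    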

    Akshara transcription of mrudangam strokes in Carnatic music

    Percussion instruments play a significant role in Carnatic music concerts. The percussion artist enjoys a great degree of freedom in improvising within the defined tala structure of a composition. The objective of this paper is to transcribe the improvisations, treating the percussion strokes as syllables or aksharas. Onset detection is performed to segment the waveform at each akshara. Using the transcriptions from the training data, a three-state Hidden Markov Model is built for each akshara. The language model is derived from the training data. Testing is also performed in isolated style, using onset detection to segment the phrase and the language model to correct the transcription. Transcription is performed on both concert recordings and studio recordings. This technique yields up to ≈96% accuracy on studio recordings and ≈76% accuracy on concert recordings. As the mrudangam is an instrument that is tuned to a tonic, tonic-normalised features, namely Cent Filterbank Cepstral Coefficients, are used. It is shown that tonic normalisation helps in transcription across different tonics. This research was partly funded by the European Research Council under the European Union's Seventh Framework Programme, as part of the CompMusic project (ERC grant agreement 267583).
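    The tonic normalisation step can be illustrated with a small sketch: mapping spectral frequencies to cents relative to the tonic makes the representation tonic-invariant, which is the idea behind the Cent Filterbank features. The harmonic frequencies below are made-up illustrative values, not data from the paper.

    ```python
    import math

    def hz_to_cents(freq_hz, tonic_hz):
        # 1200 cents per octave, measured relative to the tonic frequency.
        return 1200.0 * math.log2(freq_hz / tonic_hz)

    # The same stroke played on instruments tuned to different tonics
    # (here C4 vs D4) produces (nearly) identical cent values, so features
    # derived from the cent scale transfer across tonics.
    stroke_at_c = [261.63, 523.26, 784.89]   # illustrative harmonics, tonic C4
    stroke_at_d = [293.66, 587.32, 880.98]   # same harmonic ratios, tonic D4

    cents_c = [hz_to_cents(f, 261.63) for f in stroke_at_c]
    cents_d = [hz_to_cents(f, 293.66) for f in stroke_at_d]
    ```

    A filterbank placed on this cent axis (rather than on a fixed Hz or mel axis) yields cepstral coefficients that describe the stroke's timbre relative to the tonic, which is why the same HMMs generalise across recordings at different tonics.
    
    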