937 research outputs found

    Exploring efficient neural architectures for linguistic-acoustic mapping in text-to-speech

    Conversion from text to speech relies on an accurate mapping from linguistic to acoustic symbol sequences, for which current practice employs recurrent statistical models such as recurrent neural networks (RNNs). Despite the good performance of such models (in terms of low distortion in the generated speech), their recursive structure with intermediate affine transformations tends to make them slow to train and to sample from. In this work, we explore two different mechanisms that enhance the operational efficiency of recurrent neural networks, and study their performance–speed trade-off. The first mechanism is based on the quasi-recurrent neural network, where expensive affine transformations are removed from temporal connections and placed only on feed-forward computational directions. The second mechanism includes a module based on the transformer decoder network, designed without recurrent connections but emulating them with attention and positional codes. Our results show that the proposed decoder networks are competitive in terms of distortion when compared to a recurrent baseline, whilst being significantly faster in CPU and GPU inference time. The best-performing model is the one based on the quasi-recurrent mechanism, reaching the same level of naturalness as the RNN-based model with a speedup of 11.2× on CPU and 3.3× on GPU.
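    The quasi-recurrent idea can be made concrete with a small sketch. Below is a minimal NumPy illustration of a QRNN-style layer, not the paper's model: gates and candidates come from a single affine map applied per frame (the feed-forward direction), and the only temporal operation is an elementwise fo-pooling, which is what makes training and sampling cheap. The window-1 (pointwise) map, layer sizes, and gate layout are illustrative assumptions.

```python
import numpy as np

def qrnn_layer(x, W, b):
    """x: (T, d_in) input frames; W: (d_in, 3*d_h); b: (3*d_h,)."""
    d_h = W.shape[1] // 3
    # Feed-forward direction: one affine map per frame, parallel over time.
    zfo = x @ W + b
    z = np.tanh(zfo[:, :d_h])                       # candidate activations
    f = 1.0 / (1.0 + np.exp(-zfo[:, d_h:2 * d_h]))  # forget gates
    o = 1.0 / (1.0 + np.exp(-zfo[:, 2 * d_h:]))     # output gates
    # Temporal direction: elementwise fo-pooling only, no affine transform.
    c = np.zeros(d_h)
    h = np.empty((x.shape[0], d_h))
    for t in range(x.shape[0]):
        c = f[t] * c + (1.0 - f[t]) * z[t]
        h[t] = o[t] * c
    return h

rng = np.random.default_rng(0)
x = rng.standard_normal((50, 16))        # 50 linguistic feature frames
W = 0.1 * rng.standard_normal((16, 96))  # 96 = 3 gates * 32 hidden units
print(qrnn_layer(x, W, np.zeros(96)).shape)  # (50, 32)
```

    Because the per-frame affine maps are independent of the recurrence, they can be batched across the whole sequence; only the cheap elementwise loop remains sequential, which is the source of the reported inference speedups.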

    RawNet: Fast End-to-End Neural Vocoder

    Neural network based vocoders have recently demonstrated a powerful ability to synthesize high-quality speech. These models usually generate samples by conditioning on spectral features, such as the Mel-spectrum. However, these features are extracted by a speech analysis module that includes processing based on human knowledge. In this work, we propose RawNet, a truly end-to-end neural vocoder, which uses a coder network to learn a higher-level representation of the signal and an autoregressive voder network to generate speech sample by sample. Together, the coder and voder act like an auto-encoder network and can be jointly trained directly on the raw waveform without any human-designed features. Experiments on copy-synthesis tasks show that RawNet achieves synthesized speech quality comparable to LPCNet, with a smaller model architecture and faster speech generation at inference. Comment: Submitted to Interspeech 2019, Graz, Austria.
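    As a structural illustration of the coder/voder arrangement, here is a short PyTorch sketch under stated assumptions: a strided convolutional coder compresses the raw waveform by 64×, and a GRU-based voder predicts each sample from past samples plus the upsampled code; the two are trained jointly with a reconstruction loss on the waveform alone. The channel counts, strides, GRU cell, and MSE loss are assumptions made for the sketch, not RawNet's published configuration.

```python
import torch
import torch.nn as nn

class Coder(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        # Strided 1-D convolutions learn a downsampled representation
        # directly from the waveform, replacing hand-designed features.
        self.net = nn.Sequential(
            nn.Conv1d(1, dim, kernel_size=16, stride=8, padding=4), nn.ReLU(),
            nn.Conv1d(dim, dim, kernel_size=16, stride=8, padding=4), nn.ReLU(),
        )

    def forward(self, wav):           # wav: (B, 1, T)
        return self.net(wav)          # (B, dim, T // 64)

class Voder(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.rnn = nn.GRU(1 + dim, 128, batch_first=True)
        self.out = nn.Linear(128, 1)

    def forward(self, prev_samples, cond):
        # prev_samples: (B, T, 1) waveform shifted by one sample;
        # cond: (B, T, dim) coder output upsampled to the sample rate.
        h, _ = self.rnn(torch.cat([prev_samples, cond], dim=-1))
        return self.out(h)            # next-sample predictions, (B, T, 1)

# Joint training on raw audio only: reconstruct the waveform from itself.
coder, voder = Coder(), Voder()
wav = torch.randn(2, 1, 4096)
cond = coder(wav)
cond = torch.repeat_interleave(cond, 64, dim=2).transpose(1, 2)   # (B, T, dim)
prev = torch.cat([torch.zeros(2, 1, 1), wav.transpose(1, 2)[:, :-1]], dim=1)
loss = torch.mean((voder(prev, cond) - wav.transpose(1, 2)) ** 2)
loss.backward()
```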

    Audio Hiding based on Wavelet Transform and Linear Predictive Coding

    In this work, an efficient method for hiding speech in audio is proposed. The features of the secret speech are extracted with Linear Predictive Coding (LPC), and these parameters are embedded in the audio in chaotic order. The Discrete Wavelet Transform (DWT) is applied to the audio frames to split the signal into high and low frequencies, and the extracted parameters are embedded in the high-frequency band. The stego audio is perceptually indistinguishable from the equivalent cover audio. The proposed method allows the speech (secret) and the audio (cover) to have the same duration. The stego audio is subjected to objective tests such as signal-to-noise ratio (SNR), segmental SNR (SNRseg), segmental spectral SNR, Log Likelihood Ratio (LLR), and correlation (Rxy) to determine its similarity to the original audio.
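    A minimal sketch of this pipeline (in Python, using NumPy and PyWavelets) appears below: LPC parameters are extracted from a secret-speech frame with a Levinson-Durbin recursion, the cover frame is split by a single-level DWT, and the parameters are written into the detail (high-frequency) coefficients at positions drawn from a logistic map, a common stand-in for "chaotic order". The wavelet choice (db4), embedding strength, and logistic-map scheme are illustrative assumptions, not the paper's exact method.

```python
import numpy as np
import pywt

def lpc(frame, order):
    """LPC coefficients via autocorrelation and Levinson-Durbin recursion."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a, err = np.zeros(order), r[0]
    for i in range(order):
        k = (r[i + 1] - a[:i] @ r[i:0:-1]) / err  # reflection coefficient
        if i:
            a[:i] -= k * a[i - 1::-1]             # update predictor coeffs
        a[i] = k
        err *= 1.0 - k * k
    return a

def chaotic_positions(n, count, x0=0.7):
    """Logistic map generates the (pseudo-chaotic) embedding positions."""
    x, pos = x0, []
    while len(pos) < count:
        x = 3.99 * x * (1.0 - x)
        p = int(x * n)
        if p not in pos:
            pos.append(p)
    return pos

def embed(cover_frame, secret_frame, order=10, alpha=0.01):
    """Hide the secret frame's LPC parameters in the cover frame's DWT."""
    approx, detail = pywt.dwt(cover_frame, "db4")  # low / high frequency bands
    params = lpc(secret_frame, order)
    for p, v in zip(chaotic_positions(len(detail), order), params):
        detail[p] = alpha * v   # overwrite detail coefficients with parameters
    return pywt.idwt(approx, detail, "db4")        # reconstruct stego frame

rng = np.random.default_rng(1)
stego = embed(rng.standard_normal(512), rng.standard_normal(512))
print(stego.shape)  # (512,)
```

    Extraction would invert the same steps: regenerate the logistic-map positions from the shared seed, take the DWT of the stego frame, and read the parameters back from the detail coefficients.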