9 research outputs found

    Evaluation of Speaker Verification Security and Detection of HMM-Based Synthetic Speech


    Detection of synthetic speech for the problem of imposture


    [multi’vocal]: reflections on engaging everyday people in the development of a collective non-binary synthesized voice

    The growing field of Human-Computer Interaction (HCI) is moving beyond conventional screen-based interactions, creating new scenarios in which voice synthesis and voice recognition become important elements. Such voices are commonly created through concatenative or parametric synthesis methods, which draw on large voice corpora pre-recorded by a single professional voice actor. These designed voices arguably propagate representations of gender-binary identities. In this paper we present our project, [multi’vocal], which aims to challenge the current gender-binary representations in synthesized voices. More specifically, we explore whether it is possible to create a non-binary synthesized voice by engaging everyday people of diverse backgrounds in giving voice to a collective synthesized voice of all genders, ages, and accents.

    Speech Watermarking Based on Coding of the Harmonic Phase


    Voice Presentation Attack Detection Using Convolutional Neural Networks

    Current state-of-the-art automatic speaker verification (ASV) systems are prone to spoofing. The security and reliability of ASV systems can be threatened by different types of spoofing attacks using voice conversion, synthetic speech, or a recorded passphrase. It is therefore essential to develop countermeasure techniques which can detect such spoofed speech. Inspired by the success of deep learning approaches in various classification tasks, this work presents an in-depth study of convolutional neural networks (CNNs) for spoofing detection in ASV systems. Specifically, we have compared three different CNN architectures: AlexNet, CNNs with max-feature-map activation, and an ensemble of standard CNNs for developing spoofing countermeasures, and discussed their potential to avoid overfitting given the small amount of training data usually available for this task. We used popular deep learning toolkits for the system implementation and have publicly released the implementation code of our methods. We evaluated the proposed countermeasure systems for detecting replay attacks on the recently released ASVspoof 2017 spoofing corpus, and also provided in-depth visual analyses of the CNNs to aid future research in this area.
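    One of the CNN variants compared in the abstract above uses the max-feature-map (MFM) activation. As a rough NumPy-only sketch (an illustration of the operation, not the authors' released implementation), MFM splits the channel axis in half and keeps the elementwise maximum of the two halves, so the activation itself halves the channel count:

```python
import numpy as np

def max_feature_map(x):
    """Max-Feature-Map (MFM) activation: split the last (channel) axis in
    half and keep the elementwise maximum of the two halves. The output has
    half as many channels as the input."""
    c = x.shape[-1]
    assert c % 2 == 0, "channel count must be even"
    a, b = x[..., : c // 2], x[..., c // 2 :]
    return np.maximum(a, b)

# Toy feature map: batch of 1, a 2x2 spatial grid, 4 channels.
fm = np.arange(16, dtype=float).reshape(1, 2, 2, 4)
out = max_feature_map(fm)
print(out.shape)  # channels halved: (1, 2, 2, 2)
```

    Because MFM acts as a built-in competitive feature selector and shrinks the layer width, it is often argued to help on small training sets, which is relevant to the overfitting concern the abstract raises.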

    A uniform phase representation for the harmonic model in speech synthesis applications

    Feature-based vocoders, e.g., STRAIGHT, offer a way to manipulate the perceived characteristics of the speech signal in speech transformation and synthesis. For the harmonic model, which provides excellent perceived quality, features for the amplitude parameters already exist (e.g., Line Spectral Frequencies (LSF), Mel-Frequency Cepstral Coefficients (MFCC)). However, because of the wrapping of the phase parameters, phase features are more difficult to design. To randomize the phase of the harmonic model during synthesis, a voicing feature is commonly used, which distinguishes voiced and unvoiced segments. However, voice production allows smooth transitions between voiced and unvoiced states, which makes voicing segmentation sometimes tricky to estimate. In this article, two features are suggested to represent the phase of the harmonic model in a uniform way, without a voicing decision. The synthesis quality of the resulting vocoder has been evaluated, using subjective listening tests, in the context of resynthesis, pitch scaling, and Hidden Markov Model (HMM)-based synthesis. The experiments show that the suggested signal model is comparable to STRAIGHT, or even better in some scenarios. They also reveal some limitations of the harmonic framework itself in the case of high fundamental frequencies.
    G. Degottex has been funded by the Swiss National Science Foundation (SNSF) (grants PBSKP2_134325, PBSKP2_140021), Switzerland, and the Foundation for Research and Technology-Hellas (FORTH), Heraklion, Greece. D. Erro has been funded by the Basque Government (BER2TEK, IE12-333) and the Spanish Ministry of Economy and Competitiveness (SpeechTech4All, TEC2012-38939-C03-03).
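    The phase-wrapping difficulty mentioned in the abstract above can be illustrated with a small NumPy sketch (a generic illustration, not the paper's method): naive subtraction of wrapped phases near plus or minus pi overstates their distance, while mapping the difference back to its principal value shows the two angles are actually close.

```python
import numpy as np

def wrapped_phase_distance(p1, p2):
    """Principal-value distance between two phase angles in radians,
    computed via the complex exponential so the result always lies in
    (-pi, pi]. This avoids the discontinuity of naive subtraction."""
    return np.angle(np.exp(1j * (p1 - p2)))

naive = 3.1 - (-3.1)                          # 6.2: looks far apart
wrapped = wrapped_phase_distance(3.1, -3.1)   # about -0.083: actually close
```

    This discontinuity is exactly why amplitude-style features (LSF, MFCC) do not transfer directly to phase, and why a dedicated, unwrapping-free phase representation is useful.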