3 research outputs found

    Fewer-token Neural Speech Codec with Time-invariant Codes

    Language-model-based text-to-speech (TTS) models, like VALL-E, have gained attention for their outstanding in-context learning capability in zero-shot scenarios. The neural speech codec is a critical component of these models, converting speech into discrete token representations. However, excessively long token sequences from the codec may hurt prediction accuracy and restrict the progress of language-model-based TTS models. To address this issue, this paper proposes TiCodec, a novel neural speech codec with time-invariant codes. By encoding and quantizing time-invariant information into a separate code, TiCodec reduces the amount of frame-level information that needs encoding, effectively decreasing the number of tokens used to represent speech. Furthermore, this paper introduces a time-invariant encoding consistency loss to enhance the consistency of the time-invariant code within an utterance and force it to capture more global information, which benefits the zero-shot TTS task. Experimental results demonstrate that TiCodec not only improves the quality of reconstructed speech with fewer tokens but also increases the similarity and naturalness, and reduces the word error rate, of speech synthesized by the TTS model. Comment: Accepted by ICASSP 202
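
    The consistency loss described in this abstract can be illustrated with a short sketch. The code below is a hypothetical PyTorch rendering, not TiCodec's actual implementation: the name ti_consistency_loss, the stand-in encoder, the cosine-distance metric, and all shapes are assumptions chosen to show the idea of pulling together the global codes of two segments drawn from the same utterance.

        import torch
        import torch.nn.functional as F

        def ti_consistency_loss(ti_encoder, utterance: torch.Tensor,
                                seg_len: int) -> torch.Tensor:
            """Pull the time-invariant codes of two random segments of one
            utterance together, so the code captures global information
            rather than frame-level content. Sketch only; not TiCodec's API."""
            T = utterance.size(-1)
            s0, s1 = torch.randint(0, T - seg_len + 1, (2,)).tolist()
            code_a = ti_encoder(utterance[..., s0:s0 + seg_len])
            code_b = ti_encoder(utterance[..., s1:s1 + seg_len])
            # Cosine-distance consistency term (one plausible choice of metric).
            return 1.0 - F.cosine_similarity(code_a, code_b, dim=-1).mean()

        # Usage with a stand-in "time-invariant encoder" (mean-pooled projection):
        proj = torch.nn.Linear(1, 64)
        encoder = lambda x: proj(x.unsqueeze(-1)).mean(dim=-2)
        wav = torch.randn(4, 16000)          # batch of 1-second waveforms
        loss = ti_consistency_loss(encoder, wav, seg_len=4000)
        loss.backward()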

    Expressive Search on Encrypted Data

    Different from traditional public-key encryption, searchable public-key encryption allows a data owner to encrypt his data under a user's public key in such a way that the user can generate search tokens using her secret key and then query an encrypted storage server. On receiving such a search token, the server filters the stored ciphertexts and returns the matching ones in response. Searchable public-key encryption has many promising applications. Unfortunately, existing schemes either support only simple query predicates, such as equality queries and conjunctive queries, or incur a superpolynomial blowup in ciphertext size and search token size.
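
    A toy sketch can make the described search flow concrete. The code below only illustrates the owner/user/server interface (keygen, encrypt, trapdoor, test) for equality queries; it is deliberately insecure, substitutes one shared HMAC key for a real public-key scheme, and is not any construction from this work.

        import hashlib, hmac, os

        # Toy (INSECURE) equality-query instantiation. Real searchable
        # public-key encryption lets anyone holding only the *public* key
        # produce searchable ciphertexts; this toy shares one key between
        # owner and user, so it shows the message flow only.

        def keygen() -> bytes:
            return os.urandom(32)

        def encrypt(key: bytes, keyword: str) -> bytes:
            # Data owner: deterministic searchable tag for one keyword.
            return hmac.new(key, keyword.encode(), hashlib.sha256).digest()

        def trapdoor(key: bytes, keyword: str) -> bytes:
            # User: search token for an equality query on `keyword`.
            return hmac.new(key, keyword.encode(), hashlib.sha256).digest()

        def test(ct: bytes, token: bytes) -> bool:
            # Server: does this stored tag match the token?
            return hmac.compare_digest(ct, token)

        # Server-side filtering: return the stored ciphertexts the token matches.
        key = keygen()
        store = [encrypt(key, w) for w in ("alpha", "beta", "gamma")]
        token = trapdoor(key, "beta")
        matches = [ct for ct in store if test(ct, token)]
        assert len(matches) == 1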

    ADD 2023: the Second Audio Deepfake Detection Challenge

    Audio deepfake detection is an emerging topic in the artificial intelligence community. The second Audio Deepfake Detection Challenge (ADD 2023) aims to spur researchers around the world to build innovative new technologies that can further accelerate and foster research on detecting and analyzing deepfake speech utterances. Different from previous challenges (e.g. ADD 2022), ADD 2023 moves beyond binary real/fake classification to localizing the manipulated intervals in partially fake speech and pinpointing the source responsible for generating any fake audio. Furthermore, ADD 2023 includes more rounds of evaluation for the fake audio game sub-challenge. The ADD 2023 challenge comprises three sub-challenges: audio fake game (FG), manipulation region location (RL), and deepfake algorithm recognition (AR). This paper describes the datasets, evaluation metrics, and protocols, and reports some findings on audio deepfake detection tasks.
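
    As one illustration of the manipulation region location (RL) task format, the sketch below converts hypothetical per-frame fake probabilities from some detector into manipulated intervals; the name probs_to_intervals, the 20 ms hop, and the 0.5 threshold are all assumptions, not part of the challenge protocol.

        import numpy as np

        def probs_to_intervals(frame_probs, hop_s=0.02, thresh=0.5):
            """Group consecutive frames whose fake probability exceeds
            `thresh` into (start_sec, end_sec) manipulated intervals."""
            intervals, start = [], None
            for i, p in enumerate(frame_probs):
                if p >= thresh and start is None:
                    start = i
                elif p < thresh and start is not None:
                    intervals.append((start * hop_s, i * hop_s))
                    start = None
            if start is not None:
                intervals.append((start * hop_s, len(frame_probs) * hop_s))
            return intervals

        # Usage: frames 10-19 flagged as fake -> one 0.2 s interval.
        probs = np.zeros(50)
        probs[10:20] = 0.9
        print(probs_to_intervals(probs))   # [(0.2, 0.4)]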