
    Evaluation of putative reference genes for gene expression normalization in soybean by quantitative real-time RT-PCR

    Abstract

    Background
    Real-time quantitative reverse transcription PCR (RT-qPCR) data need to be normalized for proper interpretation. Housekeeping genes are routinely employed for this purpose, but their expression level cannot be assumed to remain constant under all possible experimental conditions. Thus, a systematic validation of reference genes is required to ensure proper normalization. For soybean, only a small number of validated reference genes are available to date.

    Results
    A systematic comparison of 14 potential reference genes for soybean is presented. These included seven commonly used genes (ACT2, ACT11, TUB4, TUA5, CYP, UBQ10, EF1b) and seven new candidates (SKIP16, MTP, PEPKR1, HDC, TIP41, UKN1, UKN2). Expression stability was examined by RT-qPCR across 116 biological samples, representing tissues at various developmental stages, varied photoperiodic treatments, and a range of soybean cultivars. Expression of all 14 genes was variable to some extent, but that of SKIP16, UKN1 and UKN2 was overall the most stable. A combination of ACT11, UKN1 and UKN2 would be appropriate as a reference panel for normalizing gene expression data among different tissues, whereas the combination of SKIP16, UKN1 and MTP was most suitable across developmental stages. ACT11, TUA5 and TIP41 were the most stably expressed when the photoperiod was altered, and TIP41, UKN1 and UKN2 when the light quality was changed. Across six cultivars grown under long-day (LD) and short-day (SD) conditions, expression stability did not vary significantly, with ACT11, UKN2 and TUB4 being the most stable genes. The relative expression level of GmFTL3, an ortholog of Arabidopsis FT (FLOWERING LOCUS T), was measured to validate the reference genes selected in this study.

    Conclusion
    None of the candidate reference genes was uniformly expressed across all experimental conditions, and the most suitable reference genes depend on the experimental condition, tissue, developmental stage, and cultivar under study. Most of the new reference genes performed better than the conventional housekeeping genes. These results should guide the selection of reference genes for gene expression studies in soybean.
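
    The central practice described here is normalizing a target gene against a panel of reference genes rather than a single housekeeping gene. Below is a minimal sketch of how relative expression of a target such as GmFTL3 could be normalized against the geometric mean of a reference panel (a geNorm-style normalization factor with the 2^-Cq convention); the Cq values and the exact panel are illustrative, not data from the paper.

```python
# Minimal sketch: normalize a target gene's expression against the geometric
# mean of several reference genes. Cq values below are illustrative only.
from statistics import geometric_mean

def relative_expression(target_cq, reference_cqs, efficiency=2.0):
    """Return target expression relative to a reference-gene panel.

    target_cq     : quantification cycle of the target gene in one sample
    reference_cqs : Cq values of the reference genes in the same sample
    efficiency    : assumed amplification efficiency (2.0 = perfect doubling)
    """
    # Express each gene as efficiency^-Cq, then divide the target quantity by
    # the geometric mean of the reference quantities (normalization factor).
    target_q = efficiency ** (-target_cq)
    norm_factor = geometric_mean([efficiency ** (-cq) for cq in reference_cqs])
    return target_q / norm_factor

# Example: a target normalized against a hypothetical three-gene panel
# (e.g. ACT11, UKN1, UKN2).
print(relative_expression(24.8, [18.2, 19.1, 18.7]))
```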

    System Fingerprint Recognition for Deepfake Audio: An Initial Dataset and Investigation

    The malicious use of deep speech synthesis models may pose a significant threat to society. Many studies have therefore emerged to detect so-called "deepfake audio". However, these studies focus on the binary discrimination of real audio from fake audio. In some realistic application scenarios, it is also necessary to know which tool or model generated the deepfake audio. This raises a question: can we recognize the system fingerprints of deepfake audio? In this paper, we propose a deepfake audio dataset for system fingerprint recognition (SFR) and conduct an initial investigation. We collected the dataset from five speech synthesis systems built with recent state-of-the-art deep learning technologies; it includes both clean and compressed sets. In addition, to facilitate the further development of system fingerprint recognition methods, we provide benchmarks against which researchers can compare, along with research findings. The dataset will be publicly available.

    Comment: 12 pages, 3 figures. arXiv admin note: text overlap with arXiv:2208.0964
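
    In contrast to binary fake/real detection, the label space of system fingerprint recognition is the set of generating systems. The sketch below illustrates that formulation only; the architecture and input features are placeholders, not the paper's benchmark models.

```python
# Hypothetical sketch: system fingerprint recognition framed as 5-way
# classification over log-mel spectrograms. Not the paper's benchmark model.
import torch
import torch.nn as nn

class FingerprintClassifier(nn.Module):
    def __init__(self, n_systems=5):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, 1)),          # pool over time and frequency
        )
        self.head = nn.Linear(32, n_systems)       # one logit per synthesis system

    def forward(self, mel):                        # mel: (batch, 1, n_mels, frames)
        h = self.encoder(mel).flatten(1)
        return self.head(h)

model = FingerprintClassifier()
logits = model(torch.randn(4, 1, 80, 300))         # dummy batch of spectrograms
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 2, 3]))
```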

    Low-rank Adaptation Method for Wav2vec2-based Fake Audio Detection

    Self-supervised speech models are a rapidly developing research topic in fake audio detection. Many pre-trained models can serve as feature extractors, learning richer and higher-level speech features. However, fine-tuning pre-trained models often entails excessively long training times and high memory consumption, and complete fine-tuning is very expensive. To alleviate this problem, we apply low-rank adaptation (LoRA) to the wav2vec2 model, freezing the pre-trained model weights and injecting a trainable rank-decomposition matrix into each layer of the transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. Compared with fine-tuning the wav2vec2 model, which contains 317M trainable parameters, with Adam, LoRA achieved similar performance while reducing the number of trainable parameters by a factor of 198.

    Comment: 6 pages
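
    The core mechanism is freezing the pretrained weight matrix and learning only a low-rank update beside it. A minimal, framework-agnostic sketch of a LoRA-wrapped linear layer follows; the rank, scaling, and layer size are placeholders, and this is not the exact injection used inside wav2vec2 in the paper.

```python
# Minimal LoRA sketch: the frozen base weight is augmented with a trainable
# low-rank update B @ A, so only r * (d_in + d_out) parameters are trained.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():        # freeze the pretrained weights
            p.requires_grad = False
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        # Frozen path plus low-rank trainable path.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Example: wrap one projection of a hypothetical transformer layer.
proj = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in proj.parameters() if p.requires_grad)
print(trainable)   # 2 * 8 * 768 = 12288 trainable vs ~590k frozen parameters
```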

    Learning to Behave Like Clean Speech: Dual-Branch Knowledge Distillation for Noise-Robust Fake Audio Detection

    Most research in fake audio detection (FAD) focuses on improving performance on standard, noise-free datasets. In practice, however, there is usually noise interference, which causes significant performance degradation in FAD systems. To improve noise robustness, we propose a dual-branch knowledge distillation fake audio detection (DKDFAD) method. Specifically, parallel data flows through a clean teacher branch and a noisy student branch, and interactive fusion and response-based teacher-student paradigms are proposed to guide the training on noisy data from the perspectives of data distribution and decision making. In the noisy branch, speech enhancement is first introduced for denoising, which reduces the interference of strong noise. The proposed interactive fusion combines denoised features and noise features to reduce the impact of speech distortion and to seek consistency with the data distribution of the clean branch. The teacher-student paradigm maps the student's decision space onto the teacher's decision space, making noisy speech behave like clean speech. In addition, a joint training method is used to optimize the two branches toward a global optimum. Experimental results on multiple datasets show that the proposed method performs well in noisy environments and maintains its performance in cross-dataset experiments.
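
    The response-based part of the paradigm pulls the noisy student's class posteriors toward the clean teacher's. Below is a hedged sketch of a generic loss of that kind, combining a classification term with a softened KL distillation term; the interactive fusion module and the paper's exact weighting are not reproduced here.

```python
# Hedged sketch of a response-based distillation objective: the student,
# fed noisy (or enhanced) audio, is pulled toward the clean teacher's
# posteriors while also fitting the real/fake labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Cross-entropy on the ground-truth real/fake labels.
    ce = F.cross_entropy(student_logits, labels)
    # KL divergence between softened student and teacher distributions;
    # the teacher is treated as fixed (detached).
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits.detach() / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1.0 - alpha) * kd

loss = distillation_loss(torch.randn(8, 2), torch.randn(8, 2),
                         torch.randint(0, 2, (8,)))
```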

    Learning Speech Representation From Contrastive Token-Acoustic Pretraining

    For fine-grained generation and recognition tasks such as minimally-supervised text-to-speech (TTS), voice conversion (VC), and automatic speech recognition (ASR), the intermediate representations extracted from speech should serve as a "bridge" between text and acoustic information, containing information from both modalities. The semantic content should be emphasized, while paralinguistic information such as speaker identity and acoustic details should be de-emphasized. However, existing methods for extracting fine-grained intermediate representations from speech suffer from excessive redundancy and dimension explosion. Contrastive learning is well suited to modeling intermediate representations from two modalities. However, existing contrastive learning methods in the audio field focus on extracting global descriptive information for downstream audio classification tasks, making them unsuitable for TTS, VC, and ASR. To address these issues, we propose Contrastive Token-Acoustic Pretraining (CTAP), which uses two encoders to bring phonemes and speech into a joint multimodal space, learning how to connect phonemes and speech at the frame level. The CTAP model is trained on 210k speech and phoneme text pairs, enabling minimally-supervised TTS, VC, and ASR. The proposed CTAP method offers a promising solution for fine-grained generation and recognition downstream tasks in speech processing.
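
    The distinguishing detail versus clip-level audio contrastive learning is that positives are matched frame by frame. The sketch below shows a frame-level InfoNCE objective under that assumption; the encoders and the phoneme-speech alignment are placeholders rather than the CTAP architecture.

```python
# Rough sketch: frame-level contrastive (InfoNCE) loss between time-aligned
# phoneme-encoder and speech-encoder outputs. The alignment is assumed given.
import torch
import torch.nn.functional as F

def frame_contrastive_loss(phone_emb, speech_emb, temperature=0.07):
    # phone_emb, speech_emb: (frames, dim); row t of each is the same time step.
    phone_emb = F.normalize(phone_emb, dim=-1)
    speech_emb = F.normalize(speech_emb, dim=-1)
    logits = phone_emb @ speech_emb.T / temperature    # (frames, frames) similarities
    targets = torch.arange(logits.size(0))             # the matching frame is the positive
    # Symmetric loss: phoneme-to-speech and speech-to-phoneme directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))

loss = frame_contrastive_loss(torch.randn(100, 256), torch.randn(100, 256))
```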

    CFAD: A Chinese Dataset for Fake Audio Detection

    Fake audio detection is a growing concern, and several relevant datasets have been designed for research. However, there is no standard public Chinese dataset covering complex conditions. In this paper, we aim to fill this gap and design a Chinese fake audio detection dataset (CFAD) for studying more generalized detection methods. Twelve mainstream speech-generation techniques are used to generate fake audio. To simulate real-life scenarios, three noise datasets are selected for noise addition at five different signal-to-noise ratios, and six codecs are considered for audio transcoding (format conversion). The CFAD dataset can be used not only for fake audio detection but also for identifying the algorithms behind fake utterances for audio forensics. Baseline results are presented with analysis; they show that generalized fake audio detection remains challenging. The CFAD dataset is publicly available at: https://zenodo.org/record/8122764.

    Comment: FAD renamed as CFAD
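
    The noise-adding step described (mixing noise into clean audio at a chosen signal-to-noise ratio) can be written compactly. A sketch follows, assuming single-channel float waveforms at the same sampling rate; it is a generic SNR-mixing routine, not the dataset's exact pipeline.

```python
# Sketch: mix a noise signal into clean speech at a target SNR in dB.
import numpy as np

def add_noise_at_snr(speech, noise, snr_db):
    noise = np.resize(noise, speech.shape)             # loop/trim noise to length
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    # Scale noise so that 10*log10(speech_power / scaled_noise_power) == snr_db.
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

noisy = add_noise_at_snr(np.random.randn(16000), np.random.randn(8000), snr_db=5)
```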

    Minimally-Supervised Speech Synthesis with Conditional Diffusion Model and Language Model: A Comparative Study of Semantic Coding

    Recently, there has been growing interest in text-to-speech (TTS) methods that can be trained with minimal supervision by combining two types of discrete speech representations and using two sequence-to-sequence tasks to decouple TTS. To address the challenges of high dimensionality and waveform distortion in discrete representations, we propose Diff-LM-Speech, which models semantic embeddings into mel-spectrograms with a diffusion model and introduces a prompt encoder based on variational autoencoders and prosody bottlenecks to improve prompt representation capabilities. Autoregressive language models often suffer from missing and repeated words, while non-autoregressive frameworks face expression-averaging problems caused by duration prediction models. To address these issues, we propose Tetra-Diff-Speech, which introduces a duration diffusion model to achieve diverse prosodic expression. While we expect the information content of semantic coding to lie between that of text and acoustic coding, existing models extract semantic coding with considerable redundancy and dimensionality explosion. To verify that semantic coding is not necessary, we propose Tri-Diff-Speech. Experimental results show that our proposed methods outperform baseline methods. We provide a website with audio samples.
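
    At its core, modeling semantic embeddings into mel-spectrograms with a diffusion model reduces to a conditional denoising objective. The following is a heavily reduced sketch of one DDPM-style training step conditioned on semantic embeddings; the schedule, network, and shapes are placeholders, not the Diff-LM-Speech architecture.

```python
# Reduced sketch of a conditional diffusion training step: the network predicts
# the noise added to a mel frame, conditioned on a semantic embedding.
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

# Placeholder denoiser: takes a noisy mel frame concatenated with a condition.
denoiser = nn.Sequential(nn.Linear(80 + 256, 512), nn.ReLU(), nn.Linear(512, 80))

def diffusion_step(mel_frame, semantic_emb):
    # mel_frame: (batch, 80), semantic_emb: (batch, 256)
    t = torch.randint(0, T, (mel_frame.size(0),))
    a_bar = alphas_cumprod[t].unsqueeze(-1)
    noise = torch.randn_like(mel_frame)
    noisy_mel = a_bar.sqrt() * mel_frame + (1 - a_bar).sqrt() * noise
    pred = denoiser(torch.cat([noisy_mel, semantic_emb], dim=-1))
    return F.mse_loss(pred, noise)                 # standard noise-prediction loss

loss = diffusion_step(torch.randn(4, 80), torch.randn(4, 256))
```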

    Chinese Open Instruction Generalist: A Preliminary Release

    Instruction tuning is widely recognized as a key technique for building generalist language models, and it has attracted the attention of researchers and the public with the release of InstructGPT (Ouyang et al., 2022) and ChatGPT (https://chat.openai.com/). Despite impressive progress in English-oriented large language models (LLMs), it is still under-explored whether English-based foundation LLMs can perform comparably on multilingual tasks with well-designed instruction tuning, and how to construct the corpora needed for such tuning. To remedy this gap, we propose this project as an attempt to create a Chinese instruction dataset through various methods adapted to the intrinsic characteristics of four sub-tasks. We collect around 200k Chinese instruction tuning samples, which have been manually checked to guarantee high quality. We also summarize the existing English and Chinese instruction corpora and briefly describe some potential applications of the newly constructed Chinese instruction corpora. The resulting Chinese Open Instruction Generalist (COIG) corpora are available on Hugging Face (https://huggingface.co/datasets/BAAI/COIG) and GitHub (https://github.com/FlagOpen/FlagInstruct) and will be continuously updated.
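
    Since the corpora are published on the Hugging Face hub at the URL above, they can presumably be pulled with the standard datasets API. The sketch below assumes the hub id matches that URL; the available configurations and splits should be checked on the dataset card.

```python
# Sketch: loading the COIG corpora with the Hugging Face `datasets` library,
# assuming the hub id matches the URL in the abstract. A specific config name
# may be required; check the dataset card before relying on this.
from datasets import load_dataset

coig = load_dataset("BAAI/COIG")
print(coig)

split_name = next(iter(coig))                  # first available split
for example in coig[split_name].select(range(3)):
    print(example)                             # inspect a few instruction samples
```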

    An Overview of Affective Speech Synthesis and Conversion in the Deep Learning Era

    Speech is the fundamental mode of human communication, and its synthesis has long been a core priority in human-computer interaction research. In recent years, machines have managed to master the art of generating speech that is understandable by humans. But the linguistic content of an utterance encompasses only a part of its meaning. Affect, or expressivity, has the capacity to turn speech into a medium capable of conveying intimate thoughts, feelings, and emotions, aspects that are essential for engaging and naturalistic interpersonal communication. While the goal of imparting expressivity to synthesised utterances has so far remained elusive, following recent advances in text-to-speech synthesis, a paradigm shift is well under way in the fields of affective speech synthesis and conversion as well. Deep learning, the technology that underlies most of the recent advances in artificial intelligence, is spearheading these efforts. In the present overview, we outline ongoing trends and summarise state-of-the-art approaches in an attempt to provide a comprehensive overview of this exciting field.

    Comment: Submitted to the Proceedings of the IEEE

    On the Effectiveness of Speech Self-supervised Learning for Music

    Self-supervised learning (SSL) has shown promising results in various speech and natural language processing applications. However, its efficacy in music information retrieval (MIR) remains largely unexplored. Previous SSL models pre-trained on music recordings have been mostly closed-source, while recent speech models such as wav2vec2.0 have shown promise for music modelling. Nevertheless, research exploring the effectiveness of applying speech SSL models to music recordings has been limited. We explore the music adaptation of SSL with two distinctive speech-related models, data2vec1.0 and HuBERT, and refer to the adapted models as music2vec and musicHuBERT, respectively. We train 12 SSL models with 95M parameters under various pre-training configurations and systematically evaluate them on 13 different MIR tasks. Our findings suggest that training with music data can generally improve performance on MIR tasks, even when models are trained using paradigms designed for speech. However, we identify limitations of such speech-oriented designs, especially in modelling polyphonic information. Based on the experimental results, we also give empirical suggestions for designing future musical SSL strategies and paradigms.
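
    Evaluating frozen SSL representations across many MIR tasks is commonly done with a lightweight probe trained on top of extracted features. The sketch below shows that protocol in generic form; the feature extractor is an abstract stand-in rather than music2vec or musicHuBERT, and the task is a hypothetical 10-class one.

```python
# Hedged sketch of a probe-style MIR evaluation: features come from a frozen
# pretrained SSL model (not shown), and only a linear classifier is trained.
import torch
import torch.nn as nn

class LinearProbe(nn.Module):
    def __init__(self, feat_dim=768, n_classes=10):   # e.g. a 10-class tagging task
        super().__init__()
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, frame_features):                # (batch, frames, feat_dim)
        clip_features = frame_features.mean(dim=1)    # average-pool over time
        return self.classifier(clip_features)

probe = LinearProbe()
features = torch.randn(8, 250, 768)                   # dummy frozen SSL features
logits = probe(features)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 10, (8,)))
loss.backward()                                       # gradients reach only the probe
```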