
    ABI4 Mediates Antagonistic Effects of Abscisic Acid and Gibberellins at Transcript and Protein Levels

    Abscisic acid (ABA) and gibberellins (GA) are plant hormones that antagonistically mediate numerous physiological processes, and their optimal balance is essential for normal plant development. However, the molecular mechanism underlying ABA and GA antagonism remains to be determined. Here, we report that ABA-INSENSITIVE 4 (ABI4) is a central factor in GA/ABA homeostasis and antagonism at post-germination stages. ABI4 overexpression in Arabidopsis (OE-ABI4) leads to developmental defects, including reduced plant height and poor seed production. Transcription of a key ABA biosynthetic gene, NCED6, and of a key GA catabolic gene, GA2ox7, is significantly enhanced by ABI4 overexpression. ABI4 activates NCED6 and GA2ox7 transcription by binding directly to their promoters, and genetic analysis revealed that mutations in these two genes partially rescue the dwarf phenotype of ABI4-overexpressing plants. Consistently, ABI4-overexpressing seedlings have a lower GA/ABA ratio than the wild type. We further show that ABA induces GA2ox7 transcription while GA represses NCED6 expression in an ABI4-dependent manner, and that ABA stabilizes the ABI4 protein whereas GA promotes its degradation. Taken together, these results suggest that ABA and GA antagonize each other by acting oppositely on ABI4 at the transcript and protein levels.

    Assessment of the spatial and temporal variations of water quality for agricultural lands with crop rotation in China by using a HYPE model

    Many water quality models have been used successfully worldwide to predict nutrient losses from anthropogenically impacted catchments, but hydrological and nutrient simulations are difficult in data-scarce areas because of the challenges of transferring model parameters and of calibrating and validating the model. This study aims (i) to assess the performance of a relatively new and advantageous model, Hydrological Predictions for the Environment (HYPE), in simulating stream flow and nutrient loads in ungauged agricultural areas by using a multi-site, multi-objective parameter calibration method, and (ii) to investigate, for the first time with this model, the temporal and spatial variations of total nitrogen (TN) and total phosphorus (TP) concentrations and loads under crop rotation. A parameter estimation tool (PEST) was used to calibrate parameters; the parameters related to effective soil porosity were the most sensitive for hydrological modeling. The N balance was largely controlled by soil denitrification processes, whereas the P balance was influenced by the sedimentation rate and the production/decay of P in rivers and lakes. The model reproduced the temporal and spatial variations of discharge and TN/TP relatively well in both the calibration (2006–2008) and validation (2009–2010) periods. The lowest NSE (Nash-Sutcliffe efficiency) values for discharge, daily TN load, and daily TP load were 0.74, 0.51, and 0.54, respectively. The seasonal variations of daily TN concentrations were insufficiently reproduced over the entire simulation period, indicating that crop rotation changed the timing and amount of N output. Monthly simulated TN and TP yields revealed that nutrient outputs were concentrated in summer, in line with the corresponding discharge. The area-weighted annual TN and TP load yields over the five years showed that nutrient loads were extremely high along the Hong and Ru rivers, especially in agricultural lands.
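
    For reference, the NSE values quoted above follow the standard Nash-Sutcliffe formulation. A minimal Python sketch of that computation (with made-up observed and simulated series, not the study's data) might look like this:

        import numpy as np

        def nash_sutcliffe_efficiency(observed, simulated):
            """Nash-Sutcliffe efficiency: 1 - residual sum of squares / variance of observations.

            1.0 is a perfect fit, 0.0 means the model is no better than predicting
            the observed mean, and negative values are worse than the mean.
            """
            observed = np.asarray(observed, dtype=float)
            simulated = np.asarray(simulated, dtype=float)
            residual_ss = np.sum((observed - simulated) ** 2)
            total_ss = np.sum((observed - observed.mean()) ** 2)
            return 1.0 - residual_ss / total_ss

        # Hypothetical daily discharge series (m^3/s), not data from the paper
        obs = [12.0, 15.5, 20.1, 18.3, 9.7]
        sim = [11.2, 16.0, 19.4, 17.1, 10.5]
        print(round(nash_sutcliffe_efficiency(obs, sim), 2))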

    Multiplex genomic structure variation mediated by TALEN and ssODN

    BACKGROUND: Genomic structure variation (GSV) is widely distributed in various organisms and is an important contributor to human diversity and disease susceptibility. Efficient approaches to induce targeted GSV are crucial for both analytic and therapeutic studies of GSV. Here, we present an efficient strategy to induce targeted GSV, including chromosomal deletions, duplications and inversions, in a precise manner. RESULTS: Using Transcription Activator-Like Effector Nucleases (TALEN) designed to target two distinct sites, we demonstrated targeted deletions, duplications and inversions of an 8.9 Mb chromosomal segment, about one third of the entire chromosome. We developed a novel method combining TALEN-induced GSV with single-stranded oligodeoxynucleotide (ssODN)-mediated gene modification to reduce the unwanted mutations that occur during targeted GSV with TALEN or zinc finger nucleases (ZFN). Furthermore, we showed that co-introduction of TALEN and ssODN generated unwanted complex structure variation other than the expected chromosomal deletion. CONCLUSIONS: We demonstrated the ability of TALEN to induce targeted GSV and provided an efficient strategy for performing GSV precisely. This is also the first demonstration that co-introduction of TALEN and ssODN can generate unwanted complex structure variation. The strategies developed in this study can plausibly be applied to other organisms and will help clarify the biological roles of GSV as well as the therapeutic applications of TALEN and ssODN. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1186/1471-2164-15-41) contains supplementary material, which is available to authorized users.

    SpeechX: Neural Codec Language Model as a Versatile Speech Transformer

    Recent advances in generative speech models based on audio-text prompts have enabled remarkable innovations such as high-quality zero-shot text-to-speech. However, existing models still face limitations in handling diverse audio-text speech generation tasks that involve transforming input speech and processing audio captured in adverse acoustic conditions. This paper introduces SpeechX, a versatile speech generation model capable of zero-shot TTS and various speech transformation tasks, dealing with both clean and noisy signals. SpeechX combines neural codec language modeling with multi-task learning using task-dependent prompting, enabling unified and extensible modeling and providing a consistent way to leverage textual input in speech enhancement and transformation tasks. Experimental results show SpeechX's efficacy in various tasks, including zero-shot TTS, noise suppression, target speaker extraction, speech removal, and speech editing with or without background noise, achieving comparable or superior performance to specialized models across tasks. See https://aka.ms/speechx for demo samples.
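
    As a rough illustration of what task-dependent prompting can look like in a codec language model, the sketch below assembles a task-conditioned input sequence; the task names, token ids, and sequence layout are invented for illustration and are not the actual SpeechX tokenization:

        from typing import List

        # Hypothetical task vocabulary; SpeechX's real task tokens are not specified here
        TASK_TOKENS = {"tts": 0, "noise_suppression": 1, "speech_editing": 2}

        def build_prompt(task: str,
                         text_tokens: List[int],
                         acoustic_tokens: List[int]) -> List[int]:
            """Prefix a task token, then concatenate text and acoustic prompt tokens.

            A decoder-only codec language model would then autoregressively generate
            the output neural-codec tokens conditioned on this combined prompt.
            """
            return [TASK_TOKENS[task]] + text_tokens + acoustic_tokens

        # Dummy ids standing in for phonemized text and encoded (noisy) speech
        prompt = build_prompt("noise_suppression",
                              text_tokens=[101, 102, 103],
                              acoustic_tokens=[900, 901, 902, 903])
        print(prompt)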

    SpeechLM: Enhanced Speech Pre-Training with Unpaired Textual Data

    How to boost speech pre-training with textual data is an unsolved problem because speech and text are very different modalities with distinct characteristics. In this paper, we propose a cross-modal Speech and Language Model (SpeechLM) to explicitly align speech and text pre-training with a pre-defined unified discrete representation. Specifically, we introduce two alternative discrete tokenizers to bridge the speech and text modalities, a phoneme-unit tokenizer and a hidden-unit tokenizer, both of which can be trained with a small amount of paired speech-text data. Based on the trained tokenizers, we convert the unlabeled speech and text data into phoneme-unit or hidden-unit tokens. The pre-training objective is designed to unify speech and text into the same discrete semantic space with a unified Transformer network. Leveraging only 10K text sentences, our SpeechLM achieves a 16% relative WER reduction over the best base model (from 6.8 to 5.7) on the public LibriSpeech ASR benchmark. Moreover, SpeechLM with fewer parameters even outperforms previous SOTA models on CoVoST-2 speech translation tasks. We also evaluate SpeechLM on various spoken language processing tasks under the universal representation evaluation framework SUPERB, demonstrating significant improvements on content-related tasks. Our code and models are available at https://aka.ms/SpeechLM.
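
    The relative WER reduction quoted above follows directly from the two reported error rates; a quick check:

        # Relative WER reduction computed from the figures quoted in the abstract
        baseline_wer, speechlm_wer = 6.8, 5.7
        relative_reduction = (baseline_wer - speechlm_wer) / baseline_wer
        print(f"{relative_reduction:.1%}")  # ~16.2%, matching the reported ~16%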

    WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing

    Self-supervised learning (SSL) has achieved great success in speech recognition, while other speech processing tasks have seen limited exploration. As the speech signal contains multi-faceted information including speaker identity, paralinguistics, and spoken content, learning universal representations for all speech tasks is challenging. To tackle this problem, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM jointly learns masked speech prediction and denoising during pre-training. In this way, WavLM not only retains speech content modeling capability through masked speech prediction, but also improves its potential for non-ASR tasks through speech denoising. In addition, WavLM employs gated relative position bias in the Transformer structure to better capture the sequence ordering of input speech. We also scale up the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark and brings significant improvements to various speech processing tasks on their representative benchmarks. The code and pre-trained models are available at https://aka.ms/wavlm.
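
    As a usage sketch, pre-trained WavLM checkpoints can be loaded through the Hugging Face transformers port to extract frame-level representations for downstream tasks; the checkpoint id below is assumed rather than taken from the paper, and the official release is at https://aka.ms/wavlm:

        import torch
        from transformers import WavLMModel

        # Assumed checkpoint id for the community port of WavLM Large
        model = WavLMModel.from_pretrained("microsoft/wavlm-large")
        model.eval()

        # One second of 16 kHz audio; a random tensor stands in for real speech here
        waveform = torch.randn(1, 16000)

        with torch.no_grad():
            outputs = model(waveform, output_hidden_states=True)

        # Last-layer features, shape (batch, frames, hidden_size); SUPERB-style
        # downstream heads often use a learned weighted sum of all hidden states.
        features = outputs.last_hidden_state
        print(features.shape)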