732 research outputs found

    Regulation of the Escherichia coli Tryptophan Operon by Early Reactions in the Aromatic Pathway

    7-Methyltryptophan (7MT), or compounds that can be metabolized to 7MT (3-methylanthranilic acid, 3MA, and 7-methylindole), causes derepression of the trp operon through feedback inhibition of anthranilate synthetase. Tyrosine reverses derepression by 3MA or 7-methylindole, apparently by increasing the amount of chorismic acid available to the tryptophan pathway. A mutant isolated on the basis of 3MA resistance (MAR 13) was found to excrete small amounts of chorismic acid and to have a feedback-resistant, phenylalanine-sensitive 3-deoxy-D-arabino-heptulosonic acid 7-phosphate (DAHP) synthetase. Genetic evidence indicates that the mutation conferring 3MA resistance and feedback resistance is very closely linked to aroG, the structural gene for the DAHP synthetase (phe). Since feedback inhibition of anthranilate synthetase by L-tryptophan (or 7MT) is competitive with chorismic acid, alterations in growth conditions (added tyrosine) or in a mutant (MAR 13) that increase the amount of chorismic acid available to the tryptophan pathway result in resistance to 7MT derepression. Owing to the competitive nature of this feedback inhibition with respect to chorismic acid, the early pathway apparently exerts a regulatory influence on tryptophan biosynthesis.
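    The competitive mechanism described above can be illustrated numerically with the standard competitive-inhibition rate law. This is a generic sketch: the kinetic constants below are illustrative placeholders, not measured values from the paper.

```python
def competitive_rate(s, i, vmax=1.0, km=1.0, ki=1.0):
    """Michaelis-Menten rate in the presence of a competitive inhibitor.

    s: substrate concentration (here, chorismic acid)
    i: inhibitor concentration (here, tryptophan or 7MT)
    vmax, km, ki: illustrative kinetic constants
    """
    return vmax * s / (km * (1.0 + i / ki) + s)

# With inhibitor present, raising the substrate concentration restores
# most of the lost activity -- the behavior the abstract invokes to
# explain why extra chorismate relieves 7MT derepression.
inhibited = competitive_rate(s=1.0, i=5.0)   # low chorismate, inhibited
relieved = competitive_rate(s=10.0, i=5.0)   # high chorismate, relieved
```

    Because the inhibitor term multiplies only Km, its effect vanishes as the substrate concentration grows, which is exactly why a chorismate-overproducing mutant such as MAR 13 resists 7MT derepression.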

    Mechanism of 3-Methylanthranilic Acid Derepression of the Tryptophan Operon in Escherichia coli

    3-Methylanthranilic acid (3MA) inhibits growth and causes derepression of the tryptophan biosynthetic enzymes in wild-type strains of Escherichia coli. Previous reports attributed this effect to an inhibition of the conversion of 1-(o-carboxyphenylamino)-1-deoxyribulose 5-phosphate to indole-3-glycerol phosphate and a consequent reduction in the concentration of endogenous tryptophan. Our studies have shown that 3MA-resistant mutants linked to the tryptophan operon have a feedback-resistant anthranilate synthetase; mutants with an altered indole-3-glycerol phosphate synthetase were not found. 3MA or 7-methylindole can be metabolized to 7-methyltryptophan, and 3MA, 7-methylindole, and 7-methyltryptophan all lead to derepression of the tryptophan operon. Furthermore, 3MA-resistant mutants are also resistant to 7-methylindole derepression. These results strongly suggest that the primary cause of derepression by 3MA is its conversion to 7-methyltryptophan, which can inhibit anthranilate synthetase, thereby decreasing the concentration of endogenous tryptophan. Unlike 5- or 6-methyltryptophan, 7-methyltryptophan does not appear to function as an active corepressor.

    Sweet Ecstasy : Novelette

    https://digitalcommons.library.umaine.edu/mmb-ps/2631/thumbnail.jp

    DADA: Dialect Adaptation via Dynamic Aggregation of Linguistic Rules

    Existing large language models (LLMs), which mainly focus on Standard American English (SAE), often perform significantly worse when applied to other English dialects. While existing mitigations tackle discrepancies for individual target dialects, they assume access to high-accuracy dialect identification systems. The boundaries between dialects are inherently flexible, making it difficult to categorize language into discrete predefined categories. In this paper, we propose DADA (Dialect Adaptation via Dynamic Aggregation), a modular approach to imbue SAE-trained models with multi-dialectal robustness by composing adapters that handle specific linguistic features. The compositional architecture of DADA allows for both targeted adaptation to specific dialect variants and simultaneous adaptation to various dialects. We show that DADA is effective for both single-task and instruction-finetuned language models, offering an extensible and interpretable framework for adapting existing LLMs to different English dialects.
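    The dynamic-aggregation idea can be sketched as a softmax-weighted composition of per-feature adapter outputs. This is only a toy illustration with hypothetical shapes and names; the actual DADA architecture composes trained adapter modules inside a transformer.

```python
import math

def aggregate_adapters(hidden, adapters, weights):
    """Combine per-feature adapter outputs with softmax weights.

    hidden:   list of floats, a toy hidden state
    adapters: list of functions, each standing in for one
              'linguistic feature' adapter
    weights:  unnormalized relevance scores, one per adapter
    """
    exps = [math.exp(w) for w in weights]
    z = sum(exps)
    probs = [e / z for e in exps]          # softmax over adapters
    out = list(hidden)
    for p, adapter in zip(probs, adapters):
        delta = adapter(hidden)            # each adapter proposes a residual
        out = [o + p * d for o, d in zip(out, delta)]
    return out

# Two toy adapters: one adds the input back, one subtracts it.
adapters = [lambda h: h, lambda h: [-x for x in h]]
result = aggregate_adapters([1.0, 2.0], adapters, weights=[0.0, 0.0])
```

    With equal weights the two toy residuals cancel; shifting the weights toward one adapter lets its feature dominate, which is the composition behavior the abstract describes.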

    Constrained Local Search for Last-Mile Routing

    Last-mile routing refers to the final step in a supply chain, delivering packages from a depot station to the homes of customers. At the level of a single van driver, the task is a traveling salesman problem. But the choice of route may be constrained by warehouse sorting operations, van-loading processes, driver preferences, and other considerations, rather than a straightforward minimization of tour length. We propose a simple and efficient penalty-based local-search algorithm for route optimization in the presence of such constraints, adopting a technique developed by Helsgaun to extend the LKH traveling salesman problem code to general vehicle-routing models. We apply his technique to handle combinations of constraints obtained from an analysis of historical routing data, enforcing properties that are desired in high-quality solutions. Our code is available under the open-source MIT license. An earlier version of the code received the $100,000 top prize in the Amazon Last Mile Routing Research Challenge organized in 2021.
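    The penalty idea can be sketched as a 2-opt local search whose objective is tour length plus a charge for each violated soft constraint. This is a minimal toy version under assumed data, not the authors' LKH-based implementation.

```python
import itertools

def tour_cost(tour, dist, penalties):
    """Tour length plus a penalty for each violated soft constraint."""
    length = sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
                 for i in range(len(tour)))
    return length + sum(p(tour) for p in penalties)

def two_opt(tour, dist, penalties):
    """Repeatedly apply improving 2-opt segment reversals on the
    penalized cost until no move helps."""
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(tour)), 2):
            cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            if tour_cost(cand, dist, penalties) < tour_cost(tour, dist, penalties):
                tour, improved = cand, True
    return tour

# Toy instance: four stops and one soft constraint saying stop 3 should
# come last in the route (hypothetical data, not from the paper).
dist = [[0, 1, 2, 3],
        [1, 0, 4, 2],
        [2, 4, 0, 5],
        [3, 2, 5, 0]]
pens = [lambda t: 0.0 if t[-1] == 3 else 50.0]
best = two_opt([0, 3, 1, 2], dist, pens)
```

    Because the penalty outweighs any possible length saving here, the search lands on a tour that satisfies the constraint; softer penalties would let length and constraints trade off, which is the behavior the abstract describes.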

    You Haven't Know Me Long Enough For That.

    https://digitalcommons.library.umaine.edu/mmb-vp/3455/thumbnail.jp

    A Material Lens on Coloniality in NLP

    Coloniality, the continuation of colonial harms beyond "official" colonization, has pervasive effects across society and scientific fields. Natural Language Processing (NLP) is no exception to this broad phenomenon. In this work, we argue that coloniality is implicitly embedded in and amplified by NLP data, algorithms, and software. We formalize this analysis using Actor-Network Theory (ANT): an approach to understanding social phenomena through the network of relationships between human stakeholders and technology. We use our Actor-Network to guide a quantitative survey of the geography of different phases of NLP research, providing evidence that inequality along colonial boundaries increases as NLP builds on itself. Based on this, we argue that combating coloniality in NLP requires not only changing current values but also active work to remove the accumulation of colonial ideals in our foundational data and algorithms.

    On Second Thought, Let's Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning

    Generating a Chain of Thought (CoT) has been shown to consistently improve large language model (LLM) performance on a wide range of NLP tasks. However, prior work has mainly focused on logical reasoning tasks (e.g. arithmetic, commonsense QA); it remains unclear whether improvements hold for more diverse types of reasoning, especially in socially situated contexts. Concretely, we perform a controlled evaluation of zero-shot CoT across two socially sensitive domains: harmful questions and stereotype benchmarks. We find that zero-shot CoT reasoning in sensitive domains significantly increases a model's likelihood to produce harmful or undesirable output, with trends holding across different prompt formats and model variants. Furthermore, we show that harmful CoTs increase with model size, but decrease with improved instruction following. Our work suggests that zero-shot CoT should be used with caution on socially important tasks, especially when marginalized groups or sensitive topics are involved. Comment: ACL 2023 Main Conference.
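    Zero-shot CoT, as evaluated above, amounts to appending a reasoning-trigger phrase to the prompt rather than providing worked examples. A minimal sketch of the two prompt formats being compared (the exact templates in the paper may differ):

```python
def zero_shot_prompt(question, cot=True):
    """Build a zero-shot prompt, optionally with the CoT trigger phrase.

    The trigger below is the widely used "Let's think step by step" cue;
    the paper contrasts CoT prompts of this kind against a direct-answer
    baseline on harmful-question and stereotype benchmarks.
    """
    answer_prefix = "A: Let's think step by step." if cot else "A:"
    return f"Q: {question}\n{answer_prefix}"
```

    The only difference between the two conditions is the trigger phrase, which is what lets the evaluation attribute changes in harmful output to the CoT reasoning itself.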

    Multi-VALUE: A Framework for Cross-Dialectal English NLP

    Dialect differences arising from regional, social, and economic factors cause performance discrepancies for many groups of language technology users. Inclusive and equitable language technology must be dialect invariant, meaning that performance remains constant over dialectal shifts. Current systems often fall short of this ideal since they are designed and tested on a single dialect: Standard American English (SAE). We introduce a suite of resources for evaluating and achieving English dialect invariance. The resource is called Multi-VALUE, a controllable rule-based translation system spanning 50 English dialects and 189 unique linguistic features. Multi-VALUE maps SAE to synthetic forms of each dialect. First, we use this system to stress test question answering, machine translation, and semantic parsing. Stress tests reveal significant performance disparities for leading models on non-standard dialects. Second, we use this system as a data augmentation technique to improve the dialect robustness of existing systems. Finally, we partner with native speakers of Chicano and Indian English to release new gold-standard variants of the popular CoQA task. To execute the transformation code, run model checkpoints, and download both synthetic and gold-standard dialectal benchmark datasets, see http://value-nlp.org. Comment: ACL 202
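    A rule-based SAE-to-dialect transformation of the kind described can be sketched with one toy feature: negative concord ("didn't see anything" becoming "didn't see nothing"), which is attested in several English dialects. The regexes below are an illustrative stand-in, not the actual Multi-VALUE rule set, which implements 189 linguistically conditioned features.

```python
import re

# Toy negative-concord rules: after a clause-local negation, indefinite
# "any"/"anything" becomes "no"/"nothing". The [^.?!]*? span keeps the
# match inside one sentence; real rules would use syntactic conditioning.
RULES = [
    (r"(n't|\bnot\b)([^.?!]*?)\banything\b", r"\1\2nothing"),
    (r"(n't|\bnot\b)([^.?!]*?)\bany\b", r"\1\2no"),
]

def apply_rules(sentence, rules=RULES):
    """Apply each rewrite rule in order to an SAE sentence."""
    for pattern, repl in rules:
        sentence = re.sub(pattern, repl, sentence)
    return sentence
```

    Because the transformation is deterministic and rule-indexed, the same machinery supports both stress testing (translate a benchmark, measure the drop) and data augmentation (translate the training set), the two uses described in the abstract.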

    DAMP: Doubly Aligned Multilingual Parser for Task-Oriented Dialogue

    Modern virtual assistants use internal semantic parsing engines to convert user utterances to actionable commands. However, prior work has demonstrated that semantic parsing is a difficult multilingual transfer task with low transfer efficiency compared to other tasks. In global markets such as India and Latin America, this is a critical issue, as switching between languages is prevalent for bilingual users. In this work, we dramatically improve the zero-shot performance of a multilingual and codeswitched semantic parsing system using two stages of multilingual alignment. First, we show that contrastive alignment pretraining improves both English performance and transfer efficiency. We then introduce a constrained optimization approach for hyperparameter-free adversarial alignment during finetuning. Our Doubly Aligned Multilingual Parser (DAMP) improves mBERT transfer performance by 3x, 6x, and 81x on the Spanglish, Hinglish and Multilingual Task Oriented Parsing benchmarks respectively and outperforms XLM-R and mT5-Large using 3.2x fewer parameters. Comment: 9 pages; ACL Main Conference 202
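    The contrastive-alignment stage can be sketched with an InfoNCE-style loss that pulls an utterance's embedding toward its translation and away from in-batch negatives. This is a generic sketch with toy embeddings, not the DAMP training code; the actual objective and encoder may differ.

```python
import math

def info_nce(anchor, candidates, pos_index, temperature=0.1):
    """InfoNCE-style contrastive loss for one anchor embedding.

    anchor:     embedding of, e.g., an English utterance
    candidates: candidate embeddings; the aligned translation sits at
                pos_index, the rest act as in-batch negatives
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def cos(a, b):
        return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

    logits = [cos(anchor, c) / temperature for c in candidates]
    # Numerically stable log-sum-exp for the softmax denominator.
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[pos_index]   # -log softmax of the positive
```

    The loss is near zero when the positive pair is already closest in embedding space and grows when a negative outranks it, which is the alignment pressure the first stage relies on.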