145 research outputs found

    Divergence of thioesterase function : human BFIT2, Escherichia coli EntH, and YDII

    My doctoral research primarily focuses on two hotdog-fold thioesterases: EntH (also known as YbdB) from E. coli, and BFIT2 from Homo sapiens. The EntH (YbdB) gene lies within a large gene cluster that encodes the enzymes of the biosynthetic pathway leading to enterobactin. Building on the hypothesis that EntH might serve a "house-keeping" role by liberating misacylated EntB, two potential pathways to EntB misacylation were identified, one involving the phosphopantetheinyl transferase EntD and the other involving the 2,3-DHB-AMP ligase EntE. EntH displays thioesterase activity towards a variety of acyl- and aryl-holo-EntB adducts. Lastly, it was shown that EntF acts rapidly on 2,3-DHB-holo-EntB but only slowly on misacylated EntB adducts. BFIT2 comprises tandem hotdog-fold thioesterase domains and a C-terminal steroidogenic acute regulatory protein-related lipid transfer (START) domain. The expression of BFIT2 is induced during the thermogenesis transition of brown fat tissue. Expression of recombinant BFIT2 in transfected HEK cells was confirmed by Western blot analysis. The recombinant BFIT2 carries an N-terminal His6-tag and epitope, which was found to be susceptible to posttranslational removal. A recombinant N-terminally truncated mutant (minus residues 1-34) did not undergo posttranslational cleavage, suggesting that the N-terminal region is a signal sequence. A chimeric protein, BFIT2 N(1-42)-GFP, was shown by confocal microscopy to co-localize with the mitochondria. The BFIT2 precursor was shown to be taken up by freshly isolated HEK cell mitochondria and cleaved to the mature form. These results confirmed that the N-terminal region of BFIT2 functions as a mitochondrial targeting sequence (MTS). During the thermogenesis transition of brown fat tissue, BFIT2 might function to restore the balance between free CoA and fatty acyl-CoA by hydrolyzing long- to medium-chain fatty acyl-CoAs. Consistent with this hypothesis, BFIT2 was found to be much more active towards palmitoyl-CoA, myristoyl-CoA, and lauroyl-CoA.

    Reading Wikipedia to Answer Open-Domain Questions

    This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article. This task of machine reading at scale combines the challenges of document retrieval (finding the relevant articles) with those of machine comprehension of text (identifying the answer spans in those articles). Our approach combines a search component based on bigram hashing and TF-IDF matching with a multi-layer recurrent neural network model trained to detect answers in Wikipedia paragraphs. Our experiments on multiple existing QA datasets indicate that (1) both modules are highly competitive with respect to existing counterparts and (2) multitask learning using distant supervision on their combination is an effective complete system on this challenging task. Comment: ACL 2017, 10 pages
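    The retrieval component described above pairs TF-IDF weighting with hashed bigram features. A minimal sketch of that idea follows; the bucket count, whitespace tokenizer, and md5-based hash are illustrative stand-ins, not the paper's exact choices:

```python
from collections import Counter
import hashlib
import math

def hashed_bigrams(text, num_buckets=2**12):
    """Map unigrams and bigrams to fixed hash buckets (a stand-in for the
    paper's feature hashing; md5 is used here for simplicity)."""
    tokens = text.lower().split()
    grams = tokens + [" ".join(p) for p in zip(tokens, tokens[1:])]
    return [int(hashlib.md5(g.encode()).hexdigest(), 16) % num_buckets
            for g in grams]

def tfidf_vectors(docs, num_buckets=2**12):
    """Build a sparse TF-IDF vector (dict of bucket -> weight) per document."""
    bucketed = [Counter(hashed_bigrams(d, num_buckets)) for d in docs]
    df = Counter(b for counts in bucketed for b in counts)  # document freq.
    n = len(docs)
    vecs = [{b: tf * math.log(n / df[b]) for b, tf in counts.items()}
            for counts in bucketed]
    return vecs, df, n

def score(query, doc_vec, df, n, num_buckets=2**12):
    """Dot product between the query's TF-IDF vector and a document's."""
    q = Counter(hashed_bigrams(query, num_buckets))
    return sum(tf * math.log(n / df.get(b, n)) * doc_vec.get(b, 0.0)
               for b, tf in q.items())
```

    Ranking candidate articles then reduces to sorting documents by `score` against the question.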

    Learning New Facts From Knowledge Bases With Neural Tensor Networks and Semantic Word Vectors

    Knowledge bases provide applications with the benefit of easily accessible, systematic relational knowledge, but in practice they often suffer from incompleteness and a lack of knowledge of new entities and relations. Much work has focused on building or extending them by finding patterns in large unannotated text corpora. In contrast, here we mainly aim to complete a knowledge base by predicting additional true relationships between entities, based on generalizations that can be discerned in the given knowledge base. We introduce a neural tensor network (NTN) model which predicts new relationship entries that can be added to the database. This model can be improved by initializing entity representations with word vectors learned in an unsupervised fashion from text; when doing so, existing relations can even be queried for entities that were not present in the database. Our model generalizes and outperforms existing models for this problem, and can classify unseen relationships in WordNet with an accuracy of 75.8%.
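    The NTN's per-relation scoring function can be sketched directly from its standard form, g(e1, R, e2) = u^T tanh(e1^T W^[1:k] e2 + V[e1; e2] + b). The dimensions and random parameters below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 4, 2  # embedding dimension and number of tensor slices (toy sizes)

# One set of parameters per relation type in the knowledge base.
W = rng.normal(size=(k, d, d))   # bilinear tensor slices
V = rng.normal(size=(k, 2 * d))  # standard linear layer
b = rng.normal(size=k)           # bias
u = rng.normal(size=k)           # output combination vector

def ntn_score(e1, e2):
    """g(e1, R, e2) = u^T tanh(e1^T W^[1:k] e2 + V [e1; e2] + b)."""
    bilinear = np.array([e1 @ W[i] @ e2 for i in range(k)])
    linear = V @ np.concatenate([e1, e2])
    return float(u @ np.tanh(bilinear + linear + b))
```

    Training would push `ntn_score` above a margin for true triples and below it for corrupted ones; initializing `e1`/`e2` from pre-trained word vectors is what lets unseen entities be scored.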

    Learning Transformer Programs

    Recent research in mechanistic interpretability has attempted to reverse-engineer Transformer models by carefully inspecting network weights and activations. However, these approaches require considerable manual effort and still fall short of providing complete, faithful descriptions of the underlying algorithms. In this work, we introduce a procedure for training Transformers that are mechanistically interpretable by design. We build on RASP [Weiss et al., 2021], a programming language that can be compiled into Transformer weights. Instead of compiling human-written programs into Transformers, we design a modified Transformer that can be trained using gradient-based optimization and then automatically converted into a discrete, human-readable program. We refer to these models as Transformer Programs. To validate our approach, we learn Transformer Programs for a variety of problems, including an in-context learning task, a suite of algorithmic problems (e.g. sorting, recognizing Dyck languages), and NLP tasks including named entity recognition and text classification. The Transformer Programs can automatically find reasonable solutions, performing on par with standard Transformers of comparable size; and, more importantly, they are easy to interpret. To demonstrate these advantages, we convert Transformers into Python programs and use off-the-shelf code analysis tools to debug model errors and identify the ``circuits'' used to solve different sub-problems. We hope that Transformer Programs open a new path toward the goal of intrinsically interpretable machine learning. Comment: Our code, and example Transformer Programs, are available at https://github.com/princeton-nlp/TransformerProgram
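    One step of the conversion described above, turning a trained, constrained attention head into a readable rule, can be caricatured as snapping soft attention to a one-hot pattern. The matrix below is a made-up example, not the paper's actual parameterization:

```python
import numpy as np

# Toy attention logits for 3 query positions over 3 key positions.
# In a Transformer Program, each head is constrained during training so
# that this snap-to-one-hot step loses (almost) nothing.
logits = np.array([[0.1, 2.0, -1.0],
                   [3.0, 0.5,  0.2],
                   [0.0, 0.1,  4.0]])

# Discretize: each query attends to exactly one key position.
hard = np.eye(logits.shape[1])[logits.argmax(axis=1)]
# Each row now names a single key index, i.e. a human-readable "select" rule.
```

    The resulting one-hot table is what makes it possible to print the head as a discrete lookup in a generated Python program.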

    Structured Pruning Learns Compact and Accurate Models

    The growing size of neural language models has led to increased attention to model compression. The two predominant approaches are pruning, which gradually removes weights from a pre-trained model, and distillation, which trains a smaller compact model to match a larger one. Pruning methods can significantly reduce the model size but rarely achieve speedups as large as distillation does. Distillation methods, however, require large amounts of unlabeled data and are expensive to train. In this work, we propose a task-specific structured pruning method, CoFi (Coarse- and Fine-grained Pruning), which delivers highly parallelizable subnetworks and matches distillation methods in both accuracy and latency, without resorting to any unlabeled data. Our key insight is to jointly prune coarse-grained (e.g., layers) and fine-grained (e.g., heads and hidden units) modules, controlling the pruning decision for each parameter with masks of different granularity. We also devise a layerwise distillation strategy to transfer knowledge from unpruned to pruned models during optimization. Our experiments on the GLUE and SQuAD datasets show that CoFi yields models with over 10x speedups and only a small accuracy drop, demonstrating its effectiveness and efficiency relative to previous pruning and distillation approaches. Comment: Accepted to ACL 2022; The code and models are available at https://github.com/princeton-nlp/CoFiPrunin
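    The key insight, masks at several granularities whose product gates each parameter, can be sketched on a toy weight matrix. The shapes and which masks are zeroed are invented purely for illustration:

```python
import numpy as np

hidden, heads, head_dim = 8, 2, 4  # toy layer sizes

W = np.ones((hidden, heads * head_dim))  # toy weight matrix

# Masks of different granularity; zeroing a coarse mask prunes
# everything beneath it.
z_layer = 1.0                     # coarse: keep or drop the whole layer
z_head = np.array([1.0, 0.0])     # medium: per-attention-head masks
z_hidden = np.ones(hidden)        # fine: per-hidden-unit masks
z_hidden[0] = 0.0                 # drop one hidden unit

# Effective per-parameter mask is the product across granularities.
head_mask = np.repeat(z_head, head_dim)  # broadcast head mask to its columns
effective = z_layer * z_hidden[:, None] * head_mask[None, :]
W_pruned = W * effective
```

    Because head 1 and hidden unit 0 are masked out entirely, the surviving block stays contiguous, which is what makes the pruned subnetwork parallelizable rather than unstructured-sparse.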

    Enabling Large Language Models to Generate Text with Citations

    Large language models (LLMs) have emerged as a widely used tool for information seeking, but their generated outputs are prone to hallucination. In this work, our aim is to allow LLMs to generate text with citations, improving their factual correctness and verifiability. Existing work relies mainly on commercial search engines and human evaluation, making it challenging to reproduce and compare different modeling approaches. We propose ALCE, the first benchmark for Automatic LLMs' Citation Evaluation. ALCE collects a diverse set of questions and retrieval corpora and requires building end-to-end systems to retrieve supporting evidence and generate answers with citations. We develop automatic metrics along three dimensions -- fluency, correctness, and citation quality -- and demonstrate their strong correlation with human judgements. Our experiments with state-of-the-art LLMs and novel prompting strategies show that current systems have considerable room for improvement -- for example, on the ELI5 dataset, even the best models lack complete citation support 50% of the time. Our analyses further highlight promising future directions, including developing better retrievers, advancing long-context LLMs, and improving the ability to synthesize information from multiple sources. Comment: Accepted by EMNLP 2023. Code and data are available at https://github.com/princeton-nlp/ALC
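    The citation-quality dimension typically decomposes into recall (each statement is supported by its citations) and precision (each citation actually supports its statement). A toy version of those two scores, with a naive substring check standing in for the NLI-style support judgment a real benchmark would use:

```python
def citation_scores(statements):
    """statements: list of (claim, cited_passages) pairs.

    `entails` is a placeholder for an NLI support model; a crude
    substring match substitutes for it here.
    """
    def entails(passage, claim):
        return claim.lower() in passage.lower()

    recall_hits, precision_hits, total_cites = 0, 0, 0
    for claim, cites in statements:
        # Recall: at least one cited passage supports the claim.
        if any(entails(p, claim) for p in cites):
            recall_hits += 1
        # Precision: every citation should itself support the claim.
        for p in cites:
            total_cites += 1
            if entails(p, claim):
                precision_hits += 1
    recall = recall_hits / len(statements) if statements else 0.0
    precision = precision_hits / total_cites if total_cites else 0.0
    return recall, precision
```

    A statement backed by one relevant and one irrelevant citation thus scores full recall but only half precision.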

    Poisoning Retrieval Corpora by Injecting Adversarial Passages

    Dense retrievers have achieved state-of-the-art performance in various information retrieval tasks, but to what extent can they be safely deployed in real-world applications? In this work, we propose a novel attack on dense retrieval systems in which a malicious user generates a small number of adversarial passages by perturbing discrete tokens to maximize similarity with a provided set of training queries. When these adversarial passages are inserted into a large retrieval corpus, we show that this attack is highly effective in fooling these systems into retrieving them for queries that were not seen by the attacker. More surprisingly, these adversarial passages can directly generalize to out-of-domain queries and corpora with a high attack success rate -- for instance, we find that 50 generated passages optimized on Natural Questions can mislead >94% of questions posed in financial documents or online forums. We also benchmark and compare a range of state-of-the-art dense retrievers, both unsupervised and supervised. Although different systems exhibit varying levels of vulnerability, we show they can all be successfully attacked by injecting up to 500 passages, a small fraction compared to a retrieval corpus of millions of passages. Comment: EMNLP 2023. Our code is available at https://github.com/princeton-nlp/corpus-poisonin
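    The attack's geometry can be illustrated with a toy dense retriever: a passage embedding pushed toward the training queries' mean direction dominates inner-product retrieval even for a query the attacker never saw. Everything below (dimensions, scales, and planting the optimum directly rather than via discrete token perturbation) is an invented simplification:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16

topic = rng.normal(size=dim)                         # shared query direction
train_queries = topic + 0.1 * rng.normal(size=(20, dim))
corpus = rng.normal(size=(100, dim))                 # benign passage embeddings

# Idealized adversarial passage: the real attack perturbs discrete tokens
# (HotFlip-style) to maximize similarity to the training queries; here we
# plant that optimum directly as the scaled mean training-query embedding.
adv = 5.0 * train_queries.mean(axis=0)
poisoned = np.vstack([corpus, adv[None, :]])

# A query the attacker never optimized against still retrieves the
# injected passage first under inner-product search.
unseen_query = topic + 0.1 * rng.normal(size=dim)
scores = poisoned @ unseen_query
top1 = int(scores.argmax())  # index 100 = the injected passage
```

    Because every query in the cluster shares the `topic` direction, one injected passage hijacks them all, which is the corpus-scale leverage the attack exploits.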