302 research outputs found

    Reading Wikipedia to Answer Open-Domain Questions

    This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article. This task of machine reading at scale combines the challenges of document retrieval (finding the relevant articles) with that of machine comprehension of text (identifying the answer spans from those articles). Our approach combines a search component based on bigram hashing and TF-IDF matching with a multi-layer recurrent neural network model trained to detect answers in Wikipedia paragraphs. Our experiments on multiple existing QA datasets indicate that (1) both modules are highly competitive with respect to existing counterparts and (2) multitask learning using distant supervision on their combination is an effective complete system on this challenging task. Comment: ACL 2017, 10 pages
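
    A minimal sketch of the retrieval idea this abstract describes (unigram and bigram features hashed into a fixed-size space, scored by TF-IDF-weighted overlap); the toy corpus, hash size, and tokenizer below are illustrative assumptions, not the paper's Wikipedia setup:

        # Hashed bigram TF-IDF retrieval sketch (illustrative only).
        import math
        import re
        from collections import Counter

        NUM_BUCKETS = 2 ** 20  # assumed size of the hashed feature space

        def features(text):
            toks = re.findall(r"[a-z0-9]+", text.lower())
            grams = toks + [" ".join(p) for p in zip(toks, toks[1:])]
            return Counter(hash(g) % NUM_BUCKETS for g in grams)

        def tfidf_score(query, doc, df, n_docs):
            q, d = features(query), features(doc)
            score = 0.0
            for f, tf_q in q.items():
                if f in d:
                    idf = math.log((n_docs + 1) / (df.get(f, 0) + 1))
                    score += tf_q * d[f] * idf * idf
            return score

        docs = [
            "Paris is the capital and most populous city of France.",
            "The mitochondrion is the powerhouse of the cell.",
        ]
        df = Counter(f for doc in docs for f in features(doc))
        query = "What is the capital of France?"
        print(max(docs, key=lambda doc: tfidf_score(query, doc, df, len(docs))))

    In the full system, the top-ranked articles would then be handed to the neural reader, which predicts an answer span within the retrieved paragraphs.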

    Divergence of thioesterase function: human BFIT2, Escherichia coli EntH, and YDII

    My doctoral research primarily focuses on two hotdog-fold thioesterases: EntH (also known as YbdB) from E. coli, and BFIT2 from Homo sapiens. The EntH (YbdB) gene is included in a large gene cluster that encodes the enzymes of the biosynthetic pathway leading to enterobactin. Building on the hypothesis that EntH might serve a 'house-keeping' role by liberating misacylated EntB, two potential pathways to EntB misacylation were identified, one involving the phosphopantetheinyl transferase EntD and the other involving the 2,3-DHB-AMP ligase EntE. EntH displays thioesterase activity towards a variety of acyl- and aryl-holo EntB adducts. Lastly, it was shown that EntF acts quickly on 2,3-DHB-holo-EntB, but only slowly on misacylated EntB adducts.

    BFIT2 contains tandem hotdog-fold thioesterase domains and a C-terminal steroidogenic acute regulatory protein-related lipid transfer (START) domain. The expression of BFIT2 is induced during the thermogenesis transition of brown fat tissue. The expression of recombinant BFIT2 in transfected HEK cells was confirmed by Western blot analysis. The recombinant BFIT2 contains an N-terminal His6 tag and epitope, which was found to be susceptible to posttranslational removal. A recombinant N-terminally truncated mutant (lacking residues 1-34) was found not to undergo posttranslational cleavage, suggesting that the N-terminal region is a signal sequence. A chimeric protein, BFIT2 N(1-42)-GFP, was shown by confocal microscopy to co-localize with the mitochondria. The BFIT2 precursor was shown to be taken up by freshly isolated HEK cell mitochondria and cleaved to the mature form. These results confirmed that the N-terminal region of BFIT2 functions as a mitochondrial targeting sequence (MTS). During the thermogenesis transition of brown fat tissue, BFIT2 might function to restore the balance between free CoA and fatty acyl-CoA by hydrolyzing long- to medium-chain fatty acyl-CoAs. Consistent with this hypothesis, BFIT2 was found to be much more active towards palmitoyl-CoA, myristoyl-CoA, and lauroyl-CoA.

    Learning New Facts From Knowledge Bases With Neural Tensor Networks and Semantic Word Vectors

    Knowledge bases provide applications with the benefit of easily accessible, systematic relational knowledge but often suffer in practice from their incompleteness and lack of knowledge of new entities and relations. Much work has focused on building or extending them by finding patterns in large unannotated text corpora. In contrast, here we mainly aim to complete a knowledge base by predicting additional true relationships between entities, based on generalizations that can be discerned in the given knowledge base. We introduce a neural tensor network (NTN) model which predicts new relationship entries that can be added to the database. This model can be improved by initializing entity representations with word vectors learned in an unsupervised fashion from text, and when doing this, existing relations can even be queried for entities that were not present in the database. Our model generalizes and outperforms existing models for this problem, and can classify unseen relationships in WordNet with an accuracy of 75.8%.
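
    A short sketch of what a neural tensor network relation scorer of this kind looks like: each relation scores an entity pair through a bilinear tensor term plus an ordinary linear layer, read out through a nonlinearity. The dimensions and random parameters below are illustrative assumptions, not the trained model:

        # Neural tensor network scoring sketch (illustrative only).
        import numpy as np

        d, k = 4, 3  # entity embedding size and number of tensor slices (assumed)
        rng = np.random.default_rng(0)
        W = rng.normal(size=(k, d, d))   # bilinear tensor for one relation
        V = rng.normal(size=(k, 2 * d))  # standard linear weights
        b = rng.normal(size=k)           # bias
        u = rng.normal(size=k)           # output weights

        def ntn_score(e1, e2):
            bilinear = np.array([e1 @ W[i] @ e2 for i in range(k)])
            hidden = np.tanh(bilinear + V @ np.concatenate([e1, e2]) + b)
            return float(u @ hidden)  # higher score = more plausible triple

        e_dog, e_mammal = rng.normal(size=d), rng.normal(size=d)
        print(ntn_score(e_dog, e_mammal))

    Initializing the entity vectors with unsupervised word vectors, as the abstract suggests, is what lets such a model score relations for entities that never appeared in the knowledge base.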

    Learning Transformer Programs

    Recent research in mechanistic interpretability has attempted to reverse-engineer Transformer models by carefully inspecting network weights and activations. However, these approaches require considerable manual effort and still fall short of providing complete, faithful descriptions of the underlying algorithms. In this work, we introduce a procedure for training Transformers that are mechanistically interpretable by design. We build on RASP [Weiss et al., 2021], a programming language that can be compiled into Transformer weights. Instead of compiling human-written programs into Transformers, we design a modified Transformer that can be trained using gradient-based optimization and then be automatically converted into a discrete, human-readable program. We refer to these models as Transformer Programs. To validate our approach, we learn Transformer Programs for a variety of problems, including an in-context learning task, a suite of algorithmic problems (e.g. sorting, recognizing Dyck languages), and NLP tasks including named entity recognition and text classification. The Transformer Programs can automatically find reasonable solutions, performing on par with standard Transformers of comparable size; and, more importantly, they are easy to interpret. To demonstrate these advantages, we convert Transformers into Python programs and use off-the-shelf code analysis tools to debug model errors and identify the "circuits" used to solve different sub-problems. We hope that Transformer Programs open a new path toward the goal of intrinsically interpretable machine learning. Comment: Our code, and example Transformer Programs, are available at https://github.com/princeton-nlp/TransformerProgram
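
    To make the idea of a discrete, human-readable program concrete, here is a toy rendering of RASP-style primitives: a Boolean "select" predicate plays the role of an attention pattern and "aggregate" copies values from the selected positions. This is an illustrative sketch of the programming model, not code from the paper or its repository:

        # Toy RASP-style select/aggregate sketch (illustrative only).
        def select(keys, queries, predicate):
            # Boolean attention matrix: row q marks which keys position q attends to.
            return [[predicate(k, q) for k in keys] for q in queries]

        def aggregate(attn, values, default=None):
            # Copy the value at the (single) selected key for each query position.
            out = []
            for row in attn:
                selected = [v for v, on in zip(values, row) if on]
                out.append(selected[0] if selected else default)
            return out

        def reverse(tokens):
            n = len(tokens)
            positions = list(range(n))
            # Each position i attends to position n - 1 - i and copies that token.
            attn = select(positions, positions, lambda k, q: k == n - 1 - q)
            return aggregate(attn, tokens)

        print(reverse(list("hello")))  # ['o', 'l', 'l', 'e', 'h']

    The paper's learned programs are extracted as Python in a broadly similar discrete style, which is what makes off-the-shelf code analysis tools applicable to them.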

    Structured Pruning Learns Compact and Accurate Models

    The growing size of neural language models has led to increased attention in model compression. The two predominant approaches are pruning, which gradually removes weights from a pre-trained model, and distillation, which trains a smaller compact model to match a larger one. Pruning methods can significantly reduce the model size but hardly achieve speedups as large as distillation does. Distillation methods, however, require large amounts of unlabeled data and are expensive to train. In this work, we propose a task-specific structured pruning method, CoFi (Coarse- and Fine-grained Pruning), which delivers highly parallelizable subnetworks and matches the distillation methods in both accuracy and latency, without resorting to any unlabeled data. Our key insight is to jointly prune coarse-grained (e.g., layers) and fine-grained (e.g., heads and hidden units) modules, which controls the pruning decision of each parameter with masks of different granularity. We also devise a layerwise distillation strategy to transfer knowledge from unpruned to pruned models during optimization. Our experiments on GLUE and SQuAD datasets show that CoFi yields models with over 10x speedups with a small accuracy drop, showing its effectiveness and efficiency compared to previous pruning and distillation approaches. Comment: Accepted to ACL 2022; the code and models are available at https://github.com/princeton-nlp/CoFiPrunin
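
    The mask composition behind this insight can be illustrated in a few lines: a coarse mask covering a whole sub-layer multiplies the fine masks covering its heads or hidden units, so a parameter is kept only if every mask above it stays on. The shapes and hand-set values below are assumptions for illustration; CoFi learns these masks jointly with its distillation objective:

        # Coarse- and fine-grained mask composition sketch (illustrative only).
        import numpy as np

        n_heads, ffn_dim = 12, 3072        # typical base-model sizes (assumed)

        z_mha = 1.0                        # coarse: keep this attention sub-layer
        z_heads = np.ones(n_heads)         # fine: one mask per attention head
        z_heads[8:] = 0.0                  # e.g. heads 8-11 pruned

        z_ffn = 0.0                        # coarse: drop this FFN sub-layer entirely
        z_int = np.ones(ffn_dim)           # fine: one mask per FFN hidden unit

        head_mask = z_mha * z_heads        # effective per-head mask
        ffn_mask = z_ffn * z_int           # all zeros: the whole FFN layer disappears

        print(f"heads kept: {head_mask.sum():.0f}/{n_heads}, "
              f"FFN units kept: {ffn_mask.sum():.0f}/{ffn_dim}")

    Pruning whole sub-layers and layers in this way, rather than scattered individual weights, is what keeps the resulting subnetworks dense and highly parallelizable.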

    Emotions, crisis, and institutions: Explaining compliance with COVID-19 regulations

    Amid the COVID-19 pandemic, citizens' compliance with government preventive measures was one of the top policy priorities for governments worldwide. This study engages with socio-legal and psychological theories on compliance and proposes an analytical framework to explore the role of different psychological factors in individual-level compliance during global health crises. Using the results of three national surveys, we argue that various negative emotional states, perceptions of the ongoing crisis, and perceptions of the institutional settings are major factors influencing individual compliance across countries. Most importantly, while increased panic, anxiety, and sadness lead to higher compliance, rising anger, loneliness, and impatience decrease compliance levels. Notably, perceptions of the COVID-19 crisis, especially health concerns and a worsening financial situation, tend to elicit anger among citizens across countries, thereby further hampering their compliance with pandemic regulations. Furthermore, perceptions of public institutions also influence individual compliance. Overall, in order to ensure compliance, we suggest that policymakers and those implementing government measures take individual psychological factors into account both within and beyond the public crisis context.

    The Acquisition of Chinese as a third language by Japanese L1/English L2 speakers

    The role of language transfer in second language acquisition has long been the focus in the study of cross-linguistic influence. Much has been written about how the learner's existing linguistic knowledge influences the course of second language development. In the last decade, however, there have been a considerable number of books and journal articles dealing with a relatively under-explored field: the role of language transfer during third language acquisition. The question arises as to how the learner's three languages interact with each other during the language learning process.