
    Efficient Lagrangian relaxation algorithms for exact inference in natural language tasks

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 95-99).

    For many tasks in natural language processing, finding the best solution requires a search over a large set of possible structures. Solving these combinatorial search problems exactly can be inefficient, and so researchers often use approximate techniques at the cost of model accuracy. In this thesis, we turn to Lagrangian relaxation as an alternative to approximate inference in natural language tasks. We demonstrate that Lagrangian relaxation algorithms provide efficient solutions while still maintaining formal guarantees. The approach leads to inference algorithms with the following properties: (1) the resulting algorithms are simple and efficient, building on standard combinatorial algorithms for relaxed problems; (2) the algorithms provably solve a linear programming (LP) relaxation of the original inference problem; (3) empirically, the relaxation often leads to an exact solution to the original problem. We develop Lagrangian relaxation algorithms for several important tasks in natural language processing, including higher-order non-projective dependency parsing, syntactic machine translation, integrated constituency and dependency parsing, and part-of-speech tagging with inter-sentence constraints. For each of these tasks, we show that the Lagrangian relaxation algorithms are often significantly faster than exact methods while finding the exact solution with a certificate of optimality in the vast majority of examples.

    By Alexander M. Rush. S.M.
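
    The core technique can be sketched as follows (a generic formulation of Lagrangian relaxation in standard notation; the symbols below are illustrative and not taken from the thesis itself):

```latex
% Generic Lagrangian relaxation of a constrained combinatorial problem.
% Notation is illustrative (an assumption), not the thesis's own.
\begin{align*}
  \text{Original problem:} \quad & \max_{y \in \mathcal{Y}} f(y)
      \quad \text{s.t.} \quad Ay = b \\
  \text{Lagrangian:}       \quad & L(u, y) = f(y) + u^{\top}(Ay - b) \\
  \text{Dual function:}    \quad & L(u) = \max_{y \in \mathcal{Y}} L(u, y)
      \quad \text{(computable with a standard combinatorial algorithm)} \\
  \text{Dual problem:}     \quad & \min_{u} L(u)
      \quad \text{(optimized by subgradient steps on } u\text{)}
\end{align*}
% If the maximizer y* at the current u satisfies Ay* = b, then y* is optimal
% for the original problem: the certificate of optimality that the thesis
% observes in the vast majority of examples.
```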

    Sentence-level grammatical error identification as sequence-to-sequence correction

    We demonstrate that an attention-based encoder-decoder model can be used for sentence-level grammatical error identification for the Automated Evaluation of Scientific Writing (AESW) Shared Task 2016. Attention-based encoder-decoder models can also generate corrections, in addition to identifying errors, which is of interest for certain end-user applications. We show that a character-based encoder-decoder model is particularly effective, outperforming other results on the AESW Shared Task on its own and showing gains over a word-based counterpart. Our final model, a combination of three character-based encoder-decoder models, one word-based encoder-decoder model, and a sentence-level CNN, is the highest-performing system on the AESW 2016 binary prediction Shared Task.

    Engineering and Applied Science
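
    As a concrete illustration of how a correction model doubles as an identifier, one plausible decision rule (an assumption for illustration; the published system's exact rule and ensemble combination are not reproduced here) is to flag a sentence whenever the model's predicted correction differs from the input:

```python
def identify_error(sentence: str, correct) -> bool:
    """Flag a sentence as erroneous if the predicted correction differs
    from the input. `correct` stands in for a trained encoder-decoder
    correction model (hypothetical interface, not the AESW system)."""
    normalize = lambda s: " ".join(s.split())
    return normalize(correct(sentence)) != normalize(sentence)

# Toy usage with a dummy "model" that knows one fix:
fixes = {"He go to school.": "He goes to school."}
dummy_model = lambda s: fixes.get(s, s)
print(identify_error("He go to school.", dummy_model))     # True
print(identify_error("She goes to school.", dummy_model))  # False
```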

    Frame-semantic parsing

    Frame semantics is a linguistic theory that has been instantiated for English in the FrameNet lexicon. We solve the problem of frame-semantic parsing using a two-stage statistical model that takes lexical targets (i.e., content words and phrases) in their sentential contexts and predicts frame-semantic structures. Given a target in context, the first stage disambiguates it to a semantic frame. This model uses latent variables and semi-supervised learning to improve frame disambiguation for targets unseen at training time. The second stage finds the target's locally expressed semantic arguments. At inference time, a fast exact dual decomposition algorithm collectively predicts all the arguments of a frame at once in order to respect declaratively stated linguistic constraints, resulting in qualitatively better structures than naïve local predictors. Both components are feature-based and discriminatively trained on a small set of annotated frame-semantic parses. On the SemEval 2007 benchmark data set, the approach, along with a heuristic identifier of frame-evoking targets, outperforms the prior state of the art by significant margins. Additionally, we present experiments on the much larger FrameNet 1.5 data set. We have released our frame-semantic parser as open-source software.

    United States. Defense Advanced Research Projects Agency (DARPA grant NBCH-1080004); National Science Foundation (U.S.) (NSF grant IIS-0836431); National Science Foundation (U.S.) (NSF grant IIS-0915187); Qatar National Research Fund (NPRP 08-485-1-083)
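
    A minimal sketch of the two-stage pipeline (the scorers, inventories, and per-role independence below are simplifying assumptions for illustration; the released parser uses discriminatively trained feature-based models and predicts arguments jointly via dual decomposition):

```python
def parse_target(target, context, frames, roles, score_frame, score_arg, spans):
    """frames: candidate frame names; roles: dict frame -> role names;
    spans: candidate argument spans in the sentence."""
    # Stage 1: disambiguate the target to a single semantic frame.
    frame = max(frames, key=lambda f: score_frame(target, context, f))
    # Stage 2: choose the best span (or None) for each of the frame's roles.
    # (The real second stage decodes all roles at once under declarative
    # constraints, e.g. non-overlapping spans, using dual decomposition.)
    args = {r: max(spans + [None],
                   key=lambda s: score_arg(target, context, frame, r, s))
            for r in roles[frame]}
    return frame, args
```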

    Roadmap on Photovoltaic Absorber Materials for Sustainable Energy Conversion

    Photovoltaics (PVs) are a critical technology for curbing growing levels of anthropogenic greenhouse gas emissions and meeting increases in future demand for low-carbon electricity. To fulfil ambitions for net-zero carbon dioxide equivalent (CO2eq) emissions worldwide, the global cumulative capacity of solar PVs must increase by an order of magnitude, from 0.9 TWp in 2021 to 8.5 TWp by 2050, according to the International Renewable Energy Agency, and this is considered to be a highly conservative estimate. In 2020, the Henry Royce Institute brought together the UK PV community to discuss the critical technological and infrastructure challenges that must be overcome to accelerate PV deployment. Herein, we examine the key developments in the global community, especially the progress made in the field since this earlier roadmap, bringing together experts, primarily from the UK, across the breadth of the photovoltaics community. The focus is both on the challenges in improving the efficiency, stability, and levelized cost of electricity of current technologies for utility-scale PVs, and on the fundamental questions in novel technologies that can have a significant impact on emerging markets, such as indoor PVs, space PVs, and agrivoltaics. We discuss challenges in advanced metrology and computational tools, as well as the growing synergies between PVs and solar fuels, and offer a perspective on the environmental sustainability of the PV industry. Through this roadmap, we emphasize promising pathways forward in both the short and long term, and highlight opportunities for communities working on technologies across a range of maturity levels to learn from each other.

    Comment: 160 pages, 21 figures

    Mortality and pulmonary complications in patients undergoing surgery with perioperative SARS-CoV-2 infection: an international cohort study

    Background: The impact of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) on postoperative recovery needs to be understood to inform clinical decision making during and after the COVID-19 pandemic. This study reports 30-day mortality and pulmonary complication rates in patients with perioperative SARS-CoV-2 infection.

    Methods: This international, multicentre, cohort study at 235 hospitals in 24 countries included all patients undergoing surgery who had SARS-CoV-2 infection confirmed within 7 days before or 30 days after surgery. The primary outcome measure was 30-day postoperative mortality and was assessed in all enrolled patients. The main secondary outcome measure was pulmonary complications, defined as pneumonia, acute respiratory distress syndrome, or unexpected postoperative ventilation.

    Findings: This analysis includes 1128 patients who had surgery between Jan 1 and March 31, 2020, of whom 835 (74·0%) had emergency surgery and 280 (24·8%) had elective surgery. SARS-CoV-2 infection was confirmed preoperatively in 294 (26·1%) patients. 30-day mortality was 23·8% (268 of 1128). Pulmonary complications occurred in 577 (51·2%) of 1128 patients; 30-day mortality in these patients was 38·0% (219 of 577), accounting for 81·7% (219 of 268) of all deaths. In adjusted analyses, 30-day mortality was associated with male sex (odds ratio 1·75 [95% CI 1·28–2·40], p<0·0001), age 70 years or older versus younger than 70 years (2·30 [1·65–3·22], p<0·0001), American Society of Anesthesiologists grades 3–5 versus grades 1–2 (2·35 [1·57–3·53], p<0·0001), malignant versus benign or obstetric diagnosis (1·55 [1·01–2·39], p=0·046), emergency versus elective surgery (1·67 [1·06–2·63], p=0·026), and major versus minor surgery (1·52 [1·01–2·31], p=0·047).

    Interpretation: Postoperative pulmonary complications occur in half of patients with perioperative SARS-CoV-2 infection and are associated with high mortality. Thresholds for surgery during the COVID-19 pandemic should be higher than during normal practice, particularly in men aged 70 years and older. Consideration should be given for postponing non-urgent procedures and promoting non-operative treatment to delay or avoid the need for surgery.

    Funding: National Institute for Health Research (NIHR), Association of Coloproctology of Great Britain and Ireland, Bowel and Cancer Research, Bowel Disease Research Foundation, Association of Upper Gastrointestinal Surgeons, British Association of Surgical Oncology, British Gynaecological Cancer Society, European Society of Coloproctology, NIHR Academy, Sarcoma UK, Vascular Society for Great Britain and Ireland, and Yorkshire Cancer Research.

    A Tutorial on Dual Decomposition and Lagrangian Relaxation for Inference in Natural Language Processing

    Dual decomposition, and more generally Lagrangian relaxation, is a classical method for combinatorial optimization; it has recently been applied to several inference problems in natural language processing (NLP). This tutorial gives an overview of the technique. We describe example algorithms, the formal guarantees of the method, and practical issues in implementing the algorithms. While our examples are predominantly drawn from the NLP literature, the material should be of general relevance to inference problems in machine learning. A central theme of this tutorial is that Lagrangian relaxation is naturally applied in conjunction with a broad class of combinatorial algorithms, allowing inference in models that go significantly beyond previous work on Lagrangian relaxation for inference in graphical models.

    United States. Defense Advanced Research Projects Agency (Machine Reading Program) (Contract FA8750-09-C-0181)
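
    The tutorial's central pattern, two models forced to agree via subgradient updates on Lagrange multipliers, can be demonstrated end to end on a toy problem (position-wise scores are an assumption so that each oracle is a trivial argmax; in real NLP applications the oracles are dynamic programs such as Viterbi or CKY):

```python
import numpy as np

# Dual decomposition for max f(y) + g(z) subject to y = z, where f and g
# score tag sequences position-wise. Sizes and step size are illustrative.
rng = np.random.default_rng(0)
n, T = 6, 3                        # 6 positions, 3 possible tags
theta_f = rng.normal(size=(n, T))  # model f's per-position tag scores
theta_g = rng.normal(size=(n, T))  # model g's per-position tag scores

u = np.zeros((n, T))               # multipliers on constraints y(i,t) = z(i,t)
for k in range(1, 201):
    y = (theta_f + u).argmax(axis=1)  # oracle for max_y f(y) + u . y
    z = (theta_g - u).argmax(axis=1)  # oracle for max_z g(z) - u . z
    if (y == z).all():                # agreement: certificate of optimality
        print(f"converged at iteration {k}: tags {y}")
        break
    grad = np.zeros((n, T))           # subgradient of the dual is y - z
    grad[np.arange(n), y] += 1        # (in indicator representation)
    grad[np.arange(n), z] -= 1
    u -= (1.0 / k) * grad             # decreasing-step subgradient update
else:
    print("no agreement within 200 iterations (dual value still converges)")
```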

    Lagrangian relaxation for natural language decoding

    Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014. Cataloged from PDF version of thesis. Includes bibliographical references (pages 233-241).

    The major success story of natural language processing over the last decade has been the development of high-accuracy statistical methods for a wide range of language applications. The availability of large textual data sets has made it possible to employ increasingly sophisticated statistical models to improve performance on language tasks. However, these more complex models often come at the cost of expanding the search space of the underlying decoding problem. In this dissertation, we focus on how to handle this challenge. In particular, we study the question of decoding in large-scale, statistical natural language systems. We aim to develop a formal understanding of the decoding problems behind these tasks and present algorithms that extend beyond common heuristic approaches to yield optimality guarantees. The main tool we utilize, Lagrangian relaxation, is a classical idea from the field of combinatorial optimization. We begin the dissertation with a general background introduction to the method and describe common models in natural language processing. The body of the dissertation consists of six chapters. The first three chapters discuss relaxation methods for core natural language tasks: (1) the classical problem of parsing and part-of-speech tagging; (2) language model composition in syntactic machine translation; (3) efficient algorithms for non-projective dependency parsing. The second set of chapters discusses methods that combine relaxation with other combinatorial techniques: (1) an exact beam-search algorithm for machine translation; (2) a parsing relaxation used in a coarse-to-fine cascade. At the core of each chapter is a relaxation of a difficult combinatorial problem and the implementation of this algorithm in a large-scale system.

    By Alexander M. Rush. Ph. D.

    On dual decomposition and linear programming relaxations for natural language processing

    This paper introduces dual decomposition as a framework for deriving inference algorithms for NLP problems. The approach relies on standard dynamic-programming algorithms as oracle solvers for sub-problems, together with a simple method for forcing agreement between the different oracles. The approach provably solves a linear programming (LP) relaxation of the global inference problem. It leads to algorithms that are simple, in that they use existing decoding algorithms; efficient, in that they avoid exact algorithms for the full model; and often exact, in that empirically they often recover the correct solution in spite of using an LP relaxation. We give experimental results on two problems: (1) the combination of two lexicalized parsing models; and (2) the combination of a lexicalized parsing model and a trigram part-of-speech tagger.

    United States. Defense Advanced Research Projects Agency. Machine Reading Program; United States. Air Force Research Laboratory (Prime contract no. FA8750-09-C-0181); United States. Defense Advanced Research Projects Agency. GALE Program (Contract No. HR0011-06-C-0022); Google (Firm)
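
    The second experiment's agreement construction can be written down explicitly (standard notation from the dual decomposition literature; the indicator indexing is illustrative):

```latex
% Combining a lexicalized parser f over parse trees y with a trigram tagger g
% over tag sequences z by requiring them to agree on every word's tag:
\begin{align*}
  \max_{y \in \mathcal{Y},\; z \in \mathcal{Z}} \; & f(y) + g(z) \\
  \text{s.t.} \quad & y(i, t) = z(i, t)
      \quad \text{for all words } i \text{ and tags } t,
\end{align*}
% where y(i,t) = 1 if the parse tree assigns tag t to word i, and z(i,t) = 1
% if the tagger does. Dualizing these constraints with multipliers u(i,t)
% splits decoding into two independent problems (parse and tag), re-solved
% with subgradient updates on u until the tag assignments agree.
```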