    Named Entity Recognition as Dependency Parsing

    Named Entity Recognition (NER) is a fundamental task in Natural Language Processing, concerned with identifying spans of text that refer to entities. NER research often focuses on flat entities only (flat NER), ignoring the fact that entity references can be nested, as in [Bank of [China]] (Finkel and Manning, 2009). In this paper, we use ideas from graph-based dependency parsing to provide our model with a global view on the input via a biaffine model (Dozat and Manning, 2017). The biaffine model scores pairs of start and end tokens in a sentence, which we use to explore all spans, so that the model is able to predict named entities accurately. We show that the model works well for both nested and flat NER: evaluating on 8 corpora, it achieves SoTA performance on all of them, with accuracy gains of up to 2.2 percentage points.
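
    A minimal sketch of the biaffine span-scoring idea described above, assuming PyTorch; the class and parameter names are illustrative, not the authors' code:

```python
import torch
import torch.nn as nn

class BiaffineSpanScorer(nn.Module):
    """Scores every (start, end) token pair with a biaffine product,
    in the spirit of Dozat and Manning (2017)."""
    def __init__(self, hidden_dim: int, num_labels: int):
        super().__init__()
        # Separate projections for the "start" and "end" roles of each token.
        self.start_mlp = nn.Linear(hidden_dim, hidden_dim)
        self.end_mlp = nn.Linear(hidden_dim, hidden_dim)
        # Bilinear term (one matrix per label) plus a linear term over
        # the concatenated endpoint representations.
        self.bilinear = nn.Parameter(torch.empty(num_labels, hidden_dim, hidden_dim))
        self.linear = nn.Linear(2 * hidden_dim, num_labels)
        nn.init.xavier_uniform_(self.bilinear)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (seq_len, hidden_dim) contextual token encodings.
        s = torch.relu(self.start_mlp(h))  # start-role vectors, (seq, dim)
        e = torch.relu(self.end_mlp(h))    # end-role vectors,   (seq, dim)
        # Bilinear score for every (start, end, label) triple.
        bilin = torch.einsum("id,ldm,jm->ijl", s, self.bilinear, e)
        # Additive linear term over concatenated endpoint vectors.
        seq = h.size(0)
        pairs = torch.cat([s.unsqueeze(1).expand(seq, seq, -1),
                           e.unsqueeze(0).expand(seq, seq, -1)], dim=-1)
        return bilin + self.linear(pairs)  # (seq, seq, num_labels)
```

    Decoding then amounts to ranking the scored spans and keeping the highest-scoring non-conflicting ones, which is what allows the same scorer to handle both flat and nested entities.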

    An attentive neural architecture for joint segmentation and parsing and its application to real estate ads

    In processing human-produced text with natural language processing (NLP) techniques, two fundamental subtasks arise: (i) segmentation of the plain text into meaningful subunits (e.g., entities), and (ii) dependency parsing, to establish relations between the subunits. In this paper, we develop a relatively simple and effective neural joint model that performs both segmentation and dependency parsing together, instead of one after the other as in most state-of-the-art works. We focus in particular on the real estate ad setting, aiming to convert an ad into a structured description, which we name a property tree, comprising the tasks of (1) identifying important entities of a property (e.g., rooms) from classifieds and (2) structuring them into a tree format. The proposed joint model tackles the two tasks simultaneously and constructs the property tree by (i) avoiding the error propagation that would arise from executing the subtasks one after the other in a pipelined fashion, and (ii) exploiting the interactions between the subtasks. To this end, we perform an extensive comparative study of the pipeline methods and the newly proposed joint model, reporting an improvement of over three percentage points in the overall edge F1 score of the property tree. We also propose attention methods to encourage our model to focus on salient tokens during the construction of the property tree, experimentally demonstrating the usefulness of attentive neural architectures for the proposed joint model and showcasing a further improvement of two percentage points in edge F1 score for our application. (Comment: Preprint; accepted for publication in Expert Systems with Applications.)
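
    The edge F1 metric reported above can be stated concretely; a small illustrative helper (not the paper's evaluation script), assuming predicted and gold property trees are represented as sets of (parent, child, label) edge triples:

```python
def edge_f1(pred_edges: set, gold_edges: set) -> float:
    """F1 over (parent, child, label) edge triples of a property tree.
    An edge counts as correct only if it appears verbatim in the gold set."""
    if not pred_edges or not gold_edges:
        return 0.0
    correct = len(pred_edges & gold_edges)
    precision = correct / len(pred_edges)
    recall = correct / len(gold_edges)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```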

    Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT

    Pretrained contextual representation models (Peters et al., 2018; Devlin et al., 2018) have pushed forward the state-of-the-art on many NLP tasks. A new release of BERT (Devlin, 2018) includes a model simultaneously pretrained on 104 languages with impressive performance for zero-shot cross-lingual transfer on a natural language inference task. This paper explores the broader cross-lingual potential of mBERT (multilingual BERT) as a zero-shot language transfer model on 5 NLP tasks covering a total of 39 languages from various language families: NLI, document classification, NER, POS tagging, and dependency parsing. We compare mBERT with the best published methods for zero-shot cross-lingual transfer and find mBERT competitive on each task. Additionally, we investigate the most effective strategy for utilizing mBERT in this manner, determine to what extent mBERT generalizes away from language-specific features, and measure factors that influence cross-lingual transfer. (Comment: EMNLP 2019 camera-ready.)
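
    A hedged sketch of the zero-shot setup described above (fine-tune mBERT on English task data, then evaluate directly on other languages), using the Hugging Face transformers API; the fine-tuned checkpoint path is an assumption, and this is not the authors' code:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: mBERT has already been fine-tuned on English NLI and saved locally.
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained("./mbert-finetuned-en-nli")
model.eval()

# Zero-shot transfer: score a premise/hypothesis pair in a language
# never seen during fine-tuning (here, Spanish).
premise = "La conferencia fue cancelada por la tormenta."
hypothesis = "El evento no tuvo lugar."
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # probabilities over the NLI label set
```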

    The Comparative Evaluation of Dependency Parsers in Parsing Estonian

    Natural Language Processing (NLP) technology has been developing steadily and has seen vast improvements over the last couple of decades. One key task in NLP is dependency parsing, which is often a prerequisite for many other tasks such as machine translation, Named Entity Recognition (NER), and so on. The goal of dependency parsing is to perform a syntactic analysis of a sentence and extract the grammatical relations among its words. Most research on dependency parsing has focused on parsing English. This thesis evaluates and compares the performance of several state-of-the-art dependency parsers on Estonian. The dependency parsers chosen for evaluation are: MaltParser, spaCy, the Stanford neural network dependency parser (nndep), SyntaxNet, and UDPipe. The comparison is based mainly on Labelled Attachment Score (LAS), Unlabelled Attachment Score (UAS), and Label Accuracy (LA). New models for Estonian were trained for the spaCy, Stanford nndep, and UDPipe parsers, while existing pretrained models were used for MaltParser and SyntaxNet in the experiments.
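
    For reference, a minimal sketch of the three evaluation metrics (an illustrative helper, not tied to any of the parsers evaluated), assuming each token is represented by its predicted and gold (head index, dependency label) pair:

```python
def attachment_scores(gold, pred):
    """Compute UAS, LAS and LA over parallel per-token (head, label) lists.
    UAS: head correct; LAS: head and label correct; LA: label correct."""
    assert len(gold) == len(pred) and gold, "need non-empty parallel lists"
    n = len(gold)
    uas = sum(g[0] == p[0] for g, p in zip(gold, pred)) / n
    las = sum(g == p for g, p in zip(gold, pred)) / n
    la = sum(g[1] == p[1] for g, p in zip(gold, pred)) / n
    return uas, las, la
```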