
    Unsupervised Relation Mapping: Going from Text to Schema

    The schema of a database models the knowledge content of the database. However, database users often have natural language text documents, i.e., relatively unstructured data, with information related to the database. Understanding the semantics of such text documents entails identifying the entities in a document and the relations (as specified in the schema) that connect those entities. This disclosure describes techniques to find the correct relationship in the schema for a given input pair of entities. Per the techniques, two inputs are extracted from the documents: pairs of the form (knowledge graph entity, input string), and a set of target attributes, i.e., binary relations between entities and other entities or values that capture particular domain semantics. A list of attributes is returned, ranked by the likelihood that each attribute captures the semantics of the input string regarded as an attribute of the input knowledge graph entity.
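    As a concrete illustration of the ranking step, the sketch below scores each candidate schema attribute against an (entity, input string) pair and returns the attributes in ranked order. The TF-IDF cosine similarity used as the likelihood score is an illustrative stand-in (the disclosure does not specify a scoring model), and the names Attribute, rank_attributes, and the sample schema are hypothetical:

        # Illustrative sketch only: TF-IDF similarity stands in for the
        # (unspecified) likelihood model of the disclosure.
        from dataclasses import dataclass

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        @dataclass
        class Attribute:
            name: str          # binary relation in the schema, e.g. "authored_by"
            description: str   # human-readable gloss of the relation's semantics

        def rank_attributes(entity, input_string, attributes):
            """Rank candidate attributes by how well each captures the semantics
            of `input_string` regarded as an attribute of `entity`."""
            query = f"{entity} {input_string}"
            corpus = [f"{a.name} {a.description}" for a in attributes]
            matrix = TfidfVectorizer().fit_transform(corpus + [query])
            # Cosine similarity of the query row against every attribute row.
            scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
            return sorted(zip(attributes, scores), key=lambda p: p[1], reverse=True)

        # Example: map the input string "wrote" onto a small publications schema.
        schema = [
            Attribute("authored_by", "person who wrote the work"),
            Attribute("published_in", "venue where the work appeared"),
        ]
        for attr, score in rank_attributes("Publication", "wrote", schema):
            print(f"{attr.name}: {score:.3f}")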

    A Shared Task of a New, Collaborative Type to Foster Reproducibility: A First Exercise in the Area of Language Science and Technology with REPROLANG 2020

    In this paper, we introduce a new type of shared task, collaborative rather than competitive, designed to support and foster the reproduction of research results. We also describe the first event running such a novel challenge, present the results obtained, discuss the lessons learned, and ponder on future undertakings.

    Extracting Multiple-Relations in One-Pass with Pre-Trained Transformers

    Most approaches to extracting multiple relations from a paragraph require multiple passes over the paragraph. In practice, multiple passes are computationally expensive, which makes it difficult to scale to longer paragraphs and larger text corpora. In this work, we focus on the task of multiple relation extraction by encoding the paragraph only once (one-pass). We build our solution on pre-trained self-attentive (Transformer) models: we first add a structured prediction layer to handle extraction between multiple entity pairs, then enhance the paragraph embedding with an entity-aware attention technique to capture the multiple relations associated with each entity. We show that our approach is not only scalable but also achieves state-of-the-art results on the standard ACE 2005 benchmark.
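    As a sketch of the one-pass idea, the snippet below encodes a paragraph exactly once with a pre-trained Transformer and then scores relations for all entity pairs from the cached token embeddings. The simple pair classifier here (mean-pooled spans fed to a linear layer) stands in for the paper's structured prediction layer and entity-aware attention, which the abstract names but does not fully specify; extract_relations and its span format are assumptions for illustration:

        # Illustrative sketch only: a linear pair classifier stands in for
        # the paper's structured prediction layer and entity-aware attention.
        import torch
        from transformers import AutoModel, AutoTokenizer

        MODEL_NAME = "bert-base-uncased"
        tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
        encoder = AutoModel.from_pretrained(MODEL_NAME)
        NUM_RELATIONS = 7  # e.g. the ACE 2005 relation types plus "no relation"
        classifier = torch.nn.Linear(2 * encoder.config.hidden_size, NUM_RELATIONS)

        def extract_relations(paragraph, entity_spans):
            """Score every ordered entity pair in a single encoder pass.

            `entity_spans` holds (start, end) token indices into the tokenized
            paragraph; returns a (num_pairs, NUM_RELATIONS) logit tensor."""
            inputs = tokenizer(paragraph, return_tensors="pt")
            with torch.no_grad():
                hidden = encoder(**inputs).last_hidden_state[0]  # one pass only

            # Mean-pool each entity's token embeddings into one vector.
            entities = [hidden[s:e].mean(dim=0) for s, e in entity_spans]

            # Score all ordered pairs from the single cached encoding.
            pairs = [torch.cat([h1, h2])
                     for i, h1 in enumerate(entities)
                     for j, h2 in enumerate(entities) if i != j]
            return classifier(torch.stack(pairs))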