
    Entity Projection via Machine Translation for Cross-Lingual NER

    Although over 100 languages are supported by strong off-the-shelf machine translation systems, only a subset of them possess large annotated corpora for named entity recognition. Motivated by this fact, we leverage machine translation to improve annotation-projection approaches to cross-lingual named entity recognition. We propose a system that improves over prior entity-projection methods by: (a) leveraging machine translation systems twice: first for translating sentences and subsequently for translating entities; (b) matching entities based on orthographic and phonetic similarity; and (c) identifying matches based on distributional statistics derived from the dataset. Our approach improves upon current state-of-the-art methods for cross-lingual named entity recognition on 5 diverse languages by an average of 4.1 points. Further, our method achieves state-of-the-art F_1 scores for Armenian, outperforming even a monolingual model trained on Armenian source data.
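
    A minimal sketch of the entity-projection idea described in the abstract, assuming machine translation has already produced the target-language sentence and the separately translated entity strings: each translated entity is aligned to the best-matching token span by orthographic similarity alone (the paper additionally uses phonetic similarity and distributional statistics), and the function names and threshold below are illustrative rather than the authors' implementation.

    # Align separately translated entities to spans of the translated sentence
    # by character-level similarity; a stand-in for the paper's matcher.
    from difflib import SequenceMatcher

    def orthographic_similarity(a: str, b: str) -> float:
        """Character-level similarity in [0, 1]."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def project_entities(translated_tokens, translated_entities, threshold=0.7):
        """Map each translated entity string to its best-matching token span."""
        projections = []
        for entity, label in translated_entities:
            n = max(1, len(entity.split()))  # candidate spans of the entity's length
            best_score, best_span = 0.0, None
            for start in range(len(translated_tokens) - n + 1):
                span = " ".join(translated_tokens[start:start + n])
                score = orthographic_similarity(entity, span)
                if score > best_score:
                    best_score, best_span = score, (start, start + n, label)
            if best_span is not None and best_score >= threshold:
                projections.append(best_span)
        return projections

    # Usage: sentence and entities are assumed to have been translated separately.
    tokens = "Angela Merkel besuchte Paris am Montag".split()
    entities = [("Angela Merkel", "PER"), ("Paris", "LOC")]
    print(project_entities(tokens, entities))  # [(0, 2, 'PER'), (3, 4, 'LOC')]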

    Cross-lingual Model Transfer Using Feature Representation Projection

    We propose a novel approach to cross-lingual model transfer based on feature representation projection. First, a compact feature representation relevant for the task in question is constructed for each language independently, and then the mapping between the two representations is determined using parallel data. The target instance can then be mapped into the source-side feature representation using the derived mapping and handled directly by the source-side model. This approach displays competitive performance on model transfer for semantic role labeling when compared to direct model transfer and annotation projection, and suggests interesting directions for further research.
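
    A minimal sketch of the projection step, assuming the per-language feature representations are available as NumPy arrays and the cross-lingual mapping is taken to be linear and fit by least squares on the parallel data; the paper does not prescribe this exact estimator, and all variable names here are illustrative.

    # Learn a target-to-source mapping from parallel instances, then project a
    # target-language instance into the source feature space for the source model.
    import numpy as np

    rng = np.random.default_rng(0)

    # Parallel data: n aligned instances with task-specific feature vectors built
    # independently per language (d_t target dims, d_s source dims).
    n, d_t, d_s = 500, 40, 30
    X_target_parallel = rng.normal(size=(n, d_t))
    true_map = rng.normal(size=(d_t, d_s))
    X_source_parallel = X_target_parallel @ true_map + 0.01 * rng.normal(size=(n, d_s))

    # Fit the linear mapping W by ordinary least squares.
    W, *_ = np.linalg.lstsq(X_target_parallel, X_source_parallel, rcond=None)

    # A target-language instance is projected into the source-side representation
    # and can then be handled directly by the source-side model.
    x_target = rng.normal(size=(1, d_t))
    x_projected = x_target @ W  # shape (1, d_s)
    print(x_projected.shape)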