2 research outputs found

    MasakhaNER 2.0: Africa-centric Transfer Learning for Named Entity Recognition

    African languages are spoken by over a billion people but are underrepresented in NLP research and development. The challenges impeding progress include the limited availability of annotated datasets, as well as a lack of understanding of the settings where current methods are effective. In this paper, we make progress towards solutions for these challenges, focusing on the task of named entity recognition (NER). We create the largest human-annotated NER dataset for 20 African languages, and we study the behavior of state-of-the-art cross-lingual transfer methods in an Africa-centric setting, demonstrating that the choice of source language significantly affects performance. We show that choosing the best transfer language improves zero-shot F1 scores by an average of 14 points across 20 languages compared to using English. Our results highlight the need for benchmark datasets and models that cover typologically diverse African languages.
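    In the zero-shot transfer setting this abstract evaluates, a multilingual encoder is fine-tuned for NER on annotated data in a single source language and then applied unchanged to target-language text. Below is a minimal sketch of that setup, assuming the Hugging Face transformers library; the checkpoint (Davlan/xlm-roberta-base-ner-hrl, a public multilingual NER model) and the input sentence are illustrative stand-ins, not the paper's own models or data.

        # Minimal sketch of zero-shot cross-lingual NER transfer, assuming the
        # Hugging Face transformers library. The checkpoint is a real public
        # multilingual NER model used purely for illustration; the input
        # sentence is a placeholder for target-language text.
        from transformers import pipeline

        ner = pipeline(
            "token-classification",
            model="Davlan/xlm-roberta-base-ner-hrl",  # illustrative checkpoint
            aggregation_strategy="simple",  # merge word pieces into entity spans
        )

        # Zero-shot: the model saw no annotated data in the target language.
        sentence = "Example sentence in a target African language."
        for entity in ner(sentence):
            print(entity["entity_group"], entity["word"], round(entity["score"], 3))

    Under this protocol, choosing the best transfer language amounts to repeating the same evaluation with models fine-tuned on different source languages and comparing zero-shot F1 on the target.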

    AfriQA: Cross-lingual Open-Retrieval Question Answering for African Languages

    African languages have far less in-language content available digitally, making it challenging for question-answering systems to satisfy the information needs of users. Cross-lingual open-retrieval question answering (XOR QA) systems, which retrieve answer content from other languages while serving people in their native language, offer a means of filling this gap. To this end, we create AFRIQA, the first cross-lingual QA dataset with a focus on African languages. AFRIQA includes 12,000+ XOR QA examples across 10 African languages. While previous datasets have focused primarily on languages where cross-lingual QA merely augments coverage from the target language, AFRIQA focuses on languages where cross-lingual answer content is the only high-coverage source of answer content. Because of this, we argue that African languages are among the most important and realistic use cases for XOR QA. Our experiments demonstrate the poor performance of automatic translation and multilingual retrieval methods. Overall, AFRIQA proves challenging for state-of-the-art QA models. We hope that the dataset enables the development of more equitable QA technology.
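    The XOR QA setting described here has a natural three-stage shape: translate the question out of the low-resource language, retrieve passages in a high-coverage language, and extract an answer from them. The sketch below illustrates that shape under stated assumptions: it uses the Hugging Face transformers library, both checkpoints are real public models chosen for illustration rather than the AfriQA baselines, and retrieval is mocked with a stub instead of a live BM25 or dense index.

        # Minimal sketch of a translate-retrieve-read XOR QA pipeline.
        # Assumes the Hugging Face transformers library; checkpoints are
        # public models used for illustration, not the AfriQA baselines.
        from transformers import pipeline

        # Step 1: machine-translate the question into English
        # (illustrative Swahili-to-English model).
        translate = pipeline("translation", model="Helsinki-NLP/opus-mt-sw-en")

        # Step 3's reader: extractive QA over a retrieved passage.
        reader = pipeline("question-answering", model="deepset/roberta-base-squad2")

        def retrieve(question: str) -> str:
            # Step 2 placeholder: a real system would query an English corpus
            # with BM25 or a dense retriever and return top-ranked passages.
            return "Passage text returned by an English-language retriever."

        question_sw = "Swali la mfano kwa Kiswahili."  # example Swahili question
        question_en = translate(question_sw)[0]["translation_text"]
        passage = retrieve(question_en)
        answer = reader(question=question_en, context=passage)
        print(answer["answer"], round(answer["score"], 3))

    The abstract's finding that automatic translation and multilingual retrieval perform poorly corresponds to failures in the first two stages of exactly this kind of pipeline.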