9 research outputs found

    What Matters for Neural Cross-Lingual Named Entity Recognition: An Empirical Analysis

    Building named entity recognition (NER) models for languages that do not have much training data is a challenging task. While recent work has shown promising results on cross-lingual transfer from high-resource languages to low-resource languages, it is unclear what knowledge is transferred. In this paper, we first propose a simple and efficient neural architecture for cross-lingual NER. Experiments show that our model achieves performance competitive with the state of the art. We further analyze how transfer learning works for cross-lingual NER along two transferable factors, sequential order and multilingual embeddings, and investigate how model performance varies across entity lengths. Finally, we conduct a case study on a non-Latin-script language, Bengali, which suggests that leveraging knowledge from Wikipedia is a promising direction for further improving model performance. Our results can shed light on future research for improving cross-lingual NER. (Comment: 7 pages)
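The transfer mechanism the abstract alludes to can be illustrated with a toy sketch: if source- and target-language words live in one shared embedding space, a classifier trained only on the source language can label target-language tokens zero-shot. All words and vectors below are made up for illustration; real systems would use aligned multilingual embeddings and a sequence model rather than a nearest-centroid tagger.

```python
import numpy as np

# Hypothetical shared multilingual embedding space: translation pairs
# land near each other (real alignments come from multilingual encoders).
EMB = {
    # English training words
    "Paris":   np.array([0.9, 0.1]),
    "London":  np.array([0.8, 0.2]),
    "walked":  np.array([0.1, 0.9]),
    "quickly": np.array([0.2, 0.8]),
    # Spanish test words, close to their English counterparts
    "Madrid":  np.array([0.85, 0.15]),
    "caminó":  np.array([0.15, 0.85]),
}

def train_centroids(examples):
    """examples: list of (word, label); returns label -> centroid vector."""
    by_label = {}
    for word, label in examples:
        by_label.setdefault(label, []).append(EMB[word])
    return {label: np.mean(vecs, axis=0) for label, vecs in by_label.items()}

def predict(word, centroids):
    """Nearest-centroid classification in the shared embedding space."""
    return min(centroids, key=lambda lab: np.linalg.norm(EMB[word] - centroids[lab]))

# Train on English only...
centroids = train_centroids([
    ("Paris", "ENT"), ("London", "ENT"),
    ("walked", "O"), ("quickly", "O"),
])

# ...then tag Spanish tokens zero-shot.
print(predict("Madrid", centroids))  # ENT
print(predict("caminó", centroids))  # O
```

The point of the sketch is only that labels transfer through geometry: no Spanish supervision is used, yet Spanish tokens are classified correctly because they sit near their English counterparts.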

    An Analytical Review of Methods for Addressing the Small-Dataset Problem in Building Automatic Speech Recognition Systems for Low-Resource Languages

    In this paper, principal methods for dealing with training-data scarcity in the development of automatic speech recognition systems for so-called low-resource languages are discussed. The notion of a low-resource language is examined, and a working definition is formulated on the basis of a number of papers on the topic. The main difficulties associated with applying classical automatic speech recognition approaches to the material of low-resource languages are identified, and the principal methods used to solve these problems are outlined. The paper discusses methods for data augmentation, transfer learning, and collection of new language data in detail. Depending on the specific task, methods for audio and text-data augmentation, transfer learning, and multi-task learning are distinguished. A separate section of the paper is devoted to existing information support, databases, and the basic principles of their organization with regard to low-resource languages. Conclusions are drawn about the suitability of data augmentation and knowledge-transfer methods for languages with minimal information support. When no data are available for a given language and no parent models of structurally similar languages exist, the preferred option is to collect a new database, including via crowdsourcing. Multi-task transfer-learning models prove effective when the researcher has small datasets; if data for a well-resourced language are available, transfer learning within a language pair is preferable. The conclusions of this review will be applied to the low-resource Karelian language, for which the authors have been building an automatic speech recognition system since early 2022.
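Of the methods the review covers, audio augmentation is the easiest to sketch concretely. The snippet below shows two standard waveform-level augmentations, speed perturbation and additive noise at a target signal-to-noise ratio, as a minimal NumPy sketch; the function names and parameters are illustrative, and production ASR pipelines usually rely on dedicated tools (e.g. SoX or torchaudio) instead.

```python
import numpy as np

def speed_perturb(signal, factor):
    """Resample a 1-D waveform by linear interpolation.
    factor > 1 speeds the audio up (shorter output); factor < 1 slows it down."""
    n_out = int(round(len(signal) / factor))
    old_idx = np.linspace(0, len(signal) - 1, num=n_out)
    return np.interp(old_idx, np.arange(len(signal)), signal)

def add_noise(signal, snr_db, rng=None):
    """Mix in white noise scaled to a target signal-to-noise ratio (in dB)."""
    rng = rng or np.random.default_rng(0)
    noise = rng.standard_normal(len(signal))
    sig_power = np.mean(signal ** 2)
    noise_power = np.mean(noise ** 2)
    scale = np.sqrt(sig_power / (noise_power * 10 ** (snr_db / 10)))
    return signal + scale * noise

# One original utterance yields several training variants.
wave = np.sin(np.linspace(0, 20 * np.pi, 16000))  # 1 s of a pure tone at 16 kHz
variants = [speed_perturb(wave, f) for f in (0.9, 1.0, 1.1)] + [add_noise(wave, 20)]
```

Each original recording thus contributes several distinct training examples, which is the core idea behind augmentation for small speech corpora.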

    Contextualising Levels of Language Resourcedness affecting Digital Processing of Text

    Application domains such as digital humanities and tools such as chatbots involve some form of natural language processing, from digitising hardcopies to speech generation. The language of the content is typically characterised as either a low-resource language (LRL) or a high-resource language (HRL), also known as resource-scarce and well-resourced languages, respectively. African languages have been characterised as resource-scarce (Bosch et al. 2007; Pretorius & Bosch 2003; Keet & Khumalo 2014), and English is by far the most well-resourced language. Varied language resources are used to develop software systems for these languages to accomplish a wide range of tasks. In this paper, we argue that the dichotomous LRL/HRL typology for all languages is problematic. Through a clear understanding of language resources situated in a society, a matrix is developed that characterises languages as Very LRL, LRL, RL, HRL, and Very HRL. The characterisation is based on a typology of contextual features for each category, rather than on counting tools, and motivation is provided for each feature and each characterisation. The contextualisation of resourcedness, with a focus on African languages in this paper, and an increased understanding of where on the scale the language used in a project sits, may assist in, among other things, better planning of research and implementation projects. We thus argue that characterising language resources within a given scale is an indispensable component of a project, particularly in the context of low-resourced languages.

    Responsible media technology and AI: challenges and research directions

    The last two decades have witnessed major disruptions to the traditional media industry as a result of technological breakthroughs. New opportunities and challenges continue to arise, most recently as a result of the rapid advance and adoption of artificial intelligence technologies. On the one hand, the broad adoption of these technologies may introduce new opportunities for diversifying media offerings, fighting disinformation, and advancing data-driven journalism. On the other hand, techniques such as algorithmic content selection and user personalization can introduce risks and societal threats. The challenge of balancing these opportunities and benefits against their potential for negative impacts underscores the need for more research in responsible media technology. In this paper, we first describe the major challenges—both for societies and the media industry—that come with modern media technology. We then outline various places in the media production and dissemination chain where research gaps exist, where better technical approaches are needed, and where technology must be designed in a way that can effectively support responsible editorial processes and principles. We argue that a comprehensive approach to research in responsible media technology, leveraging an interdisciplinary approach and a close cooperation between the media industry and academic institutions, is urgently needed.

    Participatory research for low-resourced machine translation : a case study in African languages

    The paper demonstrates the feasibility and scalability of participatory research, with a case study on Machine Translation (MT) for African languages. The study's implementation leads to a collection of novel translation datasets and MT benchmarks for over 30 languages, with human evaluations for a third of them, while also enabling participants without formal training to make a unique scientific contribution. Benchmarks, models, data, code, and evaluation results are released at https://github.com/masakhane-io/masakhane-mt

    Automatic information search for countering covid-19 misinformation through semantic similarity

    Master's thesis in Bioinformatics and Computational Biology. Information quality in social media is an increasingly important issue, and the misinformation problem has become even more critical during the current COVID-19 pandemic, leaving people exposed to false and potentially harmful claims and rumours. Civil society organizations, such as the World Health Organization, have issued a global call for action to promote access to health information and mitigate harm from health misinformation. Consequently, this project pursues countering the spread of the COVID-19 infodemic and its potential health hazards. In this work, we give an overall view of models and methods that have been employed in the NLP field, from its foundations to the latest state-of-the-art approaches. Focusing on deep learning methods, we propose applying multilingual Transformer models based on siamese networks, also called bi-encoders, combined with ensembling and PCA dimensionality-reduction techniques. The goal is to counter COVID-19 misinformation by analyzing the semantic similarity between a claim and tweets from a collection gathered from official fact-checkers verified by the International Fact-Checking Network of the Poynter Institute. The number of Internet users increases every year, and the language spoken determines access to information online; for this reason, we devote special effort to applying multilingual models to tackle misinformation across the globe. Regarding semantic similarity, we first evaluate these multilingual ensemble models and improve on the STS-Benchmark result compared to monolingual and single models. Second, we enhance the interpretability of the models' performance through the SentEval toolkit. Last, we compare these models' performance against biomedical models on TREC-COVID task round 1, using the BM25 Okapi ranking method as the baseline. Moreover, we are interested in understanding the ins and outs of misinformation. For that purpose, we extend interpretability using machine learning and deep learning approaches for sentiment analysis and topic modelling. Finally, we developed a dashboard to ease visualization of the results. In our view, the results obtained in this project constitute an excellent initial step toward incorporating multilingualism and will assist researchers and the public in countering COVID-19 misinformation.
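The bi-encoder pipeline the abstract describes, embedding a claim and candidate tweets, optionally reducing dimensionality with PCA, then ranking by cosine similarity, can be sketched in a few lines. The embeddings below are random stand-ins for real sentence-encoder outputs (a multilingual bi-encoder would produce them in practice), so only the relative geometry is meaningful.

```python
import numpy as np

def pca_reduce(X, k):
    """Project row-vector embeddings onto their top-k principal components."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered matrix gives the principal directions in Vt's rows.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Stand-in sentence embeddings: one fact-checked claim, two tweets.
rng = np.random.default_rng(0)
claim = rng.standard_normal(8)
tweets = np.stack([
    claim + 0.1 * rng.standard_normal(8),  # near-paraphrase of the claim
    rng.standard_normal(8),                # unrelated tweet
])

X = pca_reduce(np.vstack([claim, tweets]), k=2)
sims = [cosine(X[0], X[i]) for i in (1, 2)]
# The paraphrase should rank above the unrelated tweet.
ranked = sorted(range(2), key=lambda i: -sims[i])
```

Ranking candidate tweets by this similarity score is what lets the system surface already fact-checked claims that match a new piece of content.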