
    Dolphin: A Challenging and Diverse Benchmark for Arabic NLG

    We present Dolphin, a novel benchmark that addresses the need for an evaluation framework covering the wide collection of Arabic languages and varieties. The proposed benchmark encompasses 13 different NLG tasks, including text summarization, machine translation, question answering, and dialogue generation, among others. Dolphin comprises a substantial corpus of 40 diverse and representative public datasets across 50 test splits, carefully curated to reflect real-world scenarios and the linguistic richness of Arabic. It sets a new standard for evaluating the performance and generalization capabilities of Arabic and multilingual models, promising to enable researchers to push the boundaries of current methodologies. We provide an extensive analysis of Dolphin, highlighting its diversity and identifying gaps in current Arabic NLG research. We also evaluate several Arabic and multilingual models on our benchmark, allowing us to set strong baselines against which researchers can compare.

    IndicNLG Benchmark: Multilingual Datasets for Diverse NLG Tasks in Indic Languages

    Natural Language Generation (NLG) for non-English languages is hampered by the scarcity of datasets in these languages. In this paper, we present the IndicNLG Benchmark, a collection of datasets for benchmarking NLG for 11 Indic languages. We focus on five diverse tasks, namely biography generation using Wikipedia infoboxes, news headline generation, sentence summarization, paraphrase generation, and question generation. We describe the created datasets and use them to benchmark the performance of several monolingual and multilingual baselines that leverage pre-trained sequence-to-sequence models. Our results exhibit the strong performance of multilingual language-specific pre-trained models, and the utility of models trained on our dataset for other related NLG tasks. Our dataset creation methods can be easily applied to modest-resource languages, as they involve simple steps such as scraping news articles and Wikipedia infoboxes, light cleaning, and pivoting through machine translation data. To the best of our knowledge, the IndicNLG Benchmark is the first NLG benchmark for Indic languages and the most diverse multilingual NLG dataset, with approximately 8M examples across 5 tasks and 11 languages. The datasets and models are publicly available at https://ai4bharat.iitm.ac.in/indicnlg-suite. Comment: Accepted at EMNLP 2022.

    MyBotS Prototype on Social Media Discord with NLP

    The continuous growth in technology and technological devices has led to the development of machines that help ease various human activities. For instance, despite the importance of information on the Steam platform, buyers and players still receive little information about applications. This is discouraging given the importance of information in the current era of globalization. It is therefore necessary to develop an attractive, interactive application that allows users to ask questions and get answers, such as a chatbot, which can be implemented on the Discord social media platform. Artificial Intelligence is a technique that allows machines to reason and make their own decisions. This research shows that the Discord chatbot prototype provides various services, based on classification tests using the SVM method with three kernels, namely Linear, Polynomial, and RBF. The Linear-kernel SVM performed best, with a prediction accuracy of 94% and an error rate of 6%.
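The three SVM kernels named above differ only in how they score similarity between feature vectors. A minimal pure-Python sketch of the three kernel functions follows; the hyperparameter defaults (`degree`, `coef0`, `gamma`) are illustrative assumptions, not values reported in the paper:

```python
import math

def linear_kernel(x, y):
    # Plain dot product: similarity grows with aligned feature values.
    return sum(a * b for a, b in zip(x, y))

def polynomial_kernel(x, y, degree=3, coef0=1.0):
    # Dot product raised to a power, capturing feature interactions.
    return (linear_kernel(x, y) + coef0) ** degree

def rbf_kernel(x, y, gamma=0.5):
    # Gaussian similarity: 1.0 for identical vectors, decaying with distance.
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)
```

In practice a library such as scikit-learn (`SVC(kernel="linear")`, `"poly"`, `"rbf"`) would be used; the functions above only show what each kernel computes.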

    Leveraging LLMs for Synthesizing Training Data Across Many Languages in Multilingual Dense Retrieval

    Dense retrieval models have predominantly been studied for English, where models have shown great success due to the availability of human-labeled training pairs. However, there has been limited success for multilingual retrieval so far, as training data is uneven or scarcely available across multiple languages. Synthetic training data generation is promising (e.g., InPars or Promptagator), but has been investigated only for English. Therefore, to study model capabilities across both cross-lingual and monolingual retrieval tasks, we develop SWIM-IR, a synthetic retrieval training dataset covering 33 languages (from high- to very-low-resource) for training multilingual dense retrieval models without requiring any human supervision. To construct SWIM-IR, we propose SAP (summarize-then-ask prompting), where the large language model (LLM) generates a textual summary prior to the query generation step. SAP assists the LLM in generating informative queries in the target language. Using SWIM-IR, we explore synthetic fine-tuning of multilingual dense retrieval models and evaluate them robustly on three retrieval benchmarks: XOR-Retrieve (cross-lingual), XTREME-UP (cross-lingual), and MIRACL (monolingual). Our models, called SWIM-X, are competitive with human-supervised dense retrieval models, e.g., mContriever, showing that SWIM-IR can cheaply substitute for expensive human-labeled retrieval training data. Comment: Data released at https://github.com/google-research-datasets/swim-i
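The summarize-then-ask idea can be sketched as a two-step prompt template. The exact wording below is an assumption for illustration, not the prompt used in the paper:

```python
def sap_prompt(passage: str, target_language: str) -> str:
    # Summarize-then-Ask Prompting (SAP): the LLM is asked to produce a
    # summary first, so the subsequent query-generation step is grounded
    # in the key content of the passage rather than surface details.
    return (
        f"Passage: {passage}\n\n"
        "Step 1: Summarize the passage in one or two sentences.\n"
        f"Step 2: Based on your summary, write a question in "
        f"{target_language} that this passage answers.\n"
    )
```

A call such as `sap_prompt("The Nile is the longest river in Africa.", "Swahili")` yields a single prompt whose completion contains both the intermediate summary and the final query; the (passage, query) pair then becomes one synthetic training example.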

    Summarization of News Articles

    Automatic text summarization is an important NLP task with many applications. Our particular focus is the summarization of news articles. We introduce a new Czech summarization dataset created from Czech News Agency (ČTK) articles. Using this dataset, we trained several state-of-the-art extractive summarization approaches based on the BERT and Longformer architectures and evaluated them with ROUGE-N, ROUGE-L, and BertScore. We found that a pretrained Czech Longformer is the best approach by BertScore (0.802) when the number of summary sentences is known in advance. When it is unknown, the best approach is sentence-wise classification with context and positional metadata using a pretrained Czech BERT (BertScore 0.79).
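ROUGE-N, used in the evaluation above, measures n-gram overlap between a candidate summary and a reference. A minimal recall-only sketch (real evaluations typically add stemming and use a dedicated package such as `rouge-score`):

```python
from collections import Counter

def rouge_n(candidate: str, reference: str, n: int = 1) -> float:
    """Recall-oriented ROUGE-N: fraction of reference n-grams
    that also appear in the candidate (with clipped counts)."""
    def ngrams(text: str) -> Counter:
        toks = text.lower().split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))

    cand, ref = ngrams(candidate), ngrams(reference)
    if not ref:
        return 0.0
    # Clip each candidate n-gram count by its count in the reference.
    overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
    return overlap / sum(ref.values())
```

For example, `rouge_n("the cat sat", "the cat sat on the mat", 1)` gives 0.5, since three of the six reference unigrams are covered.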

    MEGA: Multilingual Evaluation of Generative AI

    Generative AI models have shown impressive performance on many Natural Language Processing tasks such as language understanding, reasoning, and language generation. An important question being asked by the AI community today concerns the capabilities and limits of these models, and it is clear that evaluating generative AI is very challenging. Most studies of generative LLMs have been restricted to English, and it is unclear how capable these models are at understanding and generating text in other languages. We present the first comprehensive benchmarking of generative LLMs, MEGA, which evaluates models on standard NLP benchmarks, covering 16 NLP datasets across 70 typologically diverse languages. We compare the performance of generative LLMs, including ChatGPT and GPT-4, to state-of-the-art (SOTA) non-autoregressive models on these tasks to determine how well generative models perform compared to the previous generation of LLMs. We present a thorough analysis of model performance across languages and tasks and discuss challenges in improving the performance of generative LLMs on low-resource languages. We create a framework for evaluating generative LLMs in the multilingual setting and provide directions for future progress in the field. Comment: EMNLP 2023.

    NusaCrowd: Open Source Initiative for Indonesian NLP Resources

    We present NusaCrowd, a collaborative initiative to collect and unify existing resources for Indonesian languages, including opening access to previously non-public resources. Through this initiative, we have brought together 137 datasets and 118 standardized data loaders. The quality of the datasets has been assessed manually and automatically, and their value is demonstrated through multiple experiments. NusaCrowd's data collection enables the creation of the first zero-shot benchmarks for natural language understanding and generation in Indonesian and the local languages of Indonesia. Furthermore, NusaCrowd enables the creation of the first multilingual automatic speech recognition benchmark in Indonesian and the local languages of Indonesia. Our work strives to advance natural language processing (NLP) research for languages that are under-represented despite being widely spoken.

    Natural Language Processing: Emerging Neural Approaches and Applications

    This Special Issue highlights the most recent research in the NLP field and discusses related open issues, with a particular focus on emerging approaches for language learning, understanding, production, and grounding, learned interactively or autonomously from data in cognitive and neural systems, as well as on their potential and real-world applications in different domains.

    Broad-Coverage Automatic Event Analysis of General-Domain Estonian Texts

    Due to large-scale digitisation and the shift from traditional written communication to digital written communication, vast amounts of natural language text are becoming machine-readable. Machine-readability holds the potential to reduce human effort in searching and organising large text collections, enabling applications such as automatic text summarisation and question answering. However, current tools for automatic text analysis do not reach the level of text understanding these applications require. It is hypothesised that automatic analysis of events in texts brings us closer to this goal, as many texts can be interpreted as stories or narratives that decompose into events. This thesis explores event analysis as a broad-coverage, general-domain automatic language analysis problem for Estonian, starting from time-oriented event analysis and moving towards generic event analysis. We adapt the TimeML framework to Estonian and create an automatic temporal expression tagger and a news corpus manually annotated for temporal semantics (event mentions, temporal expressions, and temporal relations); we analyse the consistency of human annotation of event mentions and temporal relations and, finally, provide a preliminary study of event coreference resolution in Estonian news. The work also offers suggestions for improving Estonian event and temporal semantic annotation in future research, and the language resources developed here will enable experimentation with end-user applications (such as automatic answering of temporal questions) as well as provide a basis for developing automatic semantic analysis tools.
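At its simplest, a TimeML-style temporal expression (TIMEX3) tagger can be approximated with rule-based pattern matching. The patterns below are illustrative assumptions, far narrower than the grammar a full tagger like the one in this thesis would need:

```python
import re

# Illustrative patterns only; the TimeML TIMEX3 inventory covers
# many more expression types (durations, sets, fuzzy references, ...).
TIMEX_PATTERNS = [
    (r"\b\d{4}-\d{2}-\d{2}\b", "DATE"),            # ISO dates: 2021-05-03
    (r"\b\d{1,2}:\d{2}\b", "TIME"),                # clock times: 10:30
    (r"\b(?:yesterday|today|tomorrow)\b", "DATE"), # deictic day references
]

def tag_timex(text: str):
    """Return (matched_text, timex_type) pairs found in the text."""
    found = []
    for pattern, timex_type in TIMEX_PATTERNS:
        for m in re.finditer(pattern, text, flags=re.IGNORECASE):
            found.append((m.group(0), timex_type))
    return found
```

A real system would additionally normalise each match to a calendar value (e.g. resolving "tomorrow" against the document's creation time), which is where most of the difficulty lies.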