14 research outputs found

    Parameter-Efficient Neural Reranking for Cross-Lingual and Multilingual Retrieval

    State-of-the-art neural (re)rankers are notoriously data-hungry, which, given the lack of large-scale training data in languages other than English, makes them rarely used in multilingual and cross-lingual retrieval settings. Current approaches therefore typically transfer rankers trained on English data to other languages and cross-lingual setups by means of multilingual encoders: they fine-tune all the parameters of a pretrained massively multilingual Transformer (MMT, e.g., multilingual BERT) on English relevance judgments and then deploy it in the target language. In this work, we show that two parameter-efficient approaches to cross-lingual transfer, namely Sparse Fine-Tuning Masks (SFTMs) and Adapters, allow for a more lightweight and more effective zero-shot transfer to multilingual and cross-lingual retrieval tasks. We first train language adapters (or SFTMs) via Masked Language Modelling and then train retrieval (i.e., reranking) adapters (or SFTMs) on top while keeping all other parameters fixed. At inference, this modular design allows us to compose the ranker by applying the task adapter (or SFTM) trained with source-language data together with the language adapter (or SFTM) of the target language. Besides improved transfer performance, these two approaches offer faster ranker training, with only a fraction of the parameters being updated compared to full MMT fine-tuning. We benchmark our models on the CLEF-2003 benchmark, showing that our parameter-efficient methods outperform standard zero-shot transfer with full MMT fine-tuning, while enabling modularity and reducing training times. Further, we show, using the examples of Swahili and Somali, that for low(er)-resource languages our parameter-efficient neural re-rankers can improve the rankings of a competitive machine translation-based ranker.
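
    To make the modular design concrete, below is a minimal PyTorch sketch (illustrative only, not the authors' implementation): a frozen Transformer layer is wrapped with a bottleneck language adapter and a bottleneck reranking (task) adapter, and zero-shot transfer amounts to swapping in the target language's adapter while the task adapter stays fixed. All module names and sizes here are assumptions.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Down-project, non-linearity, up-project, plus a residual connection."""
    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

class AdaptedLayer(nn.Module):
    """A frozen MMT layer composed with a language and a task adapter."""
    def __init__(self, frozen_layer: nn.Module, hidden_size: int):
        super().__init__()
        self.layer = frozen_layer
        for p in self.layer.parameters():
            p.requires_grad = False  # only adapter weights are ever updated
        self.lang_adapter = BottleneckAdapter(hidden_size)  # trained via MLM
        self.task_adapter = BottleneckAdapter(hidden_size)  # trained on English relevance data

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.layer(x)
        h = self.lang_adapter(h)   # swapped for the target language at inference
        return self.task_adapter(h)

# Zero-shot transfer: keep the reranking adapter, swap the language adapter.
layer = AdaptedLayer(nn.Linear(768, 768), hidden_size=768)
layer.lang_adapter = BottleneckAdapter(768)  # e.g., a Swahili adapter
scores = layer(torch.randn(2, 128, 768))     # (batch, seq_len, hidden)
```

    Because only the small adapter modules receive gradients, training touches a fraction of the MMT's parameters, which is where the reported reduction in training time comes from.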

    End-to-End Multilingual Information Retrieval with Massively Large Synthetic Datasets

    End-to-end neural networks have revolutionized various fields of artificial intelligence. However, advances in Cross-Lingual Information Retrieval (CLIR) have been stalled by the lack of large-scale labeled data. CLIR is a retrieval task in which search queries and candidate documents are written in different languages. CLIR can be very useful in some scenarios: for example, a reporter may want to search foreign-language news to obtain different perspectives for her story; an inventor may explore the patents of another country to understand prior art. This dissertation addresses the bottleneck in end-to-end neural CLIR research by synthesizing large-scale CLIR training data and examining techniques that can exploit these data in various CLIR tasks. We publicly release the Large-Scale CLIR dataset and CLIRMatrix, two synthetic CLIR datasets covering a large variety of language directions. We explore and evaluate several neural architectures for end-to-end CLIR modeling. Results show that multilingual information retrieval systems trained on these synthetic CLIR datasets are helpful for many language pairs, especially those in low-resource settings. We further show how these systems can be adapted to real-world scenarios.
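
    As a sketch of how such synthetic supervision can be mined, in the spirit of CLIRMatrix: score target-language documents monolingually with a title obtained via a Wikipedia inter-language link, then attach the graded scores to the source-language title as the query. The toy data and the rank_bm25 dependency below are assumptions for illustration, not the dissertation's actual pipeline.

```python
from rank_bm25 import BM25Okapi  # assumed dependency: pip install rank-bm25

# Toy inter-language links (English title -> French title) and a toy French
# corpus; purely illustrative, not drawn from the released datasets.
links = {
    "Information retrieval": "Recherche d'information",
    "Neural network": "Réseau de neurones",
}
fr_docs = [
    "la recherche d'information trouve des documents pertinents",
    "un réseau de neurones apprend des représentations profondes",
]
bm25 = BM25Okapi([doc.lower().split() for doc in fr_docs])

pairs = []
for en_title, fr_title in links.items():
    # Score French documents monolingually with the linked French title,
    # then attach the scores to the *English* query: each triple is a
    # cross-lingual (query, document, graded relevance) training example.
    for i, score in enumerate(bm25.get_scores(fr_title.lower().split())):
        pairs.append((en_title, fr_docs[i], score))
```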

    On the Mono- and Cross-Language Detection of Text Re-Use and Plagiarism

    Barrón Cedeño, L.A. (2012). On the Mono- and Cross-Language Detection of Text Re-Use and Plagiarism [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/16012

    EveTAR: Building a Large-Scale Multi-Task Test Collection over Arabic Tweets

    This article introduces a new language-independent approach for creating a large-scale, high-quality test collection of tweets that supports multiple information retrieval (IR) tasks without running a shared-task campaign. The adopted approach (demonstrated over Arabic tweets) designs the collection around significant (i.e., popular) events, which enables the development of topics that represent frequent information needs of Twitter users for which rich content exists. That inherently facilitates the support of multiple tasks that generally revolve around events, namely event detection, ad-hoc search, timeline generation, and real-time summarization. The key highlights of the approach include diversifying the judgment pool via interactive search and multiple manually-crafted queries per topic, collecting high-quality annotations via crowd-workers for relevancy and in-house annotators for novelty, filtering out low-agreement topics and inaccessible tweets, and providing multiple subsets of the collection for better availability. Applying our methodology on Arabic tweets resulted in EveTAR, the first freely-available tweet test collection for multiple IR tasks. EveTAR includes a crawl of 355M Arabic tweets and covers 50 significant events for which about 62K tweets were judged with substantial average inter-annotator agreement (Kappa value of 0.71). We demonstrate the usability of EveTAR by evaluating existing algorithms on the respective tasks. Results indicate that the new collection can support reliable ranking of IR systems comparable to similar TREC collections, while providing strong baseline results for future studies over Arabic tweets.
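
    For readers unfamiliar with the agreement statistic cited above: a kappa of 0.71 falls into the conventional "substantial" band of the Landis & Koch scale. The sketch below computes a kappa of this kind (Cohen's) with scikit-learn; the labels are hypothetical, not EveTAR data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical binary relevance labels from two annotators.
annotator_a = [1, 0, 1, 1, 0, 1, 0, 1]
annotator_b = [1, 0, 1, 0, 0, 1, 0, 1]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"kappa = {kappa:.2f}")  # chance-corrected agreement in [-1, 1]
```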

    South African isiZulu and siSwati news corpus creation, annotation and categorisation

    Mini Dissertation (MIT (Big Data Science)), University of Pretoria, 2022. South Africa has eleven official languages, nine of which are low-resourced local languages. It is therefore essential to build resources for these languages so that they can benefit from advances in the field of natural language processing. The focus of this project was to create annotated datasets for the isiZulu and siSwati local languages based on a news topic classification task and to present findings from baseline classification models. Due to the shortage of data for these local South African languages, the created datasets were augmented and oversampled to increase the data size and to overcome class imbalance. In total, four different classification models were used, namely Logistic Regression, Naive Bayes, XGBoost, and LSTM. These models were trained on three different text representations, namely count vectors, TF-IDF vectors, and word2vec embeddings. The results of this study showed that XGBoost, Logistic Regression, and LSTM trained on word2vec performed better than the other combinations.
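
    As a rough illustration of the kind of baseline described above, the sketch below pairs a TF-IDF representation with Logistic Regression in scikit-learn. The headlines and topic labels are invented for the example and are not from the dissertation's corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy isiZulu-flavoured headlines with topic labels; purely illustrative.
texts = [
    "izindaba zezemidlalo namuhla",
    "umnotho wezwe uyakhula",
    "iqembu lebhola linqobile",
    "amasheya ehlile emakethe",
]
labels = ["sport", "economy", "sport", "economy"]

# TF-IDF unigrams/bigrams feeding a Logistic Regression classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["ibhola lezinyawo kusihlwa"]))
```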

    Ant Spider Bee: Chronicling Digital Transformations in Environmental Humanities


    The Object of Platform Studies: Relational Materialities and the Social Platform (the case of the Nintendo Wii)

    Racing the Beam: The Atari Video Computer System, by Ian Bogost and Nick Montfort, inaugurated the Platform Studies series at MIT Press in 2009. We’ve coauthored a new book in the series, Codename: Revolution: the Nintendo Wii Video Game Console. Platform studies is a quintessentially Digital Humanities approach, since it’s explicitly focused on the interrelationship of computing and cultural expression. According to the series preface, the goal of platform studies is “to consider the lowest level of computing systems and to understand how these systems relate to culture and creativity.” In practice, this involves paying close attention to specific hardware and software interactions, to the vertical relationships between a platform’s multilayered materialities (Hayles; Kirschenbaum), from transistors to code to cultural reception. Any given act of platform-studies analysis may focus, for example, on the relationship between the chipset and the OS, or between the graphics processor and display parameters or game developers’ designs. In computing terms, platform is an abstraction (Bogost and Montfort), a pragmatic frame placed around whatever hardware-and-software configuration is required in order to build or run certain specific applications (including creative works). The object of platform studies is thus a shifting series of possibility spaces, any number of dynamic thresholds between discrete levels of a system.

    IberSPEECH 2020: XI Jornadas en Tecnología del Habla and VII Iberian SLTech

    IberSPEECH2020 is a two-day event bringing together the best researchers and practitioners in speech and language technologies in Iberian languages to promote interaction and discussion. The organizing committee has planned a wide variety of scientific and social activities, including technical paper presentations, keynote lectures, presentation of projects, laboratory activities, recent PhD theses, discussion panels, a round table, and awards for the best thesis and papers. The program of IberSPEECH2020 includes a total of 32 contributions presented across 5 oral sessions, a PhD session, and a projects session. To ensure the quality of all the contributions, each submitted paper was reviewed by three members of the scientific review committee. All the papers in the conference will be accessible through the International Speech Communication Association (ISCA) Online Archive. Paper selection was based on the scores and comments provided by the scientific review committee, which includes 73 researchers from different institutions (mainly from Spain and Portugal, but also from France, Germany, Brazil, Iran, Greece, Hungary, the Czech Republic, Ukraine, and Slovenia). Furthermore, an extension of selected papers will be published as a special issue of the journal Applied Sciences, “IberSPEECH 2020: Speech and Language Technologies for Iberian Languages”, published by MDPI with full open access. In addition to regular paper sessions, the IberSPEECH2020 scientific program features the ALBAYZIN evaluation challenge session.

    Geographic information extraction from texts

    A large volume of unstructured text containing valuable geographic information is available online. This information, provided implicitly or explicitly, is useful not only for scientific studies (e.g., spatial humanities) but also for many practical applications (e.g., geographic information retrieval). Although considerable progress has been made on geographic information extraction from texts, unsolved challenges and issues remain, ranging from methods, systems, and data to applications and privacy. This workshop therefore provides a timely opportunity to discuss recent advances, new ideas, and concepts, and to identify research gaps in geographic information extraction.