7 research outputs found

    Sabiá: Portuguese Large Language Models

    As the capabilities of language models continue to advance, it is conceivable that a "one-size-fits-all" model will remain the main paradigm. For instance, given the vast number of languages worldwide, many of which are low-resource, the prevalent practice is to pretrain a single model on multiple languages. In this paper, we add to the growing body of evidence that challenges this practice, demonstrating that monolingual pretraining on the target language significantly improves models already extensively trained on diverse corpora. More specifically, we further pretrain GPT-J and LLaMA models on Portuguese texts using 3% or less of their original pretraining budget. Few-shot evaluations on Poeta, a suite of 14 Portuguese datasets, reveal that our models outperform English-centric and multilingual counterparts by a significant margin. Our best model, Sabiá-65B, performs on par with GPT-3.5-turbo. By evaluating on datasets originally conceived in the target language as well as translated ones, we study the contributions of language-specific pretraining in terms of 1) capturing linguistic nuances and structures inherent to the target language, and 2) enriching the model's knowledge about a domain or culture. Our results indicate that the majority of the benefits stem from the domain-specific knowledge acquired through monolingual pretraining.
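    The recipe above amounts to continued causal-language-model pretraining on a single-language corpus. Below is a minimal sketch of that idea using Hugging Face Transformers; the checkpoint, corpus file, and hyperparameters are illustrative assumptions, not the paper's actual configuration.

```python
# Hypothetical sketch of continued monolingual pretraining (not the authors' code).
# The corpus file, batch size, learning rate, and step count are placeholders.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

model_name = "EleutherAI/gpt-j-6b"  # the paper starts from GPT-J / LLaMA checkpoints
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-J has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Any plain-text Portuguese corpus works here; "pt_corpus.txt" is a placeholder.
dataset = load_dataset("text", data_files={"train": "pt_corpus.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sabia-like", per_device_train_batch_size=1,
                           gradient_accumulation_steps=32, learning_rate=1e-5,
                           max_steps=10_000, bf16=True),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM
)
trainer.train()
```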

    In Defense of Cross-Encoders for Zero-Shot Retrieval

    Bi-encoders and cross-encoders are widely used in many state-of-the-art retrieval pipelines. In this work we study the generalization ability of these two types of architectures across a wide range of parameter counts, in both in-domain and out-of-domain scenarios. We find that the number of parameters and early query-document interactions of cross-encoders play a significant role in the generalization ability of retrieval models. Our experiments show that increasing model size results in marginal gains on in-domain test sets, but much larger gains in new domains never seen during fine-tuning. Furthermore, we show that cross-encoders largely outperform bi-encoders of similar size on several tasks. On the BEIR benchmark, our largest cross-encoder surpasses a state-of-the-art bi-encoder by more than 4 average points. Finally, we show that using bi-encoders as first-stage retrievers provides no gains in comparison to a simpler retriever such as BM25 on out-of-domain tasks. The code is available at https://github.com/guilhermemr04/scaling-zero-shot-retrieval.git
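    The key architectural contrast the abstract draws is late versus early query-document interaction. The hedged sketch below illustrates it with public sentence-transformers checkpoints (our choice of models, not the paper's): the bi-encoder scores with a similarity between independently computed vectors, while the cross-encoder reads query and document jointly.

```python
# Illustrative contrast between the two architectures; both checkpoints are
# public sentence-transformers models chosen for the example.
from sentence_transformers import SentenceTransformer, CrossEncoder
from sentence_transformers.util import cos_sim

query = "what causes rainbow colors"
docs = ["Rainbows are caused by refraction and dispersion of light in water droplets.",
        "The stock market closed higher today."]

# Bi-encoder: query and documents are encoded independently; the only
# interaction is a late similarity between the two vectors.
bi = SentenceTransformer("sentence-transformers/msmarco-MiniLM-L6-cos-v5")
scores_bi = cos_sim(bi.encode(query), bi.encode(docs))

# Cross-encoder: query and document are concatenated and attend to each other
# in every layer -- the "early interaction" the abstract credits for better
# out-of-domain generalization.
cross = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores_cross = cross.predict([(query, d) for d in docs])

print(scores_bi, scores_cross)
```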

    InPars-v2: Large Language Models as Efficient Dataset Generators for Information Retrieval

    Recently, InPars introduced a method to efficiently use large language models (LLMs) in information retrieval tasks: via few-shot examples, an LLM is induced to generate relevant queries for documents. These synthetic query-document pairs can then be used to train a retriever. However, InPars and, more recently, Promptagator rely on proprietary LLMs such as GPT-3 and FLAN to generate such datasets. In this work, we introduce InPars-v2, a dataset generator that uses open-source LLMs and existing powerful rerankers to select synthetic query-document pairs for training. A simple BM25 retrieval pipeline followed by a monoT5 reranker finetuned on InPars-v2 data achieves new state-of-the-art results on the BEIR benchmark. To allow researchers to further improve our method, we open-source the code, synthetic data, and finetuned models: https://github.com/zetaalphavector/inPars/tree/master/tp
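    Schematically, the pipeline is: few-shot prompt an open LLM to write a query for each document, then keep only the pairs a reranker scores as relevant. The sketch below is a hypothetical rendition of that loop; the generator model, the reranker stand-in, the prompt wording, and the filtering threshold are all assumptions rather than the paper's settings.

```python
# Hypothetical InPars-style generate-then-filter loop (not the authors' code).
from transformers import pipeline
from sentence_transformers import CrossEncoder

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")  # stand-in for monoT5

# One few-shot example; the real prompt would contain several.
FEW_SHOT = ("Document: The Amazon is the largest rainforest on Earth.\n"
            "Relevant query: how big is the amazon rainforest\n\n")

def synth_pair(document: str):
    prompt = FEW_SHOT + f"Document: {document}\nRelevant query:"
    out = generator(prompt, max_new_tokens=24, do_sample=True)[0]["generated_text"]
    query = out[len(prompt):].strip().split("\n")[0]  # keep only the generated query
    return query, document

corpus = ["Cross-encoders jointly encode query and document for scoring."]
pairs = [synth_pair(d) for d in corpus]

# Keep only pairs the reranker considers relevant (threshold is illustrative);
# the filtered pairs would then serve as retriever training data.
training_pairs = [p for p, s in zip(pairs, reranker.predict(pairs)) if s > 0.0]
```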

    No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval

    Recent work has shown that small distilled language models are strong competitors to models that are orders of magnitude larger and slower in a wide range of information retrieval tasks. This, combined with latency constraints, has made distilled and dense models the go-to choice for deployment in real-world retrieval applications. In this work, we question this practice by showing that the number of parameters and early query-document interaction play a significant role in the generalization ability of retrieval models. Our experiments show that increasing model size results in marginal gains on in-domain test sets, but much larger gains in new domains never seen during fine-tuning. Furthermore, we show that rerankers largely outperform dense retrievers of similar size on several tasks. Our largest reranker reaches the state of the art on 12 of the 18 datasets of the BEIR benchmark and surpasses the previous state of the art by 3 average points. Finally, we confirm that in-domain effectiveness is not a good indicator of zero-shot effectiveness. Code is available at https://github.com/guilhermemr04/scaling-zero-shot-retrieval.git
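    The deployment pattern under study is retrieve-then-rerank: a cheap first stage produces a shortlist that an expensive reranker reorders. A minimal sketch with BM25 as the first stage follows; the libraries and checkpoint are illustrative choices, not the paper's setup.

```python
# Minimal retrieve-then-rerank sketch (illustrative, not the paper's pipeline).
from rank_bm25 import BM25Okapi
from sentence_transformers import CrossEncoder

corpus = ["BM25 is a strong lexical baseline.",
          "Rerankers apply early query-document interaction.",
          "Dense retrievers encode texts into a single vector."]

# Stage 1: cheap lexical retrieval over the whole corpus.
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])
query = "why do rerankers generalize out of domain"
candidates = bm25.get_top_n(query.lower().split(), corpus, n=2)

# Stage 2: expensive cross-encoder reranking of the shortlist only.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
reranked = sorted(zip(candidates, reranker.predict([(query, d) for d in candidates])),
                  key=lambda x: x[1], reverse=True)
print(reranked)
```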

    A Multi-label Classification System to Distinguish among Fake, Satirical, Objective and Legitimate News in Brazilian Portuguese

    There has recently been a significant increase in the spread of fake news worldwide, especially in the political sphere, where misinformation can propagate around election debates across the world. However, news with a recreational purpose, such as satirical news, is often confused with objective fake news. In this work, we address the differences between the objectivity and the legitimacy of news documents, treating each article as belonging to two conceptual classes: objective/satirical and legitimate/fake. We propose a DSS (Decision Support System) based on a Text Mining (TM) pipeline with a set of novel textual features, using multi-label methods to classify news articles along these two dimensions. A set of multi-label methods was evaluated with a combination of different base classifiers and then compared with a multi-class approach, using real-life news data collected from several Brazilian news portals. The results show that our DSS is adequate (0.80 F1-score) for addressing the scenario of misleading news under the multi-label perspective, while the multi-class methods (0.01 F1-score) are clearly outperformed by the proposed approach. Moreover, we analyzed how each group of stylometric features used in the experiments influences the results, aiming to discover whether a particular group is more relevant than the others; the complexity group of features proved to be the most relevant.
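    The two problem framings compared in the paper can be reproduced in miniature: multi-label, with two binary label dimensions per article, versus multi-class, with four joint classes. The sketch below uses scikit-learn on synthetic placeholder features, since the paper's stylometric features and corpus are not reproduced here.

```python
# Hedged sketch of multi-label vs. multi-class framing; data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 10))                    # stand-in for stylometric features
y_multilabel = rng.integers(0, 2, (200, 2))  # columns: satirical?, fake?

# Binary-relevance multi-label: one classifier per label dimension
# (objective/satirical and legitimate/fake).
clf_ml = MultiOutputClassifier(RandomForestClassifier()).fit(X, y_multilabel)

# Multi-class alternative: collapse the label pair into 4 joint classes.
y_multiclass = y_multilabel[:, 0] * 2 + y_multilabel[:, 1]
clf_mc = RandomForestClassifier().fit(X, y_multiclass)
```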

    Walkability variables: an empirical study in Rolândia - PR, Brazil

    The built environment is a key determinant of more physically active lifestyles. However, as social and cultural reality and physical activity are connected (BAUMAN et al., 2012), walkability variables that are relevant in large cities and high-income countries may not be suitable for mid-sized Brazilian towns. Therefore, through a case study of a mid-sized Brazilian town, the objective of this research was to evaluate the relevance of eight objective walkability variables: Residential Density; Retail Floor Area Ratio; Mixed Land Use (Entropy); Space Syntax Integration and Choice; and Land and Real Estate values. From the geocoding of spatial data and a self-report database from the Municipal Urban Mobility Plan (n = 756), the urban form variables were aggregated and tested in 1000-meter street network buffers. Analyses were performed with a machine learning approach, using the Random Forest algorithm, in relation to self-reported walking (meters walked per unit of area). Results indicate that the most relevant walkability features were Entropy (FI = 0.609), Integration within a 2000-meter radius (FI = 0.136), and Residential Density (FI = 0.060). These findings suggest that more traditional walkability models might not be ideal for the Brazilian context, and they can inform local urban planning policies in adopting an evidence-based, contextually tailored approach.
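    The analysis pattern described above, fitting a Random Forest and ranking predictors by feature importance (FI), can be sketched as follows; the data is synthetic and only the variable names and sample size follow the abstract.

```python
# Sketch of a Random Forest feature-importance analysis; values are synthetic.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
features = ["entropy", "integration_2000m", "residential_density",
            "retail_far", "choice", "land_value"]
X = pd.DataFrame(rng.random((756, len(features))), columns=features)  # n = 756 respondents
# Synthetic target standing in for self-reported walking per unit of area.
y = 0.6 * X["entropy"] + 0.1 * X["integration_2000m"] + rng.normal(0, 0.1, 756)

model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
# Feature importances play the role of the FI values reported in the abstract.
print(dict(zip(features, model.feature_importances_.round(3))))
```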

    NEOTROPICAL CARNIVORES: a data set on carnivore distribution in the Neotropics

    Mammalian carnivores are considered a key group in maintaining ecological health and can indicate potential ecological integrity in landscapes where they occur. Carnivores also hold high conservation value, and their habitat requirements can guide management and conservation plans. The order Carnivora has 84 species from 8 families in the Neotropical region: Canidae; Felidae; Mephitidae; Mustelidae; Otariidae; Phocidae; Procyonidae; and Ursidae. Herein, we include published and unpublished data on native terrestrial Neotropical carnivores (Canidae; Felidae; Mephitidae; Mustelidae; Procyonidae; and Ursidae). NEOTROPICAL CARNIVORES is a publicly available data set that includes 99,605 data entries from 35,511 unique georeferenced coordinates. Detection/non-detection and quantitative data were obtained from 1818 to 2018 by researchers, governmental agencies, non-governmental organizations, and private consultants. Data were collected using several methods, including camera trapping, museum collections, roadkill, line transects, and opportunistic records. Literature (peer-reviewed and grey) in Portuguese, Spanish, and English was incorporated in this compilation. Most of the data set consists of detection data entries (n = 79,343; 79.7%), but it also includes non-detection data (n = 20,262; 20.3%). Of all entries, 43.3% also include count data (n = 43,151). The information available in NEOTROPICAL CARNIVORES will contribute to macroecological, ecological, and conservation questions in multiple spatio-temporal perspectives. As carnivores play key roles in trophic interactions, a better understanding of their distribution and habitat requirements is essential to establish conservation management plans and safeguard the future ecological health of Neotropical ecosystems. Our data paper, combined with other large-scale data sets, has great potential to clarify species distribution and related ecological processes within the Neotropics. There are no copyright restrictions and no restriction for using data from this data paper, as long as the data paper is cited as the source of the information used. We also request that users inform us of how they intend to use the data.
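    As a usage illustration only: the data set mixes detection/non-detection flags, counts, and georeferenced coordinates, so a typical first pass might look like the sketch below. The file name and column names are assumptions, not the data paper's actual schema.

```python
# Hypothetical first pass over a table shaped like the data set described above;
# "neotropical_carnivores.csv" and all column names are placeholder assumptions.
import pandas as pd

records = pd.read_csv("neotropical_carnivores.csv")

# Split detection vs. non-detection entries, mirroring the proportions
# reported in the abstract (79.7% vs. 20.3%).
detections = records[records["detection"] == 1]
detection_share = len(detections) / len(records)

# Count unique georeferenced localities per species.
localities = (records.groupby("species")[["longitude", "latitude"]]
                     .apply(lambda g: g.drop_duplicates().shape[0]))
print(detection_share, localities.head())
```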