2 research outputs found

    Pre-trained biomedical language models for clinical NLP in Spanish

    This work presents the first large-scale biomedical Spanish language models trained from scratch, using biomedical corpora totalling 1.1B tokens and an EHR corpus of 95M tokens. We compared them against general-domain and other domain-specific models for Spanish on three clinical NER tasks. Our models outperform the alternatives across all NER tasks, making them the more suitable choice for clinical NLP applications. Furthermore, our findings indicate that, when enough data is available, pre-training from scratch outperforms continual pre-training on clinical tasks, raising the question of which approach is optimal in general. Our models and fine-tuning scripts are publicly available at HuggingFace and GitHub. This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.
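    As a minimal sketch of how such a pretrained model could be set up for clinical NER fine-tuning with the HuggingFace transformers library (the hub identifier and the label set below are illustrative assumptions, not taken from the abstract):

        # Minimal sketch: loading a biomedical Spanish model for
        # token-classification (NER) fine-tuning with transformers.
        # The model identifier and label list are assumptions for illustration.
        from transformers import AutoTokenizer, AutoModelForTokenClassification

        model_id = "PlanTL-GOB-ES/roberta-base-biomedical-clinical-es"  # hypothetical hub ID
        labels = ["O", "B-DISEASE", "I-DISEASE"]  # placeholder clinical tag set

        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForTokenClassification.from_pretrained(
            model_id,
            num_labels=len(labels),
            id2label=dict(enumerate(labels)),
            label2id={l: i for i, l in enumerate(labels)},
        )

        # Encode a clinical sentence; during fine-tuning, subword tokens
        # would be aligned to word-level NER tags before computing the loss.
        encoding = tokenizer("Paciente con diabetes mellitus tipo 2.", return_tensors="pt")
        outputs = model(**encoding)
        print(outputs.logits.shape)  # (1, seq_len, num_labels)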

    MarIA: Modelos del Lenguaje en Español

    This work presents MarIA, a family of Spanish language models and associated resources made available to industry and the research community. Currently, MarIA includes RoBERTa-base, RoBERTa-large, GPT2 and GPT2-large Spanish language models, which can arguably be presented as the largest and most proficient language models for Spanish. The models were pretrained on a massive corpus of 570GB of clean, deduplicated text comprising 135 billion words extracted from the Spanish Web Archive crawled by the National Library of Spain between 2009 and 2019. We assessed the performance of the models with nine existing evaluation datasets and with a novel extractive Question Answering dataset created ex novo. Overall, the MarIA models outperform the existing Spanish models across a variety of NLU tasks and training settings. This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.
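    As a hedged usage sketch, the snippet below queries a MarIA-style RoBERTa model through the transformers fill-mask pipeline; the hub identifier PlanTL-GOB-ES/roberta-base-bne is an assumption for illustration, not stated in the abstract:

        # Minimal sketch: masked-word prediction with a MarIA RoBERTa model.
        # The hub identifier below is an assumption, not stated in the abstract.
        from transformers import pipeline

        fill_mask = pipeline("fill-mask", model="PlanTL-GOB-ES/roberta-base-bne")

        # RoBERTa-style models use "<mask>" as the mask token.
        for pred in fill_mask("La capital de España es <mask>."):
            print(f"{pred['token_str']!r}  score={pred['score']:.3f}")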