In recent years, substantial advancements in pre-trained language models have
paved the way for the development of numerous non-English versions of these models,
with a particular focus on encoder-only and decoder-only architectures. While
Spanish language models such as BERT, RoBERTa, and GPT have proven effective for
natural language understanding and generation, encoder-decoder models designed
for sequence-to-sequence tasks, which map an input text to an output text,
remain scarce. This paper addresses that gap by presenting the implementation
and evaluation of well-established encoder-decoder architectures pre-trained
exclusively on Spanish corpora. Specifically, we present Spanish
versions of BART, T5, and BERT2BERT-style models and subject them to a
comprehensive assessment across a diverse range of sequence-to-sequence tasks,
spanning summarization, rephrasing, and generative question answering. Our
findings show that all models perform competitively, with BART and T5
emerging as top performers across all evaluated tasks. As an additional
contribution, we have made all models publicly available to the research
community, fostering future exploration and development in Spanish language
processing.