Representing ELMo embeddings as two-dimensional text online
We describe a new addition to the WebVectors toolkit which is used to serve
word embedding models over the Web. The new ELMoViz module adds support for
contextualized embedding architectures, in particular for ELMo models. The
provided visualizations follow the metaphor of 'two-dimensional text' by
showing lexical substitutes: words which are most semantically similar in
context to the words of the input sentence. The system allows the user to
change the ELMo layers from which token embeddings are inferred. It also
conveys corpus information about the query words and their lexical substitutes
(namely their frequency tiers and parts of speech). The module is well
integrated into the rest of the WebVectors toolkit, providing lexical
hyperlinks to word representations in static embedding models. Two web services
have already implemented the new functionality with pre-trained ELMo models for
Russian, Norwegian and English.
Comment: EACL'2021 demo paper
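The core idea behind the 'two-dimensional text' visualization, retrieving lexical substitutes as the vocabulary words most similar in context to each input token, can be sketched as a nearest-neighbour search by cosine similarity. This is a minimal toy illustration, not the ELMoViz implementation: the vocabulary vectors and the query vector are hand-made stand-ins for embeddings that would, in the real module, come from a user-selected ELMo layer.

```python
import math

# Hypothetical toy vocabulary: each word maps to a stand-in embedding.
# In ELMoViz these would be contextualized vectors from a chosen ELMo layer.
vocab_vectors = {
    "cat":    [0.90, 0.10, 0.00],
    "kitten": [0.85, 0.15, 0.05],
    "dog":    [0.80, 0.20, 0.10],
    "car":    [0.00, 0.90, 0.30],
}

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def lexical_substitutes(token_vector, vocab, k=2):
    """Return the k vocabulary words whose vectors are most
    cosine-similar to the contextualized token vector."""
    ranked = sorted(vocab, key=lambda w: cosine(token_vector, vocab[w]),
                    reverse=True)
    return ranked[:k]

# A made-up contextualized vector for one occurrence of "cat" in a sentence:
query = [0.88, 0.12, 0.02]
print(lexical_substitutes(query, vocab_vectors))  # ['cat', 'kitten']
```

Changing which ELMo layer supplies `token_vector` changes the neighbourhood: lower layers tend to favour surface-similar words, higher layers more context-sensitive substitutes, which is why the system lets the user switch layers.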
Large-Scale Contextualised Language Modelling for Norwegian
We present the ongoing NorLM initiative to support the creation and use of
very large contextualised language models for Norwegian (and in principle other
Nordic languages), including a ready-to-use software environment, as well as an
experience report for data preparation and training. This paper introduces the
first large-scale monolingual language models for Norwegian, based on both the
ELMo and BERT frameworks. In addition to detailing the training process, we
present contrastive benchmark results on a suite of NLP tasks for Norwegian.
For additional background and access to the data, models, and software, please
see http://norlm.nlpl.eu
Comment: Accepted to NoDaLiDa'2021