Sparse associative memory based on contextual code learning for disambiguating word senses
In recent literature, contextual pretrained Language Models (LMs) have
demonstrated their potential to generalize knowledge across several Natural
Language Processing (NLP) tasks, including supervised Word Sense Disambiguation
(WSD), a challenging problem in the field of Natural Language Understanding
(NLU). However, the word representations produced by these models remain very
dense, costly in terms of memory footprint, and only minimally interpretable.
To address these issues, we propose a new supervised, biologically inspired
technique for transferring large pre-trained language model representations
into a compressed representation for the case of WSD. The resulting
representation increases the overall interpretability of the framework and
decreases its memory footprint, while enhancing performance.
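
The abstract does not spell out the transfer mechanism, so the following is only a minimal illustrative sketch of the general idea of turning dense contextual embeddings into sparse codes stored in an associative memory. It assumes a simple k-winners-take-all binarization and disambiguation by maximal code overlap; the names `sparsify_kwta` and `SparseSenseMemory`, the parameter `n_active`, and the overlap rule are all hypothetical simplifications, not the paper's actual contextual code learning. In practice the dense vectors would come from a pretrained LM such as BERT rather than random data.

```python
import numpy as np


def sparsify_kwta(dense_vec, n_active=16):
    """Binarize a dense contextual embedding by keeping only the
    n_active largest components set to 1 (k-winners-take-all).
    This is an assumed sparsification scheme, used here for illustration."""
    code = np.zeros(dense_vec.shape, dtype=np.uint8)
    top = np.argsort(dense_vec)[-n_active:]  # indices of the strongest units
    code[top] = 1
    return code


class SparseSenseMemory:
    """Toy sparse associative memory: one accumulated binary code per
    word sense, queried by overlap with the code of a new context."""

    def __init__(self, n_active=16):
        self.n_active = n_active
        self.codes = {}  # sense label -> accumulated sparse binary code

    def store(self, sense, dense_vec):
        """Add a labeled training context to the memory (bitwise OR
        with any code already stored for that sense)."""
        code = sparsify_kwta(dense_vec, self.n_active)
        if sense in self.codes:
            self.codes[sense] = np.maximum(self.codes[sense], code)
        else:
            self.codes[sense] = code

    def disambiguate(self, dense_vec):
        """Return the sense whose stored code overlaps most with the
        sparse code of the query context."""
        query = sparsify_kwta(dense_vec, self.n_active)
        return max(self.codes, key=lambda s: int(np.dot(self.codes[s], query)))


if __name__ == "__main__":
    # Stand-in for contextual embeddings of the ambiguous word "bank":
    # two clusters of random vectors play the role of LM outputs for the
    # financial and the river senses.
    rng = np.random.default_rng(0)
    dim = 256
    financial_center = rng.normal(size=dim)
    river_center = rng.normal(size=dim)

    memory = SparseSenseMemory(n_active=16)
    for _ in range(5):
        memory.store("bank.financial", financial_center + 0.1 * rng.normal(size=dim))
        memory.store("bank.river", river_center + 0.1 * rng.normal(size=dim))

    query = financial_center + 0.1 * rng.normal(size=dim)
    print(memory.disambiguate(query))  # expected: bank.financial
```

Under these assumptions, each sense is stored as a small set of active units rather than a full dense vector, which is what gives this family of methods its reduced memory footprint and more inspectable (unit-level) representation.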