394,542 research outputs found
Universal Language Model Fine-tuning for Text Classification
Inductive transfer learning has greatly impacted computer vision, but
existing approaches in NLP still require task-specific modifications and
training from scratch. We propose Universal Language Model Fine-tuning
(ULMFiT), an effective transfer learning method that can be applied to any task
in NLP, and introduce techniques that are key for fine-tuning a language model.
Our method significantly outperforms the state-of-the-art on six text
classification tasks, reducing the error by 18-24% on the majority of datasets.
Furthermore, with only 100 labeled examples, it matches the performance of
training from scratch on 100x more data. We open-source our pretrained models
and code.
Comment: ACL 2018, fixed denominator in Equation 3, line
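The fine-tuning techniques the abstract refers to include discriminative learning rates per layer and gradual unfreezing. The following is a minimal PyTorch sketch of those two ideas only: the encoder, head, and layer sizes are illustrative placeholders rather than the authors' released model, and the paper's additional slanted triangular learning-rate schedule is omitted for brevity. The per-layer decay factor of 2.6 follows the paper's description.

from torch import nn, optim

# Stand-in for a pretrained 3-layer encoder plus a new task-specific head.
# Layer types and sizes are hypothetical, chosen only to keep the sketch runnable.
encoder = nn.Sequential(nn.Linear(400, 400), nn.Linear(400, 400), nn.Linear(400, 400))
head = nn.Linear(400, 2)

# Discriminative fine-tuning: each deeper layer gets a smaller learning rate,
# dividing by 2.6 per layer as described in the paper.
base_lr = 1e-3
param_groups = [
    {"params": head.parameters(), "lr": base_lr},
    {"params": encoder[2].parameters(), "lr": base_lr / 2.6},
    {"params": encoder[1].parameters(), "lr": base_lr / 2.6 ** 2},
    {"params": encoder[0].parameters(), "lr": base_lr / 2.6 ** 3},
]
optimizer = optim.Adam(param_groups)

# Gradual unfreezing: start with only the head trainable, then unfreeze one
# additional layer (from the last encoder layer downward) at each epoch.
for p in encoder.parameters():
    p.requires_grad = False

unfreeze_order = [head, encoder[2], encoder[1], encoder[0]]
for epoch, layer in enumerate(unfreeze_order):
    for p in layer.parameters():
        p.requires_grad = True
    # ... one epoch of training on the target classification task would go here ...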
Transfer Learning for Textual Topic Classification
Recent developments in language modeling have led to advances in transfer learning methods in Natural Language Processing. Language models pretrained on large general datasets achieve state-of-the-art results in a wide range of tasks. Universal Language Model Fine-tuning is an effective transfer learning method for text classification. The goal of this thesis is to further test the robustness of this method in scenarios commonly found in real-world applications.