Converting Prose into Poetry with Neural Networks (original title: Převod prózy do poezie pomocí neuronových sítí)
Title: Converting Prose into Poetry with Neural Networks
Author: Memduh Gokirmak
Institute: Institute of Formal and Applied Linguistics
Supervisor: Martin Popel, Institute of Formal and Applied Linguistics
Abstract: We present our attempts to create a system that generates poetry from a sequence of text provided by a user. We explore machine translation and language model technologies based on neural network architectures. We use different types of data across three languages, and employ and develop metrics to track the quality of the output of the systems we build. We find that combining machine translation techniques to generate training data with fine-tuning of pre-trained language models produces the most satisfactory generated poetry.
Keywords: poetry, machine translation, language models
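The abstract mentions metrics developed to track the quality of generated poetry but does not specify them. As an illustrative sketch only (the function name, suffix heuristic, and parameters below are assumptions, not the thesis's actual metrics), one simple surface-level measure is the fraction of consecutive line pairs whose final words share an ending, a crude proxy for rhyme:

```python
def rhyme_rate(poem_lines, suffix_len=3):
    """Crude rhyme proxy: fraction of consecutive line pairs whose
    final words share a suffix of `suffix_len` characters.
    Illustrative only; not the metric used in the thesis."""
    def ending(line):
        words = line.strip().lower().split()
        return words[-1][-suffix_len:] if words else ""

    pairs = list(zip(poem_lines, poem_lines[1:]))
    if not pairs:
        return 0.0
    hits = sum(1 for a, b in pairs if ending(a) and ending(a) == ending(b))
    return hits / len(pairs)

# "long"/"song"/"along" share the suffix "ong"; "rest" does not.
print(rhyme_rate(["The night is long", "the day a song",
                  "we move along", "then rest"]))  # 2 of 3 pairs -> 0.666...
```

A real evaluation would use phonetic transcriptions rather than orthographic suffixes, since spelling and pronunciation diverge in all three languages studied.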
CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies
The Conference on Computational Natural Language Learning (CoNLL) features a shared task, in which participants train and test their learning systems on the same data sets. In 2017, one of two tasks was devoted to learning dependency parsers for a large number of languages, in a real-world setting without any gold-standard annotation on input. All test sets followed a unified annotation scheme, namely that of Universal Dependencies. In this paper, we define the task and evaluation methodology, describe data preparation, report and analyze the main results, and provide a brief categorization of the different approaches of the participating systems.
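The shared task's core evaluation metric was the labeled attachment score (LAS): the proportion of words whose predicted head and dependency relation both match the gold annotation. A minimal sketch (assuming token alignment between gold and system output is already resolved, which the real raw-text setting had to handle separately):

```python
def las(gold, system):
    """Labeled attachment score: fraction of tokens whose (head, deprel)
    pair matches the gold annotation. Each token is (head_index, deprel),
    with head index 0 denoting the root, following CoNLL-U convention."""
    assert len(gold) == len(system), "sketch assumes pre-aligned tokens"
    if not gold:
        return 0.0
    correct = sum(1 for (gh, gd), (sh, sd) in zip(gold, system)
                  if gh == sh and gd == sd)
    return correct / len(gold)

gold = [(2, "nsubj"), (0, "root"), (2, "obj")]
system = [(2, "nsubj"), (0, "root"), (2, "nmod")]  # wrong label on token 3
print(las(gold, system))  # 2 of 3 tokens correct -> 0.666...
```

The official evaluation additionally aligned system tokens to gold tokens (since segmentation itself was predicted), so real scores also penalize tokenization errors.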