Towards the Creation of a Poetry Translation Mapping System
The translation of poetry is a complex, multifaceted challenge: the translated text should convey the same meaning and similar metaphoric expressions, and also match the style and prosody of the original poem. Research on machine poetry translation has existed since 2010, but it is still rather insufficient, for four reasons:
1. The few existing approaches lack any knowledge of current developments in both lyric theory and translation theory.
2. They are based on very small datasets.
3. They have mostly ignored the neural approach that superseded the long-standing dominance of phrase-based methods in machine translation.
4. They have no concept of the pragmatic function of their research and the resulting tools.
Our paper describes how to improve the existing research and technology for poetry translation on exactly these four points. Regarding 1), we describe the “Poetics of Translation”. Regarding 2), we introduce the world's largest corpus of poetry translations, from lyrikline. Regarding 3), we describe first steps towards neural machine translation of poetry. Regarding 4), we describe first steps towards the development of a poetry translation mapping system.
Beyond Narrative Description: Generating Poetry from Images by Multi-Adversarial Training
Automatic generation of natural language from images has attracted extensive attention. In this paper, we take one step further and investigate the generation of poetic language (with multiple lines) for an image, towards automatic poetry creation. This task involves multiple challenges, including discovering poetic clues in the image (e.g., hope from green) and generating poems that are both relevant to the image and poetic at the language level. To address these challenges, we formulate poem generation as two correlated sub-tasks trained by multi-adversarial training via policy gradient, through which cross-modal relevance and a poetic language style can be ensured. To extract poetic clues from images, we propose learning a deep coupled visual-poetic embedding, in which the poetic representations of objects, sentiments, and scenes in an image are jointly learned. Two discriminative networks are further introduced to guide poem generation: a multi-modal discriminator and a poem-style discriminator. To facilitate the research, we have released two human-annotated poem datasets with distinct properties: 1) the first human-annotated image-to-poem pair dataset (8,292 pairs in total), and 2) the largest public English poem corpus to date (92,265 distinct poems). Extensive experiments are conducted with 8K images, of which 1.5K are randomly picked for evaluation. Both objective and subjective evaluations show superior performance over state-of-the-art methods for poem generation from images. A Turing test carried out with over 500 human subjects, among whom 30 evaluators are poetry experts, demonstrates the effectiveness of our approach.
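The training scheme described above — a generator rewarded by a multi-modal discriminator and a poem-style discriminator through policy gradient — can be sketched in miniature. The sketch below is a hypothetical toy, not the paper's networks: the vocabulary, the "poetic" and "image clue" word sets, and the reward weights are all invented for illustration, the generator is a bag of word preferences rather than a recurrent model, and each discriminator is replaced by a fixed scoring rule.

```python
import random

random.seed(0)

VOCAB = ["green", "hope", "river", "stone", "light", "dream"]
POETIC = {"hope", "light", "dream"}   # hypothetical "poetic" words
IMAGE_CLUES = {"green", "river"}      # hypothetical visual clues for one image

def multimodal_reward(line):
    """Stand-in for the multi-modal discriminator: relevance to the image."""
    return sum(w in IMAGE_CLUES for w in line) / len(line)

def style_reward(line):
    """Stand-in for the poem-style discriminator: poeticness of the line."""
    return sum(w in POETIC for w in line) / len(line)

# Toy generator: unnormalised word preferences, adjusted by a
# REINFORCE-style update instead of true backprop through a network.
prefs = {w: 1.0 for w in VOCAB}

def sample_line(n=4):
    return random.choices(VOCAB, weights=[prefs[w] for w in VOCAB], k=n)

def train(steps=2000, lr=0.05):
    rewards, baseline = [], 0.0
    for _ in range(steps):
        line = sample_line()
        # Combine both discriminator scores, mirroring the paper's
        # two-discriminator setup (equal weights are an assumption).
        r = 0.5 * multimodal_reward(line) + 0.5 * style_reward(line)
        advantage = r - baseline
        baseline = 0.9 * baseline + 0.1 * r   # running-average baseline
        for w in line:                        # policy-gradient-style update
            prefs[w] = max(1e-3, prefs[w] + lr * advantage)
        rewards.append(r)
    return rewards

rewards = train()
# The generator drifts towards words that either set of "discriminators"
# rewards, away from neutral words like "stone".
print(sorted(prefs, key=prefs.get, reverse=True))
```

Running the loop raises the average combined reward over training: words belonging to neither reward set lose preference mass, which is the same pressure the adversarial rewards exert on the real generator, just in a drastically simplified policy space.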
- …