
    Ratatouille: A tool for Novel Recipe Generation

    Owing to the availability of a large number of cooking recipes online, there is growing interest in using them as data to create novel recipes. Novel recipe generation is a problem in the field of Natural Language Processing in which the main goal is to generate realistic, novel cooking recipes. To produce such recipes, we trained various deep learning models, such as LSTMs and GPT-2, on a large amount of recipe data. We present Ratatouille (https://cosylab.iiitd.edu.in/ratatouille2/), a web-based application to generate novel recipes.
    Comment: 4 pages, 5 figures, 38th IEEE International Conference on Data Engineering, DECOR Workshop
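    The paper's training and serving code is not included in this listing, but the pipeline it describes (training a language model such as GPT-2 on recipe text, then sampling from it to produce novel recipes) can be sketched as below. This is a minimal illustration, not the authors' implementation: the base checkpoint, prompt format, and sampling parameters are assumptions.

```python
# Minimal sketch of recipe generation with GPT-2 (Hugging Face transformers).
# The checkpoint name and prompt format are illustrative assumptions; the
# paper's fine-tuned weights and preprocessing are not reproduced here.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # assumed base checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2")    # stand-in for a recipe-tuned model
model.eval()

prompt = "Ingredients: tomato, basil, garlic\nInstructions:"  # hypothetical format
inputs = tokenizer(prompt, return_tensors="pt")

# Nucleus sampling favors plausible but non-memorized continuations,
# which is what novel recipe generation needs.
outputs = model.generate(
    **inputs,
    max_length=150,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```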

    On the Importance of Word and Sentence Representation Learning in Implicit Discourse Relation Classification

    Implicit discourse relation classification is one of the most difficult parts of shallow discourse parsing, as predicting the relation without explicit connectives requires language understanding at both the text-span and the sentence level. Previous studies mainly focus on the interactions between the two arguments. We argue that a powerful contextualized representation module, a bilateral multi-perspective matching module, and a global information fusion module are all important to implicit discourse analysis. We propose a novel model that combines these modules. Extensive experiments show that our proposed model outperforms BERT and other state-of-the-art systems by around 8% on the PDTB dataset and around 16% on the CoNLL 2016 datasets. We also analyze the effectiveness of the different modules in the implicit discourse relation classification task and demonstrate how different levels of representation learning affect the results.
    Comment: Accepted by IJCAI 2020
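    A minimal PyTorch sketch of how the three modules named in the abstract might fit together is given below: a contextualized encoder (BERT), a bilateral multi-perspective matching step, and a fusion layer feeding a relation classifier. The dimensions, number of perspectives, and pooling scheme are assumptions for illustration, not the paper's exact configuration.

```python
# Sketch: contextualized encoding + bilateral multi-perspective matching
# + global fusion for implicit discourse relation classification.
import torch
import torch.nn as nn
from transformers import BertModel

class ImplicitRelationClassifier(nn.Module):
    def __init__(self, num_relations=4, num_perspectives=8, hidden=768):
        super().__init__()
        self.encoder = BertModel.from_pretrained("bert-base-uncased")
        # One learned weight vector per matching perspective.
        self.perspectives = nn.Parameter(torch.randn(num_perspectives, hidden))
        self.fusion = nn.Linear(2 * num_perspectives, hidden)
        self.classifier = nn.Linear(hidden, num_relations)

    def match(self, tokens, other_pooled, mask):
        # Compare each token of one argument, under each perspective,
        # against the other argument's pooled vector (full-matching).
        wt = tokens.unsqueeze(2) * self.perspectives                     # (B, L, P, H)
        wo = other_pooled.unsqueeze(1).unsqueeze(1) * self.perspectives  # (B, 1, P, H)
        sim = torch.cosine_similarity(wt, wo, dim=-1)                    # (B, L, P)
        sim = sim.masked_fill(~mask.bool().unsqueeze(-1), 0.0)
        return sim.max(dim=1).values                                     # (B, P)

    def forward(self, arg1_ids, arg1_mask, arg2_ids, arg2_mask):
        out1 = self.encoder(arg1_ids, attention_mask=arg1_mask)
        out2 = self.encoder(arg2_ids, attention_mask=arg2_mask)
        m12 = self.match(out1.last_hidden_state, out2.pooler_output, arg1_mask)
        m21 = self.match(out2.last_hidden_state, out1.pooler_output, arg2_mask)
        # Global fusion of both matching directions, then classify.
        fused = torch.tanh(self.fusion(torch.cat([m12, m21], dim=-1)))
        return self.classifier(fused)
```

    Matching in both directions (arg1 against arg2 and arg2 against arg1) is what makes the module bilateral; the max-pool over tokens is one simple choice for aggregating token-level matches.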

    Modeling Coherence for Discourse Neural Machine Translation

    Discourse coherence plays an important role in the translation of a text. However, previously reported models mostly focus on improving performance over individual sentences while ignoring cross-sentence links and dependencies, which affects the coherence of the translated text. In this paper, we propose to use discourse context and reward to refine translation quality from the discourse perspective. In particular, we first generate a translation of each individual sentence. Next, we deliberate over the preliminary translations and train the model to learn a policy that produces discourse-coherent text, guided by a reward teacher. Results on multiple discourse test sets indicate that our model significantly improves translation quality over a state-of-the-art baseline system by +1.23 BLEU. Moreover, our model generates more discourse-coherent text and obtains a +2.2 BLEU improvement when evaluated by discourse metrics.
    Comment: Accepted by AAAI 2019
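    The reward-teacher idea amounts to a policy-gradient update over the second-pass (deliberation) decoder. The sketch below shows only that training step with a REINFORCE-style loss; the tensors are illustrative stand-ins for real model outputs, and the actual reward teacher, decoders, and baseline used in the paper are not specified here.

```python
# Sketch of a REINFORCE-style update for the deliberation pass: sampled
# translations are scored for coherence by a reward teacher, and sentences
# with above-baseline rewards have their log-probabilities pushed up.
import torch

# Stand-ins: per-token log-probs of 4 sampled sentence translations
# (from the second-pass decoder) and the teacher's coherence rewards.
batch_log_probs = torch.randn(4, 20, requires_grad=True)  # (sentences, tokens)
rewards = torch.tensor([0.7, 0.4, 0.9, 0.6])              # coherence scores
baseline = rewards.mean()                                  # variance-reduction baseline

# Sum log-probs per sentence, weight by (reward - baseline), negate for loss.
seq_log_prob = batch_log_probs.sum(dim=1)
loss = -((rewards - baseline) * seq_log_prob).mean()
loss.backward()  # gradients flow into the deliberation decoder's parameters
```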