11 research outputs found

    Paraphrasing with Large Language Models

    Recently, large language models such as GPT-2 have shown themselves to be extremely adept at text generation, and with fine-tuning they have also achieved high-quality results on many downstream NLP tasks such as text classification, sentiment analysis and question answering. We present a useful technique for using a large language model to perform paraphrasing on a variety of texts and subjects. Our approach is demonstrated to be capable of generating paraphrases not only at the sentence level but also for longer spans of text such as paragraphs, without needing to break the text into smaller chunks. Comment: Accepted paper for the WNGT workshop at EMNLP-IJCNLP 2019 (7 pages including references and supplemental material).
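
    As a rough illustration of the technique the abstract describes, the sketch below conditions a GPT-2-style model on the source text plus a separator and samples the continuation as a paraphrase. The ">>>" separator format and the base "gpt2" checkpoint are assumptions for illustration; the paper's actual fine-tuned model and prompt format may differ.

```python
# Minimal paraphrase-generation sketch with Hugging Face transformers.
# Assumptions (not from the paper): the ">>>" separator prompt format and
# the base "gpt2" checkpoint stand in for the authors' fine-tuned model.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

def paraphrase(text: str, max_new_tokens: int = 60) -> str:
    # Condition on the source text plus a separator; a fine-tuned model
    # would have learned to emit a paraphrase after the separator.
    prompt = f"{text} >>> "
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,        # sampling yields varied paraphrases
        top_p=0.9,
        temperature=0.8,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Keep only the newly generated tokens after the prompt.
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True).strip()

print(paraphrase("Large language models are adept at text generation."))
```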

    Investigating Prompt Engineering in Diffusion Models

    With the spread of Text2Img diffusion models such as DALL-E 2, Imagen, Midjourney and Stable Diffusion, one challenge artists face is selecting the right prompts to achieve the desired artistic output. We present techniques for measuring the effect that specific words and phrases in prompts have, and (in the Appendix) provide guidance on selecting prompts to produce desired effects. Comment: Paper submitted for the Creativity and Design workshop at NeurIPS 2022 (4 pages including references + 7-page appendix). We would like to thank Google and the ML Developer Programs Team for their assistance and the compute credits used in the experiments for this paper.
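
    One cheap, hypothetical proxy for the kind of per-word measurement the abstract describes is the shift a modifier induces in a prompt's CLIP text embedding, sketched below. This is an illustrative stand-in under stated assumptions, not necessarily the paper's metric.

```python
# Hypothetical probe: how much does a modifier word shift a prompt's
# embedding? Assumption: CLIP text-embedding distance as a cheap proxy for
# a word's effect on a Text2Img model; illustrative, not the paper's method.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embedding_shift(base_prompt: str, modifier: str) -> float:
    prompts = [base_prompt, f"{base_prompt}, {modifier}"]
    inputs = processor(text=prompts, return_tensors="pt", padding=True)
    with torch.no_grad():
        emb = model.get_text_features(**inputs)
    emb = emb / emb.norm(dim=-1, keepdim=True)
    # 1 - cosine similarity: a larger value means the modifier moved the
    # prompt further in embedding space.
    return float(1.0 - (emb[0] @ emb[1]))

for word in ["photorealistic", "oil painting", "4k"]:
    print(word, round(embedding_shift("a castle on a hill", word), 4))
```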

    Unsupervised Natural Question Answering with a Small Model

    The recent (2019-02) demonstration of the power of huge language models such as GPT-2 to memorise the answers to factoid questions raises questions about the extent to which knowledge is being embedded directly within these large models. This short paper describes an architecture through which much smaller models can also answer such questions - by making use of 'raw' external knowledge. The contribution of this work is that the methods presented here rely on unsupervised learning techniques, complementing the unsupervised training of the Language Model. The goal of this line of research is to be able to add knowledge explicitly, without extensive training.Comment: Accepted paper for FEVER workshop at EMNLP-IJCNLP 2019. (4 pages + references
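
    A minimal sketch of answering over 'raw' external knowledge, assuming simple unsupervised TF-IDF retrieval over a toy text corpus; the paper's actual architecture may differ.

```python
# Minimal retrieval-over-raw-text sketch. Assumption: TF-IDF
# nearest-neighbour retrieval stands in for the paper's unsupervised
# knowledge lookup; the toy corpus and question are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Paris is the capital of France.",
    "The Nile is the longest river in Africa.",
    "GPT-2 was released by OpenAI in 2019.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)

def retrieve(question: str) -> str:
    # Score every knowledge sentence against the question; return the best.
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_vectors)[0]
    return corpus[scores.argmax()]

print(retrieve("What is the capital of France?"))
```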

    Red Dragon AI at TextGraphs 2019 Shared Task: Language Model Assisted Explanation Generation

    The TextGraphs-13 Shared Task on Explanation Regeneration asked participants to develop methods to reconstruct gold explanations for elementary science questions. Red Dragon AI's entries used the language of the questions and explanation text directly, rather than constructing a separate graph-like representation. Our leaderboard submission placed us 3rd in the competition, but we present here three methods of increasing sophistication, each of which scored successively higher on the test set after the competition closed. Comment: Accepted paper for the TextGraphs-13 workshop at EMNLP-IJCNLP 2019 (5 pages including references).
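
    A sketch in the spirit of the abstract's direct text-based methods: iterated TF-IDF ranking, where each retrieved explanation row is appended to the query before the next retrieval step, so later picks can share vocabulary with the explanation chain rather than only the question. The tiny explanation table below is illustrative, not the WorldTree corpus used in the shared task.

```python
# Iterated TF-IDF sketch for explanation regeneration. Assumption: this
# query-growing loop illustrates the general approach; the paper's exact
# methods and scoring may differ.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

explanations = [
    "melting means changing from a solid into a liquid by adding heat",
    "an ice cube is a kind of solid",
    "heat means heat energy",
    "the sun is a source of heat energy called sunlight",
]

vectorizer = TfidfVectorizer().fit(explanations)
exp_vectors = vectorizer.transform(explanations)

def rank_explanations(question: str, steps: int = 3) -> list:
    query, chosen = question, []
    remaining = set(range(len(explanations)))
    for _ in range(steps):
        scores = cosine_similarity(vectorizer.transform([query]), exp_vectors)[0]
        best = max(remaining, key=lambda i: scores[i])
        chosen.append(explanations[best])
        remaining.discard(best)
        query = query + " " + explanations[best]  # grow the query each step
    return chosen

for row in rank_explanations("Why does an ice cube melt in the sun?"):
    print(row)
```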

    Inclusive rural communication services: Building evidence, informing policy

    This publication is the first scoping study aimed at compiling existing evaluation cases in the field of Communication for Development as applied to agricultural and rural development initiatives. It draws on a literature review and 19 cases across Africa, Asia-Pacific, Latin America and the Caribbean, comparing evidence on the evaluative approaches, methods and outcomes of communication programmes and rural communication services. It also gives clear indications of the need to build evidence that informs policy to advance inclusive rural communication services.