
    MixPro: Simple yet Effective Data Augmentation for Prompt-based Learning

    Prompt-based learning reformulates downstream tasks as cloze problems by combining the original input with a template. This technique is particularly useful in few-shot learning, where a model is trained on a limited amount of data. However, the limited templates and text used in few-shot prompt-based learning still leave significant room for performance improvement. Additionally, existing methods that rely on model ensembles can constrain model efficiency. To address these issues, we propose an augmentation method called MixPro, which augments both the vanilla input text and the templates through token-level, sentence-level, and epoch-level Mixup strategies. We conduct experiments on five few-shot datasets, and the results show that MixPro outperforms other augmentation baselines, improving model performance by an average of 5.08% over the non-augmented baseline.
    Comment: Under review at Frontiers of Computer Science (https://www.springer.com/journal/11704/); 14 pages, 4 figures, 5 tables
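
    The abstract gives no implementation details, but the "token-level Mixup" it mentions generally follows the standard Mixup recipe of interpolating two training examples in embedding space. The sketch below is an illustrative assumption rather than the authors' code; the function name token_level_mixup, the tensor shapes, and the alpha parameter are hypothetical.

        import torch

        def token_level_mixup(emb_a, emb_b, alpha=0.5):
            # emb_a, emb_b: token-embedding sequences of shape
            # (seq_len, hidden_dim), padded to the same length.
            # Sample the mixing coefficient lambda from Beta(alpha, alpha),
            # as in standard Mixup.
            lam = torch.distributions.Beta(alpha, alpha).sample()
            # Interpolate the two sequences position-wise.
            mixed = lam * emb_a + (1.0 - lam) * emb_b
            return mixed, lam

        # The label is mixed with the same coefficient:
        #   y_mixed = lam * y_a + (1 - lam) * y_b

    A sentence-level variant could presumably apply the same interpolation to pooled sentence representations; the abstract does not specify how the epoch-level strategy differs.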

    Comb-e-Chem: an e-science research project

    The background to the Comb-e-Chem e-Science pilot project, funded under the UK e-Science Programme, is presented, and the areas being addressed within chemistry, and more specifically combinatorial chemistry, are discussed. The ways in which computer technology can improve the production, analysis, and dissemination of chemical information and knowledge in a collaborative environment are also discussed.