
    The Role of Pragmatics in Solving the Winograd Schema Challenge

    Different aspects and approaches to commonsense reasoning have been investigated in order to provide solutions for the Winograd Schema Challenge (WSC). The vast complexities of natural language processing (parsing, assigning word sense, integrating context, pragmatics and world-knowledge, ...) give broad appeal to systems based on statistical analysis of corpora. However, solutions based purely on learning from corpora are not currently able to capture the semantics underlying the WSC, which was intended to provide problems whose solution requires knowledge and reasoning rather than statistical analysis of superficial lexical features. In this paper we consider the WSC as a means of highlighting challenges in the field of commonsense reasoning more generally. We begin by discussing issues with current approaches to the WSC. Following this, we outline some key challenges faced, in particular highlighting the importance of dealing with pragmatics. We then argue for an alternative approach that favours the use of knowledge bases in which the deep semantics of the different interpretations of commonsense terms are formalised. Furthermore, we suggest using heuristic approaches based on pragmatics to determine appropriate configurations of both reasonable interpretations of terms and necessary assumptions about the world.
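    The abstract above argues for knowledge-based rather than purely statistical approaches to the WSC. As a minimal sketch of what such a problem looks like, the snippet below encodes the classic trophy/suitcase schema and resolves the pronoun with two hand-written commonsense rules; the rule table and resolve() helper are hypothetical illustrations, not the method proposed in the paper.

```python
# Illustrative sketch only: a toy representation of a Winograd schema pair and a
# trivial knowledge-based resolver. The schema is the classic trophy/suitcase
# example; the RULES table and resolve() helper are hypothetical.

SCHEMA = {
    "sentence": "The trophy doesn't fit in the brown suitcase because it is too {word}.",
    "candidates": ["the trophy", "the suitcase"],
    "special_word": "large",    # intended answer: the trophy
    "alternate_word": "small",  # intended answer: the suitcase
}

# Hand-coded commonsense rules standing in for a formalised knowledge base:
# an object fails to fit in a container if the object is too large or the
# container is too small.
RULES = {
    ("large", "object"): True,     # "too large" applies to the contained object
    ("small", "container"): True,  # "too small" applies to the container
}

def resolve(word: str) -> str:
    """Pick the referent whose role is consistent with the inserted word."""
    roles = {"the trophy": "object", "the suitcase": "container"}
    for candidate, role in roles.items():
        if RULES.get((word, role)):
            return candidate
    return "unresolved"

print(resolve("large"))  # -> the trophy
print(resolve("small"))  # -> the suitcase
```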

    Beyond the Obvious: Evaluating the Reasoning Ability In Real-life Scenarios of Language Models on Life Scapes Reasoning Benchmark (LSR-Benchmark)

    This paper introduces the Life Scapes Reasoning Benchmark (LSR-Benchmark), a novel dataset targeting real-life scenario reasoning, aiming to close the gap in artificial neural networks' ability to reason in everyday contexts. In contrast to domain-knowledge reasoning datasets, LSR-Benchmark comprises free-text formatted questions with rich information on real-life scenarios, human behaviors, and character roles. The dataset consists of 2,162 questions collected from open-source online sources and is manually annotated to improve its quality. Experiments are conducted using state-of-the-art language models, such as gpt3.5-turbo and instruction fine-tuned llama models, to test their performance on the LSR-Benchmark. The results reveal that humans outperform these models significantly, indicating a persisting challenge for machine learning models in comprehending daily human life.
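    A minimal sketch of the kind of evaluation the abstract describes, assuming the benchmark is available as a list of question/answer pairs: an exact-match scoring loop over model predictions. The load_lsr_benchmark() loader, query_model() wrapper, and exact-match metric are hypothetical placeholders; the abstract does not specify the evaluation interface or scoring procedure.

```python
# Sketch of scoring a language model on free-text reasoning questions.
# load_lsr_benchmark() and query_model() are hypothetical placeholders.

from typing import Callable

def exact_match_accuracy(examples: list[dict],
                         query_model: Callable[[str], str]) -> float:
    """Score a model by exact match against the annotated reference answers."""
    correct = 0
    for ex in examples:
        prediction = query_model(ex["question"]).strip().lower()
        if prediction == ex["answer"].strip().lower():
            correct += 1
    return correct / len(examples)

# Usage sketch:
# examples = load_lsr_benchmark()            # hypothetical dataset loader
# accuracy = exact_match_accuracy(examples, query_model=my_gpt35_wrapper)
# print(f"LSR-Benchmark accuracy: {accuracy:.3f}")
```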

    Improving Commonsense Causal Reasoning by Adversarial Training and Data Augmentation

    Determining the plausibility of causal relations between clauses is a commonsense reasoning task that requires complex inference ability. The general approach to this task is to train a large pretrained language model on a specific dataset. However, the available training data for the task is often scarce, which leads to instability of model training or reliance on the shallow features of the dataset. This paper presents a number of techniques for making models more robust in the domain of causal reasoning. Firstly, we perform adversarial training by generating perturbed inputs through synonym substitution. Secondly, based on a linguistic theory of discourse connectives, we perform data augmentation using a discourse parser for detecting causally linked clauses in large text, and a generative language model for generating distractors. Both methods boost model performance on the Choice of Plausible Alternatives (COPA) dataset, as well as on Balanced COPA, a modified version of the original data developed to avoid superficial cues, leading to a more challenging benchmark. We show a statistically significant improvement in performance and robustness on both datasets, even with only a small number of additionally generated data points.
    Comment: 7 pages + references, 4 figures, 3 tables, paper accepted at AAAI202
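    The first technique, synonym-substitution perturbation, can be sketched as follows. This is an assumption about the implementation: the abstract only states that perturbed inputs are generated through synonym substitution, so WordNet is used here merely as one plausible synonym source, and the sampling strategy is illustrative.

```python
# Sketch of synonym-substitution perturbation using NLTK's WordNet as the
# synonym source (an assumed choice; the paper does not name its lexicon).
# Requires the WordNet data: nltk.download("wordnet")

import random
from nltk.corpus import wordnet as wn

def synonym_substitute(sentence: str, p: float = 0.2, seed: int = 0) -> str:
    """Replace each word with a random WordNet synonym with probability p."""
    rng = random.Random(seed)
    perturbed = []
    for word in sentence.split():
        synonyms = {
            lemma.name().replace("_", " ")
            for synset in wn.synsets(word)
            for lemma in synset.lemmas()
            if lemma.name().lower() != word.lower()
        }
        if synonyms and rng.random() < p:
            perturbed.append(rng.choice(sorted(synonyms)))
        else:
            perturbed.append(word)
    return " ".join(perturbed)

print(synonym_substitute("The man broke his toe because he dropped the hammer."))
```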