The toy box problem (and a preliminary solution)
The evaluation of incremental progress towards 'Strong AI' or 'AGI' remains a challenging open problem. In this paper, we draw inspiration from benchmarks used in artificial commonsense reasoning to propose a new benchmark problem, the Toy Box Problem, that tests the practical real-world intelligence and learning capabilities of an agent. An important aspect of a benchmark is that it is realistic and plausibly achievable; as such, we outline a preliminary solution based on the Comirit Framework.
Improving Commonsense Causal Reasoning by Adversarial Training and Data Augmentation
Determining the plausibility of causal relations between clauses is a
commonsense reasoning task that requires complex inference ability. The general
approach to this task is to train a large pretrained language model on a
specific dataset. However, the available training data for the task is often
scarce, which leads to instability of model training or reliance on the shallow
features of the dataset. This paper presents a number of techniques for making
models more robust in the domain of causal reasoning. Firstly, we perform
adversarial training by generating perturbed inputs through synonym
substitution. Secondly, based on a linguistic theory of discourse connectives,
we perform data augmentation using a discourse parser for detecting causally
linked clauses in large text, and a generative language model for generating
distractors. Both methods boost model performance on the Choice of Plausible
Alternatives (COPA) dataset, as well as on a Balanced COPA dataset, which is a
modified version of the original data that has been developed to avoid
superficial cues, leading to a more challenging benchmark. We show a
statistically significant improvement in performance and robustness on both
datasets, even with only a small number of additionally generated data points.

Comment: 7 pages plus references, 4 figures, 3 tables; paper accepted at AAAI202
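The adversarial-training step described above relies on generating perturbed inputs through synonym substitution. A minimal sketch of that idea is shown below; the synonym table and the replace-every-match policy are illustrative assumptions, not the paper's actual procedure (which would typically draw synonyms from a lexical resource such as WordNet and constrain substitutions to preserve meaning).

```python
import random

# Hypothetical synonym table for demonstration only; a real system
# would use a lexical resource such as WordNet.
SYNONYMS = {
    "big": ["large", "huge"],
    "quick": ["fast", "rapid"],
    "happy": ["glad", "pleased"],
}

def perturb(sentence, rng=None):
    """Return a perturbed copy of `sentence`, replacing each word that
    has a known synonym with a randomly chosen alternative."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    out = []
    for word in sentence.split():
        key = word.lower()
        if key in SYNONYMS:
            out.append(rng.choice(SYNONYMS[key]))
        else:
            out.append(word)
    return " ".join(out)

print(perturb("the quick dog was happy"))
```

Perturbed sentences like these would be added to the training set (with unchanged labels) so the model learns to be robust to surface-level lexical variation rather than relying on shallow cues.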