Can a Gorilla Ride a Camel? Learning Semantic Plausibility from Text
Modeling semantic plausibility requires commonsense knowledge about the world
and has been used as a testbed for exploring various knowledge representations.
Previous work has focused specifically on modeling physical plausibility and
shown that distributional methods fail when tested in a supervised setting. At
the same time, distributional models, namely large pretrained language models,
have led to improved results for many natural language understanding tasks. In
this work, we show that these pretrained language models are in fact effective
at modeling physical plausibility in the supervised setting. We therefore
present the more difficult problem of learning to model physical plausibility
directly from text. We create a training set by extracting attested events from
a large corpus, and we provide a baseline for training on these attested events
in a self-supervised manner and testing on a physical plausibility task. We
believe results could be further improved by injecting explicit commonsense
knowledge into a distributional model.

Comment: Accepted at COIN@EMNLP 2019