On the Need for Imagistic Modeling in Story Understanding

Abstract

There is ample evidence that human understanding of ordinary language relies in part on a rich capacity for imagistic mental modeling. We argue that genuine language understanding in machines will similarly require an imagistic modeling capacity enabling fast construction of instances of prototypical physical situations and events, whose participants are drawn from a wide variety of entity types, including animate agents. By allowing fast evaluation of predicates such as ‘can-see’, ‘under’, and ‘inside’, these model instances support coherent text interpretation. Imagistic modeling is thus a crucial – and not very broadly appreciated – aspect of the long-standing knowledge acquisition bottleneck in AI. We will illustrate how the need for imagistic modeling arises even in the simplest first-reader stories for children, and provide an initial feasibility study to indicate what the architecture of a system combining symbolic with imagistic understanding might look like.
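The abstract's notion of fast predicate evaluation over constructed model instances can be illustrated with a minimal sketch. The representation below (axis-aligned boxes in a 2-D scene, a sampled line-of-sight test) is an illustrative assumption, not the paper's actual architecture; the entity and predicate names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Entity:
    """Axis-aligned box standing in for an object in a toy 2-D scene model."""
    name: str
    x: float  # left edge
    y: float  # bottom edge
    w: float  # width
    h: float  # height

def inside(a: Entity, b: Entity) -> bool:
    """True if a's extent lies entirely within b's extent."""
    return (b.x <= a.x and a.x + a.w <= b.x + b.w and
            b.y <= a.y and a.y + a.h <= b.y + b.h)

def under(a: Entity, b: Entity) -> bool:
    """True if a's top is at or below b's bottom and they overlap horizontally."""
    return (a.y + a.h <= b.y and
            a.x < b.x + b.w and b.x < a.x + a.w)

def can_see(viewer: Entity, target: Entity, occluders) -> bool:
    """True if the segment between centers is not blocked by any occluder.
    Coarse sampled intersection test; a real system would use exact geometry."""
    vx, vy = viewer.x + viewer.w / 2, viewer.y + viewer.h / 2
    tx, ty = target.x + target.w / 2, target.y + target.h / 2
    steps = 100
    for i in range(1, steps):
        t = i / steps
        px, py = vx + t * (tx - vx), vy + t * (ty - vy)
        for o in occluders:
            if o.x <= px <= o.x + o.w and o.y <= py <= o.y + o.h:
                return False
    return True
```

For example, a 'cat' box placed below a 'table' box satisfies `under`, and a 'wall' box between a viewer and a target makes `can_see` fail; a symbolic story interpreter could query such predicates to keep an interpretation coherent.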


This paper was published in CiteSeerX.
