
    A New Model for Generating Multimodal Referring Expressions

    We present a new algorithm for the generation of multimodal referring expressions (combining language and deictic gestures). The approach differs from earlier work in that we allow for various gradations of precision in pointing, ranging from unambiguous to vague pointing gestures. The model predicts that the linguistic properties realized in the generated expression are co-dependent on the kind of pointing gesture included. The decision to point is based on a tradeoff between the costs of pointing and the costs of linguistic properties, where both kinds of costs are computed in empirically motivated ways. The model has been implemented using a graph-based generation algorithm.
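    To make the cost tradeoff concrete, here is a toy sketch in Python. All names, cost values, gesture scopes, and the example domain are illustrative assumptions of mine, not the paper's implementation or its empirically estimated costs; the greedy property selection merely stands in for the paper's graph-based generation algorithm.

        # Toy sketch of the pointing-vs-words cost tradeoff; all costs and
        # names are assumed for illustration, not taken from the paper.
        POINTING_COST = {"precise": 3.0, "imprecise": 2.0, "vague": 1.0}
        PROPERTY_COST = {"type": 0.5, "colour": 1.0, "size": 1.5}

        def describe(target, pool):
            """Add the cheapest properties of the target until every
            distractor in the pool is ruled out; return None on failure."""
            chosen = []
            for prop in sorted(PROPERTY_COST, key=PROPERTY_COST.get):
                if not pool:
                    break
                if any(d[prop] != target[prop] for d in pool):
                    chosen.append(prop)
                    pool = [d for d in pool if d[prop] == target[prop]]
            return chosen if not pool else None

        def best_expression(target, distractors, scopes):
            """Compare the total cost of each gesture option ('none' plus
            three precision levels): a more precise gesture costs more but
            leaves fewer distractors for the words to rule out."""
            options = []
            for gesture, g_cost in [("none", 0.0)] + list(POINTING_COST.items()):
                pool = (distractors if gesture == "none"
                        else [d for d in distractors if d["id"] in scopes[gesture]])
                props = describe(target, pool)
                if props is not None:
                    total = g_cost + sum(PROPERTY_COST[p] for p in props)
                    options.append((total, gesture, props))
            return min(options)  # (total cost, gesture, properties)

        target = {"id": 1, "type": "ball", "colour": "red", "size": "small"}
        distractors = [
            {"id": 2, "type": "ball", "colour": "blue", "size": "small"},
            {"id": 3, "type": "cube", "colour": "red", "size": "large"},
        ]
        scopes = {"precise": {1}, "imprecise": {1, 2}, "vague": {1, 2, 3}}
        print(best_expression(target, distractors, scopes))
        # -> (1.5, 'none', ['type', 'colour'])

    In this toy domain a purely linguistic description is cheapest, but raising the property costs (e.g., in a cluttered scene needing many words) makes one of the pointing options win, which is the co-dependence between gesture precision and realized linguistic properties that the abstract describes.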
