Visual Salience and Perceptual Grouping in Multimodal Interactivity

Abstract

This paper deals with the pragmatic interpretation of multimodal referring expressions in man-machine dialogue systems. We show the importance of building a semantic-level structure of the visual context, both to broaden the range of possible interpretations and to allow this structure to be fused with those obtained from the linguistic and gesture semantic analyses. Visual salience and perceptual grouping are the two notions that guide this structuring. We thus propose a hierarchy of salience criteria linked to an algorithm that detects salient objects, as well as guidelines for grouping algorithms. We show that integrating the results of all these algorithms is a complex problem, propose simple heuristics to reduce this complexity, and conclude on the usability of such heuristics in actual systems.
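
As an illustration of the approach the abstract describes, here is a minimal sketch of detecting salient objects via a hierarchy of salience criteria. The criteria, their ordering, and the object attributes are purely hypothetical placeholders, not taken from the paper; the sketch only shows the general mechanism of trying criteria in priority order and stopping at the first one that singles objects out.

```python
from dataclasses import dataclass

@dataclass
class VisualObject:
    """A hypothetical semantic description of an object in the visual scene."""
    name: str
    is_highlighted: bool = False   # e.g. currently selected or blinking
    is_isolated: bool = False      # spatially set apart from the other objects
    is_unique_type: bool = False   # only object of its category in the scene
    size: float = 1.0

# Hypothetical hierarchy of salience criteria, ordered from strongest to
# weakest; the paper's actual criteria and their ordering may differ.
SALIENCE_CRITERIA = [
    ("highlighted", lambda o, scene: o.is_highlighted),
    ("isolated",    lambda o, scene: o.is_isolated),
    ("unique type", lambda o, scene: o.is_unique_type),
    ("largest",     lambda o, scene: o.size == max(x.size for x in scene)),
]

def salient_objects(scene):
    """Return the objects selected by the highest-ranked criterion
    that any object in the scene satisfies."""
    for label, criterion in SALIENCE_CRITERIA:
        matches = [o for o in scene if criterion(o, scene)]
        if matches:
            return label, matches
    return None, []

scene = [
    VisualObject("triangle", size=2.0),
    VisualObject("circle", is_isolated=True),
    VisualObject("square", size=3.0),
]
print(salient_objects(scene))  # -> ('isolated', [VisualObject(name='circle', ...)])
```

The result of such a detector would then be one of the structures fused with those coming from the linguistic and gesture analyses when resolving a referring expression.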
