Multi-Modal Spatial Querying

Abstract

This project investigates the use of two concurrent communication channels, graphics and speech, to achieve a successful interaction between a person and a geographic information system (GIS). The objective is to construct a multi-modal spatial query language in which users interact with a geographic database by drawing sketches of the desired configuration, while simultaneously talking about the spatial objects and the spatial relations drawn. This study will increase our understanding of multi-modal spatial interactions and will lead to improved strategies for intelligent integration and processing of such multi-modal spatial queries in a GIS. The key to this interaction is the exploitation of complementary or redundant information present in both graphical and verbal descriptions of the same spatial scenes. A multiple-resolution model of spatial relations is used to capture the essential aspects of a sketch and its corresponding verbal description. The model stresses topological properties, such as containment and neighborhood, and considers metrical properties, such as distance and direction, as refinements where necessary. This model enables the retrieval of similar, not only exact, matches between a spatial query and a geographic database. Such new methods of multi-modal spatial querying and spatial similarity retrieval will empower experts as well as novice users to perform easier spatial searches, ultimately providing new user communities with access to spatial databases.
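To make the multiple-resolution idea concrete, the following is a minimal sketch, not the project's actual model: the topological relation is compared first as the coarse level, and metric properties (direction, distance) refine the score only when the query specifies them. The SpatialRelation class, the relation vocabulary, and the penalty weights are illustrative assumptions.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SpatialRelation:
        """One relation between two sketched objects.

        topology is the coarse level (a simplified vocabulary in the
        spirit of topological relation models); direction and distance
        are optional metric refinements taken from the verbal channel.
        """
        topology: str                     # e.g., "inside"
        direction: Optional[str] = None   # e.g., "north"
        distance: Optional[str] = None    # e.g., "near"

    def similarity(query: SpatialRelation, candidate: SpatialRelation) -> float:
        """Score a database relation against a query relation.

        Topology dominates: a mismatch at the coarse level yields 0.
        Metric refinements each halve the score when they disagree,
        so partially matching scenes still rank as similar matches.
        """
        if query.topology != candidate.topology:
            return 0.0
        score = 1.0
        if query.direction is not None and query.direction != candidate.direction:
            score *= 0.5
        if query.distance is not None and query.distance != candidate.distance:
            score *= 0.5
        return score

    # "The lake is inside the park, near its northern edge" matched
    # against two stored scenes: ranking by score, rather than testing
    # for exact equality, is what admits similar as well as exact matches.
    query = SpatialRelation("inside", direction="north", distance="near")
    scenes = [SpatialRelation("inside", direction="north", distance="far"),
              SpatialRelation("overlap", direction="north", distance="near")]
    for scene in sorted(scenes, key=lambda s: similarity(query, s), reverse=True):
        print(scene, similarity(query, scene))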
