
    Semantics-Enabled Framework for Spatial Image Information Mining of Linked Earth Observation Data

    Recent developments in sensor technology are contributing to the tremendous growth of remote sensing (RS) archives (currently at the petabyte scale). However, these data remain largely unexploited due to current limitations in data discovery, querying, and retrieval capabilities. This issue is exacerbated in disaster situations, where rapid processing and retrieval of imagery of the affected areas are needed. Furthermore, retrieving images based on the spatial configurations of affected regions [land use/cover (LULC) classes] in an image is important in disaster situations such as floods and earthquakes. The majority of existing Earth observation (EO) image information mining (IIM) systems do not consider the spatial relations among image regions during image retrieval (the so-called spatial semantic gap). In this work, we specifically address two issues: explicit modeling of topological and directional relationships between image regions, and development of resource description framework (RDF)-based spatial semantic graphs (SSGs). This enables more intuitive querying and reasoning over the archived data. A spatial IIM (SIIM) framework is proposed that integrates a logic-based reasoning mechanism to extract hidden spatial relationships (both topological and directional) and enables image retrieval based on those relationships. The system is tested using several spatial-relation-based queries on an RS image repository of flood-affected areas to check its applicability in post-flood scenarios. Precision, recall, and F-measure were used to evaluate the performance of the SIIM system, which showed good potential for spatial relation-based image retrieval.
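    The SSGs described above store spatial relations as RDF subject-predicate-object triples that can then be queried by graph pattern matching (e.g., with SPARQL). As a minimal stdlib-only sketch of that idea — all image/region names and relation labels here are illustrative, not taken from the paper's system:

    ```python
    # Toy triple store of spatial relations between image regions.
    # Each triple is (subject, predicate, object); e.g., a flood-water region's
    # topological ("overlaps") and directional ("north_of") relations.
    # Identifiers below are hypothetical examples, not the paper's data.
    triples = {
        ("img1/flood_water", "overlaps", "img1/cropland"),
        ("img1/flood_water", "north_of", "img1/settlement"),
        ("img2/flood_water", "disjoint", "img2/cropland"),
        ("img2/flood_water", "north_of", "img2/settlement"),
    }

    def match(pattern, triples):
        """Return variable bindings for one (s, p, o) pattern.

        Terms starting with '?' are variables, analogous to a SPARQL
        basic graph pattern; other terms must match exactly.
        """
        results = []
        for triple in triples:
            binding = {}
            ok = True
            for term, value in zip(pattern, triple):
                if term.startswith("?"):
                    if binding.get(term, value) != value:
                        ok = False
                        break
                    binding[term] = value
                elif term != value:
                    ok = False
                    break
            if ok:
                results.append(binding)
        return results

    # Query: which regions does flood water overlap?
    hits = match(("?region", "overlaps", "?lulc"), triples)
    # → [{"?region": "img1/flood_water", "?lulc": "img1/cropland"}]
    ```

    A real deployment would use an RDF store and SPARQL rather than this in-memory set, but the retrieval principle — matching a relation pattern against the spatial semantic graph — is the same.
    
    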
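    The evaluation metrics named in the abstract are the standard retrieval measures; a short sketch with made-up counts (the numbers are illustrative only, not the paper's results):

    ```python
    # Standard retrieval metrics. Counts are hypothetical:
    # tp = relevant images retrieved, fp = irrelevant images retrieved,
    # fn = relevant images the query missed.
    tp, fp, fn = 8, 2, 4

    precision = tp / (tp + fp)   # fraction of retrieved images that are relevant
    recall = tp / (tp + fn)      # fraction of relevant images that were retrieved
    f_measure = 2 * precision * recall / (precision + recall)  # harmonic mean
    ```
    
    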