956 research outputs found

    Understanding Heterogeneous EO Datasets: A Framework for Semantic Representations

    Earth observation (EO) has become a valuable source of comprehensive, reliable, and persistent information for a wide range of applications. However, dealing with the complexity of land cover is sometimes difficult, because the variety of EO sensors is reflected in the multitude of details recorded in different types of image data, whose properties dictate the category and nature of the perceptible land structures. This data heterogeneity hampers proper understanding and prevents the definition of universal procedures for content exploitation. The main shortcomings stem from the differences between human and sensor perception of objects, and from the lack of coincidence between visual elements and similarities obtained by computation. To bridge these sensory and semantic gaps, the paper presents a compound framework for EO image information extraction. The proposed approach acts as common ground between the user, whose understanding is limited to the visible domain, and the machine's numerical interpretation of much richer information. A hierarchical data representation is adopted: basic elements are first computed automatically, and users then impose their judgement on the processing results until semantic structures are revealed, completing a user-machine knowledge transfer. The interaction is formalized as a dialogue in which communication is determined by a set of parameters guiding the computational process at each level of representation, so that the data-driven observables stay connected to the semantic level and to human awareness. The proposed concept offers users flexibility and interoperability, allowing them to generate the results that best fit their application scenario. Experiments on different satellite images demonstrate that semantic annotation performance can be increased by adjusting the set of parameters to the particularities of the analyzed data.
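
    As a rough illustration of the parameter-guided, hierarchical interaction described above, the sketch below chains a few representation levels whose parameters a user can adjust before rerunning the analysis. All names (Level, run_pipeline) and the placeholder transforms are hypothetical, not the authors' implementation.

    # Minimal sketch of a parameter-guided, hierarchical EO analysis loop.
    # Level names, parameters, and transforms are illustrative placeholders.
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    import numpy as np

    @dataclass
    class Level:
        name: str                      # e.g. "primitive features", "objects", "semantics"
        params: Dict[str, float]       # user-adjustable parameters for this level
        transform: Callable[[np.ndarray, Dict[str, float]], np.ndarray]

    def run_pipeline(image: np.ndarray, levels: List[Level]) -> np.ndarray:
        """Propagate the image through each representation level in order."""
        data = image
        for level in levels:
            data = level.transform(data, level.params)
        return data

    # Example "dialogue": the user inspects the result, adjusts a parameter,
    # and reruns the pipeline with the updated judgement.
    levels = [
        Level("primitives", {"smoothing": 1.0}, lambda x, p: x),   # placeholder feature step
        Level("objects", {"threshold": 0.5},
              lambda x, p: (x > p["threshold"]).astype(float)),
        Level("semantics", {"min_area": 10.0}, lambda x, p: x),    # placeholder labeling step
    ]

    image = np.random.rand(64, 64)
    result = run_pipeline(image, levels)
    levels[1].params["threshold"] = 0.7          # user feedback on the object level
    result = run_pipeline(image, levels)         # rerun with the adjusted parameter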

    Geospatial Semantics

    Geospatial semantics is a broad field that involves a variety of research areas. The term semantics refers to the meaning of things and contrasts with the term syntactics. Accordingly, studies on geospatial semantics usually focus on understanding the meaning of geographic entities as well as their counterparts in the cognitive and digital world, such as cognitive geographic concepts and digital gazetteers. Geospatial semantics can also facilitate the design of geographic information systems (GIS) by enhancing the interoperability of distributed systems and by developing more intelligent interfaces for user interaction. In the past years, a great deal of research has approached geospatial semantics from different perspectives, using a variety of methods and targeting different problems. Meanwhile, the arrival of big geo-data, especially the large amount of unstructured text data on the Web, and the fast development of natural language processing methods enable new research directions in geospatial semantics. This chapter therefore provides a systematic review of existing geospatial semantics research. Six major research areas are identified and discussed: semantic interoperability, digital gazetteers, geographic information retrieval, the geospatial Semantic Web, place semantics, and cognitive geographic concepts. (Comment: Yingjie Hu (2017). Geospatial Semantics. In Bo Huang, Thomas J. Cova, and Ming-Hsiang Tsou et al. (Eds.): Comprehensive Geographic Information Systems. Elsevier, Oxford, UK.)

    Intelligent Image Retrieval Techniques: A Survey

    In the current era of digital communication, the use of digital images has increased for expressing, sharing, and interpreting information. When working with digital images, it is often necessary to search for a specific image for a particular situation based on its visual contents. This task looks easy when dealing with tens of images, but it becomes harder as the number of images grows from tens to hundreds and thousands, and the same content-based search becomes extremely complex when the number of images reaches the millions. To deal with this situation, an intelligent form of content-based searching is required to satisfy the search request with the right visual contents in a reasonable amount of time. Researchers have proposed a number of efficient and robust content-based image retrieval techniques. The aim of this research is to highlight those efforts and to provide a proof of concept for intelligent content-based image retrieval techniques.
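
    As a minimal illustration of the content-based retrieval idea surveyed here, the sketch below indexes images by a global color histogram and ranks them by histogram distance; it stands in for the far more sophisticated techniques covered in the survey and is not drawn from any specific paper.

    # Toy content-based image retrieval: global color histograms + nearest neighbors.
    import numpy as np

    def color_histogram(image: np.ndarray, bins: int = 8) -> np.ndarray:
        """Flattened per-channel histogram, L1-normalized, for an HxWx3 uint8 image."""
        hists = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
                 for c in range(3)]
        h = np.concatenate(hists).astype(float)
        return h / h.sum()

    def retrieve(query: np.ndarray, database: list, k: int = 5) -> list:
        """Return indices of the k database images closest to the query."""
        q = color_histogram(query)
        dists = [np.linalg.norm(q - color_histogram(img)) for img in database]
        return list(np.argsort(dists)[:k])

    # Usage with random stand-in images:
    db = [np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8) for _ in range(100)]
    print(retrieve(db[0], db, k=3))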

    A Decision Support System For The Intelligence Satellite Analyst

    The study developed a decision support system known as the Visual Analytic Cognitive Model (VACOM) to support the Intelligence Analyst (IA) in satellite information processing tasks within the Geospatial Intelligence (GEOINT) domain. As a visual analytics tool, VACOM contains image processing algorithms, a cognitive network of the IA's mental model, and a Bayesian belief model for satellite information processing. A cognitive analysis helped identify eight knowledge levels in satellite information processing: spatial, prototypical, contextual, temporal, semantic, pragmatic, intentional, and inferential. A cognitive network was developed for each knowledge level, with data input from subjective questionnaires that probed the analysts' mental models. The VACOM interface was designed to give analysts a transparent view of the processes, including the visualization model, the signal processing applied to the images, the geospatial data representation, and the cognitive network of expert beliefs. The interface allows the user to select a satellite image of interest, select each of the image analysis methods for visualization, and compare 'ground-truth' information against VACOM's recommendation. It was designed to enhance perception, cognition, and comprehension of the multiple, complex image analyses performed by the analysts. A usability analysis of VACOM showed many advantages for human analysts: reduced cognitive workload due to less information search, the ability for the IA to experiment interactively with each belief and hypothesis, and guidance in selecting the best image processing algorithms to apply in a given image context.
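
    For readers unfamiliar with the Bayesian belief component, the toy sketch below shows a single Bayes' rule update of an analyst's belief given one image-processing cue. The prior and likelihood values are invented; the abstract does not specify the actual structure of VACOM's belief model.

    # Illustrative Bayesian update of an analyst's belief about a target hypothesis.
    def update_belief(prior: float, p_cue_given_h: float, p_cue_given_not_h: float) -> float:
        """Posterior P(H | cue) via Bayes' rule."""
        evidence = p_cue_given_h * prior + p_cue_given_not_h * (1.0 - prior)
        return p_cue_given_h * prior / evidence

    # Analyst's prior that a facility is present, updated after a detector fires.
    prior = 0.2
    posterior = update_belief(prior, p_cue_given_h=0.8, p_cue_given_not_h=0.1)
    print(f"posterior = {posterior:.3f}")   # roughly 0.667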

    Efficient video collection association using geometry-aware Bag-of-Iconics representations

    Recent years have witnessed a dramatic growth in visual data volume and processing capabilities. For example, technical advances have enabled 3D modeling from large-scale crowdsourced photo collections. Compared to static image datasets, however, the exploration and exploitation of Internet video collections remain largely unsolved. To address this challenge, we first propose to represent video contents using a histogram over iconic imagery obtained from relevant visual datasets. We then develop a data-driven framework for fully unsupervised extraction of such representations. Our novel Bag-of-Iconics (BoI) representation efficiently analyzes individual videos within a large-scale video collection. We demonstrate the BoI representation with two novel applications: (1) finding video sequences that connect adjacent landmarks and aligning their reconstructed 3D models, and (2) retrieving geometrically relevant clips from video collections. Results on crowdsourced datasets illustrate the efficiency and effectiveness of the proposed Bag-of-Iconics representation.
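
    A rough sketch of a Bag-of-Iconics style descriptor, assuming frame and iconic-image descriptors are already available: each frame is assigned to its nearest iconic image, and the normalized assignment histogram serves as the video signature. The assignment rule and similarity measure here are simplifications, not the paper's exact pipeline.

    # Assign frames to nearest iconic images and compare videos by histogram similarity.
    import numpy as np

    def bag_of_iconics(frame_descs: np.ndarray, iconic_descs: np.ndarray) -> np.ndarray:
        """frame_descs: (F, D) frame descriptors; iconic_descs: (K, D) iconic descriptors."""
        dists = np.linalg.norm(frame_descs[:, None, :] - iconic_descs[None, :, :], axis=2)
        assignments = dists.argmin(axis=1)          # nearest iconic image per frame
        hist = np.bincount(assignments, minlength=len(iconic_descs)).astype(float)
        return hist / max(hist.sum(), 1.0)

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    # Compare two videos by their BoI histograms.
    iconics = np.random.rand(50, 128)
    video_a = bag_of_iconics(np.random.rand(300, 128), iconics)
    video_b = bag_of_iconics(np.random.rand(200, 128), iconics)
    print(cosine(video_a, video_b))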

    Interactive models for latent information discovery in satellite images

    The recent increase in Earth Observation (EO) missions has resulted in unprecedented volumes of multi-modal data to be processed, understood, used, and stored in archives. The advanced capabilities of satellite sensors become useful only when translated into accurate, focused information that is ready to be used by decision makers from various fields. Two key problems emerge when trying to bridge the gap between research, science, and multi-user platforms: (1) current systems for data access permit only queries by geographic location, time of acquisition, or type of sensor, although this information is often less important than the latent, conceptual content of the scenes; (2) at the same time, many new applications relying on EO data require knowledge of complex image processing and computer vision methods for understanding and extracting information from the data. This dissertation designs two key concept modules of a theoretical image information mining (IIM) system for EO: semantic knowledge discovery in large databases and data visualization techniques. These modules allow users to discover and extract relevant conceptual information directly from satellite images and to generate an optimal visualization of this information. The first contribution of this dissertation is a theoretical solution that bridges this gap and discovers semantic rules between the output of state-of-the-art classification algorithms and the semantic, human-defined, manually applied terminology of cartographic data. The resulting set of rules explains the contents of satellite images in latent, linguistic concepts and links the low-level machine language to high-level human understanding. The second contribution is an adaptive visualization methodology that assists the image analyst in understanding the satellite image through optimal representations and offers cognitive support in discovering relevant information in the scenes. It is an interactive technique for discovering the combination of three spectral features of a multi-band satellite image that best enhances the visualization of learned targets and phenomena of interest. The visual mining module is essential for an IIM system because all EO-based applications involve several steps of visual inspection, and the final decision about the information derived from satellite data is always made by a human operator. To ensure maximum correlation between the requirements of the analyst and the possibilities of the computer, the visualization tool models the human visual system and ensures that a change in image space corresponds to a change in the operator's perception space. This thesis presents novel concepts and methods that help users access and discover latent information in archives and visualize satellite scenes in an interactive, human-centered, and information-driven workflow.
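
    To make the band-selection idea concrete, the sketch below exhaustively scores three-band combinations by how far a learned target's mean spectrum lies from the background mean and returns the best triple for display. The scoring rule and exhaustive search are illustrative simplifications; the dissertation's actual visualization method and interaction are more elaborate.

    # Pick the three spectral bands that best separate target pixels from background.
    from itertools import combinations

    import numpy as np

    def best_three_bands(cube: np.ndarray, target_mask: np.ndarray):
        """cube: (H, W, B) multi-band image; target_mask: (H, W) boolean target pixels."""
        best_score, best_bands = -np.inf, (0, 1, 2)
        for bands in combinations(range(cube.shape[2]), 3):
            sub = cube[..., list(bands)]
            target_mean = sub[target_mask].mean(axis=0)
            background_mean = sub[~target_mask].mean(axis=0)
            score = np.linalg.norm(target_mean - background_mean)
            if score > best_score:
                best_score, best_bands = score, bands
        return best_bands

    cube = np.random.rand(64, 64, 8)          # stand-in 8-band scene
    mask = np.zeros((64, 64), dtype=bool)
    mask[20:30, 20:30] = True                 # stand-in learned target region
    print(best_three_bands(cube, mask))       # bands to map to an RGB display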

    Search improvement within the geospatial web in the context of spatial data infrastructures

    The work developed in this doctoral thesis demonstrates that search can be improved in the context of Spatial Data Infrastructures by applying techniques and best practices from other scientific communities, especially the Web and Semantic Web communities (for example, Linked Data). The use of semantic descriptions and of approaches based on content published by the geospatial community can help in searching for information about geographic phenomena and for geospatial resources in general. The work begins with an analysis of an approach to improve the search for geospatial entities from the perspective of traditional geocoding. The composite geocoding architecture proposed in this work improves geocoding results by drawing on different geographic information providers, and its use of structural design patterns and ontologies yields an architecture that is extensible, flexible, and adaptable. In addition, an architecture based on geocoding service selection enables a methodology for georeferencing diverse types of geographic information (for example, addresses or points of interest). Next, two representative applications that require additional semantic characterization of geospatial resources are presented. The approach proposed in this work uses heuristics based on published content to sample a set of geospatial resources. The first part is devoted to the idea of abstracting a geographic phenomenon from its spatial definition. The research shows that Semantic Web best practices can be reused within a Spatial Data Infrastructure to describe geospatial services standardized by the Open Geospatial Consortium by means of geo-identifiers (that is, by means of the entities of a geographic ontology). The second part of this chapter details the architecture and components of a geoprocessing service for the automatic identification of orthoimagery offered through a standard map publication service (that is, services following the OGC Web Map Service specification). As a result of this work, a method has been proposed to identify which maps offered by a Web Map Service are orthoimages. The work then analyzes issues related to the creation of metadata for Web resources in the geographic domain. It proposes an architecture for the automatic generation of geographic knowledge from Web resources, which required developing a method for estimating the geographic coverage of Web pages. The proposed heuristics are based on content published by geographic information providers. The developed prototype is able to generate metadata, and the generated model contains the recommended minimum set of elements required by a catalogue following the OGC Catalogue Service for the Web specification, the standard recommended by different Spatial Data Infrastructures (for example, the Infrastructure for Spatial Information in the European Community (INSPIRE)). In addition, this study determines some characteristics of the current Geospatial Web. First, it characterizes the market of providers of geographic information Web resources, and it reveals some practices of the geospatial community in producing metadata for Web pages, in particular the lack of geographic metadata. All of the above forms the basis for studying how to support non-expert users in searching for Geospatial Web resources. The search engine dedicated to the Geospatial Web proposed in this work can build on an existing search engine and supports exploratory search over geospatial resources discovered on the Web. A precision and recall experiment showed that the prototype developed in this work is at least as good as the remote search engine, and a usability study indicates that even non-experts can complete a search task with satisfactory results.
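
    A minimal sketch of the composite geocoding idea, assuming hypothetical providers: each provider is tried in priority order and the first answer wins. The provider names and selection rule are invented; the thesis describes a richer architecture based on design patterns, ontologies, and service selection.

    # Composite geocoder: query several providers and keep the first hit.
    from typing import Callable, List, Optional, Tuple

    Coordinate = Tuple[float, float]
    Provider = Callable[[str], Optional[Coordinate]]

    class CompositeGeocoder:
        def __init__(self, providers: List[Provider]):
            self.providers = providers

        def geocode(self, address: str) -> Optional[Coordinate]:
            """Try each provider in priority order and return the first answer."""
            for provider in self.providers:
                result = provider(address)
                if result is not None:
                    return result
            return None

    # Stand-in providers (a real deployment would wrap gazetteer or web services).
    gazetteer = lambda addr: (41.65, -0.88) if "Zaragoza" in addr else None
    fallback = lambda addr: None
    print(CompositeGeocoder([gazetteer, fallback]).geocode("Calle Mayor, Zaragoza"))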

    Access to Digital Cultural Heritage: Innovative Applications of Automated Metadata Generation Chapter 1: Digitization of Cultural Heritage – Standards, Institutions, Initiatives

    The first chapter, "Digitization of Cultural Heritage – Standards, Institutions, Initiatives", provides an introduction to the area of digitization. It surveys the main pillars of the process of creating, preserving, and accessing cultural heritage in digital space, and outlines the importance of metadata in providing access to information. The metadata schemas and standards used in cultural heritage are discussed. To make digital objects reachable in virtual space, they are organized into digital libraries; contemporary digital libraries try to deliver richer and better functionality, which is usually user-oriented and shaped by current IT trends. The chapter also covers initiatives at the world and European level that have, over the years, advanced the digitization and organization of digital objects in the cultural heritage domain. In recent years, the main focus in the creation of digital resources has shifted from "system-centred" to "user-centred", since most of the issues around this content concern making it accessible and usable for real users. User studies, and involving users in the early stages of designing and planning the functionality of the product being developed, therefore take a leading role.
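
    For illustration only, the snippet below builds a minimal descriptive record in the style of Dublin Core, one of the metadata schemas commonly used for cultural heritage objects; the fields, values, and identifier are invented and not drawn from the chapter.

    # Hypothetical Dublin Core-style record for a digitized cultural heritage object.
    record = {
        "dc:title": "Illuminated manuscript, fol. 12r",
        "dc:creator": "Unknown scribe",
        "dc:date": "ca. 1450",
        "dc:type": "StillImage",
        "dc:format": "image/tiff",
        "dc:identifier": "https://example.org/objects/ms-012r",   # hypothetical URI
        "dc:rights": "Public domain",
    }

    # Serialize as simple XML elements for exchange between systems.
    xml = "\n".join(f"<{key}>{value}</{key}>" for key, value in record.items())
    print(xml)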

    Geospatial Data Indexing Analysis and Visualization via Web Services with Autonomic Resource Management

    With the exponential growth in the usage of web-based map services, web GIS applications have become more and more popular. Spatial data indexing, search, analysis, visualization, and the resource management of such services are increasingly important for delivering the Quality of Service (QoS) users expect. First, spatial indexing is typically time-consuming and is not available to end users. To address this, we introduce TerraFly sksOpen, an open-sourced online indexing and querying system for big geospatial data. Integrated with the TerraFly Geospatial database [1-9], sksOpen is an efficient indexing and query engine for processing Top-k Spatial Boolean Queries. Further, we provide ergonomic visualization of query results on interactive maps to facilitate the user's data analysis. Second, due to the highly complex and dynamic nature of GIS systems, it is quite challenging for end users to quickly understand and analyze spatial data and to efficiently share their own data and analysis results with others. Built on the TerraFly Geospatial database, TerraFly GeoCloud is an extra layer running on top of the TerraFly map that efficiently supports many different visualization functions and spatial data analysis models. Furthermore, users can create unique URLs to visualize and share analysis results. TerraFly GeoCloud also provides the MapQL technology to customize map visualization using SQL-like statements [10]. Third, map systems often serve dynamic web workloads and involve multiple CPU- and I/O-intensive tiers, which makes it challenging to meet the response-time targets of map requests while using resources efficiently. Virtualization facilitates the deployment of web map services and improves their resource utilization through encapsulation and consolidation. Autonomic resource management allows resources to be automatically provisioned to a map service and its internal tiers on demand. v-TerraFly is a set of techniques for predicting the demand of map workloads online and optimizing resource allocations, considering both response time and data freshness as the QoS target. The proposed v-TerraFly system is prototyped on TerraFly, a production web map service, and evaluated using real TerraFly workloads. The results show that v-TerraFly predicts workload demands 18.91% more accurately and allocates resources efficiently to meet the QoS target, improving QoS by 26.19% and saving 20.83% of resource usage compared to traditional peak-load-based resource allocation.
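
    As a rough illustration of a Top-k Spatial Boolean Query, the sketch below filters points of interest by required and forbidden keywords and returns the k nearest to a query point. The data and field names are invented, and sksOpen's index-based algorithms are far more efficient than this linear scan.

    # Toy top-k spatial Boolean query: Boolean keyword filter, then k nearest by distance.
    import heapq
    import math
    from dataclasses import dataclass

    @dataclass
    class Poi:
        lat: float
        lon: float
        keywords: frozenset

    def top_k_spatial_boolean(pois, query_pt, required, forbidden, k):
        """required: keywords that must all be present; forbidden: none may appear."""
        def dist(p):
            return math.hypot(p.lat - query_pt[0], p.lon - query_pt[1])
        candidates = [p for p in pois
                      if required <= p.keywords and not (forbidden & p.keywords)]
        return heapq.nsmallest(k, candidates, key=dist)

    pois = [Poi(25.77, -80.19, frozenset({"cafe", "wifi"})),
            Poi(25.76, -80.20, frozenset({"cafe"})),
            Poi(25.80, -80.12, frozenset({"cafe", "wifi", "parking"}))]
    print(top_k_spatial_boolean(pois, (25.77, -80.19), {"cafe", "wifi"}, {"parking"}, k=2))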