    From Sensor to Observation Web with Environmental Enablers in the Future Internet

    This paper outlines the grand challenges in global sustainability research and the objectives of the FP7 Future Internet PPP program within the Digital Agenda for Europe. Large user communities are generating significant amounts of valuable environmental observations at local and regional scales using the devices and services of the Future Internet. These communities’ environmental observations represent a wealth of information which is currently hardly used, or used only in isolation, and is therefore in need of integration with other information sources. Indeed, this very integration will lead to a paradigm shift from a mere Sensor Web to an Observation Web with semantically enriched content emanating from sensors, environmental simulations and citizens. The paper also describes the research challenges to realize the Observation Web and the associated environmental enablers for the Future Internet. Such an environmental enabler could, for instance, be an electronic sensing device, a web-service application, or even a social networking group affording or facilitating the capability of Future Internet applications to consume, produce, and use environmental observations in cross-domain applications. The term ‘envirofied’ Future Internet is coined to describe this overall target, which forms a cornerstone of work in the Environmental Usage Area within the Future Internet PPP program. Relevant trends described in the paper are the usage of ubiquitous sensors (anywhere), the provision and generation of information by citizens, and the convergence of real and virtual realities to convey understanding of environmental observations. The paper addresses the technical challenges in the Environmental Usage Area and the need for designing a multi-style service-oriented architecture. Key topics are the mapping of requirements to capabilities, the provision of scalability and robustness, and the implementation of context-aware information retrieval. Another essential research topic is the handling of data fusion and model-based computation, and the related propagation of information uncertainty. Approaches to security, standardization and harmonization, all essential for sustainable solutions, are summarized from the perspective of the Environmental Usage Area. The paper concludes with an overview of emerging, high-impact applications in the environmental areas concerning land ecosystems (biodiversity), air quality (atmospheric conditions) and water ecosystems (marine asset management).
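
    A note on the data-fusion and uncertainty-propagation challenge mentioned in this abstract: one standard, generic illustration is inverse-variance weighting, which combines independent observations of the same quantity while shrinking the combined uncertainty. The sketch below is a minimal example only; the function name and sample readings are invented and do not come from the paper.

```python
# Minimal sketch of fusing independent observations of one environmental
# quantity with inverse-variance weighting. All names and numbers here are
# illustrative assumptions, not taken from the paper.

def fuse_observations(observations):
    """observations: list of (value, variance) pairs from independent sensors.
    Returns the fused estimate and its (reduced) variance."""
    weights = [1.0 / var for _, var in observations]
    fused_value = sum(w * v for (v, _), w in zip(observations, weights)) / sum(weights)
    fused_variance = 1.0 / sum(weights)  # uncertainty shrinks as sensors are added
    return fused_value, fused_variance

if __name__ == "__main__":
    # Three hypothetical NO2 readings (ug/m3) with differing sensor noise.
    readings = [(41.0, 4.0), (38.5, 1.0), (44.0, 9.0)]
    value, variance = fuse_observations(readings)
    print(f"fused estimate: {value:.2f} ug/m3, variance: {variance:.2f}")
```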

    Semantic Modeling of Analytic-based Relationships with Direct Qualification

    Successfully modeling state- and analytics-based semantic relationships of documents enhances the representation, importance, relevancy, provenience, and priority of a document. These attributes are the core elements that form the machine-based knowledge representation for documents. However, modeling document relationships that can change over time can be inelegant, limited, complex, or overly burdensome for semantic technologies. In this paper, we present Direct Qualification (DQ), an approach for modeling any semantically referenced document, concept, or named graph with results from associated applied analytics. The proposed approach supplements the traditional subject-object relationship by providing a third leg to the relationship: the qualification of how and why the relationship exists. To illustrate, we show a prototype of an event-based system with a realistic use case for applying DQ to relevancy analytics of PageRank and Hyperlink-Induced Topic Search (HITS). Comment: Proceedings of the 2015 IEEE 9th International Conference on Semantic Computing (IEEE ICSC 2015).
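
    As a rough illustration of the qualification idea described above (a relationship annotated with how and why it holds), the sketch below attaches an analytic result to a document-to-document relation using RDF triples. The ex: vocabulary, document names, and score are invented for illustration; the paper's actual DQ terms and prototype are not reproduced here.

```python
# Hedged sketch of qualifying a document relationship with analytic results,
# loosely in the spirit of Direct Qualification. The ex: vocabulary below is
# invented for illustration; the paper's actual DQ terms may differ.
from rdflib import Graph, Namespace, Literal, BNode

EX = Namespace("http://example.org/dq/")
g = Graph()

doc_a, doc_b = EX.documentA, EX.documentB
g.add((doc_a, EX.references, doc_b))  # traditional subject-object relationship

# Third "leg": a qualification node explaining how and why the relation holds.
q = BNode()
g.add((doc_a, EX.qualifiedBy, q))
g.add((q, EX.qualifies, EX.references))
g.add((q, EX.derivedFrom, Literal("PageRank")))   # which analytic produced it
g.add((q, EX.score, Literal(0.83)))               # analytic result at a point in time
g.add((q, EX.reason, Literal("high link-based relevancy")))

for s, p, o in g:
    print(s, p, o)
```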

    Improving Big Data Visual Analytics with Interactive Virtual Reality

    For decades, the growth and volume of digital data collection have made it challenging to digest large volumes of information and extract underlying structure. Coined 'Big Data', these massive amounts of information have quite often been gathered inconsistently (e.g., from many sources, in various forms, at different rates). These factors impede the practices of not only processing data, but also analyzing and displaying it to the user in an efficient manner. Many efforts have been made in the data mining and visual analytics communities to create effective ways to further improve analysis and achieve the knowledge desired for better understanding. Our approach for improved big data visual analytics is two-fold, focusing on both visualization and interaction. Given geo-tagged information, we are exploring the benefits of visualizing datasets in the original geospatial domain by utilizing a virtual reality platform. After running proven analytics on the data, we intend to represent the information in a more realistic 3D setting, where analysts can achieve an enhanced situational awareness and rely on familiar perceptions to draw in-depth conclusions about the dataset. In addition, developing a human-computer interface that responds to natural user actions and inputs creates a more intuitive environment. Tasks can be performed to manipulate the dataset and allow users to dive deeper upon request, adhering to desired demands and intentions. Due to the volume and popularity of social media, we developed a 3D tool visualizing Twitter activity on MIT's campus for analysis. Utilizing today's emerging technologies to create a fully immersive tool that promotes visualization and interaction can help ease the process of understanding and representing big data. Comment: 6 pages, 8 figures, 2015 IEEE High Performance Extreme Computing Conference (HPEC '15).
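
    The abstract's core step of placing geo-tagged records back into their geospatial context can be hinted at with a very small sketch: a 3D scatter of hypothetical geo-tagged posts. The sample coordinates and axis choices are assumptions for illustration; the paper's VR platform and Twitter dataset are not reproduced here.

```python
# Minimal sketch of viewing geo-tagged posts in a 3D geospatial setting.
# The sample points and the choice of axes are invented for illustration.
import matplotlib.pyplot as plt

# (longitude, latitude, hour-of-day) for a few hypothetical geo-tagged posts
points = [(-71.093, 42.359, 9), (-71.091, 42.360, 13), (-71.087, 42.362, 18)]

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
xs, ys, zs = zip(*points)
ax.scatter(xs, ys, zs)
ax.set_xlabel("longitude")
ax.set_ylabel("latitude")
ax.set_zlabel("hour of day")
plt.show()
```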

    Answer Set Programming Modulo `Space-Time'

    We present ASP Modulo `Space-Time', a declarative representational and computational framework to perform commonsense reasoning about regions with both spatial and temporal components. Supported are capabilities for mixed qualitative-quantitative reasoning, consistency checking, and inferring compositions of space-time relations; these capabilities combine and synergise in a range of AI application areas where the processing and interpretation of spatio-temporal data is crucial. The framework and the resulting system constitute the only general KR-based method for declaratively reasoning about the dynamics of `space-time' regions as first-class objects. We present an empirical evaluation (with scalability and robustness results), and include diverse application examples involving interpretation and control tasks.
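
    One ingredient of the capabilities listed above, inferring compositions of relations, can be sketched generically with a composition table. The toy table below covers only three temporal relations and is purely illustrative; the paper's actual encoding uses Answer Set Programming and is not shown here.

```python
# Hedged sketch of composing qualitative relations from a (tiny) composition
# table, one building block of qualitative space-time reasoning. This toy
# table is illustrative only and is not the paper's ASP encoding.
COMPOSITION = {
    ("before", "before"): {"before"},
    ("before", "equals"): {"before"},
    ("equals", "before"): {"before"},
    ("equals", "equals"): {"equals"},
}

def compose(r1, r2):
    """Possible relations between A and C, given A r1 B and B r2 C."""
    return COMPOSITION.get((r1, r2), {"unknown"})

print(compose("before", "equals"))   # {'before'}
print(compose("equals", "equals"))   # {'equals'}
```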

    The DIGMAP geo-temporal web gazetteer service

    This paper presents the DIGMAP geo-temporal Web gazetteer service, a system providing access to names of places, historical periods, and associated geo-temporal information. Within the DIGMAP project, this gazetteer serves as the unified repository of geographic and temporal information, assisting in the recognition and disambiguation of geo-temporal expressions in text, as well as in resource searching and indexing. We describe the data integration methodology, the handling of temporal information, and some of the applications that use the gazetteer. Initial evaluation results show that the proposed system can adequately support several tasks related to geo-temporal information extraction and retrieval.
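
    To give a flavour of how a client might use such a gazetteer for place-name lookup, the sketch below queries a hypothetical REST endpoint. The URL, parameters, and response fields are placeholders, not the DIGMAP service's actual API.

```python
# Hedged sketch of a client querying a geo-temporal gazetteer for a place name.
# The endpoint URL, parameters, and response fields are hypothetical placeholders.
import requests

def lookup_place(name, base_url="http://example.org/gazetteer/search"):
    response = requests.get(base_url, params={"q": name, "format": "json"}, timeout=10)
    response.raise_for_status()
    return response.json()  # e.g. candidate places with coordinates and time spans

if __name__ == "__main__":
    for candidate in lookup_place("Lisboa").get("results", []):
        print(candidate.get("name"), candidate.get("latitude"), candidate.get("longitude"))
```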

    A Geovisual Analytics Approach for Mouse Movement Analysis

    The use of Web maps has created opportunities and challenges for map generation and delivery. While volunteered geographic information has led to the development of accurate and inexpensive Web maps, the sheer volume of data generated has created spatial information overload. This results in difficulties in identifying relevant map features. Geopersonalisation, which adapts map content based on user interests, offers a solution to this. The technique is especially powerful when implicit indicators of interest are used as a basis for personalisation. This article describes the design and features of VizAnalysisTools, a suite of tools to visualise and interpret users’ implicit interactions with map content. While traditional data mining techniques can be used to identify trends and preferences, visual analytics, and in particular Geovisual Analytics, which assists the human cognition process, has proven useful in detecting interesting patterns. Identifying salient trends makes areas of interest on the map apparent, and this knowledge can be used to strengthen the algorithms used for Geopersonalisation.
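
    A small sketch of one building block of implicit-interaction analysis: binning sampled mouse positions into a coarse grid so that dwell hot-spots stand out. The grid size and sample events are invented; VizAnalysisTools itself is not reproduced here.

```python
# Hedged sketch of binning mouse positions over a map into a coarse grid so
# that dwell hot-spots stand out. Grid size and sample events are illustrative.
from collections import Counter

def bin_mouse_events(events, cell_size=50):
    """events: (x, y) screen positions sampled while the user pans or hovers."""
    grid = Counter()
    for x, y in events:
        grid[(x // cell_size, y // cell_size)] += 1
    return grid

events = [(120, 340), (130, 350), (125, 345), (700, 80)]
for cell, count in bin_mouse_events(events).most_common():
    print(f"cell {cell}: {count} samples")
```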

    Combining Geospatial and Temporal Ontologies

    Publicly available ontologies are growing in number at present. These ontologies describe entities in a domain and the relations among these entities. This thesis describes a method to automatically combine a pair of orthogonal ontologies using cross products. A geospatial ontology and a temporal ontology are combined in this work. Computing the cross product of the geospatial and the temporal ontologies gives a complete set of pairwise combinations of terms from the two ontologies. This method offers researchers the benefit of using ontologies that already exist and are available, rather than building new ontologies for areas outside their scope of expertise. The resulting framework describes a geospatial domain over all possible temporal granularities or levels, allowing one domain to be understood from the perspective of another. Further queries on the framework help a user make higher-order inferences about a domain. In this work, Protege, an open-source ontology editor and knowledge base tool, is used to model the ontologies. Protege supports the creation, visualization and manipulation of ontologies in various formats, including XML (Extensible Markup Language). Use of standard and extensible languages like XML allows sharing of data across different information systems and thus supports reuse of these ontologies. Both the geospatial ontology and the temporal ontology are represented in Protege. This thesis demonstrates the usefulness of this integrated spatio-temporal framework for reasoning about geospatial domains. SQL queries can be applied to the cross product to return different kinds of information about a domain to the user. For example, the geospatial term Library can be combined with all terms from the temporal ontology to consider Library over all possible kinds of times, including those that might have been overlooked during previous analyses. Visualizations of cross product spaces using Graphviz provide a means for displaying the geospatial-temporal terms as well as the different relations that link these terms. This visualization step also highlights the structure of the cross product for users. In order to generate a more tractable cross product for analysis purposes, methods for filtering terms from the cross product are also introduced. Filtering results in a more focused understanding of the spatio-temporal framework.
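
    The cross-product step described above can be illustrated in a few lines: every geospatial term is paired with every temporal term to enumerate the combined space. The term lists are invented examples; the thesis builds and queries the real ontologies in Protege.

```python
# Minimal sketch of the cross-product idea: pairing every geospatial term with
# every temporal term to enumerate the combined space. Term lists are invented.
from itertools import product

geospatial_terms = ["Library", "Park", "River"]
temporal_terms = ["Instant", "Interval", "RecurringInterval"]

cross_product = list(product(geospatial_terms, temporal_terms))
for g, t in cross_product:
    print(f"{g} x {t}")
# e.g. "Library x Interval" supports reasoning about a Library over interval-like times.
```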