8 research outputs found

    Database Technology for Processing Temporal Data

    Constructing and interrogating actor histories

    Complex systems, such as organizations, can be represented as executable simulation models using actor-based languages. Decision-making can be supported by system simulation, so that different configurations provide a basis for what-if analysis. Actor-based models are expressed in terms of large numbers of concurrent actors that communicate using asynchronous messages, leading to complex non-deterministic behaviour. This chapter addresses the problem of analyzing the results of model executions and proposes a general approach that can be added to any actor-based system. The approach uses a logic programming language with temporal extensions to query execution traces. The approach has been implemented and is shown to support a representative system model.
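
    A minimal sketch of the idea, in Python rather than the temporal logic language the chapter describes: an execution trace is recorded as a list of timestamped message events, and a query checks an ordering property over that trace. All names and data here are illustrative.

```python
# Minimal sketch (not the chapter's query language): an actor execution
# trace as timestamped message events, plus a temporal ordering query.
from dataclasses import dataclass

@dataclass
class MessageEvent:
    time: int       # logical timestamp of the send
    sender: str
    receiver: str
    message: str

def eventually_before(trace, first, second):
    """True if some event matching `first` precedes one matching `second`."""
    first_times = [e.time for e in trace if first(e)]
    second_times = [e.time for e in trace if second(e)]
    return any(t1 < t2 for t1 in first_times for t2 in second_times)

trace = [
    MessageEvent(1, "customer", "order_desk", "place_order"),
    MessageEvent(3, "order_desk", "warehouse", "reserve_stock"),
    MessageEvent(7, "warehouse", "customer", "ship"),
]

# Was stock reserved before anything was shipped?
print(eventually_before(
    trace,
    lambda e: e.message == "reserve_stock",
    lambda e: e.message == "ship"))
```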

    Backlogs and Interval Timestamps: Building Blocks for Supporting Temporal Queries in Graph Databases (work-in-progress paper)

    The analysis of networks, either at a single point in time or through their evolution, is an increasingly important task in modern data management. Graph databases are uniquely suited to improve static network analysis. However, there is still no consensus on how best to model data evolution with these databases. In our work we propose an elementary concept to support temporal analysis with property graph databases, using a single-graph model limited to structural changes. We manage the temporal aspects of items with interval timestamps and backlogs. To include backlogs in the model we examine two alternatives: (1) global indexes, and (2) using the graph as an index by resorting to timestamp denormalization. We evaluate density calculation and time-slice retrieval over successive days of a SNAP dataset on an Apache Titan prototype of our model, observing 2x to 100x response-time gains for differential versus snapshot methods, and no conclusive difference between the backlog alternatives.
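
    A minimal sketch of the interval-timestamp idea, in Python rather than the paper's Apache Titan prototype: graph edges carry [start, end) validity intervals, and a time-slice query keeps only the edges alive at a given instant. Names and data are illustrative.

```python
# Minimal sketch: property-graph edges with interval timestamps [start, end),
# a time-slice query, and density at a point in time.
from collections import namedtuple

Edge = namedtuple("Edge", "src dst start end")  # end=None means still alive

edges = [
    Edge("a", "b", 1, 5),
    Edge("b", "c", 2, None),
    Edge("a", "c", 4, 9),
]

def time_slice(edges, t):
    """Edges whose validity interval contains time t."""
    return [e for e in edges if e.start <= t and (e.end is None or t < e.end)]

def density(edges, t):
    """Directed-graph density at time t: live edges / possible edges."""
    live = time_slice(edges, t)
    nodes = {n for e in live for n in (e.src, e.dst)}
    possible = len(nodes) * (len(nodes) - 1)
    return len(live) / possible if possible else 0.0

print(time_slice(edges, 4))   # all three edges are alive at t=4
print(density(edges, 4))      # 3 live edges over 6 possible -> 0.5
```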

    Comprehensive and interactive temporal query processing with SAP HANA

    In this demo, we present a prototype of a main memory database system which provides a wide range of temporal operators featuring predictable and interactive response times. Much real-life data is temporal in nature, and there is increasing application demand for temporal models and operations in databases. Nevertheless, SQL:2011 has only recently overcome a decade-long standstill on standardizing temporal features. As a result, few database systems provide any temporal support, and even those offer only limited expressiveness and poor performance. Our prototype combines an in-memory column store with a novel, generic temporal index structure named Timeline Index. As we will show on a workload based on real customer use cases, it achieves predictable and interactive query performance for a wide range of temporal query types and data sizes.
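
    A much-simplified sketch of the event-list idea behind a Timeline Index, not SAP HANA's implementation: row activations and invalidations are kept in commit-time order, and a time-travel query replays them up to the requested time. All names and data are illustrative.

```python
# Much-simplified sketch of a timeline-style event list: (+1) marks a row
# becoming visible, (-1) marks it being deleted or overwritten.
events = [
    (1, "row1", +1),   # row1 becomes visible at time 1
    (2, "row2", +1),
    (4, "row1", -1),   # row1 is invalidated at time 4
    (5, "row3", +1),
]

def visible_rows(events, t):
    """Rows visible at time t, obtained by replaying the event list."""
    alive = set()
    for time, row, kind in events:
        if time > t:
            break                      # events are sorted by time
        (alive.add if kind > 0 else alive.discard)(row)
    return alive

print(visible_rows(events, 3))  # {'row1', 'row2'}
print(visible_rows(events, 5))  # {'row2', 'row3'}
```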

    Advanced distributed data integration infrastructure and research data management portal

    The amount of data available due to the rapid spread of advanced information technology is exploding. At the same time, continued research on data integration systems aims to provide users with uniform data access and efficient data sharing. The ability to share data is particularly important for interdisciplinary research, where a comprehensive picture of the subject requires large amounts of data from disparate sources across a variety of disciplines. While numerous data sets are available from various groups worldwide, the existing data sources are principally oriented toward regional comparative efforts rather than global applications. They vary widely in both content and format. Such data sources cannot easily be integrated and maintained by small groups of developers. I propose an advanced infrastructure for large-scale data integration based on crowdsourcing. In particular, I propose a novel architecture and algorithms to efficiently store dynamically incoming heterogeneous datasets, enabling both data integration and data autonomy. My proposed infrastructure combines machine learning algorithms and human expertise to perform efficient schema alignment and maintain relationships between the datasets. It provides efficient data exploration functionality without requiring users to write complex queries, and performs approximate information fusion when an exact match does not exist. Finally, I introduce the Col*Fusion system, which implements the proposed advanced data integration infrastructure.
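
    A minimal sketch of machine-assisted schema alignment of the kind such an infrastructure might combine with human review; the function and attribute names are hypothetical and not part of the Col*Fusion API.

```python
# Hypothetical sketch: match attribute names from two incoming datasets by
# string similarity; low-confidence pairs are left for human reviewers.
from difflib import SequenceMatcher

def suggest_matches(source_attrs, target_attrs, threshold=0.7):
    """Propose (source, target, score) pairs; below-threshold pairs go to humans."""
    auto, needs_review = [], []
    for s in source_attrs:
        best = max(target_attrs,
                   key=lambda t: SequenceMatcher(None, s.lower(), t.lower()).ratio())
        score = SequenceMatcher(None, s.lower(), best.lower()).ratio()
        (auto if score >= threshold else needs_review).append((s, best, round(score, 2)))
    return auto, needs_review

auto, review = suggest_matches(
    ["country_name", "gdp_usd", "yr"],
    ["country", "gdp", "year"])
print(auto)    # confident matches applied automatically
print(review)  # ambiguous matches routed to human/crowdsourced review
```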