29 research outputs found

    Developments in spatio-temporal query languages

    Management and visualization of spatiotemporal information in GIS

    Although Geographic Information Systems (GIS) have been recognised as the most advanced technology for the management of geospatial information, they are still unable to efficiently manage the temporal dimension. Originally this problem affected only the study and analysis of highly dynamic phenomena. Today's expansion of GIS technology, the ease of acquiring and storing geospatial data and the increased capacity of computing technologies to manage large amounts of data have spread this problem across the whole geospatial sector. The extended use of GIS in decision-making processes is increasing the demand for tools able to manage and analyse dynamic geospatial phenomena where the temporal dimension is crucial. The only temporal model available in commercial GIS packages is based on discretisation of temporal data: changes are represented as a succession of snapshots, and the dynamics of what happens between those stages are not registered. In addition, this approach presents severe problems due to an unavoidable multiplication of data volume, abundant redundancies, loss of query efficiency and the impossibility of knowing the exact timing of changes. Since the late 1980s, and particularly in the 1990s, research on temporal change and on the available conceptual and technological options has been undertaken by the GIS and DBMS sectors. The primary objective of the research presented in this paper is the development of a model for the integration of temporal data with GIS. The method adopted to achieve this objective is based on the combination of Time Geography principles, its graphic language and the dynamic segmentation techniques used in GIS. Past research has demonstrated that the difficulty of integrating time with GIS has its origin in the continuous nature of time. Dynamic segmentation in GIS network analysis has the potential to provide the means for a continuous time-GIS integration. Lifelines, one of the main elements of Time Geography's graphic language, have been modelled as a set of network segments where the dynamics of attribute information is attached to time segments rather than distance segments (for example Euclidean or cost-based), as normally occurs in dynamic segmentation. This paper summarises initial findings of the project. These outcomes have the potential to improve the way the geospatial sector currently handles temporal information. However, the static nature of current GIS technology impedes an appropriate visualisation of dynamic temporal phenomena. To this effect, the paper also explores the possibilities offered by multimedia techniques as a complement to GIS capabilities.
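    A minimal Python sketch of the time-based dynamic segmentation idea described in this abstract. The class names, attribute values and link identifiers below are hypothetical illustrations, not the paper's implementation: a lifeline is stored as network segments keyed by time intervals, so the attribute state at any instant is recovered by an interval lookup rather than a distance measure.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class TimeSegment:
    """One lifeline segment: a network link traversed during a time interval."""
    t_start: float        # seconds since the start of the lifeline
    t_end: float
    link_id: str          # identifier of the network link being traversed
    attributes: dict      # attribute state during this interval, e.g. {"activity": "bus"}


@dataclass
class Lifeline:
    """A Time Geography lifeline modelled as time-keyed network segments."""
    entity_id: str
    segments: List[TimeSegment]   # assumed sorted and non-overlapping

    def attribute_at(self, t: float) -> Optional[dict]:
        """Return the attribute state at time t via a simple interval lookup."""
        for seg in self.segments:
            if seg.t_start <= t < seg.t_end:
                return seg.attributes
        return None


# Hypothetical example: attributes change with time, not with distance travelled.
life = Lifeline("person_1", [
    TimeSegment(0, 600, "link_A", {"activity": "walking"}),
    TimeSegment(600, 1800, "link_B", {"activity": "bus"}),
])
print(life.attribute_at(700))   # -> {'activity': 'bus'}
```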

    Big Data Management and Analytics for Mobility Forecasting in datAcron

    The exploitation of heterogeneous data sources offering very large historical and streaming data is important for increasing the accuracy of operations when analysing and predicting future states of moving entities (planes, vessels, etc.). This article presents the overall goals of, and the big data challenges addressed by, the datAcron project on big data analytics for time-critical mobility forecasting.

    Literature Review on Temporal, Spatial, and Spatiotemporal Data Models

    This paper reviews the literature on temporal databases, spatial databases, and spatio-temporal databases.

    Probabilistic Model To Identify Movement Patterns In Geospatial Data

    Determining the movement patterns of objects from available databases is a daunting task. Tracking the movement of these dynamic objects is important in different areas for understanding the higher-order patterns of movement that carry special meaning for a target application. However, this is still a largely unsolved problem, and recent work has focused on the relationships of moving point objects with stationary objects or landmarks on a map. The Global Positioning System (GPS) is a widely used satellite-based navigation system. Popular use of these devices has produced large collections of data, some of which have been archived. These archived data sets, and sometimes real-time GPS data, are now readily available over the internet, and their analysis through computational methods can generate meaningful insights. These insights, when applied appropriately, can be used in everyday life. The purpose of this research is to make the case that automated analysis can provide insight that is otherwise difficult to achieve due to the large volume and noisy characteristics of GPS data. We present experiments performed on one of these archived databases, which contains GPS traces of 536 yellow cabs in the San Francisco Bay area. Using data analysis, we determine the most visited tourist destinations within the San Francisco Bay area during the time period of the captured data. We also propose a probabilistic framework that determines the probability of a new routing pattern from previous patterns. We use simulated routing patterns built on the same data format as the San Francisco cab data to predict the possible routes to be taken by a vehicle. All probability calculations are performed using Bayes' theorem of conditional probability.
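    The abstract's use of Bayes' theorem for scoring routing patterns can be illustrated with a minimal sketch (assumed details, not the authors' implementation): the prior of each candidate route is estimated from its frequency in historical trips, the likelihood measures whether the observed prefix of a new trip matches the route, and the posterior P(route | prefix) ∝ P(prefix | route) P(route) ranks the candidates.

```python
from collections import Counter

# Hypothetical archived trips, each a tuple of road-segment identifiers.
historical_routes = [
    ("r1", "r2", "r3"),
    ("r1", "r2", "r3"),
    ("r1", "r4", "r5"),
]

def route_posterior(prefix, routes, smoothing=1e-3):
    """Rank candidate routes given an observed prefix, using Bayes' theorem."""
    counts = Counter(routes)
    total = sum(counts.values())
    scores = {}
    for route, n in counts.items():
        prior = n / total                                    # P(route) from history
        likelihood = 1.0 if route[:len(prefix)] == tuple(prefix) else smoothing
        scores[route] = prior * likelihood                   # P(prefix | route) * P(route)
    z = sum(scores.values())                                 # normalising constant
    return {route: s / z for route, s in scores.items()}

# A vehicle observed on r1 then r2 is most likely following ("r1", "r2", "r3").
print(route_posterior(["r1", "r2"], historical_routes))
```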

    Geospatial queries on data collection using a common provenance model

    Other support: Xavier Pons is the recipient of an ICREA Academia Excellence in Research Grant (2016-2020). Lineage information is the part of the metadata that describes "what", "when", "who", "how", and "where" geospatial data were generated. If it is well presented and queryable, lineage becomes very useful information for inferring data quality, tracing error sources and increasing trust in geospatial information. In addition, if the lineage of a collection of datasets can be related and presented together, datasets, process chains, and methodologies can be compared. This paper proposes extending process step lineage descriptions into four explicit levels of abstraction (process run, tool, algorithm and functionality). Including functionality and algorithm descriptions as part of the lineage provides high-level information that is independent of the details of the software used. It is therefore possible to transform lineage metadata that initially documents specific processing steps into a reusable workflow that describes a set of operations as a processing chain. This paper presents a system that provides lineage information as a service in a distributed environment. The system is complemented by an integrated provenance web application capable of visualizing and querying a provenance graph composed of the lineage of a collection of datasets. The International Organization for Standardization (ISO) 19115 family of standards was combined with the World Wide Web Consortium (W3C) provenance initiative (W3C PROV) in order to integrate the provenance of a collection of datasets. To represent lineage elements, the ISO 19115-2 lineage class names were chosen because they express the names of the geospatial objects involved more precisely. The relationship naming conventions of W3C PROV are used to represent relationships among these elements. The elements and relationships are presented in a queryable graph.
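    A minimal sketch of the kind of queryable provenance graph described in this abstract. The ISO 19115-2 / W3C PROV combination and the four abstraction levels come from the abstract; the concrete node names, the "implements"/"realizes" edge labels, and the dictionary representation are assumptions made for illustration.

```python
# Nodes are lineage elements at four abstraction levels plus datasets; edges use
# W3C PROV-style relation names. "implements" and "realizes" are assumed labels.
provenance = {
    "nodes": {
        "dem_2020":     {"type": "dataset"},
        "slope_2020":   {"type": "dataset"},
        "run_42":       {"type": "process_run"},
        "gdaldem_tool": {"type": "tool"},
        "horn_slope":   {"type": "algorithm"},
        "slope_calc":   {"type": "functionality"},
    },
    "edges": [
        ("slope_2020", "wasGeneratedBy", "run_42"),
        ("run_42", "used", "dem_2020"),
        ("run_42", "wasAssociatedWith", "gdaldem_tool"),
        ("gdaldem_tool", "implements", "horn_slope"),
        ("horn_slope", "realizes", "slope_calc"),
        ("slope_2020", "wasDerivedFrom", "dem_2020"),
    ],
}

def related(graph, relation):
    """Return all (source, target) pairs linked by the given relation name."""
    return [(s, t) for s, r, t in graph["edges"] if r == relation]

print(related(provenance, "wasDerivedFrom"))   # -> [('slope_2020', 'dem_2020')]
```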

    A parametric prototype for spatiotemporal databases

    The main goal of this project is to design and implement a parametric database (ParaDB). Conceptually, ParaDB consists of the parametric data model (ParaDM) and the parametric structured query language (ParaSQL). The parametric data model is a data model for multi-dimensional databases such as temporal, spatial, spatiotemporal, or multi-level secure databases. The main difference from the classical relational data model is that ParaDM models an object as a single tuple and defines an attribute as a function over parametric elements. The set of parametric elements is closed under union, intersection, and complementation; these operations are the counterparts of "or", "and", and "not" in a natural language such as English. The closure properties therefore provide very flexible ways to query objects without introducing the additional self-join operations that are frequently required in other multi-dimensional database models.
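    A minimal sketch of the closure property described above, using finite sets of time instants as parametric elements (the actual ParaDM/ParaSQL formalism is more general). Union, intersection, and complementation of parametric elements correspond to OR, AND, and NOT in a query, which is what avoids the extra self-joins; the attribute name and domain below are hypothetical.

```python
# Parametric elements here are finite sets of time instants over a toy domain.
DOMAIN = frozenset(range(10))

# An attribute is a function from parametric elements to values: "salary" is 50
# over instants 0-4 and 60 over instants 5-9 (hypothetical data).
salary = {frozenset(range(0, 5)): 50, frozenset(range(5, 10)): 60}

def when(attribute, predicate):
    """Parametric element (set of instants) on which the predicate holds."""
    result = set()
    for element, value in attribute.items():
        if predicate(value):
            result |= element
    return result

high = when(salary, lambda v: v > 55)   # instants where salary > 55  -> {5..9}
low  = when(salary, lambda v: v < 55)   # instants where salary < 55  -> {0..4}

print(high | low)       # union         -> the whole domain (logical OR)
print(high & low)       # intersection  -> empty set        (logical AND)
print(DOMAIN - high)    # complement    -> {0, 1, 2, 3, 4}  (logical NOT)
```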