
    Finding dense locations in indoor tracking data


    How to manage massive spatiotemporal dataset from stationary and non-stationary sensors in commercial DBMS?

    The growing diffusion of the latest information and communication technologies in different contexts has enabled the construction of enormous sensing networks that form the underlying fabric of smart environments. The amount of data these environments produce and consume, and the speed at which they do so, are starting to challenge current spatial data management technologies. In this work, we report on our experience handling two real-world spatiotemporal datasets: a stationary dataset from a parking monitoring system and a non-stationary dataset from a train-mounted railway monitoring system. In particular, we present the results of an empirical comparison of the retrieval performance achieved by three different off-the-shelf settings for managing spatiotemporal data: the well-established combination of PostgreSQL + PostGIS with standard indexing, a clustered version of the same setup, and a combination of the basic setup with Timescale, a storage extension specialized in handling temporal data. Since the non-stationary dataset put considerable pressure on the configurations above, we further investigated the advantages achievable by combining the TSMS setup with state-of-the-art indexing techniques. Results showed that standard indexing is by far outperformed by the other solutions, which come with different trade-offs. This experience may help researchers and practitioners facing similar problems when managing these types of data.
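The three storage configurations compared in this abstract can be sketched as PostgreSQL DDL. The sketch below is hypothetical: the table and column names are invented, and the paper's actual schemas are not given; only the commands themselves (a GiST spatial index, `CLUSTER`, and Timescale's `create_hypertable`) correspond to the configurations named above.

```python
# Hypothetical DDL for the three setups compared in the abstract.
# Table and column names are illustrative, not taken from the paper.

def standard_setup(table="sensor_readings"):
    """PostGIS with standard indexing: GiST on geometry, B-tree on time."""
    return [
        f"CREATE TABLE {table} (ts timestamptz, "
        f"geom geometry(Point, 4326), value double precision);",
        f"CREATE INDEX {table}_geom_idx ON {table} USING GIST (geom);",
        f"CREATE INDEX {table}_ts_idx ON {table} (ts);",
    ]

def clustered_setup(table="sensor_readings"):
    """Same schema, with rows physically reordered along the spatial index."""
    return standard_setup(table) + [
        f"CLUSTER {table} USING {table}_geom_idx;",
    ]

def timescale_setup(table="sensor_readings"):
    """Basic setup plus Timescale: the table becomes a hypertable
    partitioned on the timestamp column."""
    return standard_setup(table) + [
        f"SELECT create_hypertable('{table}', 'ts');",
    ]
```

`CLUSTER` is a one-off physical reordering, so its benefit degrades as new rows arrive, whereas the hypertable partitions data by time continuously; that difference is one source of the trade-offs the abstract mentions.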

    Current state of data warehouse and OLAP technologies applied to spatial databases

    Organisations require timely, dynamic, friendly, centralised and easy-to-access information for analysing it and making correct decisions at the right time. Centralisation can be achieved with data warehouse technology, while on-line analytical processing (OLAP) systems provide the analysis. Technologies using graphics and maps for data presentation can be exploited to gain an overall view of a company and so support better decisions. Geographic information systems (GIS), which are designed to locate information spatially and represent it on maps, are useful here. Data warehouses are generally implemented with a multidimensional data model to make OLAP analysis easier. A fundamental point of this model is the definition of measures and dimensions, geography being one such dimension. Many researchers have concluded that in current analysis systems the geographic dimension is just another attribute describing the data, without an in-depth treatment of its spatial features and without locating them on a map, as GIS does. Seen this way, interoperability between GIS and OLAP (called spatial OLAP, or SOLAP) is necessary, and several groups are currently researching it. This document summarises the current status of such research.
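The roll-up along a geographic dimension that SOLAP adds to classic OLAP can be illustrated with a toy example. All data and names below (the facts, the measure, the city-to-region hierarchy) are invented for illustration:

```python
from collections import defaultdict

# Minimal SOLAP-style roll-up: a 'sales' measure aggregated along one
# level of a geographic dimension (city -> region). Invented data.

facts = [
    {"city": "Bogota",   "year": 2004, "sales": 120},
    {"city": "Medellin", "year": 2004, "sales": 80},
    {"city": "Cali",     "year": 2004, "sales": 60},
    {"city": "Bogota",   "year": 2005, "sales": 150},
]

# One level of the geographic dimension's hierarchy.
city_to_region = {"Bogota": "Andina", "Medellin": "Andina", "Cali": "Pacifica"}

def roll_up(facts, level):
    """Aggregate the 'sales' measure to the requested geographic level."""
    totals = defaultdict(int)
    for f in facts:
        key = f["city"] if level == "city" else city_to_region[f["city"]]
        totals[key] += f["sales"]
    return dict(totals)

print(roll_up(facts, "region"))  # {'Andina': 350, 'Pacifica': 60}
```

In a full SOLAP system each region would also carry a geometry, so the same totals could be rendered on a map rather than in a table; that map-backed treatment of the geographic dimension is exactly what the abstract says plain OLAP lacks.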

    Improved pattern extraction scheme for clustering multidimensional data

    Multidimensional data refers to data with at least three attributes or dimensions. The availability of the huge amounts of multidimensional data collected over the years has greatly challenged our ability to digest the data and to extract useful knowledge that would otherwise be lost. Clustering techniques enable the manipulation of this knowledge to obtain interesting pattern analyses that could benefit the relevant parties. In this study, three crucial challenges in extracting patterns from multidimensional data are highlighted: the dimensionality of huge multidimensional data requires efficient exploration methods for pattern extraction; better mechanisms are needed to test and validate clustering results; and more informative visualization is needed to interpret the “best” clusters. Density-based clustering algorithms that use a probabilistic similarity function, such as density-based spatial clustering of applications with noise (DBSCAN), density clustering (DENCLUE) and kernel fuzzy C-means (KFCM), have been introduced in previous work to determine the number of clusters automatically. However, they have difficulty dealing with clusters of different densities, shapes and sizes. In addition, they require many parameter inputs that are difficult to determine. Kernel-nearest-neighbor (KNN) density-based clustering, including kernel-nearest-neighbor-based clustering (KNNClust), has been proposed to solve the problem of determining smoothing parameters for multidimensional data and to discover clusters with arbitrary shapes and densities. However, KNNClust struggles to cluster data of different sizes. Therefore, this research proposed a new pattern extraction scheme, called TKC, that integrates a triangular kernel function and a local average density technique to improve KNN density-based clustering.
The improved scheme was validated experimentally in two scenarios: using real multidimensional spatio-temporal data and using various classification datasets. Four different measures were used to validate the clustering results: the Dunn and Silhouette indices to assess quality, the F-measure to evaluate accuracy, an ANOVA test to analyze the cluster distribution, and processing time to measure efficiency. The proposed scheme was benchmarked against other well-known clustering methods, including KNNClust, Iterative Local Gaussian Clustering (ILGC), basic k-means, KFCM, DBSCAN and DENCLUE. The results on the classification datasets showed that TKC produced clusters with higher accuracy and was more efficient than the other clustering methods. In addition, the analysis showed that the proposed TKC scheme is capable of handling multidimensional data, as indicated by Silhouette and Dunn indices close to one.
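One possible reading of the triangular-kernel, KNN-based density idea behind this line of work can be sketched in a few lines. This is a hedged toy, not the thesis's formulation: the cluster-assignment step and the local average density technique are omitted, and taking the bandwidth from the (k+1)-th nearest neighbour is an assumption made here so the k nearest points get nonzero weight.

```python
import math

# Toy sketch of an adaptive triangular-kernel KNN density estimate,
# in the spirit of (but not identical to) the TKC scheme described above.

def knn_triangular_density(points, x, k=3):
    """Density at x: triangular kernel K(u) = max(0, 1 - |u|), with an
    adaptive bandwidth set to the (k+1)-th nearest-neighbour distance
    (an assumption; the thesis's exact bandwidth rule may differ)."""
    dists = sorted(math.dist(x, p) for p in points)
    h = dists[k] if k < len(dists) else dists[-1]
    h = h or 1e-12  # guard against a zero bandwidth at duplicate points
    return sum(max(0.0, 1.0 - d / h) for d in dists) / (len(points) * h)

pts = [(0, 0), (0.1, 0.0), (0.0, 0.1), (5, 5), (5.1, 5.0)]
# A point inside the tight cluster scores a higher density than a point
# in the gap between the two groups.
dense = knn_triangular_density(pts, (0.05, 0.05), k=3)
sparse = knn_triangular_density(pts, (2.5, 2.5), k=3)
assert dense > sparse
```

Because the bandwidth adapts to the local neighbourhood, dense and sparse regions are smoothed differently, which is the property that lets KNN-density methods cope with clusters of varying density without a global smoothing parameter.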