
    Challenging Issues of Spatio-Temporal Data Mining

    The spatio-temporal database (STDB) has received considerable attention during the past few years, due to the emergence of numerous applications (e.g., flight control systems, weather forecasting, mobile computing) that demand efficient management of moving objects. These applications record objects' geographical locations (and sometimes shapes) at various timestamps and support queries that explore their historical and future (predictive) behaviors. The STDB significantly extends the traditional spatial database, which deals only with stationary data and is hence inapplicable to moving objects, whose dynamic behavior requires re-investigation of numerous topics including data modeling, indexes, and the related query algorithms. In many application areas, huge amounts of data are generated that explicitly or implicitly contain spatial or spatio-temporal information. However, the ability to analyze these data remains inadequate, and the need for adapted data mining tools is a major challenge. In this paper, we present the challenging issues of spatio-temporal data mining. Keywords: database, data mining, spatial, temporal, spatio-temporal
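    To make the data model the abstract describes concrete, a minimal sketch follows: an STDB stores timestamped locations per object and answers historical window queries over them. The structure and names below are illustrative only, not any particular system's API.

    from collections import defaultdict

    db = defaultdict(list)   # object id -> list of (timestamp, x, y) samples

    def record(obj, t, x, y):
        db[obj].append((t, x, y))

    def historical_range(t1, t2, x1, x2, y1, y2):
        """Objects observed inside the spatial window during [t1, t2]."""
        return {obj for obj, samples in db.items()
                for (t, x, y) in samples
                if t1 <= t <= t2 and x1 <= x <= x2 and y1 <= y <= y2}

    record("taxi-7", t=100, x=3.0, y=4.0)
    record("taxi-9", t=105, x=9.0, y=1.0)
    print(historical_range(90, 110, 0, 5, 0, 5))   # -> {"taxi-7"}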

    Efficient MaxCount and threshold operators of moving objects

    Calculating operators over continuously moving objects presents unique challenges, especially when the operators involve aggregation or the concept of congestion, which arises when the number of moving objects in a changing or dynamic query space exceeds some threshold value. This paper presents the following six d-dimensional moving-object operators: (1) MaxCount (or MinCount), which finds the maximum (or minimum) number of moving objects simultaneously present in the dynamic query space at any time during the query time interval. (2) CountRange, which finds a count of point objects whose trajectories intersect the dynamic query space during the query time interval. (3) ThresholdRange, which finds the set of time intervals during which the dynamic query space is congested. (4) ThresholdSum, which finds the total length of all the time intervals during which the dynamic query space is congested. (5) ThresholdCount, which finds the number of disjoint time intervals during which the dynamic query space is congested. And (6) ThresholdAverage, which finds the average length of the time intervals during which the dynamic query space is congested. For each operator, separate algorithms are given to compute either estimated or precise values. Experimental results from more than 7,500 queries indicate that the estimation algorithms produce fast, efficient results with error under 5%.
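    As a concrete illustration of the first operator, the sketch below computes a precise 1-D MaxCount by sweeping the entry/exit events of linearly moving points. It is a minimal sketch only: the paper's d-dimensional algorithms, estimation variants, and supporting index structures are not reproduced, and all names (Trajectory, max_count) are illustrative.

    from dataclasses import dataclass

    @dataclass
    class Trajectory:
        x0: float  # position at time t = 0
        v: float   # constant velocity

    def inside_interval(tr, lo, hi, t1, t2):
        """Sub-interval of [t1, t2] during which tr lies inside [lo, hi]."""
        if tr.v == 0:
            return (t1, t2) if lo <= tr.x0 <= hi else None
        ta, tb = (lo - tr.x0) / tr.v, (hi - tr.x0) / tr.v
        enter, leave = min(ta, tb), max(ta, tb)
        s, e = max(t1, enter), min(t2, leave)
        return (s, e) if s <= e else None

    def max_count(trajs, lo, hi, t1, t2):
        """Sweep entry/exit events to find the peak simultaneous count."""
        events = []
        for tr in trajs:
            iv = inside_interval(tr, lo, hi, t1, t2)
            if iv:
                events.append((iv[0], +1))  # object enters the query space
                events.append((iv[1], -1))  # object leaves the query space
        best = cur = 0
        for _, delta in sorted(events, key=lambda e: (e[0], -e[1])):
            cur += delta
            best = max(best, cur)
        return best

    # Two objects crossing the range [0, 10] during query time [0, 5]
    trajs = [Trajectory(x0=-5.0, v=2.0), Trajectory(x0=15.0, v=-3.0)]
    print(max_count(trajs, lo=0.0, hi=10.0, t1=0.0, t2=5.0))  # -> 2

    The same event sweep yields the Threshold* family: intervals where the running count stays at or above the threshold give ThresholdRange, and their lengths give ThresholdSum, ThresholdCount, and ThresholdAverage.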

    QuickSel: Quick Selectivity Learning with Mixture Models

    Estimating the selectivity of a query is a key step in almost any cost-based query optimizer. Most of today's databases rely on histograms or samples that are periodically refreshed by re-scanning the data as the underlying data changes. Since frequent scans are costly, these statistics are often stale and lead to poor selectivity estimates. As an alternative to scans, query-driven histograms have been proposed, which refine the histograms based on the actual selectivities of the observed queries. Unfortunately, these approaches are either too costly to use in practice (i.e., they require an exponential number of buckets) or quickly lose their advantage as they observe more queries. In this paper, we propose a selectivity learning framework, called QuickSel, which falls into the query-driven paradigm but does not use histograms. Instead, it builds an internal model of the underlying data, which can be refined significantly faster (e.g., only 1.9 milliseconds for 300 queries). This fast refinement allows QuickSel to continuously learn from each query and yield increasingly more accurate selectivity estimates over time. Unlike query-driven histograms, QuickSel relies on a mixture model and a new optimization algorithm for training its model. Our extensive experiments on two real-world datasets confirm that, given the same target accuracy, QuickSel is 34.0x-179.4x faster than state-of-the-art query-driven histograms, including ISOMER and STHoles. Further, given the same space budget, QuickSel is 26.8%-91.8% more accurate than periodically-updated histograms and samples, respectively.
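    The sketch below illustrates the query-driven mixture idea in miniature, under stated assumptions: the data distribution over [0, 1] is modeled as a mixture of fixed uniform components, and the mixture weights are fit to observed (query range, true selectivity) pairs by non-negative least squares. QuickSel itself places subpopulations per query and trains them with a dedicated quadratic-programming formulation, which is not reproduced here; all names are illustrative.

    import numpy as np
    from scipy.optimize import nnls

    def overlap_fraction(q_lo, q_hi, b_lo, b_hi):
        """Fraction of the uniform component [b_lo, b_hi] covered by the query."""
        return max(0.0, min(q_hi, b_hi) - max(q_lo, b_lo)) / (b_hi - b_lo)

    def fit_weights(queries, selectivities, n_components=8):
        """Fit mixture weights so modeled selectivities match observed ones."""
        edges = np.linspace(0.0, 1.0, n_components + 1)
        A = np.array([[overlap_fraction(lo, hi, edges[i], edges[i + 1])
                       for i in range(n_components)]
                      for lo, hi in queries])
        w, _ = nnls(A, np.array(selectivities))  # non-negative weights
        return edges, w / w.sum()                # normalize to a distribution

    def estimate(edges, w, q_lo, q_hi):
        return sum(wi * overlap_fraction(q_lo, q_hi, edges[i], edges[i + 1])
                   for i, wi in enumerate(w))

    # Feedback from three observed queries over attribute values in [0, 1]
    queries = [(0.0, 0.5), (0.25, 0.75), (0.5, 1.0)]
    sels = [0.7, 0.4, 0.3]               # actual selectivities observed
    edges, w = fit_weights(queries, sels)
    print(estimate(edges, w, 0.0, 0.25))  # estimate for a new predicate

    Because refitting touches only the small weight vector rather than the data, each new query's feedback can be absorbed cheaply, which is the property the abstract emphasizes.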

    Dynamic-parinet (D-parinet) : indexing present and future trajectories in networks

    While indexing historical trajectories has been a hot topic in the field of moving-object (MO) databases for many years, only a few works consider that the objects' movements are constrained to a network. DYNAMIC-PARINET (D-PARINET) is designed to efficiently capture flows of trajectory data over many small, discrete time intervals and to predict an MO's movement, or the underlying network state, at a future time. The cornerstone of D-PARINET is PARINET, an efficient index for historical trajectory data. The structure of PARINET is based on a combination of graph partitioning and a set of composite B+-tree local indexes tuned for a given query load and a given data distribution in the network space. D-PARINET supports continuous updates of trajectory data and uses interpolation to predict future MO movement in the network. PARINET and D-PARINET can easily be integrated into any RDBMS, which is an essential asset, particularly for industrial or commercial applications. An experimental evaluation under an off-the-shelf DBMS using simulated traffic data shows that D-PARINET is robust and significantly outperforms R-tree-based access methods.
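    The interpolation step mentioned above can be illustrated in a few lines: predict a network-constrained object's near-future position from its last two reported (time, offset-along-road) samples by linear extrapolation. This is a hedged sketch only; the actual D-PARINET structure (graph partitioning plus tuned composite B+-tree local indexes) is not modeled, and the function name is illustrative.

    def predict_offset(t1, off1, t2, off2, t_future, road_length):
        """Linearly extrapolate the offset along the current road segment."""
        speed = (off2 - off1) / (t2 - t1)        # observed speed on the road
        predicted = off2 + speed * (t_future - t2)
        # Clamp to the segment; a real system would hand off to the next road.
        return max(0.0, min(road_length, predicted))

    # Last two reports: offset 120 m at t=10 s, 180 m at t=20 s
    print(predict_offset(10, 120.0, 20, 180.0, t_future=30, road_length=500.0))
    # -> 240.0 m along the same road at t=30 s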

    In-Memory Trajectory Indexing for On-The-Fly Travel-Time Estimation

    Advance of the Access Methods

    The goal of this paper is to outline the advances in access methods over the last ten years, as well as to review all methods available in the accessible bibliography.

    Knowledge-centric Analytics Queries Allocation in Edge Computing Environments

    The Internet of Things involves a huge number of devices that collect data and deliver them to the Cloud. Processing data in the Cloud is characterized by increased latency in providing responses to analytics queries defined by analysts or applications. Hence, Edge Computing (EC) comes into the scene to provide data processing close to the source. The collected data can be stored on edge devices, and queries can be executed there to reduce latency. In this paper, we envision a case where entities located in the Cloud undertake the responsibility of receiving analytics queries and deciding on the most appropriate edge nodes for query execution. The decision is based on statistical signatures of the nodes' datasets and the statistical match between those signatures and the analytics queries. Edge nodes regularly update their statistical signatures to support this decision process. Our performance evaluation shows the advantages and shortcomings of the proposed scheme in edge computing environments.
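    The allocation idea can be sketched as follows: each edge node advertises a statistical signature (here assumed to be a per-attribute mean and standard deviation of its local data), and the Cloud routes an analytics query over a value range to the node whose data most plausibly covers that range. The scoring rule below (probability mass of a normal fit inside the queried range) is an illustrative assumption, not the paper's model, and all names are hypothetical.

    import math

    def normal_cdf(x, mu, sigma):
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

    def coverage(sig, lo, hi):
        """Estimated fraction of a node's data falling inside [lo, hi]."""
        mu, sigma = sig
        return normal_cdf(hi, mu, sigma) - normal_cdf(lo, mu, sigma)

    def allocate(query_range, node_signatures):
        """Pick the edge node whose signature best matches the query."""
        lo, hi = query_range
        return max(node_signatures,
                   key=lambda node: coverage(node_signatures[node], lo, hi))

    # Signatures regularly pushed by three edge nodes for one attribute
    signatures = {"edge-A": (10.0, 2.0), "edge-B": (50.0, 5.0), "edge-C": (48.0, 1.0)}
    print(allocate((45.0, 55.0), signatures))   # -> "edge-C"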