Mining Novel Multivariate Relationships in Time Series Data Using Correlation Networks
In many domains, there is significant interest in capturing novel
relationships between time series that represent activities recorded at
different nodes of a highly complex system. In this paper, we introduce
multipoles, a novel class of linear relationships between more than two time
series. A multipole is a set of time series that have strong linear dependence
among themselves, with the requirement that each time series makes a
significant contribution to the linear dependence. We demonstrate that most
interesting multipoles can be identified as cliques of negative correlations in
a correlation network. Such cliques are typically rare in a real-world
correlation network, which allows us to find almost all multipoles efficiently
using a clique-enumeration approach. Using our proposed framework, we
demonstrate the utility of multipoles in discovering new physical phenomena in
two scientific domains: climate science and neuroscience. In particular, we
discovered several multipole relationships that are reproducible in multiple
other independent datasets and lead to novel domain insights.
Comment: This is the accepted version of the article submitted to IEEE Transactions on Knowledge and Data Engineering, 201
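The clique-based search this abstract describes can be illustrated with a short sketch: build a correlation network over the time series, keep only strongly negative edges, and enumerate cliques. The threshold, minimum clique size, and the Bron-Kerbosch enumeration below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def negative_cliques(series, threshold=-0.2, min_size=3):
    """Enumerate candidate multipoles as maximal cliques of negative
    correlations (illustrative sketch, not the paper's implementation).
    series: array of shape (n_series, n_timesteps)."""
    corr = np.corrcoef(series)
    n = corr.shape[0]
    # adjacency: an edge only where the correlation is strongly negative
    neighbors = {i: {j for j in range(n)
                     if j != i and corr[i, j] <= threshold}
                 for i in range(n)}
    cliques = []

    def bron_kerbosch(r, p, x):
        """Classic Bron-Kerbosch maximal-clique enumeration."""
        if not p and not x:
            if len(r) >= min_size:
                cliques.append(sorted(r))
            return
        for v in list(p):
            bron_kerbosch(r | {v}, p & neighbors[v], x & neighbors[v])
            p.remove(v)
            x.add(v)

    bron_kerbosch(set(), set(range(n)), set())
    return cliques
```

Because cliques of mutually negative correlations are rare in real correlation networks, as the abstract notes, this enumeration stays tractable in practice.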
Retrieving Top-N Weighted Spatial k-cliques
Spatial data analysis is a classic yet important topic because of its wide range of applications. Recently, a neighbor graph over a set P of spatial points has often been employed as a spatial data analysis approach. This paper also considers a spatial neighbor graph and addresses a new problem, namely top-N weighted spatial k-clique retrieval. This problem searches for the N minimum-weight cliques consisting of k points in P, and it has important applications, such as community detection and co-location pattern mining. Recent spatial datasets have many points, and efficiently dealing with such big datasets is one of the main requirements of applications. A straightforward approach to solving our problem is to enumerate all k-cliques, which incurs O(n^k k^2) time. Since k ≥ 3, this approach cannot meet the main requirement, so the result must be computed without enumerating unnecessary k-cliques. This paper achieves this challenging task and proposes a simple, practically efficient algorithm that returns the exact answer. We conduct experiments using two real spatial datasets consisting of millions of points, and the results show the efficiency of our algorithm; e.g., it can return the exact top-N result within 1 second when N ≤ 1000 and k ≤ 7.
Taniguchi R., Amagata D., Hara T. Retrieving Top-N Weighted Spatial k-cliques. Proceedings - 2022 IEEE International Conference on Big Data, Big Data 2022, 4952 (2022); https://doi.org/10.1109/BigData55660.2022.10021071
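For concreteness, the O(n^k k^2) straightforward baseline mentioned above can be sketched directly; this is the naive enumerator the paper's algorithm avoids, and the r-neighbor-graph construction and distance-sum weight here are assumptions for illustration.

```python
import heapq
from itertools import combinations
from math import dist

def topn_weighted_k_cliques(points, r, k, n_results):
    """Naive baseline: enumerate every k-subset of points, keep those
    whose points are pairwise within distance r (a clique in the
    r-neighbor graph), and return the n_results subsets with the
    smallest total pairwise distance (the assumed clique weight)."""
    heap = []  # max-heap via negated weights, holding the current best
    for subset in combinations(range(len(points)), k):
        pair_dists = [dist(points[i], points[j])
                      for i, j in combinations(subset, 2)]
        if max(pair_dists) > r:       # some pair too far apart: not a clique
            continue
        weight = sum(pair_dists)
        heapq.heappush(heap, (-weight, subset))
        if len(heap) > n_results:
            heapq.heappop(heap)       # drop the heaviest candidate
    return sorted((-w, s) for w, s in heap)
```

Even this toy version makes the exponential blow-up visible, which is why the paper's pruning-based exact algorithm is needed at million-point scale.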
NEW METHODS FOR MINING SEQUENTIAL AND TIME SERIES DATA
Data mining is the process of extracting knowledge from large amounts of data. It covers a variety of techniques aimed at discovering diverse types of patterns on the basis of the requirements of the domain. These techniques include association rule mining, classification, cluster analysis and outlier detection. The availability of applications that produce massive amounts of spatial, spatio-temporal (ST) and time series data (TSD) is the rationale for developing specialized techniques to mine such data. In spatial data mining, the spatial co-location rule problem is different from the association rule problem, since there is no natural notion of transactions in spatial datasets that are embedded in continuous geographic space. Therefore, we have proposed an efficient algorithm (GridClique) to mine interesting spatial co-location patterns (maximal cliques). These patterns are used as the raw transactions for an association rule mining technique to discover complex co-location rules. Our proposal incorporates certain types of complex relationships, especially negative relationships, into the patterns. These relationships can be obtained only from the maximal clique patterns, which had not previously been exploited. Our approach is applied to a well-known astronomy dataset obtained from the Sloan Digital Sky Survey (SDSS). ST data is continuously collected and made accessible in the public domain. We present an approach to mine and query large ST data with the aim of finding interesting patterns and understanding the underlying process of data generation. An important class of queries is based on the flock pattern. A flock is a large subset of objects moving along paths close to each other for a predefined time.
One approach to processing a "flock query" is to map ST data into high-dimensional space and to reduce the query to a sequence of standard range queries that can be answered using a spatial indexing structure; however, the performance of spatial indexing structures rapidly deteriorates in high-dimensional space. This thesis sets out a preprocessing strategy that uses a random projection to reduce the dimensionality of the transformed space. We use probabilistic arguments to prove the accuracy of the projection and present experimental results that show the possibility of managing the curse of dimensionality in an ST setting by combining random projections with traditional data structures. In time series data mining, we devised a new space-efficient algorithm (SparseDTW) to compute the dynamic time warping (DTW) distance between two time series, which always yields the optimal result. This is in contrast to other approaches, which typically sacrifice optimality to attain space efficiency. The main idea behind our approach is to dynamically exploit the existence of similarity and/or correlation between the time series: the more similar the time series, the less space is required to compute the DTW between them. Other techniques for speeding up DTW impose a priori constraints and do not exploit similarity characteristics that may be present in the data. Our experiments demonstrate that SparseDTW outperforms these approaches. By applying the SparseDTW algorithm, we discovered an interesting "pairs trading" pattern in a large stock-market dataset of daily index prices from the Australian stock exchange (ASX) from 1980 to 2002.
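For context, the classic quadratic-space DTW recurrence, whose full cost matrix SparseDTW avoids materializing, looks like this (a textbook sketch with an absolute-difference local cost, not the thesis's SparseDTW code):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(n*m)-space dynamic time warping between sequences a, b.
    cost[i, j] = minimal accumulated warping cost aligning a[:i], b[:j]."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])           # local distance
            cost[i, j] = d + min(cost[i - 1, j],    # insertion
                                 cost[i, j - 1],    # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]
```

SparseDTW keeps the same optimal result while storing only the cells of this matrix that similar, correlated series actually need.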
Co-movement Pattern Mining from Videos
Co-movement pattern mining from GPS trajectories has been an intriguing
subject in spatial-temporal data mining. In this paper, we extend this research
line by migrating the data source from GPS sensors to surveillance cameras, and
presenting the first investigation into co-movement pattern mining from videos.
We formulate the new problem, re-define the spatial-temporal proximity
constraints from cameras deployed in a road network, and theoretically prove
its hardness. Due to the lack of readily applicable solutions, we adapt
existing techniques and propose two competitive baselines using an
Apriori-based enumerator and the CMC algorithm, respectively.
As the principal technical contributions, we introduce a novel index called
temporal-cluster suffix tree (TCS-tree), which performs two-level temporal
clustering within each camera and constructs a suffix tree from the resulting
clusters. Moreover, we present a sequence-ahead pruning framework based on
TCS-tree, which allows for the simultaneous leverage of all pattern constraints
to filter candidate paths. Finally, to reduce verification cost on the
candidate paths, we propose a sliding-window based co-movement pattern
enumeration strategy and a hashing-based dominance eliminator, both of which
are effective in avoiding redundant operations.
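The sliding-window enumeration idea can be illustrated with a minimal sketch: given per-object camera sequences aligned to discrete time clusters, slide a window and report groups of objects that pass the same cameras throughout the window. The data layout and parameters below are assumptions for a toy illustration, not the paper's TCS-tree algorithm.

```python
from collections import defaultdict

def comovement_groups(tracks, window, min_objects):
    """tracks: {object_id: [camera_at_t0, camera_at_t1, ...]} (assumed
    layout, with one camera per discrete time cluster). Returns
    (start_time, camera_window, objects) for every group of at least
    min_objects objects observed at the same cameras over `window`
    consecutive time steps -- a toy sliding-window enumeration."""
    length = min(len(seq) for seq in tracks.values())
    results = []
    for start in range(length - window + 1):
        # bucket objects by their camera subsequence inside the window
        buckets = defaultdict(list)
        for obj, seq in tracks.items():
            buckets[tuple(seq[start:start + window])].append(obj)
        for cams, objs in buckets.items():
            if len(objs) >= min_objects:
                results.append((start, cams, sorted(objs)))
    return results
```

The paper's contribution is avoiding exactly this exhaustive window-by-window scan via TCS-tree pruning and dominance elimination.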
We conduct extensive experiments for scalability and effectiveness analysis.
Our results validate the efficiency of the proposed index and mining algorithm,
which runs remarkably faster than the two baseline methods. Additionally, we
construct a video database with 1169 cameras and perform an end-to-end pipeline
analysis to study the performance gap between GPS-driven and video-driven
methods. Our results demonstrate that the patterns derived from the video-driven approach are similar to those derived from ground-truth trajectories, providing evidence of its effectiveness.
High Performance Large Graph Analytics by Enhancing Locality
Graphs are widely used in a variety of domains for representing entities and their relationship to each other. Graph analytics helps to understand, detect, extract and visualize insightful relationships between different entities. Graph analytics has a wide range of applications in various domains including computational biology, commerce, intelligence, health care and transportation. The breadth of problems that require large graph analytics is growing rapidly resulting in a need for fast and efficient graph processing.
One of the major challenges in graph processing is poor locality of reference. Locality of reference refers to the phenomenon of frequently accessing the same memory location or adjacent memory locations. Applications with poor data locality reduce the effectiveness of the cache memory. They result in a large number of cache misses, requiring accesses to high-latency main memory. Therefore, good locality is essential for good performance. Most graph processing applications have highly random memory access patterns. Coupled with the current large sizes of graphs, this results in poor cache utilization. Additionally, the computation-to-data-access ratio in many graph processing applications is very low, making it difficult to hide memory latency with computation. It is also challenging to efficiently parallelize most graph applications. Many real-world graphs have unbalanced degree distributions, and it is difficult to achieve a balanced workload for such graphs. The parallelism in graph applications is generally fine-grained in nature, which calls for efficient synchronization and communication between the processing units.
Techniques for enhancing locality have been well studied in the context of regular applications like linear algebra. Those techniques are in most cases not applicable to graph problems. In this dissertation, we propose two techniques for enhancing locality in graph algorithms: access transformation and task-set reduction. Access transformation can be applied to algorithms to improve spatial locality by changing random access patterns into sequential access. It is applicable to iterative algorithms that process random vertices/edges in each iteration. The task-set reduction technique can be applied to enhance temporal locality. It is applicable to algorithms that repeatedly access the same data to perform a certain task. Using the two techniques, we propose novel algorithms for three graph problems: k-core decomposition, maximal clique enumeration and triangle listing. We have implemented these algorithms, and the results show that they provide significant performance improvements and also scale well.
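As a point of reference for one of the three problems, the standard peeling algorithm for k-core decomposition can be written as follows. This is a baseline sketch with an assumed adjacency-dictionary layout; the dissertation's locality-enhanced variants restructure the access pattern, which this toy version does not attempt.

```python
import heapq

def k_core_decomposition(adj):
    """Standard peeling: repeatedly remove a minimum-degree vertex; its
    core number is the largest removal-time degree seen so far.
    adj: {vertex: set(neighbors)}. Returns {vertex: core_number}."""
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    heap = [(d, v) for v, d in degree.items()]
    heapq.heapify(heap)
    removed = set()
    core = {}
    current = 0
    while heap:
        d, v = heapq.heappop(heap)
        if v in removed or d != degree[v]:
            continue                      # stale heap entry, skip it
        current = max(current, d)
        core[v] = current
        removed.add(v)
        for u in adj[v]:                  # random neighbor accesses: the
            if u not in removed:          # poor-locality pattern the thesis
                degree[u] -= 1            # targets with access transformation
                heapq.heappush(heap, (degree[u], u))
    return core
```

The neighbor lookups in the inner loop jump unpredictably across memory, which is exactly the random-access behavior the proposed techniques convert into more cache-friendly patterns.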