
    Bridging the gap between algorithmic and learned index structures

    Index structures such as B-trees and Bloom filters are the well-established petrol engines of database systems. However, these structures do not fully exploit patterns in the data distribution. To address this, researchers have suggested using machine learning models as electric engines that can entirely replace index structures. Such a paradigm shift in data system design, however, opens many unsolved design challenges. More research is needed to understand the theoretical guarantees and to design efficient support for insertion and deletion. In this thesis, we adopt a different position: index algorithms are good enough, and instead of going back to the drawing board to fit data systems with learned models, we should develop lightweight hybrid engines that build on the benefits of both algorithmic and learned index structures. The indexes that we suggest provide the theoretical performance guarantees and updatability of algorithmic indexes while using position prediction models to leverage the data distribution and thereby improve the performance of the index structure. We investigate the potential for minimal modifications to algorithmic indexes such that they can leverage the data distribution similarly to how learned indexes work. In this regard, we propose and explore the use of helping models that boost classical index performance using techniques from machine learning. Our suggested approach inherits performance guarantees from its algorithmic baseline index, but at the same time it considers the data distribution to improve performance considerably. We study single-dimensional range indexes, spatial indexes, and stream indexing, and show that the suggested approach results in range indexes that outperform the algorithmic indexes and have performance comparable to the read-only, fully learned indexes, and hence can be reliably used as a default index structure in a database engine. In addition, we consider the updatability of the indexes and suggest solutions for updating the index, notably when the data distribution drastically changes over time (e.g., for indexing data streams). In particular, we propose a specific learning-augmented index for indexing a sliding window with timestamps in a data stream. Additionally, we highlight the limitations of learned indexes for low-latency lookup on real-world data distributions. To tackle this issue, we suggest adding an algorithmic enhancement layer to a learned model to correct the prediction error with a small memory latency. This approach enables efficient modelling of the data distribution and resolves the local biases of a learned model at the cost of roughly one memory lookup.
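    The following is a minimal, illustrative sketch (in Python) of the general idea described above, not the thesis's actual index: a simple linear "helping" model predicts a key's position from the data distribution, and a bounded binary search corrects the prediction error, so lookups keep a logarithmic worst case while benefiting from the distribution. The class and parameter names are hypothetical.

```python
# Hedged sketch: a sorted array plus a linear position-prediction model,
# with a bounded binary search to correct the model's error.
import bisect

class ModelAssistedIndex:
    def __init__(self, keys):
        self.keys = sorted(keys)
        n = len(self.keys)
        # Fit a 2-parameter linear approximation of the empirical CDF.
        lo, hi = self.keys[0], self.keys[-1]
        self.slope = (n - 1) / (hi - lo) if hi > lo else 0.0
        self.intercept = -self.slope * lo
        # Record the maximum prediction error to bound the correction search.
        self.max_err = max(abs(self._predict(k) - i) for i, k in enumerate(self.keys))

    def _predict(self, key):
        return int(self.slope * key + self.intercept)

    def lookup(self, key):
        n = len(self.keys)
        guess = min(max(self._predict(key), 0), n - 1)
        lo = max(guess - self.max_err, 0)
        hi = min(guess + self.max_err + 1, n)
        # Correct the prediction inside a window of size <= 2 * max_err + 1.
        i = bisect.bisect_left(self.keys, key, lo, hi)
        return i if i < n and self.keys[i] == key else None

idx = ModelAssistedIndex(range(0, 1_000_000, 7))
print(idx.lookup(700), idx.lookup(701))   # position of 700, and None for a missing key
```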

    Implementation for spatial data of the shared nearest neighbour with metric data structures

    Dissertation submitted to obtain the degree of Master (Mestre) in Computer Engineering (Engenharia Informática).

    Survey of Vector Database Management Systems

    There are now over 20 commercial vector database management systems (VDBMSs), all produced within the past five years. But embedding-based retrieval has been studied for over ten years, and similarity search for a staggering half century and more. Driving this shift from algorithms to systems are new data-intensive applications, notably large language models, that demand vast stores of unstructured data coupled with reliable, secure, fast, and scalable query processing capability. A variety of new data management techniques now exist to address these needs; however, there is no comprehensive survey that thoroughly reviews these techniques and systems. We start by identifying five main obstacles to vector data management, namely the vagueness of semantic similarity, the large size of vectors, the high cost of similarity comparison, the lack of a natural partitioning that can be used for indexing, and the difficulty of efficiently answering hybrid queries that require both attributes and vectors. Overcoming these obstacles has led to new approaches to query processing, storage and indexing, and query optimization and execution. For query processing, a variety of similarity scores and query types are now well understood; for storage and indexing, techniques include vector compression, namely quantization, and partitioning based on randomization, learned partitioning, and navigable partitioning; for query optimization and execution, we describe new operators for hybrid queries, as well as techniques for plan enumeration, plan selection, and hardware-accelerated execution. These techniques lead to a variety of VDBMSs across a spectrum of design and runtime characteristics, including native systems specialized for vectors and extended systems that incorporate vector capabilities into existing systems. We then discuss benchmarks, and finally we outline research challenges and point the direction for future work.
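    As a rough illustration of the coarse-partitioning idea mentioned above (an IVF-style layout, not any particular system's API), the sketch below assigns vectors to a handful of centroids and answers a query by probing only the closest partitions; all names and parameters are illustrative.

```python
# Hedged sketch: inverted-list (coarse-partitioned) vector search with NumPy.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(10_000, 64)).astype(np.float32)

# Crude coarse quantizer: a handful of k-means-like centroids, one refinement pass.
n_lists = 16
centroids = data[rng.choice(len(data), n_lists, replace=False)]
assign = np.argmin(np.linalg.norm(data[:, None] - centroids[None], axis=2), axis=1)
for c in range(n_lists):
    members = data[assign == c]
    if len(members):
        centroids[c] = members.mean(axis=0)
assign = np.argmin(np.linalg.norm(data[:, None] - centroids[None], axis=2), axis=1)
inverted_lists = {c: np.where(assign == c)[0] for c in range(n_lists)}

def search(query, k=5, n_probe=4):
    # Probe only the n_probe partitions whose centroids are closest to the query.
    order = np.argsort(np.linalg.norm(centroids - query, axis=1))[:n_probe]
    cand = np.concatenate([inverted_lists[c] for c in order])
    dists = np.linalg.norm(data[cand] - query, axis=1)
    return cand[np.argsort(dists)[:k]]

print(search(rng.normal(size=64).astype(np.float32)))   # ids of approximate nearest vectors
```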

    An Effective Approach to Predicting Large Dataset in Spatial Data Mining Area

    Due to the enormous quantities of spatial satellite images, telecommunication images, health-related tools, etc., it is often impractical for users to perform a detailed and thorough examination of spatial data (S). Large datasets are common and pervasive in a number of application areas. Discovering or predicting patterns from these datasets is vital. This research focused on developing new methods, models, and techniques for accomplishing advanced spatial data mining (ASDM) tasks. The algorithms were designed to challenge state-of-the-art data technologies and were tested with randomly generated and actual real-world data. Two main approaches were adopted to achieve the objectives: (1) identifying the actual data types (DTs), data structures, and spatial content of a given dataset (to make our model versatile and robust), and (2) integrating these data types into an appropriate database management system (DBMS) framework for easy management and manipulation. These two approaches helped to discover the general and varying types of patterns that exist within any given dataset, whether non-spatial, spatial, or temporal (because spatial data are always influenced by temporal agents). An iterative method was adopted as the system development methodology in this study, as a strategy to combat the irregularity that often exists within spatial datasets. In the course of this study, some of the challenges we encountered, which also double as current challenges facing spatial data mining, include: (a) time complexity in availing useful data for analysis, (b) time complexity in loading data to storage, and (c) difficulty in discovering spatial, non-spatial, and temporal correlations between different data objects. Despite these challenges, there are opportunities that spatial data mining can benefit from, including cloud computing, Spark technology, parallelisation, and bulk-loading methods. Techniques and application areas of spatial data mining (SDM) were identified, and their strengths and limitations were documented. Finally, new methods and algorithms were created for mining very large datasets of spatial/non-spatial bias. The proposed models/systems are documented as follows: (a) a new technique for parallel indexing of large datasets (PaX-DBSCAN), (b) new techniques for clustering (X-DBSCAN) in a learning process, (c) a new technique for detecting human skin in an image, (d) a new technique for finding faces in an image, and (e) a novel technique for managing large spatial and non-spatial datasets (aX-tree). The most prominent of our methods is the new structure used in (c) above -- the packed maintained k-dimensional tree (Pmkd-tree) -- for fast spatial indexing and querying. The structure is a combined system that brings together all the proposed algorithms to produce one solid, standard, useful, high-quality system. The intention of the final algorithm (system) is to combine all the initially proposed algorithms into one strong, generic, effective tool for prediction over large datasets in the SDM area, capable of finding patterns that exist among spatial or non-spatial objects in a DBMS. In addition to the Pmkd-tree, we also implemented a novel spatial structure, the packed quad-tree (Pquad-Tree), to balance and speed up the performance of the regular quad-tree. Our systems have so far shown efficiency in terms of performance, storage, and speed.
    The final systems (Pmkd-tree and Pquad-Tree) are generic systems that are flexible, robust, lightweight, and stable. They are explicit spatial models for analysing any given problem and for predicting objects as spatially distributed events, using basic SDM algorithms. They can be applied to pattern matching, image processing, computer vision, bioinformatics, information retrieval, machine learning (classification and clustering), and many other computational tasks.
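    For context, the sketch below shows the textbook 2-D k-d tree that structures such as the proposed Pmkd-tree build upon; it is not the thesis's packed, maintained variant, and the function names are illustrative.

```python
# Hedged sketch: plain 2-D k-d tree with a rectangle range query that prunes
# whole subtrees which cannot intersect the query window.
def build_kdtree(points, depth=0):
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def range_query(node, rect, out):
    # rect = (xmin, ymin, xmax, ymax)
    if node is None:
        return
    x, y = node["point"]
    if rect[0] <= x <= rect[2] and rect[1] <= y <= rect[3]:
        out.append(node["point"])
    axis, coord = node["axis"], node["point"][node["axis"]]
    if rect[axis] <= coord:        # query window extends to the left of the split
        range_query(node["left"], rect, out)
    if coord <= rect[axis + 2]:    # query window extends to the right of the split
        range_query(node["right"], rect, out)

tree = build_kdtree([(1, 2), (3, 7), (9, 4), (5, 5), (8, 1)])
hits = []
range_query(tree, (2, 1, 9, 5), hits)
print(hits)   # points falling inside the rectangle
```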

    Optimised meta-clustering approach for clustering Time Series Matrices

    The prognostics (health state) of multiple components, represented as time series data stored in vectors and matrices, were processed and clustered more effectively and efficiently using the newly devised ‘Meta-Clustering’ approach. These time series data were gathered from large applications and systems in diverse fields such as communication, medicine, data mining, audio and visual applications, and sensors. Time series data were used as the domain of this research because meaningful information can be extracted regarding the characteristics of systems and components found in large applications. Also, when it comes to clustering, only time series data allow us to group these data according to their life cycle, i.e. from the time at which they were healthy until the time at which they start to develop faults and ultimately fail. Therefore, a technique that can better process extracted time series data would significantly cut down on space and time consumption, which are both crucial factors in data mining. This approach will, as a result, improve current state-of-the-art pattern recognition algorithms such as K-NM, as the clusters will be identified faster while consuming less space. The project also has practical implications: calculating the distance between similar components faster, while consuming less space, means that the prognostics of multiple clustered components can be realised and understood more efficiently. This was achieved by using the Meta-Clustering approach to process and cluster the time series data: first extracting and storing the time series data as a two-dimensional matrix, and then implementing an enhanced K-NM clustering algorithm based on the notion of Meta-Clustering, using Euclidean distance to measure the similarity between the different sets of failure patterns in space. This approach initially classifies and organises each component within its own refined individual cluster. This provides the most relevant set of failure patterns showing the highest level of similarity and also discards any unnecessary data that adds no value towards better understanding the failure/health state of the component. During the second stage, once these clusters are obtained, the inner clusters initially formed are grouped into one general cluster that represents the prognostics of all the processed components. The approach was tested on multivariate time series data extracted from IGBT components within MATLAB, and the results showed that the proposed optimised Meta-Clustering approach does indeed consume less time and space to cluster the prognostics of IGBT components compared to existing data mining techniques.
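    A hedged sketch of the two-stage idea summarised above (not the thesis's exact Meta-Clustering algorithm): time series are stored as rows of a matrix, each component's series are grouped with plain Euclidean-distance k-means, and the resulting centroids are clustered again into one general grouping. The data and parameters are synthetic and illustrative.

```python
# Hedged sketch: two-stage ("meta") clustering of time-series matrices.
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

rng = np.random.default_rng(1)
# Three hypothetical components, each with 50 degradation-like series of length 100.
components = [np.cumsum(rng.normal(loc=0.1 * (i + 1), size=(50, 100)), axis=1)
              for i in range(3)]

# Stage 1: refined clusters per component; keep only the centroids as a synopsis.
stage1_centroids = [kmeans(comp, k=4, seed=i)[1] for i, comp in enumerate(components)]

# Stage 2: cluster the pooled centroids into one general, component-spanning grouping.
pooled = np.vstack(stage1_centroids)
meta_labels, _ = kmeans(pooled, k=3, seed=7)
print(meta_labels)
```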

    Clustering in the Big Data Era: methods for efficient approximation, distribution, and parallelization

    Data clustering is an unsupervised machine learning task whose objective is to group together similar items. As a versatile data mining tool, data clustering has numerous applications, such as object detection and localization using data from 3D laser-based sensors, finding popular routes using geolocation data, and finding similar patterns of electricity consumption using smart meters. The datasets in modern IoT-based applications are getting more and more challenging for conventional clustering schemes. Big Data is a term used to loosely describe hard-to-manage datasets. Particularly, large numbers of data points, high rates of data production, large numbers of dimensions, high skewness, and distributed data sources are aspects that challenge classical data processing schemes, including clustering methods. This thesis contributes to efficient big data clustering for distributed and parallel computing architectures, representative of the processing environments in the edge-cloud computing continuum. The thesis also proposes approximation techniques to cope with certain challenging aspects of big data.
    Regarding distributed clustering, the thesis proposes MAD-C, abbreviating Multi-stage Approximate Distributed Cluster-Combining. MAD-C leverages an approximation-based data synopsis that drastically lowers the required communication bandwidth among the distributed nodes and achieves multiplicative savings in computation time, compared to a baseline that centrally gathers and clusters the data. The thesis shows MAD-C can be used to detect and localize objects using data from distributed 3D laser-based sensors with high accuracy. Furthermore, the work in the thesis shows how to utilize MAD-C to efficiently detect objects within a restricted area for geofencing purposes.
    Regarding parallel clustering, the thesis proposes a family of algorithms called PARMA-CC, abbreviating Parallel Multistage Approximate Cluster Combining. Using an approximation-based data synopsis, PARMA-CC algorithms achieve scalability on multi-core systems by facilitating parallel execution of threads with limited dependencies, which get resolved using fine-grained synchronization techniques. To further enhance efficiency, PARMA-CC algorithms can be configured with respect to different data properties. Analytical and empirical evaluations show PARMA-CC algorithms achieve significantly higher scalability than state-of-the-art methods while preserving high accuracy.
    On parallel high-dimensional clustering, the thesis proposes IP.LSH.DBSCAN, abbreviating Integrated Parallel Density-Based Clustering through Locality-Sensitive Hashing (LSH). IP.LSH.DBSCAN fuses the process of creating an LSH index into the process of data clustering, and it takes advantage of data parallelization and fine-grained synchronization. Analytical and empirical evaluations show IP.LSH.DBSCAN facilitates parallel density-based clustering of massive datasets using desired distance measures, resulting in several orders of magnitude lower latency than the state-of-the-art for high-dimensional data.
    In essence, the thesis proposes methods and algorithmic implementations targeting the problem of big data clustering and applications using distributed and parallel processing. The proposed methods (available as open source software) are extensible and can be used in combination with other methods.
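    To make the LSH-fusion idea behind IP.LSH.DBSCAN concrete, the sketch below (not the authors' parallel implementation) uses random-hyperplane LSH buckets to restrict a point's neighbourhood query to points that share a hash signature; all parameters are illustrative.

```python
# Hedged sketch: LSH bucketing used to approximate the neighbourhood (region)
# query that density-based clustering relies on.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc, 0.1, size=(200, 16)) for loc in (-1.0, 0.0, 1.0)])

n_tables, n_bits = 4, 8
tables = []
for _ in range(n_tables):
    planes = rng.normal(size=(n_bits, X.shape[1]))
    sig = (X @ planes.T > 0)                     # one bit per random hyperplane
    keys = [tuple(row) for row in sig]
    buckets = defaultdict(list)
    for i, key in enumerate(keys):
        buckets[key].append(i)
    tables.append((buckets, keys))

def approx_neighbours(i, eps=0.8):
    # Candidate neighbours: points sharing point i's bucket in any hash table.
    cand = set()
    for buckets, keys in tables:
        cand.update(buckets[keys[i]])
    cand = np.fromiter(cand, dtype=int)
    return cand[np.linalg.norm(X[cand] - X[i], axis=1) <= eps]

print(len(approx_neighbours(0)))   # dense points find many bucket-local neighbours
```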

    Predictive statistical user models under the collaborative approach

    International Mention in the doctoral degree (Mención Internacional en el título de doctor).
    User models and recommender systems are so similar that they can be considered the same thing, differing only in the use that we make of them. Both have their roots in multiple disciplines, such as information retrieval and machine learning, among others. Their impact has grown rapidly with the importance of data in systems and applications. Most big companies employ one or the other for different reasons, such as attracting more customers, boosting sales, or increasing revenue. Thus, very well-known companies like Amazon, eBay, or Google use such models to improve their businesses. In fact, as data becomes more and more important for companies, universities, and people, user models are crucial to making decisions over large amounts of data. Although user models can provide accurate predictions on large populations, their use and application is not restricted to predictions but can be extended to the selection of dialogue strategies or the detection of communities within complex domains. After a deep review of the existing literature, it was found that there is a lack of statistical user models based on experience; moreover, the existing models in the area are content-based models that suffer from major problems such as scalability, cold start, and the new-user problem. Furthermore, researchers in the area of user modelling usually develop their own models and then perform ad-hoc evaluations that are not replicable and therefore not comparable. The lack of a complete framework for evaluation makes it very difficult to compare results across models and domains. There are two main approaches to building a user model or recommender system: the content-based approach, where predictions are based on the same user's past behaviours, and the collaborative approach, where predictions rely on like-minded people. Both approaches have advantages but also downsides that have to be considered before building a model. The main goal of this thesis is to develop a hybrid user model that takes the strengths of both approaches and mitigates the downsides by combining both methods. The proposed hybrid model is based on an R-Tree structure. The selection of this structure to support the models is backed by the fact that the rectangle tree is specifically designed to effectively store and manipulate multidimensional data. This data structure, introduced by Guttman in 1984, is a height-balanced tree that only requires visiting a few nodes to perform a search. As a result, it can manage large populations of data efficiently, as only a few nodes are visited during inference. The R-Tree has two types of nodes: leaf nodes and non-leaf nodes. Leaf nodes contain the whole universe of users, while non-leaf nodes are somewhat redundant and contain summaries of their child nodes. In this thesis, two statistical user models based on experience have been proposed. The first, a knowledge-based user model (KLUM), is a classical approach that summarizes and removes data in order to keep performance within reasonable margins. The second, an R-Tree user model (RTUM), is an innovative model based on an R-Tree structure. This new model solves not only the problem of removing data but also the scalability problem, which turns out to be one of the major problems in the area of user modelling. Both models have been developed and tested with equivalent formulations to make comparisons relevant. Both models are prepared to create their own knowledge base from scratch, but they can also be fed with expert knowledge, thus alleviating another major problem in the area of user modelling: the start-up problem.
    Regarding the proposal of this thesis, two statistical user models are proposed (KLUM and RTUM). In addition, a refinement of the RTUM user model is proposed: while RTUM performs node partitions based on the centroids of the users in a node, the new refinement implements a partition based on privileged features. Hence, the new approach takes advantage of the most discriminatory features of the domain to perform the partition. This new approach not only provides accurate inferences but also an excellent clustering that can be useful in many different scenarios. For instance, this clustering can be employed in the area of social networks to detect communities within the network, a tough task that has been one of the goals of many researchers during the last few years. This thesis also provides a complete evaluation of the models with a great diversity of parameterizations and domains. The models are tested in four different domains, and the evaluation shows that the RTUM user model provides a massive gain over classical user models such as KLUM. During the evaluation, RTUM reached success rates of 85% while the analogous KLUM could only reach 65%, leaving a 20% gain for the proposed model. The evaluation not only compares models and success rates but also provides a broad analysis of how every parameter of the models impacts performance, plus a complete study of database sizes and inference times for the models. The main conclusion of the evaluation is that, across a wide diversity of parameters and domains, RTUM outperforms KLUM in every scenario tested. As previously mentioned, the literature review also revealed a lack of evaluation frameworks for user modelling. This thesis therefore provides a complete evaluation framework for user modelling, filling a gap in the literature and making evaluations replicable and therefore comparable. Over the years, researchers and developers have found it difficult to compare evaluations and measure the quality of their models in different domains due to the lack of an evaluation standard. The evaluation framework presented in this thesis covers data samples, including training and test sets, plus different sets of experiments, alongside a statistical analysis of the domain with confidence intervals and confidence levels to guarantee that each experiment is statistically significant. The evaluation framework can be downloaded and then used to complete evaluations and cross-validate results across different models. This thesis would not have been possible without the financial support of the research projects Cadooh (TSI-020302-2011-21) and Thuban (TIN2008-02711), which funded part of this research.
    Official Doctoral Programme in Computer Science and Technology (Programa Oficial de Doctorado en Ciencia y Tecnología Informática). Committee: President: Antonio de Amescua Seco; Secretary: Ruth Cobos Pérez; Member: Dominikus Heckman.
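    A simplified, one-level sketch of the RTUM idea described above (leaf nodes holding users, parents holding bounding-box summaries so that inference visits only a few nodes); this is an illustration under assumed synthetic data, not the thesis's actual R-Tree formulation, and all names are hypothetical.

```python
# Hedged sketch: bounding-box summaries route an inference to one leaf of users,
# and the prediction is made from like-minded users in that leaf.
import numpy as np

class LeafNode:
    def __init__(self, users, ratings):
        self.users = users                      # feature vectors of the users
        self.ratings = ratings                  # the quantity to predict
        self.lo = users.min(axis=0)             # bounding-box summary kept by the parent
        self.hi = users.max(axis=0)

    def distance_to_box(self, x):
        # Distance from x to the node's bounding rectangle (0 if x lies inside it).
        return np.linalg.norm(np.maximum(self.lo - x, 0) + np.maximum(x - self.hi, 0))

def predict(leaves, new_user, k=5):
    # Descend to the most promising leaf, then infer from like-minded users there.
    leaf = min(leaves, key=lambda n: n.distance_to_box(new_user))
    nearest = np.argsort(np.linalg.norm(leaf.users - new_user, axis=1))[:k]
    return leaf.ratings[nearest].mean()

rng = np.random.default_rng(0)
leaves = [LeafNode(rng.normal(c, 0.3, size=(100, 8)), rng.normal(c, 0.1, size=100))
          for c in (0.0, 1.0, 2.0)]
print(predict(leaves, rng.normal(1.0, 0.3, size=8)))
```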

    Embedding Techniques to Solve Large-scale Entity Resolution

    Entity resolution (ER) identifies and links records that belong to the same real-world entity, where an entity refers to any real-world object. It is a primary task in data integration. Accurate and efficient ER substantially impacts various commercial, security, and scientific applications. Often, there are no unique identifiers for entities in datasets/databases that would make the ER task easy. Therefore, record matching depends on entity-identifying attributes and approximate matching techniques. The issue of efficiently handling large-scale data remains an open research problem given the increasing volumes and velocities of modern data collections. Fast, scalable, real-time, approximate entity matching techniques that provide high-quality results are in high demand. This thesis proposes solutions to address two challenges in large-scale ER: the lack of test datasets and the demand for fast indexing algorithms. The shortage of large-scale, real-world datasets with ground truth is a primary concern in developing and testing new ER algorithms. Usually, for many datasets, there is no information on the ground truth or ‘gold standard’ data that specifies whether two records correspond to the same entity or not. Moreover, obtaining test data for ER algorithms that use personal identifying keys (e.g., names, addresses) is difficult due to privacy and confidentiality issues. To address this challenge, we proposed a numerical simulation model that produces realistic large-scale data for testing new methods when suitable public datasets are unavailable. One of the important findings of this work is the approximation of vectors that represent entity identification keys and their relationships, e.g., dissimilarities and errors. Indexing techniques reduce the search space and execution time in the ER process. Based on the idea of approximate vectors of entity identification keys, we proposed a fast indexing technique (Em-K indexing) suitable for real-time, approximate entity matching in large-scale ER. Our Em-K indexing method provides a quick and accurate block of candidate matches for a query record by searching an existing reference database. All our solutions are metric-based. We transform metric or non-metric spaces to a lower-dimensional Euclidean space, known as the configuration space, using multidimensional scaling (MDS). This thesis discusses how to modify MDS algorithms to solve various ER problems efficiently. We proposed highly efficient and scalable approximation methods that extend the MDS algorithm to large-scale datasets. We empirically demonstrate the improvements of our proposed approaches on several datasets with various parameter settings. The outcomes show that our methods can generate large-scale testing data, perform fast real-time and approximate entity matching, and effectively scale up the mapping capacity of MDS. Thesis (Ph.D.) -- University of Adelaide, School of Mathematical Sciences, 202
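    A minimal sketch of the metric-embedding idea summarised above, not the thesis's Em-K algorithm: records are mapped into a low-dimensional Euclidean configuration space with multidimensional scaling (scikit-learn's MDS here), and candidate blocks are then retrieved with a spatial index instead of all-pairs string comparison. The toy records and parameters are illustrative.

```python
# Hedged sketch: MDS embedding of string dissimilarities, then k-d tree blocking.
import difflib
import numpy as np
from sklearn.manifold import MDS
from scipy.spatial import cKDTree

records = ["john smith", "jon smith", "johnny smyth", "alice brown",
           "alyce brown", "bob taylor", "robert taylor"]

# Pairwise string dissimilarities (1 - similarity ratio).
n = len(records)
diss = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        d = 1.0 - difflib.SequenceMatcher(None, records[i], records[j]).ratio()
        diss[i, j] = diss[j, i] = d

# Embed into a 2-D configuration space, then index it for fast candidate blocking.
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(diss)
tree = cKDTree(coords)
for i, rec in enumerate(records):
    _, idx = tree.query(coords[i], k=3)      # the record itself plus 2 candidates
    print(rec, "->", [records[j] for j in idx if j != i])
```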

    Data Clustering: Algorithms and Its Applications

    Data is useless if information or knowledge that can be used for further reasoning cannot be inferred from it. Cluster analysis, based on some criteria, divides data into categories (clusters) that are meaningful, practically useful, or both, according to shared common characteristics. In research, clustering and classification have been used to analyze data in fields such as machine learning, bioinformatics, statistics, and pattern recognition, to mention a few. Different methods of clustering include partitioning (K-means), hierarchical (AGNES), density-based (DBSCAN), grid-based (STING), soft clustering (FANNY), model-based (SOM), and ensemble clustering. Challenges and problems in clustering arise from large datasets, misinterpretation of results, and the efficiency/performance of clustering algorithms, all of which are relevant when choosing a clustering algorithm. In this paper, the application of data clustering was systematically discussed in view of the characteristics of the different clustering techniques that make them better suited, or biased, when applied to several types of data, such as uncertain data, multimedia data, graph data, biological data, stream data, text data, time series data, categorical data, and big data. The suitability of the available clustering algorithms to different application areas was presented. Some existing cluster validity methods used to evaluate the goodness of the clusters produced by the clustering algorithms were also investigated.
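    As a quick, generic illustration (using scikit-learn, not code from the paper) of why the choice among these families matters: a partitioning method and a density-based method produce very different clusters on the same non-convex data.

```python
# Hedged sketch: K-means (partitioning) vs DBSCAN (density-based) on toy moon-shaped data.
from sklearn.cluster import KMeans, DBSCAN
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)
km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
db_labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)
print("K-means clusters:", sorted(set(km_labels.tolist())))
print("DBSCAN clusters :", sorted(set(db_labels.tolist())))   # -1 marks noise, if any
```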