9 research outputs found

    Combining semi-supervision and hubness to improve the clustering of high-dimensional data

    Get PDF
    The curse of dimensionality makes high-dimensional data analysis a challenging task for data clustering techniques. Recent works have efficiently exploited an aspect inherent to high-dimensional data, called hubness, to guide clustering: the tendency of some instances, called hubs, to appear more frequently than others in the K-nearest-neighbor lists of the remaining instances. However, hubs may not reflect the implicit semantics of the data, leading to an unsuitable data partition. To cope with both issues (i.e., high-dimensional data and meaningful clusters), this dissertation presents a clustering approach that combines two strategies: semi-supervision and density estimation based on hubness scores. Experiments conducted on 23 real datasets show that the proposed approach performs well when applied to datasets with different characteristics. Master's dissertation, supported by CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior) and CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico).
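    The hubness score mentioned above is simply how often an instance occurs in the K-nearest-neighbor lists of the other instances. The sketch below computes that ingredient only (using scikit-learn for the neighbor search); it is an illustrative assumption of the setup and does not reproduce the semi-supervised, density-based clustering proposed in the dissertation.

```python
# Minimal sketch (assumed, not the dissertation's algorithm): compute hubness
# scores N_k(x), i.e. how often each instance occurs in the k-nearest-neighbor
# lists of the other instances.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def hubness_scores(X, k=10):
    """Return N_k(x) for every row of X."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)        # k+1: each point is its own nearest neighbor
    _, idx = nn.kneighbors(X)
    idx = idx[:, 1:]                                        # drop the self-neighbor
    return np.bincount(idx.ravel(), minlength=X.shape[0])   # occurrences across all k-NN lists

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 50))                          # high-dimensional toy data
    scores = hubness_scores(X, k=10)
    print("strongest hubs:", np.argsort(scores)[-5:])       # instances that act as hubs
```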

    Ranking Interesting Subspaces for Clustering High Dimensional Data

    No full text
    Application domains such as the life sciences, e.g., molecular biology, produce a tremendous amount of data which can no longer be managed without the help of efficient and effective data mining methods. One of the primary data mining tasks is clustering. However, traditional clustering algorithms often fail to detect meaningful clusters because of the high-dimensional, inherently sparse feature space of most real-world data sets. Nevertheless, the data sets often contain clusters hidden in various subspaces of the original feature space. We present a pre-processing step for traditional clustering algorithms which detects all interesting subspaces of high-dimensional data containing clusters. For this purpose, we define a quality criterion for the interestingness of a subspace and propose an efficient algorithm called RIS (Ranking Interesting Subspaces) to examine all such subspaces. A broad evaluation based on synthetic and real-world data sets empirically shows that RIS is suitable for finding all relevant subspaces in large, high-dimensional, sparse data and for ranking them accordingly
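    As a rough illustration of the general idea of ranking axis-parallel subspaces before clustering, the sketch below scores subspaces with a crude pairwise-density proxy and sorts them. The score, the eps threshold and the example data are assumptions for illustration; they are not the quality criterion defined for RIS in the paper.

```python
# Illustrative sketch only: rank axis-parallel subspaces of a fixed size by a
# crude density proxy. The score below is an assumed stand-in, NOT the RIS
# interestingness criterion.
import itertools
import numpy as np

def subspace_score(X, dims, eps=0.25):
    """Fraction of point pairs closer than eps in the projection onto `dims`."""
    P = X[:, dims]
    d = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    return float((d < eps).mean())

def rank_subspaces(X, dim=2, eps=0.25):
    """Score every axis-parallel subspace of exactly `dim` attributes, best first."""
    ranked = [(subspace_score(X, dims, eps), dims)
              for dims in itertools.combinations(range(X.shape[1]), dim)]
    return sorted(ranked, reverse=True)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.uniform(size=(300, 6))
    X[:150, :2] = rng.normal(0.5, 0.02, size=(150, 2))   # plant a cluster in subspace (0, 1)
    print(rank_subspaces(X)[:3])                          # subspace (0, 1) should come out on top
```

    Note that this toy score only compares subspaces of the same size; comparing subspaces of different dimensionality fairly is precisely what a proper quality criterion, such as the one defined in the paper, has to handle.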

    Estimating Dependency, Monitoring and Knowledge Discovery in High-Dimensional Data Streams

    Get PDF
    Data Mining – known as the process of extracting knowledge from massive data sets – has a phenomenal impact on our society, and now affects nearly every aspect of our lives: from the layout of our local grocery store, to the ads and product recommendations we receive, the availability of treatments for common diseases, the prevention of crime, or the efficiency of industrial production processes. However, Data Mining remains difficult when (1) data is high-dimensional, i.e., has many attributes, and when (2) data comes as a stream. Extracting knowledge from high-dimensional data streams is particularly difficult because one must cope with two orthogonal sets of challenges. On the one hand, the effects of the so-called "curse of dimensionality" bog down the performance of statistical methods and lead to increasingly complex Data Mining problems. On the other hand, the statistical properties of data streams may evolve in unexpected ways, a phenomenon known in the community as "concept drift". Thus, one needs to update one's knowledge about the data over time, i.e., to monitor the stream.

    While previous work addresses high-dimensional data sets and data streams to some extent, the intersection of both has received much less attention. Nevertheless, extracting knowledge in this setting is advantageous for many industrial applications: identifying patterns from high-dimensional data streams in real time may lead to larger production volumes or reduce operational costs. The goal of this dissertation is to bridge this gap.

    We first focus on dependency estimation, a fundamental task of Data Mining. Typically, one estimates dependency by quantifying the strength of statistical relationships. We identify the requirements for dependency estimation in high-dimensional data streams and propose a new estimation framework, Monte Carlo Dependency Estimation (MCDE), that fulfils them all. We show that MCDE leads to efficient dependency monitoring. Then, we generalise the task of monitoring by introducing the Scaling Multi-Armed Bandit (S-MAB) algorithms, which extend the Multi-Armed Bandit (MAB) model. We show that our algorithms can efficiently monitor statistics by leveraging user-specific criteria.

    Finally, we describe applications of our contributions to Knowledge Discovery. We propose an algorithm, Streaming Greedy Maximum Random Deviation (SGMRD), which exploits our new methods to extract patterns, e.g., outliers, from high-dimensional data streams. We also present a new approach, which we name kj-Nearest Neighbours (kj-NN), to detect outlying documents within massive text corpora. We support our algorithmic contributions with theoretical guarantees, as well as extensive experiments on both synthetic and real-world data, and demonstrate the benefits of our methods in real-world use cases. Overall, this dissertation establishes fundamental tools for Knowledge Discovery in high-dimensional data streams, which help with many applications in industry, e.g., anomaly detection or predictive maintenance. To facilitate the application of our results and future research, we publicly release our implementations, experiments, and benchmark data via open-source platforms
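    As a toy illustration of the Monte Carlo flavour of dependency estimation, the sketch below repeatedly compares an attribute's marginal distribution with its distribution inside a random slice of the other attributes and averages the discrepancy (here a two-sample Kolmogorov-Smirnov statistic). This is an assumed simplification for illustration, not the MCDE estimator, nor the S-MAB, SGMRD or kj-NN methods from the dissertation.

```python
# Assumed, simplified illustration of a Monte Carlo "contrast" between a
# marginal distribution and its distribution inside random slices of the other
# attributes; NOT the MCDE estimator itself.
import numpy as np
from scipy.stats import ks_2samp

def contrast(X, n_iter=100, slice_frac=0.5, seed=None):
    """Average KS discrepancy between marginal and sliced data; near 0 means independence."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    scores = []
    for _ in range(n_iter):
        ref = rng.integers(d)                                   # attribute whose distribution is tested
        mask = np.ones(n, dtype=bool)
        for j in (c for c in range(d) if c != ref):             # slice on every other attribute
            width = max(2, int(n * slice_frac ** (1.0 / (d - 1))))
            start = rng.integers(n - width + 1)
            order = np.argsort(X[:, j])                         # contiguous slice in sorted order
            keep = np.zeros(n, dtype=bool)
            keep[order[start:start + width]] = True
            mask &= keep
        if mask.sum() >= 10:
            scores.append(ks_2samp(X[:, ref], X[mask, ref]).statistic)
    return float(np.mean(scores)) if scores else 0.0

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    a = rng.normal(size=1000)
    dependent = np.column_stack([a, a + 0.1 * rng.normal(size=1000)])
    independent = rng.normal(size=(1000, 2))
    print("dependent:  ", round(contrast(dependent, seed=3), 3))    # clearly above 0
    print("independent:", round(contrast(independent, seed=3), 3))  # close to 0
```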

    Attribute Relationship Analysis in Outlier Mining and Stream Processing

    Get PDF
    The main theme of this thesis is to unite two important fields of data analysis: outlier mining and attribute relationship analysis. In this work we establish the connection between these two fields and present techniques which exploit this connection, allowing us to improve outlier detection in high-dimensional data. In the second part of the thesis we extend our work to the emerging topic of data streams

    On the edges of clustering

    Get PDF

    Fractal dimension for clustering and unsupervised and supervised feature selection.

    Get PDF
    Data mining refers to the automation of data analysis to extract patterns from large amounts of data. A major breakthrough in modelling natural patterns is the recognition that nature is fractal, not Euclidean. Fractals are capable of modelling self-similarity, infinite detail, infinite length and the absence of smoothness. This research was aimed at simplifying the discovery and detection of groups in data using the fractal dimension, and addresses three data mining tasks efficiently: the first defines groups of instances (clustering), the second selects useful features from non-predefined (unsupervised) groups of instances, and the third selects useful features from pre-defined (supervised) groups of instances. Improvements are shown on two data mining models: hierarchical clustering and Artificial Neural Networks (ANN).

    For clustering tasks, a new two-phase clustering algorithm based on the Fractal Dimension (FD), compactness and closeness of clusters is presented. The proposed method, which uses the self-similarity properties of the data, first divides the data into sufficiently large sub-clusters with high compactness. In the second stage, the algorithm merges the sub-clusters that are close to each other and have similar complexity. The final clusters are obtained in a very natural and fully deterministic way.

    The selection of different feature subspaces leads to different cluster interpretations. An unsupervised embedded feature selection algorithm, able to detect relevant and redundant features, is presented. This algorithm is based on the concept of fractal dimension. The level of relevance of the features is quantified using a newly proposed entropy measure, which is less complex than the current state of the art. The proposed algorithm is able to maintain, and in some cases improve, the quality of the clusters in reduced feature spaces.

    For supervised feature selection, for classification purposes, a new algorithm is proposed that maximises the relevance and minimises the redundancy of the features simultaneously. This algorithm makes use of the FD and Mutual Information (MI) techniques, and combines them to create a new measure of feature usefulness and to produce a simpler and non-heuristic algorithm. The similar nature of the two techniques, FD and MI, makes the proposed algorithm more suitable for a straightforward global analysis of the data
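    A hedged sketch of the kind of measurement this line of work builds on is given below: estimating the correlation fractal dimension (D2) of a point set by box counting. The grid sizes and the log-log fit are illustrative choices; the two-phase clustering algorithm, the entropy measure and the FD/MI feature-selection criteria themselves are not reproduced.

```python
# Hedged sketch (assumed details): estimate the correlation fractal dimension
# D2 by box counting -- the slope of log(sum of squared cell occupancies)
# against log(cell side). Not the thesis' clustering or feature-selection code.
import numpy as np

def correlation_fractal_dimension(X, grid_sizes=(2, 4, 8, 16)):
    """Estimate D2 of the point set X (rows = instances)."""
    X = (X - X.min(axis=0)) / (np.ptp(X, axis=0) + 1e-12)        # normalise to the unit hypercube
    log_r, log_s = [], []
    for g in grid_sizes:
        cells = np.minimum((X * g).astype(int), g - 1)            # grid-cell index of every point
        _, counts = np.unique(cells, axis=0, return_counts=True)
        log_r.append(np.log(1.0 / g))                             # cell side r = 1/g
        log_s.append(np.log((counts.astype(float) ** 2).sum()))   # sum-of-squares occupancy
    slope, _ = np.polyfit(log_r, log_s, 1)                        # D2 is the slope of the log-log fit
    return slope

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    t = rng.uniform(size=5000)
    line = np.column_stack([t, t, t])                             # 1-D structure embedded in 3-D
    cube = rng.uniform(size=(5000, 3))                            # points filling 3-D space
    print("line D2 ≈", round(correlation_fractal_dimension(line), 2))  # close to 1
    print("cube D2 ≈", round(correlation_fractal_dimension(cube), 2))  # approaches 3 as the sample grows
```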