
    Diamond Dicing

    In OLAP, analysts often select an interesting sample of the data. For example, an analyst might focus on products bringing revenues of at least 100 000 dollars, or on shops having sales greater than 400 000 dollars. However, current systems do not allow the application of both of these thresholds simultaneously, selecting products and shops satisfying both thresholds. For such purposes, we introduce the diamond cube operator, filling a gap among existing data warehouse operations. Because of the interaction between dimensions, the computation of diamond cubes is challenging. We compare and test various algorithms on large data sets of more than 100 million facts. We find that while it is possible to implement diamonds in SQL, it is inefficient. Indeed, our custom implementation can be a hundred times faster than popular database engines (including a row-store and a column-store). Comment: 29 pages
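    The abstract does not spell out the algorithm, but a diamond cube can be read as the fixed point of an iterative pruning process, much like a k-core: drop every product or shop whose aggregate falls below its threshold, re-aggregate, and repeat until nothing changes. A minimal Python sketch under that reading (function and field names are illustrative, not the paper's implementation):

```python
# Hedged sketch of diamond-cube pruning on (product, shop, revenue) facts.
from collections import defaultdict

def diamond(facts, product_min=100_000, shop_min=400_000):
    """facts: iterable of (product, shop, revenue) tuples.
    Repeatedly drop facts whose product or shop misses its threshold."""
    facts = list(facts)
    while True:
        product_rev = defaultdict(float)
        shop_rev = defaultdict(float)
        for product, shop, revenue in facts:
            product_rev[product] += revenue
            shop_rev[shop] += revenue
        kept = [
            (p, s, r) for p, s, r in facts
            if product_rev[p] >= product_min and shop_rev[s] >= shop_min
        ]
        if len(kept) == len(facts):   # fixed point: nothing left to prune
            return kept
        facts = kept

# Example: only facts whose product AND shop satisfy both thresholds survive.
cube = [("A", "S1", 60_000), ("A", "S2", 70_000), ("B", "S1", 450_000)]
print(diamond(cube))   # [('B', 'S1', 450000)]
```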

    Pattern Mining and Sense-Making Support for Enhancing the User Experience

    While data mining techniques such as frequent itemset and sequence mining are well established as powerful pattern discovery tools in domains ranging from science and medicine to business, a key limitation is the lack of support for interactively exploring the large numbers of patterns generated under diverse parameter settings, along with the relationships among the mined patterns. To enhance the user experience, real-time query turnaround times and improved support for interactive mining are desired. There is also increasing interest in applying data mining solutions to mobile data. Patterns mined over mobile data may enable context-aware applications ranging from automating frequently repeated tasks to providing personalized recommendations. Overall, this dissertation addresses three problems that limit the utility of data mining, namely, (a) lack of interactive exploration tools for mined patterns, (b) insufficient support for mining localized patterns, and (c) high computational mining requirements prohibiting mining of patterns on smaller compute units such as a smartphone. This dissertation develops interactive frameworks for the guided exploration of mined patterns and their relationships. Contributions include the PARAS pre-processing and indexing framework, which enables analysts to gain key insights into rule relationships in a parameter-space view through a compact storage of rules that allows query-time reconstruction of complete rulesets. Contributions also include the visual rule exploration framework FIRE, which presents an interactive dual view of the parameter space and the rule space that together enable enhanced sense-making of rule relationships. This dissertation also supports the online mining of localized association rules computed on data subsets by selectively deploying alternative execution strategies that leverage a multidimensional, itemset-based data partitioning index. Finally, we designed OLAPH, an on-device context-aware service that learns phone usage patterns over mobile context data such as app usage, location, call and SMS logs to provide device intelligence. Concepts introduced for modeling mobile data as sequences include compressing context logs into intervaled context events, adding generalized time features, and identifying meaningful sequences via filter expressions.
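    PARAS's internal data structures are not described in this summary, but the general idea of answering interactive (minsup, minconf) queries from a rule collection mined once at permissive thresholds can be sketched as follows (classes, fields, and numbers are illustrative, not the dissertation's design):

```python
# Illustrative parameter-space rule querying, in the spirit of PARAS.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    antecedent: frozenset
    consequent: frozenset
    support: float
    confidence: float

class RuleIndex:
    def __init__(self, rules):
        # Mined once at low thresholds; sorted so queries can stop early.
        self.rules = sorted(rules, key=lambda r: r.support, reverse=True)

    def query(self, minsup, minconf):
        """Return the ruleset for a new (minsup, minconf) setting
        without re-running the miner."""
        out = []
        for r in self.rules:
            if r.support < minsup:
                break                      # remaining rules have lower support
            if r.confidence >= minconf:
                out.append(r)
        return out

index = RuleIndex([
    Rule(frozenset({"bread"}), frozenset({"butter"}), 0.30, 0.80),
    Rule(frozenset({"milk"}), frozenset({"cereal"}), 0.10, 0.60),
])
print(len(index.query(minsup=0.2, minconf=0.7)))   # -> 1
```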

    Determination of weighted, utility-based time-variant association rules by applying a frequent pattern tree

    Introduction: The present research was conducted at the Birla Institute of Technology, off-campus centre in Noida, India, in 2017. Methods: To assess the efficiency of the proposed approach to information mining, a method and an algorithm were proposed for mining time-variant, weighted, utility-based association rules using an FP-tree. Results: A method is suggested for finding association rules over time-oriented, frequency-weighted, utility-based data, employing a hierarchy to extract itemsets and establish their associations. Conclusions: The design choices adopted while developing the approach compressed a large time-variant dataset into a smaller data structure, and at the same time the FP-tree avoided repeated scans of the dataset, which yielded a noteworthy advantage in execution time and memory use. Originality: High-utility recurrent-pattern mining is currently one of the most noteworthy study areas in time-variant information mining because it accounts for both the frequency of itemsets and the distinct utility of each itemset. This research avoids generating a large number of candidate sets, which in turn reduces execution time and search space. Limitations: The approach was shown to be efficient only on the tested datasets, with pre-defined weight and utility calculations.
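    The abstract gives no formulas, but the utility-based filtering it refers to can be illustrated with a small sketch: each item carries an external utility (for example, profit per unit), each transaction records quantities, and an itemset is kept only if its total utility clears a user-chosen threshold. Item names and values below are illustrative, not from the paper:

```python
# Minimal illustration of utility-based itemset filtering (not the paper's algorithm).
item_utility = {"pen": 1.0, "notebook": 3.0, "laptop": 500.0}   # external utility per unit

transactions = [
    {"pen": 4, "notebook": 2},          # item -> quantity purchased
    {"notebook": 1, "laptop": 1},
    {"pen": 10},
]

def itemset_utility(itemset, transactions):
    """Sum the utility an itemset contributes in every transaction containing it."""
    total = 0.0
    for t in transactions:
        if all(i in t for i in itemset):
            total += sum(item_utility[i] * t[i] for i in itemset)
    return total

# An itemset is "high utility" if it clears the threshold.
print(itemset_utility({"notebook", "laptop"}, transactions))   # 3.0*1 + 500.0*1 = 503.0
print(itemset_utility({"pen"}, transactions) >= 50)            # 14.0 -> False
```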

    Implementation of Slicing for Multiple Column Multiple Attributes Privacy Preserving Data Publishing

    Recent work shows that generalization loses a considerable amount of information for high-dimensional data. Several anonymization techniques, such as generalization and bucketization, exist for privacy-preserving microdata publishing. Bucketization does not prevent membership disclosure and does not provide a clear separation between quasi-identifying and sensitive attributes. We present a technique called slicing for multiple columns and multiple attributes, which partitions the data both horizontally and vertically. We also show that slicing preserves better data utility than generalization and bucketization and can be used to protect against membership disclosure. Slicing can also be used to protect against attribute disclosure, and we develop an efficient algorithm for computing sliced data that obeys the l-diversity requirement. Our experimental workloads confirm that this technique prevents membership disclosure and also increases the data utility and privacy of a sliced dataset by allowing multiple-column, multiple-attribute slicing while maintaining protection against membership disclosure. DOI: 10.17762/ijritcc2321-8169.150615
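    The slicing idea referred to above can be sketched in a few lines: group correlated attributes into columns (vertical partition), split tuples into buckets (horizontal partition), then permute each column independently within a bucket so cross-column linkage is broken. Column choices, bucket size, and attribute names below are illustrative, not the paper's configuration:

```python
# Hedged sketch of slicing: vertical + horizontal partitioning with per-bucket permutation.
import random

records = [
    {"age": 29, "zip": "47677", "disease": "flu"},
    {"age": 22, "zip": "47602", "disease": "cold"},
    {"age": 27, "zip": "47678", "disease": "flu"},
    {"age": 43, "zip": "47905", "disease": "cancer"},
]

columns = [("age", "zip"), ("disease",)]     # vertical partition: correlated attributes together
bucket_size = 2                              # horizontal partition

def slice_table(records, columns, bucket_size, seed=0):
    rng = random.Random(seed)
    sliced = []
    for start in range(0, len(records), bucket_size):
        bucket = records[start:start + bucket_size]
        pieces = []
        for col in columns:
            values = [tuple(r[a] for a in col) for r in bucket]
            rng.shuffle(values)              # break linkage across columns
            pieces.append(values)
        sliced.append(list(zip(*pieces)))    # recombine the permuted columns row-wise
    return sliced

for bucket in slice_table(records, columns, bucket_size):
    print(bucket)
```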

    Scalable Mining of High-Utility Sequential Patterns With Three-Tier MapReduce Model

    High-utility sequential pattern mining (HUSPM) has been a hot research topic in recent decades since it combines both sequential and utility properties to reveal more information and knowledge than traditional frequent itemset mining or sequential pattern mining. Several works on HUSPM have been presented, but most of them rely on main memory to speed up mining performance. However, this assumption is not realistic or suitable in large-scale environments since, in real industry, the size of the collected data is huge and it is impossible to fit the data into the main memory of a single machine. In this article, we first develop a parallel and distributed three-stage MapReduce model for mining high-utility sequential patterns from large-scale databases. Two properties are then developed to guarantee the correctness and completeness of the discovered patterns in the developed framework. In addition, two data structures called sidset and utility-linked list are utilized in the developed framework to accelerate the computation of the required patterns. From the results, we observe that the designed model performs well on large-scale datasets in terms of runtime, memory, efficiency with respect to the number of distributed nodes, and scalability compared to the serial HUSP-Span approach.
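    The article's three MapReduce stages are not detailed in this summary, but the general map/reduce shape of distributing utility aggregation over database partitions can be illustrated framework-free (candidate sequences, utilities, and the threshold below are made up for the example):

```python
# Framework-free sketch of one MapReduce-style stage: sum per-partition utilities
# of candidate sequences and keep only the high-utility ones.
from collections import defaultdict
from itertools import chain

partitions = [
    [("a", 5), ("ab", 9)],     # (candidate sequence, local utility) per data partition
    [("a", 3), ("b", 4)],
]

def map_phase(partition):
    # Emit (candidate, local_utility) key-value pairs for one partition.
    return [(candidate, utility) for candidate, utility in partition]

def reduce_phase(pairs, min_utility):
    # Sum local utilities per candidate and filter by the utility threshold.
    totals = defaultdict(int)
    for candidate, utility in pairs:
        totals[candidate] += utility
    return {c: u for c, u in totals.items() if u >= min_utility}

pairs = chain.from_iterable(map_phase(p) for p in partitions)
print(reduce_phase(pairs, min_utility=8))   # {'a': 8, 'ab': 9}
```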

    Coping with new Challenges in Clustering and Biomedical Imaging

    Recent years have seen a tremendous increase in data acquisition in different scientific fields such as molecular biology, bioinformatics or biomedicine. Therefore, novel methods are needed for the automatic processing and analysis of this large amount of data. Data mining is the process of applying methods like clustering or classification to large databases in order to uncover hidden patterns. Clustering is the task of partitioning the points of a data set into distinct groups in order to maximize the intra-cluster similarity and to minimize the inter-cluster similarity. In contrast to unsupervised learning like clustering, the classification problem is known as supervised learning and aims at predicting the group membership of data objects on the basis of rules learned from a training set where the group membership is known. Specialized methods have been proposed for hierarchical and partitioning clustering. However, these methods suffer from several drawbacks. In the first part of this work, new clustering methods are proposed that cope with problems of conventional clustering algorithms. ITCH (Information-Theoretic Cluster Hierarchies) is a hierarchical clustering method that is based on a hierarchical variant of the Minimum Description Length (MDL) principle and finds hierarchies of clusters without requiring input parameters. As ITCH may converge only to a local optimum, we propose GACH (Genetic Algorithm for Finding Cluster Hierarchies), which combines the benefits of genetic algorithms with information theory. In this way, the search space is explored more effectively. Furthermore, we propose INTEGRATE, a novel clustering method for data with mixed numerical and categorical attributes. Supported by the MDL principle, our method integrates the information provided by heterogeneous numerical and categorical attributes and thus naturally balances the influence of both sources of information. A competitive evaluation illustrates that INTEGRATE is more effective than existing clustering methods for mixed-type data. Besides clustering methods for single data objects, we provide a solution for clustering different data sets that are represented by their skylines. The skyline operator is a well-established database primitive for finding database objects which minimize two or more attributes with an unknown weighting between these attributes. In this thesis, we define a similarity measure, called SkyDist, for comparing skylines of different data sets that can directly be integrated into different data mining tasks such as clustering or classification. The experiments show that SkyDist, in combination with different clustering algorithms, can give useful insights in many applications. In the second part, we focus on the analysis of high-resolution magnetic resonance images (MRI) that are clinically relevant and may allow for the early detection and diagnosis of several diseases. In particular, we propose a framework for the classification of Alzheimer's disease in MR images combining the data mining steps of feature selection, clustering and classification. As a result, a set of highly selective features discriminating patients with Alzheimer's disease from healthy people has been identified. However, the analysis of the high-dimensional MR images is extremely time-consuming. Therefore we developed JGrid, a scalable distributed computing solution designed to allow for a large-scale analysis of MRI and thus an optimized prediction of diagnoses.
    In another study we apply efficient algorithms for motif discovery to task-fMRI scans in order to identify patterns in the brain that are characteristic of patients with somatoform pain disorder. We find groups of brain compartments that occur frequently within the brain networks and discriminate well between healthy and diseased people.
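    SkyDist itself is not defined in this summary, but the skyline primitive it builds on is standard: keep exactly the points not dominated by any other point when every attribute is to be minimized. A small sketch of that primitive (naive O(n^2), purely illustrative):

```python
# Standard skyline (Pareto front) under "minimize all attributes"; not the thesis' code.
def dominates(p, q):
    """p dominates q if p is <= q in every attribute and strictly < in at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Example: hotels as (price, distance_to_beach); cheaper and closer is better.
hotels = [(50, 8.0), (80, 1.5), (60, 2.0), (90, 0.5), (70, 2.5)]
print(skyline(hotels))   # [(50, 8.0), (80, 1.5), (60, 2.0), (90, 0.5)]
```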

    Multi-Source Spatial Entity Extraction and Linkage


    A study on text-score disagreement in online reviews

    In this paper, we focus on online reviews and employ artificial intelligence tools, taken from the cognitive computing field, to help understand the relationship between the textual part of a review and the assigned numerical score. We start from the intuitions that 1) a set of textual reviews expressing different sentiments may feature the same score (and vice versa); and 2) detecting and analyzing the mismatches between the review content and the actual score may benefit both service providers and consumers, by highlighting specific factors of satisfaction (and dissatisfaction) in texts. To test these intuitions, we adopt sentiment analysis techniques and concentrate on hotel reviews to find polarity mismatches therein. In particular, we first train a text classifier with a set of annotated hotel reviews taken from the Booking website. Then, we analyze a large dataset of around 160k hotel reviews collected from Tripadvisor, with the aim of detecting a polarity mismatch, i.e., whether the textual content of the review is in line, or not, with the associated score. Using well-established artificial intelligence techniques and analyzing in depth the reviews featuring a mismatch between text polarity and score, we find that, on a scale of five stars, reviews ranked with middle scores include a mixture of positive and negative aspects. The approach proposed here, besides acting as a polarity detector, provides an effective selection of reviews from an initial very large dataset, which may allow both consumers and providers to focus directly on the review subset featuring a text/score disagreement, and which conveniently conveys to the user a summary of the positive and negative features of the review target. Comment: This is the accepted version of the paper. The final version will be published in the Journal of Cognitive Computation, available at Springer via http://dx.doi.org/10.1007/s12559-017-9496-
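    The paper's classifier is trained on annotated Booking reviews; as a self-contained illustration of the mismatch check itself, the sketch below substitutes a toy lexicon-based polarity score and flags reviews whose text polarity disagrees with the star rating (the lexicon, thresholds, and example reviews are all made up):

```python
# Toy polarity-mismatch detector: a stand-in lexicon classifier, not the paper's model.
POSITIVE = {"great", "clean", "friendly", "excellent", "comfortable"}
NEGATIVE = {"dirty", "rude", "noisy", "broken", "terrible"}

def text_polarity(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def score_polarity(stars):
    # On a five-star scale: 1-2 negative, 3 neutral, 4-5 positive.
    return "negative" if stars <= 2 else "neutral" if stars == 3 else "positive"

def mismatches(reviews):
    """Yield reviews whose text polarity disagrees with the assigned score."""
    for text, stars in reviews:
        tp, sp = text_polarity(text), score_polarity(stars)
        if "neutral" not in (tp, sp) and tp != sp:
            yield text, stars, tp

reviews = [
    ("Great location but the room was dirty and the staff rude", 5),
    ("Clean comfortable room and friendly staff", 5),
]
print(list(mismatches(reviews)))   # flags only the first review
```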