
    Evaluation and optimization of frequent association rule based classification

    Deriving useful and interesting rules from a data mining system is an essential task. Problems such as the discovery of random or coincidental patterns, patterns with no significant value, and the generation of a large volume of rules from a database commonly occur. Work on sustaining the interestingness of rules generated by data mining algorithms is actively and continually being examined and developed. This paper presents a systematic way to evaluate the association rules discovered by frequent itemset mining algorithms, combining common data mining and statistical interestingness measures, and outlines an appropriate sequence for their use. The experiments are performed on a number of real-world datasets representing diverse characteristics of data and items, and a detailed evaluation of the rule sets is provided. Empirical results show that, with a proper combination of data mining and statistical analysis, the framework is capable of eliminating a large number of non-significant, redundant and contradictory rules while preserving relatively valuable high-accuracy, high-coverage rules for use in classification. Moreover, the results reveal important characteristics of mining frequent itemsets and the impact of the confidence measure on the classification task.
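    As an illustration of combining a data mining measure with a statistical interestingness measure, the sketch below computes support, confidence and lift for a rule A -> B together with a chi-square test of independence, and keeps the rule only if it passes all thresholds. The thresholds and the contingency-table construction are illustrative assumptions, not the paper's exact evaluation sequence.

        # Minimal sketch: filter a rule A -> B by data mining measures plus a
        # chi-square significance test. Counts and thresholds are illustrative.
        from scipy.stats import chi2_contingency

        def rule_measures(n_ab, n_a, n_b, n):
            """Support, confidence, lift and chi-square p-value for A -> B.

            n_ab: transactions containing A and B; n_a: containing A;
            n_b: containing B; n: total transactions.
            """
            support = n_ab / n
            confidence = n_ab / n_a
            lift = confidence / (n_b / n)
            # 2x2 contingency table over presence/absence of A and B
            table = [[n_ab, n_a - n_ab],
                     [n_b - n_ab, n - n_a - n_b + n_ab]]
            _, p_value, _, _ = chi2_contingency(table)
            return support, confidence, lift, p_value

        # Keep a rule only if it is both strong and statistically significant.
        s, c, l, p = rule_measures(n_ab=80, n_a=100, n_b=200, n=1000)
        keep = c >= 0.7 and l > 1.0 and p < 0.05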

    Generating High Precision Classification Rules for Screening of Irrelevant Studies in Systematic Review Literature Searches

    Systematic reviews aim to produce repeatable, unbiased, and comprehensive answers to clinical questions. They are an essential component of modern evidence-based medicine; however, because of the risk of omitting relevant research, they are highly time-consuming to create and are largely conducted manually. This thesis presents a novel framework for partial automation of systematic review literature searches. We exploit the ubiquitous multi-stage screening process by training the classifier on annotations made by reviewers in previous screening stages. Our approach has the benefit of integrating seamlessly with the existing screening process, minimising disruption to users. Ideally, classification models for systematic reviews should be easily interpretable by users, so we propose a novel rule-based algorithm for use with our framework. A new approach for identifying redundant associations when generating rules is also presented. The proposed approach to redundancy seeks to exclude both redundant specialisations of existing rules (those with additional terms in their antecedent) and redundant generalisations (those with fewer terms in their antecedent). We demonstrate the ability of the proposed approach to improve the usability of the generated rules. The proposed rule-based algorithm is evaluated by simulated application to several existing systematic reviews, demonstrating workload savings of up to 10%. There is an increasing demand for systematic reviews related to a variety of clinical disciplines, such as diagnosis. We examine reviews of diagnosis and contrast them with more traditional systematic reviews of treatment, demonstrating that existing challenges such as target class heterogeneity and high data imbalance are even more pronounced for this class of reviews. The described algorithm accounts for this by seeking to label subsets of non-relevant studies with high precision, avoiding the need to generate a high-recall model of the minority class.
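    The redundancy notion described above can be made concrete in a few lines. The sketch below is one plausible reading: a rule is dropped as a redundant specialisation if a rule with a strictly smaller antecedent performs at least as well, and as a redundant generalisation if a rule with a strictly larger antecedent performs strictly better. The quality measure (precision) and the tie handling are illustrative assumptions rather than the thesis's exact criteria.

        # Hedged sketch of pruning redundant specialisations and generalisations.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Rule:
            antecedent: frozenset  # set of terms in the rule's antecedent
            precision: float       # precision on the earlier screening stage

        def non_redundant(rules):
            kept = []
            for r in rules:
                redundant = any(
                    other is not r and (
                        # redundant specialisation: a simpler rule is at least as good
                        (other.antecedent < r.antecedent and other.precision >= r.precision)
                        # redundant generalisation: a more specific rule is strictly better
                        or (other.antecedent > r.antecedent and other.precision > r.precision)
                    )
                    for other in rules
                )
                if not redundant:
                    kept.append(r)
            return kept

        rules = [Rule(frozenset({"mri"}), 0.90),
                 Rule(frozenset({"mri", "contrast"}), 0.85)]  # pruned specialisation
        print([sorted(r.antecedent) for r in non_redundant(rules)])  # [['mri']]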

    Mining unexpected patterns using decision trees and interestingness measures: a case study of endometriosis

    Because clinical research is carried out in complex environments, prior domain knowledge, constraints, and expert knowledge can enhance the capabilities and performance of data mining. In this paper we propose an unexpected pattern mining model that uses decision trees to compare recovery rates of two different treatments, and to find patterns that contrast with the prior knowledge of domain users. In the proposed model we define interestingness measures to determine whether the patterns found are interesting to the domain. By applying the concept of domain-driven data mining, we repeatedly utilize decision trees and interestingness measures in a closed-loop, in-depth mining process to find unexpected and interesting patterns. We use retrospective data from transvaginal ultrasound-guided aspirations to show that the proposed model can successfully compare different treatments using a decision tree, which is a new usage of that tool. We believe that unexpected, interesting patterns may provide clinical researchers with different perspectives for future research.
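    A minimal sketch of the core comparison step, assuming tabular clinical records: fit a decision tree on patient attributes plus a treatment indicator, then inspect leaves where the two treatments' recovery rates diverge, flagging them as candidate unexpected patterns for expert review. The input file, column names and contrast threshold are hypothetical, not the study's actual variables.

        # Hedged sketch: use a decision tree's leaves to contrast recovery rates
        # of two treatments. All data and names below are hypothetical.
        import pandas as pd
        from sklearn.tree import DecisionTreeClassifier

        df = pd.read_csv("aspiration_records.csv")    # hypothetical file
        X = df[["age", "cyst_size", "treatment"]]     # hypothetical attributes
        y = df["recovered"]

        tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=30).fit(X, y)

        # For each leaf, compare recovery rates between the two treatments; leaves
        # with a large gap are candidate "unexpected" patterns for expert review.
        leaves = tree.apply(X)
        for leaf in set(leaves):
            mask = leaves == leaf
            rates = df[mask].groupby("treatment")["recovered"].mean()
            if len(rates) == 2 and abs(rates.iloc[0] - rates.iloc[1]) > 0.2:
                print(f"leaf {leaf}: recovery rates {rates.to_dict()}")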

    Generalised Interaction Mining: Probabilistic, Statistical and Vectorised Methods in High Dimensional or Uncertain Databases

    Knowledge Discovery in Databases (KDD) is the non-trivial process of identifying valid, novel, useful and ultimately understandable patterns in data. The core step of the KDD process is the application of Data Mining (DM) algorithms to efficiently find interesting patterns in large databases. This thesis concerns itself with three inter-related themes: generalised interaction and rule mining; the incorporation of statistics into novel data mining approaches; and probabilistic frequent pattern mining in uncertain databases. An interaction describes an effect that variables have -- or appear to have -- on each other. Interaction mining is the process of mining structures on variables describing their interaction patterns -- usually represented as sets, graphs or rules. Interactions may be complex, represent both positive and negative relationships, and the presence of interactions can influence another interaction or variable in interesting ways. Finding interactions is useful in domains ranging from social network analysis, marketing, the sciences and e-commerce to statistics and finance. Many data mining tasks may be considered as mining interactions, such as clustering, frequent itemset mining, association rule mining, classification rules, graph mining and flock mining. Interaction mining problems can have very different semantics, pattern definitions, interestingness measures and data types. Solving a wide range of interaction mining problems at the abstract level, and doing so efficiently -- ideally more efficiently than with specialised approaches -- is a challenging problem. This thesis introduces and solves the Generalised Interaction Mining (GIM) and Generalised Rule Mining (GRM) problems. GIM and GRM use an efficient and intuitive computational model based purely on vector-valued functions. The semantics of the interactions, their interestingness measures and the type of data considered are flexible components of the vectorised frameworks. By separating the semantics of a problem from the algorithm used to mine it, the frameworks allow both to vary independently of each other. This makes it easier to develop new methods, by allowing a focus purely on a problem's semantics and removing the burden of designing an efficient algorithm. By encoding interactions as vectors in the space (or a sub-space) of samples, they provide an intuitive geometric interpretation that inspires novel methods. By operating in time linear in the number of interesting interactions that need to be examined, the GIM and GRM algorithms are optimal. The use of GRM or GIM provides efficient solutions to a range of problems in this thesis, including graph mining, counting based methods, itemset mining, clique mining, a clustering problem, complex pattern mining, negative pattern mining, solving an optimisation problem, spatial data mining, probabilistic itemset mining, probabilistic association rule mining, feature selection and generation, classification and multiplication rule mining. Data mining is a hypothesis-generating endeavour, examining large databases for patterns suggesting novel and useful knowledge to the user. Since the database is a sample, the patterns found should describe hypotheses about the underlying process generating the data. In searching for these patterns, a DM algorithm makes additional hypotheses when it prunes the search space. Natural questions to ask, then, are: "Does the algorithm find patterns that are statistically significant?" 
and "Did the algorithm make significant decisions during its search?". Such questions address the quality of patterns found though data mining and the confidence that a user can have in utilising them. Finally, statistics has a range of useful tools and measures that are applicable in data mining. In this context, this thesis incorporates statistical techniques -- in particular, non-parametric significance tests and correlation -- directly into novel data mining approaches. This idea is applied to statistically significant and relatively class correlated rule based classification of imbalanced data sets; significant frequent itemset mining; mining complex correlation structures between variables for feature selection; mining correlated multiplication rules for interaction mining and feature generation; and conjunctive correlation rules for classification. The application of GIM or GRM to these problems lead to efficient and intuitive solutions. Frequent itemset mining (FIM) is a fundamental problem in data mining. While it is usually assumed that the items occurring in a transaction are known for certain, in many applications the data is inherently noisy or probabilistic; such as adding noise in privacy preserving data mining applications, aggregation or grouping of records leading to estimated purchase probabilities, and databases capturing naturally uncertain phenomena. The consideration of existential uncertainty of item(sets) makes traditional techniques inapplicable. Prior to the work in this thesis, itemsets were mined if their expected support is high. This returns only an estimate, ignores the probability distribution of support, provides no confidence in the results, and can lead to scenarios where itemsets are labeled frequent even if they are more likely to be infrequent. Clearly, this is undesirable. This thesis proposes and solves the Probabilistic Frequent Itemset Mining (PFIM) problem, where itemsets are considered interesting if the probability that they are frequent is high. The problem is solved under the possible worlds model and a proposed probabilistic framework for PFIM. Novel and efficient methods are developed for computing an itemset's exact support probability distribution and frequentness probability, using the Poisson binomial recurrence, generating functions, or a Normal approximation. Incremental methods are proposed to answer queries such as finding the top-k probabilistic frequent itemsets. A number of specialised PFIM algorithms are developed, with each being more efficient than the last: ProApriori is the first solution to PFIM and is based on candidate generation and testing. ProFP-Growth is the first probabilistic FP-Growth type algorithm and uses a proposed probabilistic frequent pattern tree (Pro-FPTree) to avoid candidate generation. Finally, the application of GIM leads to GIM-PFIM; the fastest known algorithm for solving the PFIM problem. It achieves orders of magnitude improvements in space and time usage, and leads to an intuitive subspace and probability-vector based interpretation of PFIM.Knowledge Discovery in Datenbanken (KDD) ist der nicht-triviale Prozess, gültiges, neues, potentiell nützliches und letztendlich verständliches Wissen aus großen Datensätzen zu extrahieren. Der wichtigste Schritt im KDD Prozess ist die Anwendung effizienter Data Mining (DM) Algorithmen um interessante Muster ("Patterns") in Datensätzen zu finden. 
Diese Dissertation beschäftigt sich mit drei verwandten Themen: Generalised Interaction und Rule Mining, die Einbindung von statistischen Methoden in neue DM Algorithmen und Probabilistic Frequent Itemset Mining (PFIM) in unsicheren Daten. Eine Interaktion ("Interaction") beschreibt den Einfluss, den Variablen aufeinander haben. Interaktionsmining ist der Prozess, Strukturen zwischen Variablen zu finden, die Interaktionsmuster beschreiben. Diese werden gewöhnlicherweise als Mengen, Graphen oder Regeln repräsentiert. Interaktionen können komplex sein und sowohl positive als auch negative Beziehungen repräsentieren. Außerdem kann das Vorhandensein von Interaktionen andere Interaktionen oder Variablen beeinflussen. Interaktionen stellen in Bereichen wie Soziale Netzwerk Analyse, Marketing, Wissenschaft, E-commerce, Statistik und Finanz wertvolle Information dar. Viele DM Methoden können als Interaktionsmining betrachtet werden: Zum Beispiel Clustering, Frequent Itemset Mining, Assoziationsregeln, Klassifikationsregeln, Graph Mining, Flock Mining, usw. Interaktionsmining-Probleme können sehr unterschiedliche Semantik, Musterdefinitionen, Interessantheitsmaße und Datentypen erfordern. Interaktionsmining-Probleme auf breiter und abstrakter Basis effizient -- und im Idealfall effizienter als mit spezialisierten Methoden -- zu lösen, ist ein herausforderndes Problem. Diese Dissertation führt das Generalised Interaction Mining (GIM) und das Generalised Rule Mining (GRM) Problem ein und beschreibt Lösungen für diese. GIM und GRM benutzen ein effizientes und intuitives Berechnungsmodell, das einzig und allein auf vektorbasierten Funktionen beruht. Die Semantik der Interaktionen, ihre Interessantheitsmaße und die Datenarten, sind Komponenten in vektorisierten Frameworks. Die Frameworks ermöglichen die Trennung der Problemsemantik vom Algorithmus, so dass beide unabhängig voneinander geändert werden können. Die Entwicklung neuer Methoden wird dadurch erleichtert, da man sich völlig auf die Problemsemantik fokussieren kann und sich nicht mit der Entwicklung problemspezifischer Algorithmen befassen muss. Die Kodierung der Interaktionen als Vektoren im gesamten Raum (oder Teilraum) der Stichproben stellt eine intuitive geometrische Interpretation dar, die neuartige Methoden inspiriert. Die GRM- und GIM- Algorithmen haben lineare Laufzeit in der Anzahl der Interaktionen die geprüft werden müssen und sind somit optimal. Die Anwendung von GRM oder GIM in dieser Dissertation ermöglicht effiziente Lösungen für eine Reihe von Problemen, wie zum Beispiel Graph Mining, Aufzählungsmethoden, Itemset Mining, Clique Mining, ein Clusteringproblem, das Finden von komplexen und negativen Mustern, die Lösung von Optimierungsproblemen, Spatial Data Mining, probabilistisches Itemset Mining, probabilistisches Mining von Assoziationsregel, Selektion und Erzeugung von Features, Mining von Klassifikations- und Multiplikationsregel, u.v.m. Data Mining ist ein Verfahren, das Hypothesen produziert, indem es in großen Datensätzen Muster findet und damit für den Anwender neues und nützliches Wissen vorschlägt. Da die untersuchte Datenbank ein Resultat des datenerzeugenden Prozesses ist, sollten die gefundenen Muster Erkenntnisse über diesen Prozess liefern. Bei der Suche nach diesen Mustern macht ein DM Algorithmus zusätzliche Hypothesen, wenn Teile des Suchraums ausgeschlossen werden. Die folgenden Fragen sind dabei wichtig: "Findet der Algorithmus statistisch signifikante Muster?" 
und "Hat der Algorithmus während des Suchprozesses signifikante Entscheidungen getroffen?". Diese Fragen beeinflussen die Qualität der Muster und die Sicherheit die der Anwender in ihrer Benutzung haben kann. Da die Statistik auch eine Reihe von nützlichen Methoden bereitstellt, die für DM anwendbar sind, kombiniert diese Dissertation einige statistische Methoden mit neuen DM Algorithmen, insbesondere nicht-parametrische Signifikanztests und Korrelation. Diese Idee wird für die folgenden Probleme angewandt: Signifikante und "relatively class correlated" regelbasierte Klassifikation in unsymmetrischen Datensätzen, signifikantes Frequent Itemset Mining, Mining von komplizierten Korrelationsstrukturen zwischen Variablen zum Zweck der Featureselektion, Mining von korrelierten Multiplikationsregeln zum Zwecke des Interaktionsminings und Featureerzeugung und konjunktive Korrelationsregeln für die Klassifikation. Die Anwendung von GIM und GRM auf diese Probleme führt zu effizienten und intuitiven Lösungen. Frequent Itemset Mining (FIM) ist ein fundamentales Problem im Data Mining. Obwohl allgemein die Annahme gilt, dass in einer Transaktion enthaltene Items bekannt sind, sind die Daten in vielen Anwendungen unsicher oder probabilistisch. Beispiele sind das Hinzufügen von Rauschen zu Datenschutzzwecken, die Gruppierung von Datensätzen die zu geschätzten Kaufwahrscheinlichkeiten führen und Datensätze deren Herkunft von Natur aus unsicher sind. Die Berücksichtigung von unsicheren Datensätzen verhindert die Anwendung von traditionellen Methoden. Vor der Arbeit in dieser Dissertation wurden Itemsets gesucht, deren erwartetes Vorkommen hoch ist. Diese Methode produziert jedoch nur Schätzwerte, vernachlässigt die Wahrscheinlichkeitsverteilung der Vorkommen, bietet keine Sicherheit für die Genauigkeit der Ergebnisse und kann zu Szenarien führen in denen das Vorkommen als häufig eingestuft wird, obwohl die Wahrscheinlichkeit höher ist, dass sie nur selten vorkommen. Solche Ergebnisse sind natürlich unerwünscht. Diese Dissertation führt das Probabilistic Frequent Itemset Mining (PFIM) ein. Diese Lösung betrachtet Itemsets als interessant, wenn die Wahrscheinlichkeit groß ist, dass sie häufig vorkommen. Die Problemlösung besteht aus der Anwendung des Possible Worlds Models und dem vorgeschlagenen probabilistisches Framework für PFIM. Es werden neue und effiziente Methoden entwickelt um die Wahrscheinlichkeitsverteilung des Vorkommens und die Häufigkeitsverteilung eines Itemsets zu berechnen. Dazu werden die Poisson Binomial Recurrence, Generating Functions, oder eine normalverteilte Annäherung verwendet. Inkrementelle Methoden werden vorgeschlagen um Fragen wie "Finde die top-k Probabilistic Frequent Itemsets" zu beantworten. Mehrere PFIM Algorithmen werden entwickelt, wobei die Effizienz von Algorithmus zu Algorithmus steigt: ProApriori ist die erste Lösung für PFIM und basiert auf erzeugen und testen von Kandidaten. ProFP-Growth ist der erste probabilistische FP-Growth Algorithmus. Er schlägt einen Probabilistic Frequent Pattern Tree (Pro-FPTree) vor, der Kandidatenerzeugung überflüssig macht. Die Anwendung von GIM führt schließlich zu GIM-PFIM, dem schnellsten bekannten Algorithmus zur Lösung des PFIM Problems. Dieser Algorithmus resultiert in einem um Größenordnungen besseren Zeit- und Speicherbedarf, und führt zu einer intuitiven Interpretation von PFIM, basierend auf Unterräumen und Wahrscheinlichkeitsvektoren

    Automatic Identification of Interestingness in Biomedical Literature

    This thesis presents research on automatically identifying interestingness in a graph of semantic predications. Interestingness represents a subjective quality of information that reflects its value in meeting a user's known or unknown retrieval needs. The perception of information as interesting requires a level of utility for the user as well as a balance between significant novelty and sufficient familiarity. It can also be influenced by additional factors such as unexpectedness or serendipity relative to recent experiences. The ability to identify interesting information facilitates the development of user-centred retrieval, especially in semantic information summarization and iterative, step-wise searching such as in discovery browsing systems. Ultimately, this allows biomedical researchers to more quickly identify information of greatest potential interest to them, whether expected or, perhaps more importantly, unexpected. Current discovery browsing systems use iterative information retrieval to discover new knowledge - a process that requires finding relevant co-occurring topics and relationships through consistent human involvement to identify interesting concepts. Although interestingness is subjective, this thesis identifies computable quantities in semantic data that correlate with interestingness in user searches. We compare several statistical and rule-based models correlating graph data extracted from semantic predications with concept interestingness as demonstrated in PubMed queries. Semantic predications represent scientific assertions extracted from all of the biomedical literature contained in the MEDLINE database. They take the form "subject-predicate-object". Predications can easily be represented as graphs, where subjects and objects are nodes and predicates form edges. A graph of predications represents the assertions made in the citations from which the predications were extracted. This thesis uses graph metrics to identify features from the predication graph for model generation. These features are based on the degree centrality (connectedness) of the seed concept node and surrounding nodes, as well as on frequency-of-occurrence measures of the edges between the seed concept and surrounding nodes, and between the nodes surrounding the seed concept and the neighbours of those nodes. A PubMed query log is used for training and testing models of interestingness. This log contains a set of user searches over a 24-hour period, and we assume that co-occurrence of a concept with the seed concept in searches demonstrates the interestingness of that concept with regard to the seed concept. Graph generation begins with the selection of all predications containing the seed concept from the Semantic Medline database (our training dataset uses Alzheimer's disease as the seed concept). The graph is built with the seed concept as the central node. Additional nodes are added for each concept that occurs with the seed concept in the initial predications, and an edge is created for each instance of a predication containing the two concepts. The edges are labelled with the specific predicate in the predication. This graph is extended to include additional nodes within two leaps from the seed concept. The concepts in the PubMed query logs are normalized to UMLS concepts or Entrez Gene symbols using MetaMap. Token-based and user-based counts are collected for each co-occurring term. These measures are combined to create a weighted score, which is used to determine three potential thresholds of interestingness based on deviation from the mean score. The concepts that appear in both the graph and the normalized log data are identified for use in model training and testing.
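    The graph construction and degree-based features described above can be sketched with networkx. The predication triples and the seed concept below are illustrative stand-ins for Semantic Medline data, not actual extracted predications.

        # Hedged sketch: build a predication graph around a seed concept and
        # compute degree-based features. Triples here are illustrative.
        import networkx as nx

        predications = [
            ("Alzheimer's disease", "ASSOCIATED_WITH", "APOE"),
            ("Alzheimer's disease", "TREATS", "donepezil"),
            ("APOE", "INTERACTS_WITH", "TOMM40"),
        ]

        G = nx.MultiDiGraph()                   # multiple edges allowed per node pair
        for subj, pred, obj in predications:
            G.add_edge(subj, obj, predicate=pred)

        seed = "Alzheimer's disease"
        for neighbor in set(G.successors(seed)) | set(G.predecessors(seed)):
            features = {
                "degree": G.degree(neighbor),   # connectedness of the node
                "edges_to_seed": G.number_of_edges(seed, neighbor)
                                 + G.number_of_edges(neighbor, seed),
            }
            print(neighbor, features)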

    Data mining in manufacturing: a review based on the kind of knowledge

    In modern manufacturing environments, vast amounts of data are collected in database management systems and data warehouses from all involved areas, including product and process design, assembly, materials planning, quality control, scheduling, maintenance and fault detection. Data mining has emerged as an important tool for knowledge acquisition from manufacturing databases. This paper reviews the literature dealing with knowledge discovery and data mining applications in the broad domain of manufacturing, with special emphasis on the type of functions to be performed on the data. The major data mining functions include characterization and description, association, classification, prediction, clustering and evolution analysis, and the reviewed papers have been categorized accordingly. The review shows rapid growth in the application of data mining to manufacturing processes and enterprises over the last three years, and it highlights both progressive applications and existing gaps in the context of data mining in manufacturing. A novel text mining approach has also been applied to the abstracts and keywords of 150 papers to identify research gaps and to find linkages between knowledge area, knowledge type and the applied data mining tools and techniques.

    Developing and deploying data mining techniques in healthcare

    Improving healthcare is a top priority for all nations. US healthcare expenditure was $3 trillion in 2014, amounting to 17.5% of GDP in that year. These statistics show the importance of improving the healthcare delivery system. In this research, we developed several data mining methods and algorithms to address healthcare problems; these methods can also be applied to problems in other domains. The first part of this dissertation concerns the rare item problem in association analysis, i.e., the discovery of rare rules, which involve rare items. We introduce a novel assessment metric, called adjusted support, to address this problem. By applying this metric, we can retrieve rare rules without over-generating association rules. We applied this method to perform association analysis on complications of diabetes. The second part of this dissertation develops a clinical decision support system for predicting retinopathy, the leading cause of vision loss among American adults. We analyzed data from more than 1.4 million diabetic patients and developed four sets of predictive models: basic, comorbid, over-sampled, and ensemble models. The results show that incorporating comorbidity data and oversampling improved prediction accuracy. In addition, we developed a novel "confidence margin" ensemble approach that outperformed existing ensemble models, addressing ties in voting-based ensembles by comparing the confidence margins of the base predictors. The third part of this dissertation addresses imbalanced data learning, a major challenge in machine learning: while a standard technique may perform well on balanced datasets, its performance deteriorates dramatically on imbalanced ones. This is particularly troublesome because the minority class is usually the class of interest. We propose a synthetic informative minority over-sampling (SIMO) algorithm embedded in a support vector machine. We applied SIMO to 15 publicly available benchmark datasets and assessed its performance against seven existing approaches; SIMO outperformed all of them.
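    The confidence-margin tie-breaking idea can be illustrated compactly. The sketch below is one plausible reading of the description above, assuming probabilistic base classifiers: when a majority vote ties, the base predictor with the largest gap between its top two class probabilities decides. The dissertation's actual "confidence margin" ensemble may differ in detail.

        # Hedged sketch: break voting ties using base predictors' confidence margins.
        import numpy as np

        def margin_vote(probas):
            """probas: list of per-model class-probability vectors for one sample."""
            votes = [int(np.argmax(p)) for p in probas]
            counts = np.bincount(votes)
            winners = np.flatnonzero(counts == counts.max())
            if len(winners) == 1:
                return int(winners[0])          # clear majority, no tie to break
            # Tie: side with the voter whose top-two probability margin is largest.
            margins = [np.sort(p)[-1] - np.sort(p)[-2] for p in probas]
            return votes[int(np.argmax(margins))]

        # Two models vote class 0, two vote class 1; the most confident decides.
        print(margin_vote([np.array([0.9, 0.1]), np.array([0.55, 0.45]),
                           np.array([0.4, 0.6]), np.array([0.48, 0.52])]))  # -> 0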

    Associative classifier coupled with unsupervised feature reduction for dengue fever classification using gene expression data

    Recent studies have established the potential of classifiers designed using association rule mining methods. The current study proposes such an associative classifier to efficiently detect dengue fever using gene expression data. Labelled gene expression data is preprocessed and discretized to mine association rules using well-established rule mining methods. Thereafter, unsupervised clustering methods are applied to the discretized gene expression data to reduce and select the most promising features. The final feature-reduced, discretized gene expression data is subsequently used to mine rules that classify subjects as 'Dengue Fever' or 'Healthy'. Two well-known association rule mining methods, viz. Apriori and FP-Growth, are used here along with various well-established clustering methods. An extensive analysis is reported, with performance measured in terms of accuracy, precision, recall and false positive rate using 5-fold cross-validation. Furthermore, a separate investigation is conducted to find the most suitable number of features and the most suitable confidence threshold for the association rule mining methods. The experimental results indicate accurate detection of dengue fever patients at an early stage using the proposed associative classification method.
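    The rule-mining step of such an associative classifier can be sketched with mlxtend's Apriori implementation: mine frequent itemsets over one-hot discretised gene expression data, derive high-confidence rules, and keep only rules whose consequent is a class label. The toy dataset, discretisation and thresholds below are illustrative assumptions.

        # Hedged sketch: Apriori-based associative classification with mlxtend.
        import pandas as pd
        from mlxtend.frequent_patterns import apriori, association_rules

        # Rows: subjects; columns: discretised gene features plus class labels.
        df = pd.DataFrame({
            "GENE1_high": [1, 1, 0, 0], "GENE2_low": [1, 0, 1, 0],
            "DengueFever": [1, 1, 0, 0], "Healthy": [0, 0, 1, 1],
        }).astype(bool)

        itemsets = apriori(df, min_support=0.25, use_colnames=True)
        rules = association_rules(itemsets, metric="confidence", min_threshold=0.9)

        # Keep only rules whose consequent is a class label: these form the classifier.
        class_labels = {frozenset({"DengueFever"}), frozenset({"Healthy"})}
        classifier_rules = rules[rules["consequents"].isin(class_labels)]
        print(classifier_rules[["antecedents", "consequents", "confidence"]])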