64 research outputs found
SeqScout: Using a Bandit Model to Discover Interesting Subgroups in Labeled Sequences
It is extremely useful to exploit labeled datasets not only to learn models but also to improve our understanding of a domain and its available target classes. The so-called subgroup discovery task has been studied for a long time. It concerns the discovery of patterns or descriptions whose sets of supporting objects have interesting properties, e.g., they characterize or discriminate a given target class. Though many subgroup discovery algorithms have been proposed for transactional data, discovering subgroups within labeled sequential data, and thus searching for descriptions as sequential patterns, has been much less studied. In that context, exhaustive exploration strategies cannot be used for real-life applications, and we have to look for heuristic approaches. We propose the algorithm SeqScout to discover interesting subgroups (w.r.t. a chosen quality measure) from labeled sequences of itemsets. This is a new sampling algorithm that mines discriminant sequential patterns using a multi-armed bandit model. It is an anytime algorithm that, for a given budget, finds a collection of local optima in the search space of descriptions, and thus subgroups. It requires a light configuration and is independent of the quality measure used for pattern scoring. Furthermore, it is fairly simple to implement. We provide qualitative and quantitative experiments on several datasets to illustrate its added value.
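The multi-armed bandit component can be illustrated with a generic UCB1 loop: each "arm" stands for a candidate pattern, and pulling it yields a noisy estimate of its quality. This is a minimal sketch of the bandit idea only, not SeqScout itself (which samples generalizations of labeled sequences); the arm set and Gaussian reward function here are toy assumptions.

```python
import math
import random

def ucb1(n_arms, reward_fn, budget):
    """Generic UCB1 loop: balances exploring arms (candidate patterns)
    against exploiting the best quality estimate seen so far."""
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    for t in range(1, budget + 1):
        if t <= n_arms:                      # play each arm once first
            arm = t - 1
        else:
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        counts[arm] += 1
        sums[arm] += reward_fn(arm)
    best = max(range(n_arms), key=lambda a: sums[a] / counts[a])
    return best, counts

# Toy use: arm 2 has the highest mean quality, so UCB1
# concentrates most of the budget on it.
random.seed(0)
means = [0.2, 0.5, 0.8]
best, counts = ucb1(3, lambda a: random.gauss(means[a], 0.1), 300)
```

The anytime character of the approach shows up here directly: the loop can be stopped at any budget and still returns the best pattern found so far.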
Diverse Rule Sets
While machine-learning models are flourishing and transforming many aspects of everyday life, the inability of humans to understand complex models makes it difficult for these models to be fully trusted and embraced. Thus, the interpretability of models has been recognized as a quality equally important as their predictive power. In particular, rule-based systems are experiencing a renaissance owing to their intuitive if-then representation.

However, simply being rule-based does not ensure interpretability. For example, overlapping rules create ambiguity and hinder interpretation. Here we propose a novel approach for inferring diverse rule sets by keeping the overlap among decision rules small, with a 2-approximation guarantee under the framework of Max-Sum diversification. We formulate the problem as maximizing a weighted sum of the discriminative quality and the diversity of a rule set.

To overcome the exponential-size search space of association rules, we investigate several natural options for a small candidate set of high-quality rules, including frequent and accurate rules, and examine their hardness. Leveraging the special structure in our formulation, we then devise an efficient randomized algorithm that samples rules that are highly discriminative and have small overlap. The proposed sampling algorithm analytically targets a distribution of rules tailored to our objective. We demonstrate the superior predictive power and interpretability of our model in a comprehensive empirical study against strong baselines.
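The quality-plus-diversity objective can be made concrete with a simple greedy selection: pick rules one at a time, maximizing quality plus λ times the diversity (one minus overlap) with respect to the rules already chosen. This is a sketch of the general Max-Sum-style objective under assumed toy scores, not the paper's randomized sampling algorithm.

```python
def greedy_diverse_rules(rules, quality, overlap, k, lam=1.0):
    """Greedily maximize: sum of rule quality + lam * pairwise
    diversity, where diversity between two rules is 1 - overlap."""
    chosen, candidates = [], list(rules)
    while candidates and len(chosen) < k:
        best = max(candidates,
                   key=lambda r: quality[r]
                   + lam * sum(1 - overlap[r][s] for s in chosen))
        chosen.append(best)
        candidates.remove(best)
    return chosen

# Toy example: rules A and B are near-duplicates, so after picking A,
# the more diverse rule C beats the slightly higher-quality B.
quality = {"A": 0.9, "B": 0.85, "C": 0.5}
overlap = {"A": {"B": 0.9, "C": 0.0},
           "B": {"A": 0.9, "C": 0.0},
           "C": {"A": 0.0, "B": 0.0}}
selected = greedy_diverse_rules(["A", "B", "C"], quality, overlap, k=2)
```

The λ parameter makes the trade-off in the weighted-sum formulation explicit: λ = 0 recovers pure top-k quality ranking, while large λ forces near-disjoint rules.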
FSSD - A Fast and Efficient Algorithm for Subgroup Set Discovery
Subgroup discovery (SD) is the task of discovering interpretable patterns in the data that stand out w.r.t. some property of interest. Discovering patterns that accurately discriminate one class from the others is one of the most common SD tasks. Standard approaches in the literature are based on local pattern discovery, which is known to provide an overwhelmingly large number of redundant patterns. To solve this issue, pattern set mining has been proposed: instead of evaluating the quality of patterns separately, one should consider the quality of a pattern set as a whole. The goal is to provide a small pattern set that is diverse and discriminates the target class well. In this work, we introduce a novel formulation of the task of diverse subgroup set discovery in which both the discriminative power and the diversity of the subgroup set are incorporated in the same quality measure. We propose an efficient and parameter-free algorithm, dubbed FSSD, based on a greedy scheme. FSSD uses several optimization strategies that enable it to efficiently provide a high-quality pattern set in a short amount of time.
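The "one quality measure for the whole set" idea can be sketched as a greedy loop that scores the union of the selected subgroups' covers. Here WRAcc of the union serves as an illustrative set-level measure; FSSD defines its own combined measure, so this is an assumption, not the algorithm itself.

```python
def greedy_subgroup_set(patterns, cover, labels, k):
    """Greedy sketch: grow a pattern set by maximizing a single
    set-level quality -- here the WRAcc of the union of the chosen
    patterns' covers (an illustrative choice, not FSSD's measure)."""
    n, pos = len(labels), sum(labels)
    def set_quality(sel):
        union = set().union(*(cover[p] for p in sel)) if sel else set()
        if not union:
            return 0.0
        tp = sum(labels[i] for i in union)
        # WRAcc: coverage * (precision on target - base rate)
        return len(union) / n * (tp / len(union) - pos / n)
    chosen = []
    for _ in range(k):
        cand = [p for p in patterns if p not in chosen]
        if not cand:
            break
        best = max(cand, key=lambda p: set_quality(chosen + [p]))
        if set_quality(chosen + [best]) <= set_quality(chosen):
            break                      # no pattern improves the set: stop
        chosen.append(best)
    return chosen

# Toy data: instances 0-2 are positives, 3-5 negatives.
labels = [1, 1, 1, 0, 0, 0]
cover = {"p1": {0, 1}, "p2": {1, 2}, "p3": {3, 4}}
result = greedy_subgroup_set(["p1", "p2", "p3"], cover, labels, k=3)
```

Because the union is re-scored at each step, a pattern that merely duplicates an already-covered region adds nothing and is rejected, which is how the set-level measure enforces diversity without a separate redundancy term.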
A genetic algorithm coupled with tree-based pruning for mining closed association rules
Due to the voluminous number of itemsets that are generated, the association rules extracted from these itemsets contain redundancy, and designing an effective approach to address this issue is of paramount importance. Although multiple algorithms have been proposed in recent years for mining closed association rules, most of them underperform in terms of run time or memory. Another issue that remains challenging is the nature of the dataset: while some of the existing algorithms perform well on dense datasets, others perform well on sparse datasets. This paper aims to handle these drawbacks by using a genetic algorithm for mining closed association rules. Recent studies have shown that genetic algorithms perform better than conventional algorithms due to their bitwise crossover and mutation operations. Bitwise operations are predominantly faster than conventional approaches, and bits consume less memory, thereby improving the overall performance of the algorithm. To address the redundancy in the mined association rules, a tree-based pruning algorithm has been designed here, which works on the principle of minimal antecedent and maximal consequent. Experiments have shown that the proposed approach works well on both dense and sparse datasets while surpassing existing techniques with regard to run time and memory.
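A toy version of the bitwise representation the abstract argues for: itemsets and transactions are integer bitmasks, so crossover and mutation each reduce to a handful of bitwise operations. The fitness (support times itemset size), the elitist selection scheme, and all parameters are illustrative assumptions; the paper's tree-based rule pruning is not sketched here.

```python
import random

def evolve_itemsets(transactions, n_items, pop_size=20, gens=30, seed=1):
    """Toy genetic algorithm over itemsets encoded as bitmasks."""
    rng = random.Random(seed)
    tx = [sum(1 << i for i in t) for t in transactions]  # one mask per transaction
    def fitness(mask):
        if mask == 0:
            return 0
        support = sum(1 for t in tx if t & mask == mask)
        return support * bin(mask).count("1")   # reward frequent AND large itemsets
    pop = [rng.getrandbits(n_items) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.getrandbits(n_items)      # uniform bitwise crossover mask
            child = (a & cut) | (b & ~cut)
            child ^= 1 << rng.randrange(n_items)  # one-bit mutation
            children.append(child & ((1 << n_items) - 1))
        pop = parents + children
    return max(pop, key=fitness)

# Items 0 and 1 co-occur in four of the five transactions.
best = evolve_itemsets([{0, 1}, {0, 1}, {0, 1, 2}, {0, 1}, {2, 3}], 4)
```

The memory argument is visible in the encoding: a transaction over n items occupies n bits rather than a list of objects, and computing support of a candidate is a single mask-and-compare per transaction.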
Efficient learning of large sets of locally optimal classification rules
Conventional rule learning algorithms aim at finding a set of simple rules in which each rule covers as many examples as possible. In this paper, we argue that the rules found in this way may not be the optimal explanations for each of the examples they cover. Instead, we propose an efficient algorithm that aims at finding the best rule covering each training example, using a greedy optimization consisting of one specialization and one generalization loop. These locally optimal rules are collected and then filtered into a final rule set, which is much larger than the sets learned by conventional rule learning algorithms. A new example is classified by selecting the best among the rules that cover it. In our experiments on small to very large datasets, the approach's average classification accuracy is higher than that of state-of-the-art rule learning algorithms. Moreover, the algorithm is highly efficient and can inherently be parallelized without affecting the learned rule set, and hence the classification accuracy. We thus believe that it closes an important gap for large-scale classification rule induction. (40 pages, Machine Learning journal, 2023.)
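The specialize-then-generalize search for a single example's best rule might look like the following sketch over discrete attributes. The equality-test conditions drawn from the example itself and the pluggable `quality(tp, fp)` score are assumptions for illustration, not the paper's exact procedure.

```python
def best_rule_for_example(x, label, X, y, quality):
    """Find a locally optimal rule covering example x: first a
    specialization loop greedily adds conditions, then a
    generalization loop greedily drops dispensable ones."""
    conds = list(x.items())            # candidate (attribute, value) tests from x
    def covers(rule, z):
        return all(z[a] == v for a, v in rule)
    def score(rule):
        tp = sum(covers(rule, z) and c == label for z, c in zip(X, y))
        fp = sum(covers(rule, z) and c != label for z, c in zip(X, y))
        return quality(tp, fp)
    rule = []
    improved = True
    while improved:                    # specialization loop: add conditions
        improved = False
        for c in conds:
            if c not in rule and score(rule + [c]) > score(rule):
                rule.append(c)
                improved = True
    improved = True
    while improved:                    # generalization loop: drop conditions
        improved = False
        for c in list(rule):
            shorter = [d for d in rule if d != c]
            if score(shorter) >= score(rule):
                rule = shorter
                improved = True
    return rule

# Toy data: attribute a == 1 perfectly separates the positive class,
# so the extra b == 1 condition from the example is never added.
X = [{"a": 1, "b": 1}, {"a": 1, "b": 0}, {"a": 0, "b": 1}]
y = [1, 1, 0]
rule = best_rule_for_example({"a": 1, "b": 1}, 1, X, y, lambda tp, fp: tp - fp)
```

Since each example's rule is computed independently, the per-example searches are trivially parallelizable, which matches the parallelism claim in the abstract.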
Mining Predictive Patterns and Extension to Multivariate Temporal Data
An important goal of knowledge discovery is the search for patterns in the data that can help explain its underlying structure. To be practically useful, the discovered patterns should be novel (unexpected) and easy for humans to understand. In this thesis, we study the problem of mining patterns (defining subpopulations of data instances) that are important for predicting and explaining a specific outcome variable. An example is the task of identifying groups of patients that respond better to a certain treatment than the rest of the patients.
We propose and present efficient methods for mining predictive patterns for both atemporal and temporal (time series) data. Our first method relies on frequent pattern mining to explore the search space. It applies a novel evaluation technique for extracting a small set of frequent patterns that are highly predictive and have low redundancy. We show the benefits of this method on several synthetic and public datasets.
Our temporal pattern mining method works on complex multivariate temporal data, such as electronic health records, for the event detection task. It first converts time series into time-interval sequences of temporal abstractions and then mines temporal patterns backwards in time, starting from patterns related to the most recent observations. We show the benefits of our temporal pattern mining method on two real-world clinical tasks.
Mining time-series data using discriminative subsequences
Time-series data is abundant and must be analysed to extract usable knowledge. Local-shape-based methods offer improved performance for many problems and a comprehensible way of understanding both data and models.

For time-series classification, we transform the data into a local-shape space using a shapelet transform. A shapelet is a time-series subsequence that is discriminative of the class of the original series. We use a heterogeneous ensemble classifier on the transformed data. The accuracy of our method is significantly better than the time-series classification benchmark (1-nearest-neighbour with dynamic time-warping distance), and significantly better than the previous best shapelet-based classifiers.

We use two methods to increase interpretability. First, we cluster the shapelets using a novel, parameterless clustering method based on Minimum Description Length, reducing dimensionality and removing duplicate shapelets. Second, we transform the shapelet data into binary data reflecting the presence or absence of particular shapelets, a representation that is straightforward to interpret and understand.

We supplement the ensemble classifier with partial classification. We generate rule sets on the binary shapelet data, improving performance on certain classes and revealing the relationship between the shapelets and the class label. To aid interpretability, we use a novel algorithm, BruteSuppression, that can substantially reduce the size of a rule set without negatively affecting performance, leading to a more compact, comprehensible model.

Finally, we propose three novel algorithms for unsupervised mining of approximately repeated patterns in time-series data, testing their performance in terms of speed and accuracy on synthetic data and on a real-world electricity-consumption device-disambiguation problem. We show that individual devices can be found automatically and in an unsupervised manner using a local-shape-based approach.
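The shapelet transform described above can be sketched as computing, for each series, its minimum Euclidean distance to each shapelet over all subsequences of matching length. In this sketch the shapelets are given; in the thesis they would first be discovered and then clustered as described.

```python
import math

def shapelet_distance(series, shapelet):
    """Distance of a shapelet to a series: the minimum Euclidean
    distance over all subsequences of the shapelet's length."""
    m = len(shapelet)
    return min(
        math.sqrt(sum((series[i + j] - shapelet[j]) ** 2 for j in range(m)))
        for i in range(len(series) - m + 1)
    )

def shapelet_transform(dataset, shapelets):
    """Map each series to a vector of shapelet distances; thresholding
    these gives the binary presence/absence representation the
    text describes (the threshold choice is an assumption here)."""
    return [[shapelet_distance(s, sh) for sh in shapelets] for s in dataset]

# Toy example: the first series contains the shapelet [1, 2, 1]
# exactly, so its distance feature is zero; the flat series does not.
dataset = [[0, 0, 1, 2, 1, 0], [0, 0, 0, 0, 0, 0]]
features = shapelet_transform(dataset, [[1, 2, 1]])
```

A small distance means the shape is "present", which is what makes the transformed features readable: each column of the feature matrix answers "does this series contain this shape?"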
LC: an effective classification-based association rule mining algorithm
Classification using association rules is a research field in data mining that primarily applies association rule discovery techniques to classification benchmarks. Many research studies in the literature have confirmed that classification using association rules tends to generate more predictive classification systems than traditional classification data mining techniques such as probabilistic, statistical, and decision tree methods. In this thesis, we introduce a novel data mining algorithm based on classification using association rules called "Looking at the Class" (LC), which can be used for mining a range of classification data sets. Unlike known algorithms in the classification-using-association approach, such as the Classification Based on Association rules (CBA) system and the Classification based on Predictive Association Rules (CPAR) system, which merge disjoint items in the rule learning step without considering class label similarity, the proposed algorithm merges only items with identical class labels. This avoids many unnecessary item combinations during the rule learning step and consequently results in large savings in computational time and memory.
Furthermore, the LC algorithm uses a novel prediction procedure that employs multiple rules to make the prediction decision instead of a single rule. The proposed algorithm has been evaluated thoroughly on real-world security data sets collected using an automated tool developed at Huddersfield University. The security application considered in this thesis is categorizing websites, based on their features, as legitimate or fake, which is a typical binary classification problem. Experiments on a number of UCI data sets have also been conducted, with classification accuracy, memory usage, and other measures used for evaluation. The results show that the LC algorithm outperformed traditional classification algorithms such as C4.5, PART and Naïve Bayes, as well as known classification-based association algorithms like CBA, with respect to classification accuracy, memory usage, and execution time on most of the data sets considered.
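Multiple-rule prediction can be sketched as a weighted vote: every rule whose body is satisfied by the instance votes for its class with its confidence, and the class with the highest total wins. The summed-confidence aggregation is an assumption for illustration; LC defines its own prediction procedure.

```python
def predict_multi_rule(instance, rules, default):
    """Vote-based prediction over association rules.
    rules: list of (body_items, class_label, confidence);
    instance: a set of items. Falls back to `default` when no
    rule fires (aggregation scheme is a hypothetical choice)."""
    votes = {}
    for body, cls, conf in rules:
        if body <= instance:                 # all items of the rule body present
            votes[cls] = votes.get(cls, 0.0) + conf
    return max(votes, key=votes.get) if votes else default

# Hypothetical website-classification rules: two weaker "legit" rules
# together outweigh one strong "fake" rule.
rules = [({"https"}, "legit", 0.8),
         ({"ip_url"}, "fake", 0.9),
         ({"https", "padlock"}, "legit", 0.7)]
label = predict_multi_rule({"https", "padlock", "ip_url"}, rules, "legit")
```

The example shows why aggregating multiple rules can be more robust than firing the single highest-confidence rule, which here would have predicted "fake".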