
    Improving modular classification rule induction with G-Prism using dynamic rule term boundaries

    Modular classification rule induction for predictive analytics is an alternative, more expressive approach to rule induction than decision-tree-based classifiers. Prism classifiers achieve a classification accuracy similar to that of decision trees, but tend to overfit less, especially when there is noise in the data. This paper describes the development of a new member of the Prism family, the G-Prism classifier, which improves classification performance. G-Prism differs from the other members of the Prism family in its rule term induction strategy, which is based on the Gauss Probability Density Distribution (GPDD) of the target classes rather than on simple binary splits (local discretisation). Two versions of G-Prism have been developed: one uses fixed boundaries to build rule terms from the GPDD, the other uses dynamic rule term boundaries. Both versions were compared empirically against Prism on 11 datasets using various evaluation metrics. The results show that in most cases both versions of G-Prism, especially G-Prism with dynamic boundaries, achieve a better classification performance than Prism.
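    The GPDD-based rule terms described above can be sketched in code. The following is an illustrative reconstruction, not the authors' exact procedure: it assumes a Gaussian is fitted to the target-class values of one continuous attribute, and a rule term is the interval within k standard deviations of the mean (a fixed k loosely mirrors fixed boundaries; adapting k per term would mirror dynamic boundaries).

```python
import math

def gaussian_rule_term(values, k=1.0):
    """Fit a Gaussian to the attribute values observed for the target
    class and return a rule-term interval [mu - k*sigma, mu + k*sigma]."""
    n = len(values)
    mu = sum(values) / n
    sigma = math.sqrt(sum((v - mu) ** 2 for v in values) / n)
    return (mu - k * sigma, mu + k * sigma)

def term_covers(interval, v):
    """True if value v satisfies the rule term (falls in the interval)."""
    lo, hi = interval
    return lo <= v <= hi
```

Unlike a binary split, which partitions the attribute at a single cut point, such an interval term directly targets the region where the class is dense.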

    A rule-based classifier with accurate and fast rule term induction for continuous attributes

    Rule-based classifiers are considered more expressive, more human-readable and less prone to overfitting than decision trees, especially when there is noise in the data. Furthermore, rule-based classifiers do not suffer from the replicated subtree problem that affects classifiers induced by top-down induction of decision trees (also known as `Divide and Conquer'). This research explores some recent developments in a family of rule-based classifiers, the Prism family, in particular the G-Prism-FB and G-Prism-DB algorithms, in terms of the local discretisation methods used to induce rule terms for continuous data. The paper then proposes a new algorithm of the Prism family based on a combination of Gauss Probability Density Distribution (GPDD), InterQuartile Range (IQR) and data transformation methods. This new rule-based algorithm, termed G-Rules-IQR, is evaluated empirically and outperforms other members of the Prism family in execution time, accuracy and tentative accuracy.
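    As an illustration of the interquartile-range ingredient mentioned above (an assumed sketch, not the published G-Rules-IQR procedure), the usual Tukey-style bounds for a continuous attribute can be computed as follows; values outside the bounds would be treated as outliers before rule-term induction.

```python
def iqr_bounds(values, scale=1.5):
    """Return (Q1 - scale*IQR, Q3 + scale*IQR) for a list of numbers,
    using linear interpolation between sorted data points."""
    xs = sorted(values)
    def quartile(q):
        pos = q * (len(xs) - 1)
        lo = int(pos)
        frac = pos - lo
        return xs[lo] + frac * (xs[min(lo + 1, len(xs) - 1)] - xs[lo])
    q1, q3 = quartile(0.25), quartile(0.75)
    iqr = q3 - q1
    return q1 - scale * iqr, q3 + scale * iqr
```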

    ReG-Rules: an explainable rule-based ensemble learner for classification

    The learning of classification models to predict class labels of new and previously unseen data instances is one of the most essential tasks in data mining. A popular approach to classification is ensemble learning, in which a combination of several diverse and independent classification models is used to predict class labels. Ensemble models are important as they tend to improve the average classification accuracy over any single member of the ensemble. However, classification models are also often required to be explainable, to reduce the risk of irreversible wrong classifications. Explainability of classification models is needed in many critical applications such as stock market analysis, credit risk evaluation and intrusion detection. Unfortunately, ensemble learning decreases the level of explainability of the classification, as the analyst would have to examine many decision models to gain insights into the causality of a prediction. The aim of the research presented in this paper is to create an ensemble method that is explainable in the sense that it presents the human analyst with a conditioned view of the most relevant model aspects involved in the prediction. To achieve this aim the authors developed a rule-based explainable ensemble classifier termed Ranked ensemble G-Rules (ReG-Rules), which gives the analyst an extract of the most relevant classification rules for each individual prediction. ReG-Rules was evaluated in terms of its theoretical computational complexity, empirically on benchmark datasets, and qualitatively with respect to the complexity and readability of the induced rule sets. The results show that ReG-Rules scales linearly and delivers high accuracy, while at the same time producing a compact and manageable set of rules describing the predictions made.
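    How a ranked-rule ensemble can return both a prediction and a small supporting rule set can be sketched as follows. The rule representation and names here are hypothetical illustrations, not ReG-Rules' actual data structures: each rule is a (terms, label, quality) triple, and the rules that fire on an instance are ranked by quality.

```python
def ranked_rule_predict(rule_sets, instance, top_k=3):
    """Gather all rules (from every ensemble member) that fire on the
    instance, keep the top_k by rule quality, and return the
    quality-weighted majority class plus the supporting rules."""
    firing = [(quality, label, terms)
              for rules in rule_sets
              for (terms, label, quality) in rules
              if all(t(instance) for t in terms)]
    firing.sort(key=lambda r: r[0], reverse=True)
    top = firing[:top_k]
    if not top:
        return None, []          # no rule fires: abstain
    votes = {}
    for quality, label, _ in top:
        votes[label] = votes.get(label, 0.0) + quality
    prediction = max(votes, key=votes.get)
    return prediction, top       # the rules in `top` are the explanation
```

The returned rules give the analyst a conditioned view of the prediction without inspecting every ensemble member.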

    New Archive-Based Ant Colony Optimization Algorithms for Learning Predictive Rules from Data

    Data mining is the process of extracting knowledge and patterns from data. Classification and regression are among the major data mining tasks, where the goal is to predict a value of an attribute of interest for each data instance, given the values of a set of predictive attributes. Most classification and regression problems involve continuous, ordinal and categorical attributes. Current Ant Colony Optimization (ACO) algorithms handle categorical attributes directly, while continuous attributes are transformed using a discretisation procedure, either in a preprocessing stage or dynamically during rule creation. The use of a discretisation procedure has several limitations: (i) it increases the computational runtime, since several candidate values need to be evaluated; (ii) it requires access to the entire attribute domain, which in some applications is not available; (iii) the values used to create discrete intervals are not optimised in combination with the values of other attributes. This thesis investigates the use of a solution archive pheromone model, based on the Ant Colony Optimization for mixed-variable (ACOMV) algorithm, to cope directly with all attribute types. Firstly, an archive-based ACO classification algorithm is presented, followed by an automatic design framework to generate new configurations of ACO algorithms. We then address the challenging problem of mining data streams, presenting a new ACO algorithm combined with a hybrid pheromone model. Finally, the archive-based approach is extended to cope with regression problems. All algorithms presented are compared against well-known algorithms from the literature using publicly available datasets. Our results show improved computational time while maintaining competitive predictive performance.
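    The solution-archive idea for continuous values can be sketched as follows, loosely following the Gaussian-kernel sampling scheme of continuous/mixed-variable ACO for a single continuous variable. This is an illustration of the general mechanism, not the thesis's algorithms: better-ranked archive solutions are more likely to be chosen as the guide, and a new value is sampled around the guide with a width proportional to the archive's spread.

```python
import random

def sample_from_archive(archive, xi=0.85, rng=random):
    """One archive-based move for a single continuous variable.

    archive: list of (value, quality) pairs, higher quality is better.
    xi scales the Gaussian kernel width around the chosen guide."""
    ranked = sorted(archive, key=lambda s: s[1], reverse=True)
    k = len(ranked)
    # linearly decreasing rank weights: best solution weighted k, worst 1
    weights = [k - i for i in range(k)]
    guide, _ = rng.choices(ranked, weights=weights, k=1)[0]
    # kernel width: mean absolute distance from the guide to the archive
    sigma = xi * sum(abs(v - guide) for v, _ in ranked) / max(k - 1, 1)
    return rng.gauss(guide, sigma)
```

Because the value is sampled directly, no discrete candidate cut points need to be enumerated, which is the limitation (i) noted above.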

    Machine Learning

    Machine learning can be defined in various ways, but broadly it is a scientific domain concerned with the design and development of theoretical and implementation tools that allow the building of systems with some human-like intelligent behaviour. More specifically, machine learning addresses the ability of such systems to improve automatically through experience.

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access book constitutes the proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2022, which was held during April 2-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 46 full papers and 4 short papers presented in this volume were carefully reviewed and selected from 159 submissions. The proceedings also contain 16 tool papers of the affiliated competition SV-Comp and 1 paper consisting of the competition report. TACAS is a forum for researchers, developers, and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility, and efficiency of tools and algorithms for building computer-controlled systems.

    Fuzzy set covering as a new paradigm for the induction of fuzzy classification rules

    In 1965 Lotfi A. Zadeh proposed fuzzy sets as a generalisation of crisp (or classic) sets to address the incapability of crisp sets to model the uncertainty and vagueness inherent in the real world. Initially, fuzzy sets did not receive a very warm welcome, as many academics stood skeptical towards a theory of "imprecise" mathematics. In the middle to late 1980s the success of fuzzy controllers brought fuzzy sets into the limelight, and many applications using fuzzy sets started appearing. In the early 1970s the first machine learning algorithms started appearing. The AQ family of algorithms pioneered by Ryszard S. Michalski is a good example of the family of set covering algorithms. This class of learning algorithm induces concept descriptions through the greedy construction of rules that describe (or cover) positive training examples but no negative training examples. The learning process is iterative: in each iteration one rule is induced, and the positive examples covered by that rule are removed from the set of positive training examples. Because positive instances are separated from negative instances, the term separate-and-conquer has been used to contrast this learning strategy with decision tree induction, which uses a divide-and-conquer learning strategy. This dissertation proposes fuzzy set covering as a powerful rule induction strategy. We survey existing fuzzy learning algorithms and conclude that very few fuzzy learning algorithms follow a greedy rule construction strategy, and that no publications to date have made the link between fuzzy sets and set covering explicit. We first develop the theoretical aspects of fuzzy set covering, and then apply these in proposing the first fuzzy learning algorithm that applies set covering and makes explicit use of a partial order for fuzzy classification rule induction. We also investigate several strategies to improve upon the basic algorithm, such as better search heuristics and different rule evaluation metrics.
We then continue by proposing a general unifying framework for fuzzy set covering algorithms. We demonstrate the benefits of the framework and propose several further fuzzy set covering algorithms that fit within it. We compare fuzzy and crisp rule induction, and provide arguments in favour of fuzzy set covering as a rule induction strategy. We also show that our learning algorithms outperform other fuzzy rule learners on real-world data. We further explore the idea of simultaneous concept learning in the fuzzy case, and go on to propose the first fuzzy decision list induction algorithm. Finally, we propose a first strategy for encoding the rule sets generated by our fuzzy set covering algorithms inside an equivalent neural network.
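    The separate-and-conquer process described above can be illustrated in its crisp form. The following is a minimal sketch of the generic strategy (not any specific published algorithm): rules are conjunctions of boolean predicate terms, each rule is greedily specialised until it excludes every negative example, and covered positives are removed before the next rule is induced. It assumes the candidate terms can always separate the two classes.

```python
def separate_and_conquer(positives, negatives, candidate_terms):
    """Greedy crisp set covering: repeatedly induce one rule (a
    conjunction of predicate terms) covering positives but no
    negatives, then remove the covered positives and repeat."""
    positives = list(positives)
    rules = []
    while positives:
        rule, pos, neg = [], list(positives), list(negatives)
        while neg:  # specialise until the rule excludes every negative
            # pick the term keeping most positives while shedding negatives
            term = max(candidate_terms,
                       key=lambda t: sum(map(t, pos)) - sum(map(t, neg)))
            still_neg = [n for n in neg if term(n)]
            if len(still_neg) == len(neg):
                raise ValueError("candidate terms cannot separate the classes")
            rule.append(term)
            pos = [p for p in pos if term(p)]
            neg = still_neg
        rules.append(rule)
        positives = [p for p in positives if not all(t(p) for t in rule)]
    return rules
```

A fuzzy variant would replace the boolean predicates with membership degrees and the removal step with a reduction of example weights.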

    An investigation into the biosynthesis of proximicins

    PhD Thesis. The proximicins are a family of three compounds (A, B and C) produced by two marine actinomycete Verrucosispora strains, V. maris AB18-032 and V. sp. str. 37, and are characterised by the presence of 2,4-disubstituted furan rings. Proximicins demonstrate cell-arresting and antimicrobial activity, making them interesting leads for clinical drug development. Proximicin research has been largely overshadowed by other Verrucosispora secondary metabolites (SMs), and despite the publication of the V. maris AB18-032 draft genome, the enzymatic machinery responsible for proximicin production has not been established. Related research into a pyrrole-containing homologue, congocidine, has noted that, given the structural similarity exhibited, proximicins likely follow a similar biosynthetic route. The initial aim of this research was to confirm this presumed pathway to proximicin biosynthesis. Following the sequencing, assembly and annotation of the second proximicin producer, Verrucosispora sp. str. MG37, and genome mining of V. maris AB18-032, no common cluster mimicked that of congocidine, casting doubt on the previously assumed analogous biosynthetic routes. A putative proximicin biosynthesis (ppb) cluster was identified, containing non-ribosomal peptide synthetase (NRPS) enzymes and exhibiting some homology with congocidine. NRPS systems represent a network of interacting proteins which act as an SM assembly line: crucially, adenylation (A) domain enzymes act as the `gate-keeper', determining which precursors are incorporated into the elongating peptide. To elucidate the route to the proximicins, activity characterisation of the four A-domains present in the ppb cluster was attempted. The A-domain Ppb120 was shown to possess novel activity, demonstrating high promiscuity towards heterocycle-containing precursors, in addition to the absence of an apparently essential domain.
This discovery challenges previous work outlining the core residues that dictate A-domain activity, while also presenting a facile route to novel heterocycle-containing compounds. Despite extensive work, the A-domains ppb195 and ppb210 could not be purified in active form, informing future work on A-domain activity characterisation. Finally, the ppb220 A-domain, which lies at the border of the ppb cluster, was inactive, suggesting an over-estimation of the cluster margins. To confirm the redundancy of ppb220 and the ppb cluster boundaries, CRISPR/Cas gene editing studies were undertaken. The gene responsible for the orange pigment of Verrucosispora strains was initially targeted and successfully deleted, and ppb studies commenced. The research presented here refutes the previously presumed route to proximicin biosynthesis; the ppb cluster instead comprises enzymes exhibiting unique activity and structure. These findings lay the foundations for exploiting the chemistry exhibited within the proximicin family. The novelty exhibited can be utilised in the search for antimicrobial clinical leads by allowing the production of compounds containing previously inaccessible heterocycle chemistry.

    Plan Projection, Execution, and Learning for Mobile Robot Control

    Most state-of-the-art hybrid control systems for mobile robots are decomposed into different layers. While the deliberation layer reasons about the actions required for the robot in order to achieve a given goal, the behavioral layer is designed to enable the robot to quickly react to unforeseen events. This decomposition guarantees a safe operation even in the presence of unforeseen and dynamic obstacles and enables the robot to cope with situations it was not explicitly programmed for. The layered design, however, also leaves us with the problem of plan execution: the problem of arbitrating between the deliberation and the behavioral layers. Abstract symbolic actions have to be translated into streams of local control commands, and simultaneously, execution failures have to be handled on an appropriate level of abstraction. It is now widely accepted that plan execution should form a third layer of a hybrid robot control system. The resulting layered architectures are called three-tiered architectures, or 3T architectures for short. Although many high-level programming frameworks have been proposed to support the implementation of the intermediate layer, there is no generally accepted algorithmic basis for plan execution in three-tiered architectures. In this thesis, we propose to base plan execution on plan projection and learning, and present a general framework for the self-supervised improvement of plan execution. This framework has been implemented in APPEAL, an Architecture for Plan Projection, Execution And Learning, which extends the well-known RHINO control system by introducing an execution layer. This thesis contributes to the field of plan-based mobile robot control, which investigates the interrelation between planning, reasoning, and learning techniques based on an explicit representation of the robot's intended course of action, a plan.
In McDermott's terminology, a plan is that part of a robot control program which the robot can not only execute, but also reason about and manipulate. According to this broad view, a plan may serve many purposes in a robot control system, such as reasoning about future behavior, the revision of intended activities, or learning. In this thesis, plan-based control is applied to the self-supervised improvement of mobile robot plan execution.
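    The three-layer arbitration described above can be sketched schematically. This is an illustrative toy loop under assumed interfaces, not the RHINO or APPEAL architecture: the execution layer translates each symbolic action from the deliberation layer into low-level commands, while a behavioral-layer safety check may veto a command and abort the current action.

```python
class ThreeTierController:
    """Toy 3T execution loop: deliberation supplies a symbolic plan,
    the execution layer translates actions into command streams, and
    the behavioral layer can veto unsafe commands reactively."""
    def __init__(self, plan, translate, safe):
        self.plan = list(plan)      # symbolic actions from the deliberation layer
        self.translate = translate  # action -> iterable of control commands
        self.safe = safe            # behavioral-layer safety check on a command
        self.log = []

    def run(self):
        for action in self.plan:
            for cmd in self.translate(action):
                if not self.safe(cmd):              # reactive override
                    self.log.append(("abort", action))
                    return False                    # execution failure reported upward
                self.log.append(("exec", cmd))
        return True
```

In a real system the abort branch would trigger failure handling or replanning rather than simply stopping.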