
    Investigating the Impact of Independent Rule Fitnesses in a Learning Classifier System

    Achieving at least some level of explainability requires complex analyses for many machine learning systems, such as common black-box models. We recently proposed a new rule-based learning system, SupRB, which constructs compact, interpretable and transparent models by utilizing separate optimizers for the model selection tasks of rule discovery and rule set composition. This allows users to tailor their model structure to fulfil use-case-specific explainability requirements. From an optimization perspective, it also allows us to define clearer goals, and we find that, in contrast to many state-of-the-art systems, it lets us keep rule fitnesses independent. In this paper we thoroughly investigate this system's performance on a set of regression problems and compare it against XCSF, a prominent rule-based learning system. We find the overall results of SupRB's evaluation comparable to XCSF's, while SupRB allows easier control of model structure and shows a substantially smaller sensitivity to random seeds and data splits. This increased control can aid in subsequently providing explanations for both the training and the final structure of the model.
    Comment: arXiv admin note: substantial text overlap with arXiv:2202.0167
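
    The separate-optimizer design lends itself to a simple illustration. Below is a minimal Python sketch of what an independent rule fitness could look like: each rule is scored purely on its own local error and generality, never relative to other rules in a population. The Rule class, the mean-based local model and the weighting parameter alpha are illustrative assumptions, not SupRB's actual API.

    import numpy as np

    class Rule:
        """A rule with a hyperrectangular (interval) condition on the inputs."""
        def __init__(self, lower, upper):
            self.lower = np.asarray(lower)
            self.upper = np.asarray(upper)

        def matches(self, X):
            # A sample matches if it lies inside the rule's hyperrectangle.
            return np.all((X >= self.lower) & (X <= self.upper), axis=1)

    def fitness(rule, X, y, alpha=0.85):
        """Independent fitness: weighted local accuracy plus generality."""
        mask = rule.matches(X)
        if not mask.any():
            return 0.0
        # Simplest possible local model: predict the mean of matched targets.
        error = np.mean((y[mask] - y[mask].mean()) ** 2)
        accuracy = 1.0 / (1.0 + error)                    # map error into (0, 1]
        volume = float(np.prod(rule.upper - rule.lower))  # generality proxy
        return alpha * accuracy + (1.0 - alpha) * volume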

    Approaches for rule discovery in a learning classifier system

    To meet the increasing demand for explanations of decisions made by automated prediction systems, machine learning (ML) techniques that produce inherently transparent models are directly suited. Learning Classifier Systems (LCSs), a family of rule-based learners, produce transparent models by design. However, the usefulness of such models, both for predictions and analyses, heavily depends on the placement and selection of rules (which together constitute the ML task of model selection). In this paper, we investigate a variety of techniques to efficiently place good rules within the search space based on their local prediction errors as well as their generality. This investigation is done within a specific LCS, named SupRB, where the placement of rules and the selection of good subsets of rules are strictly separated, in contrast to other LCSs where these tasks sometimes blend. We compare a Random Search, a (1,λ)-ES and three Novelty Search variants. We find that there is a definitive need to guide the search based on sensible criteria, i.e. error and generality, rather than just placing rules randomly and selecting the better-performing ones, but we also find that the Novelty Search variants do not beat the easier-to-understand (1,λ)-ES.
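
    To make the compared rule-discovery operators concrete, here is a minimal sketch of how a (1,λ)-ES loop for placing a single rule could be structured, reusing the illustrative Rule and fitness from the sketch above; lam, generations and sigma are assumed hyperparameters, not the paper's settings. The defining property of comma selection is that the parent is always discarded, so only the λ children compete for survival.

    import numpy as np

    def one_comma_lambda_es(X, y, lam=16, generations=32, sigma=0.1, rng=None):
        """Evolve a single rule with a (1,lambda) evolution strategy."""
        rng = rng or np.random.default_rng()
        # Initialise the parent as a small interval around a random sample.
        center = X[rng.integers(len(X))]
        parent = Rule(center - sigma, center + sigma)
        for _ in range(generations):
            children = []
            for _ in range(lam):
                # Mutate both interval bounds with Gaussian noise.
                lower = parent.lower + rng.normal(0.0, sigma, size=parent.lower.shape)
                upper = parent.upper + rng.normal(0.0, sigma, size=parent.upper.shape)
                children.append(Rule(np.minimum(lower, upper), np.maximum(lower, upper)))
            # Comma selection: the parent never survives into the next generation.
            parent = max(children, key=lambda r: fitness(r, X, y))
        return parent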

    Discovering rules for rule-based machine learning with the help of novelty search

    Automated prediction systems based on machine learning (ML) are employed in practical applications with increasing frequency, and stakeholders demand explanations of their decisions. ML algorithms that learn accurate sets of rules, such as learning classifier systems (LCSs), produce transparent and human-readable models by design. However, whether such models can be used effectively, both for predictions and analyses, strongly relies on the optimal placement and selection of rules (in ML this task is known as model selection). In this article, we broaden a previous analysis of a variety of techniques to efficiently place good rules within the search space based on their local prediction errors as well as their generality. This investigation is done within a specific pre-existing LCS, named SupRB, where the placement of rules and the selection of good subsets of rules are strictly separated, in contrast to other LCSs where these tasks sometimes blend. We compare two baselines, random search and a (1,λ)-evolution strategy (ES), with six novelty search variants: three novelty-/fitness-weighting variants and, for each of those, two differing approaches to the usage of the archiving mechanism. We find that random search is not sufficient and sensible criteria, i.e., error and generality, are indeed needed. However, we cannot confirm that the more complicated-to-explain novelty search variants provide better results than the (1,λ)-ES, which allows a good balance between low error and low complexity in the resulting models.
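
    As a rough illustration of the weighting the six variants revolve around, the sketch below scores a rule by a convex combination of novelty (distance to previously archived rule behaviours) and the local fitness from the first sketch. The behaviour descriptor, k, weight and the archive-everything policy are illustrative assumptions; the paper's variants differ precisely in how weighting and archiving are handled.

    import numpy as np

    def behaviour(rule):
        # Characterise a rule by the position and shape of its condition.
        return np.concatenate([rule.lower, rule.upper])

    def novelty(rule, archive, k=5):
        """Mean distance to the k nearest behaviours seen so far."""
        if not archive:
            return 1.0
        dists = sorted(np.linalg.norm(behaviour(rule) - b) for b in archive)
        return float(np.mean(dists[:k]))

    def score(rule, X, y, archive, weight=0.5):
        # weight = 1.0 is pure novelty search; weight = 0.0 is pure fitness.
        s = weight * novelty(rule, archive) + (1.0 - weight) * fitness(rule, X, y)
        archive.append(behaviour(rule))  # simplest policy: archive everything
        return s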

    Separating Rule Discovery and Global Solution Composition in a Learning Classifier System

    The utilization of digital agents to support crucial decision making is increasing in many industrial scenarios. However, trust in suggestions made by these agents is hard to achieve, though it is essential for profiting from their application, resulting in a need for explanations of both the decision-making process and the model itself. For many systems, such as common deep learning black-box models, achieving at least some explainability requires complex post-processing, while other systems profit from being, to a reasonable extent, inherently interpretable. In this paper we propose an easily interpretable rule-based learning system, SupRB2, specifically designed for, and thus especially suited to, these scenarios, and compare it on a set of regression problems against XCSF, a prominent rule-based learning system with a long research history. One key advantage of our system is that the rules' conditions and the composition of rules into a solution to the problem are evolved separately. We utilise independent rule fitnesses, which allows users to tailor their model structure to fit the given requirements for explainability. We find that the results of SupRB2's evaluation are comparable to XCSF's, while SupRB2 allows easier control of model structure and shows a substantially smaller sensitivity to random seeds and data splits. This increased control aids in subsequently providing explanations for both the training and the final structure of the model.
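
    The separation the abstract highlights can be sketched as a two-phase pipeline: a pool of rules is discovered first, each judged only by its own fitness, and a separate optimizer then composes a subset into the final model. The greedy composition below is an illustrative stand-in for the actual solution-composition optimizer, reusing the earlier sketches.

    import numpy as np

    def discover_pool(X, y, n_rules=32):
        # Phase 1: independent rule discovery, one (1,lambda)-ES run per rule.
        return [one_comma_lambda_es(X, y) for _ in range(n_rules)]

    def compose(pool, X, y, max_rules=8):
        # Phase 2: greedily select the subset of rules that lowers ensemble error.
        def ensemble_error(rules):
            preds = np.full(len(y), y.mean())     # default: predict the global mean
            for r in rules:
                mask = r.matches(X)
                if mask.any():
                    preds[mask] = y[mask].mean()  # local model from the first sketch
            return np.mean((y - preds) ** 2)

        chosen = []
        while pool and len(chosen) < max_rules:
            best = min(pool, key=lambda r: ensemble_error(chosen + [r]))
            if ensemble_error(chosen + [best]) >= ensemble_error(chosen):
                break  # stop once adding any rule no longer helps
            chosen.append(best)
            pool = [r for r in pool if r is not best]
        return chosen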