
    Toward Efficient Automation of Interpretable Machine Learning Boosting

    Developing efficient automated methods for Interpretable Machine Learning (IML) is an important and long-term goal in the field of Artificial Intelligence. Currently, the Machine Learning landscape is dominated by Neural Networks (NNs) and Support Vector Machines (SVMs), models which are often highly accurate. Despite their high accuracy, such models are essentially “black boxes” and are therefore too risky for situations like healthcare where real lives are at stake. In such situations, so-called “glass-box” models, such as Decision Trees (DTs), Bayesian Networks (BNs), and Logic Relational (LR) models, are often preferred; however, they can suffer from accuracy limitations. Unfortunately, having to choose between an algorithm that is accurate or one that is interpretable, but not both, has become a major obstacle to the wider adoption of Machine Learning. Previous research has proposed increasing the interpretability of black-box models by reducing model complexity, often degrading accuracy as a consequence. By taking the opposite approach and improving the accuracy of interpretable models, rather than improving the interpretability of accurate black-box models, it is possible to construct “competitive glass-boxes” via two novel algorithms proposed in this research: Dominance Classifier Predictor (DCP) and Reverse Prediction Pattern Recognition (RPPR). Experiments with DCP boosted by RPPR have been conducted on several benchmark datasets, successfully raising the accuracy of interpretable models to reach that of black-box models.