
    Interpretable Machine Learning - An Application Study Using the Munich Rent Index

    Interpretable machine learning helps to understand the decisions of black-box models and thus improves confidence in machine learning models. To use interpretable machine learning methods, a black-box model is fitted first, and model-agnostic interpretable machine learning methods are then applied on top of it. This paper analyses model-agnostic tools with regard to their global and local explainability. The methods are validated on a practical example, the estimation of the 2017 Munich rent index. To explain the global decisions of the machine learning model, the Morris method and average marginal effects are compared; the comparison criteria are performance, available R packages, and ease of interpreting the results. Local methods concern a specific observation; LIME and Shapley values were selected as the local methods analysed in this paper. The winning global and local methods were then implemented and visualized in a dashboard, which can be found at https://juliafried.shinyapps.io/MunichRentIndex/. In addition, the IML approach is compared with the "original" 2017 Munich rent index model, which is based on simpler, inherently interpretable methods. This study shows that model-agnostic methods provide explanations for machine learning models and that the Munich rent index can be estimated with the IML approach. Model-agnostic interpretable machine learning offers enormous advantages because the underlying models are interchangeable and complex patterns in data can be explained both globally and locally.
    Brosig, J. (2020). Interpretable Machine Learning - An Application Study Using the Munich Rent Index. Editorial Universitat Politècnica de València. 1-36. http://hdl.handle.net/10251/148628
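    A minimal sketch of the two-step workflow this abstract describes: fit a black-box model first, then layer model-agnostic explanations on top. The synthetic rent data, the random forest, and the use of permutation importance (standing in for the Morris method and average marginal effects the paper compares) are illustrative assumptions, not the paper's actual setup; only the Shapley-value step mirrors one of the paper's local methods.

    ```python
    # Sketch only: data, model choice, and global method are hypothetical stand-ins.
    import numpy as np
    import pandas as pd
    import shap
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Hypothetical rent data: net rent explained by a few flat characteristics.
    rng = np.random.default_rng(0)
    rent = pd.DataFrame({
        "living_area": rng.uniform(20, 160, 500),
        "year_built": rng.integers(1950, 2017, 500),
        "good_location": rng.integers(0, 2, 500),
    })
    y = 8 + 0.03 * rent["living_area"] + 1.5 * rent["good_location"] + rng.normal(0, 1, 500)

    X_train, X_test, y_train, y_test = train_test_split(rent, y, random_state=0)

    # Step 1: fit the black-box model.
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

    # Step 2a: global, model-agnostic view (permutation importance stands in for
    # the Morris screening / average marginal effects compared in the paper).
    global_imp = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    print(dict(zip(rent.columns, global_imp.importances_mean.round(3))))

    # Step 2b: local explanation of a single flat via Shapley values.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_test.iloc[[0]])
    print(dict(zip(rent.columns, shap_values[0].round(3))))
    ```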

    Explaining Hate Speech Classification with Model Agnostic Methods

    There have been remarkable breakthroughs in Machine Learning and Artificial Intelligence, notably in the areas of Natural Language Processing and Deep Learning. Additionally, hate speech detection in dialogues has been gaining popularity among Natural Language Processing researchers with the increased use of social media. However, as recent trends show, the need for explainability and interpretability in AI models has become widely recognised. Taking note of the factors above, the research goal of this paper is to bridge the gap between hate speech prediction and the explanations generated by the system to support its decision. This has been achieved by first predicting the classification of a text and then providing a post-hoc, model-agnostic, surrogate interpretability approach for explainability and to prevent model bias. The bidirectional transformer model BERT has been used for prediction because of its state-of-the-art performance over other Machine Learning models. The model-agnostic algorithm LIME generates explanations for the output of the trained classifier and identifies the features that influence the model's decision. The predictions generated by the model were evaluated manually, and after thorough evaluation we observed that the model performs well both in predicting and in explaining its predictions. Lastly, we suggest further directions for expanding the presented work.
    Comment: 15 pages. Accepted paper from the Text Mining Workshop at KI 202
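    A rough sketch of the BERT-plus-LIME pipeline the abstract outlines: LIME perturbs the input text and queries a probability function wrapped around the transformer. The model name, label names, and example sentence below are placeholders, not the fine-tuned hate-speech classifier used in the paper.

    ```python
    # Sketch only: "bert-base-uncased" and the label names are placeholders for
    # the paper's fine-tuned hate-speech classifier.
    import numpy as np
    import torch
    from lime.lime_text import LimeTextExplainer
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    model_name = "bert-base-uncased"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
    model.eval()

    def predict_proba(texts):
        # LIME passes a batch of perturbed strings and expects class probabilities.
        enc = tokenizer(list(texts), padding=True, truncation=True, return_tensors="pt")
        with torch.no_grad():
            logits = model(**enc).logits
        return torch.softmax(logits, dim=-1).numpy()

    explainer = LimeTextExplainer(class_names=["not_hate", "hate"])
    explanation = explainer.explain_instance(
        "example input sentence to be classified",  # placeholder text
        predict_proba,
        num_features=6,
        num_samples=200,  # each sample is one forward pass through the transformer
    )
    print(explanation.as_list())  # tokens with their weights toward the predicted class
    ```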

    PiML Toolbox for Interpretable Machine Learning Model Development and Diagnostics

    PiML (read π-ML, /`pai`em`el/) is an integrated and open-access Python toolbox for interpretable machine learning model development and model diagnostics. It is designed around machine learning workflows in both low-code and high-code modes, including data pipelines, model training and tuning, model interpretation and explanation, and model diagnostics and comparison. The toolbox supports a growing list of interpretable models (e.g. GAM, GAMI-Net, XGB1/XGB2) with inherent local and/or global interpretability. It also supports model-agnostic explainability tools (e.g. PFI, PDP, LIME, SHAP) and a powerful suite of model-agnostic diagnostics (e.g. weakness, reliability, robustness, resilience, fairness). Integration of PiML models and tests into existing MLOps platforms for quality assurance is enabled by flexible high-code APIs. Furthermore, the PiML toolbox comes with a comprehensive user guide and hands-on examples, including applications to model development and validation in banking. The project is available at https://github.com/SelfExplainML/PiML-Toolbox
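    A sketch of the low-code workflow the abstract refers to, following the method names shown in the project's public README; the demo dataset and the exact set of calls are assumptions and may differ across PiML versions. In high-code mode the same steps accept explicit arguments instead of opening interactive panels.

    ```python
    # Rough sketch of PiML's low-code Experiment workflow (names follow the public
    # README examples; exact signatures may vary between PiML versions).
    from piml import Experiment

    exp = Experiment()

    exp.data_loader(data="BikeSharing")   # built-in demo dataset used in the docs
    exp.data_summary()                    # exploratory data summary panel
    exp.data_prepare()                    # train/test split and preprocessing

    exp.model_train()                     # train interpretable models (GAM, XGB1/XGB2, ...)
    exp.model_interpret()                 # inherent interpretability of the fitted models
    exp.model_explain()                   # model-agnostic tools: PFI, PDP, LIME, SHAP
    exp.model_diagnose()                  # weakness, reliability, robustness, resilience tests
    exp.model_compare()                   # side-by-side comparison of registered models
    ```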

    Quantifying Model Complexity via Functional Decomposition for Better Post-Hoc Interpretability

    Post-hoc model-agnostic interpretation methods such as partial dependence plots can be employed to interpret complex machine learning models. While these interpretation methods can be applied regardless of model complexity, they can produce misleading and verbose results if the model is too complex, especially with respect to feature interactions. To quantify the complexity of arbitrary machine learning models, we propose model-agnostic complexity measures based on functional decomposition: number of features used, interaction strength, and main effect complexity. We show that post-hoc interpretation of models that minimize the three measures is more reliable and compact. Furthermore, we demonstrate the application of these measures in a multi-objective optimization approach which simultaneously minimizes loss and complexity.
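    A hedged sketch of how two of the proposed measures could be approximated: it substitutes one-dimensional partial dependence for the ALE-based functional decomposition the paper uses, so the resulting numbers illustrate the idea rather than reproduce the paper's exact definitions. The dataset and model are arbitrary stand-ins.

    ```python
    # Sketch only: partial dependence replaces the paper's ALE decomposition, and
    # the data/model are illustrative, so these are approximations of the measures.
    import numpy as np
    from sklearn.datasets import make_friedman1
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.inspection import partial_dependence

    X, y = make_friedman1(n_samples=500, n_features=8, random_state=0)
    model = GradientBoostingRegressor(random_state=0).fit(X, y)
    pred = model.predict(X)

    # "Number of features used": count features whose permutation changes predictions.
    rng = np.random.default_rng(0)
    used = 0
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        if np.mean((model.predict(Xp) - pred) ** 2) > 1e-8:
            used += 1
    print("features used:", used)

    # "Interaction strength": share of prediction variance not captured by a sum
    # of one-dimensional main effects.
    main_effects = np.zeros_like(pred)
    for j in range(X.shape[1]):
        pd_res = partial_dependence(model, X, [j], grid_resolution=30)
        grid = pd_res["grid_values"][0] if "grid_values" in pd_res else pd_res["values"][0]
        avg = pd_res["average"][0]
        effect = np.interp(X[:, j], grid, avg)
        main_effects += effect - effect.mean()
    additive = pred.mean() + main_effects
    ias = np.mean((pred - additive) ** 2) / np.var(pred)
    print("interaction strength (approx.):", round(ias, 3))
    ```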