Research directions in interpretable machine learning models


Abstract. The theoretical novelty of many machine learning methods leading to high-performing algorithms has been substantial. However, the black-box nature of much of this body of work has meant that the models are difficult to interpret, with the consequence that the significant developments in machine learning theory are not matched by their practical impact. This tutorial stresses the need for interpretation and outlines the current status and future directions of interpretability in machine learning models.

1 Why interpretation and visualization in machine learning?

The above question directly corresponds in many applications to asking: why should machine learning methods be useful in practice? While there are many publications in this huge and significant field of learning, real-world applications are far fewer, especially in safety-critical domains. What are the reasons for this? How can flexible non-linear models be interpreted? Alternatively, given that there are different ways of articulating a flexible regression or classification model, can machine learning models be designed so that they are directly interpretable by construction? Is interpretation i

This paper was published in CiteSeerX.
