
    Learning Interpretable Rules for Multi-label Classification

    Multi-label classification (MLC) is a supervised learning problem in which, contrary to standard multi-class classification, an instance can be associated with several class labels simultaneously. In this chapter, we advocate a rule-based approach to multi-label classification. Rule learning algorithms are often employed when one is not only interested in accurate predictions, but also requires an interpretable theory that can be understood, analyzed, and qualitatively evaluated by domain experts. Ideally, by revealing patterns and regularities contained in the data, a rule-based theory yields new insights into the application domain. Recently, several authors have started to investigate how rule-based models can be used for modeling multi-label data. Discussing this task in detail, we highlight some of the problems that make rule learning considerably more challenging for MLC than for conventional classification. While mainly focusing on our own previous work, we also provide a short overview of related work in this area. Comment: Preprint version. To appear in: Explainable and Interpretable Models in Computer Vision and Machine Learning. The Springer Series on Challenges in Machine Learning. Springer (2018). See http://www.ke.tu-darmstadt.de/bibtex/publications/show/3077 for further information.
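    To make the multi-label setting concrete, the sketch below shows a toy rule-based multi-label classifier: each rule maps a condition on the features to a set of labels, and an instance receives the union of the labels of every rule it satisfies. This is an illustration only, not the chapter's learning algorithm; the features, rules, and labels are hypothetical.

```python
# Minimal sketch of rule-based multi-label prediction (illustrative only).
# A rule is a (condition, labels) pair; an instance collects the labels of
# every rule whose condition it satisfies.

def predict_labels(instance, rules, default_labels=frozenset()):
    """Return the set of labels assigned to `instance` by the rules."""
    labels = set()
    fired = False
    for condition, rule_labels in rules:
        if condition(instance):          # rule body matches the instance
            labels.update(rule_labels)   # rule head contributes its labels
            fired = True
    return labels if fired else set(default_labels)

# Hypothetical example: instances are feature dicts, labels are strings.
rules = [
    (lambda x: x["has_fever"] and x["has_cough"], {"flu"}),
    (lambda x: x["has_cough"],                    {"respiratory"}),
    (lambda x: x["age"] > 65,                     {"at_risk"}),
]

patient = {"has_fever": True, "has_cough": True, "age": 70}
print(predict_labels(patient, rules))  # {'flu', 'respiratory', 'at_risk'}
```

    Note how a single instance can trigger several rules at once, which is exactly what distinguishes MLC from standard multi-class classification, where exactly one label would be predicted.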

    Knowledge Discovery From Sensitive Data: Differentially Private Bayesian Rule Lists

    The utility of machine learning is rising, driven by a growing wealth of data and by problems that are becoming harder to solve analytically. With these changes comes the need for interpretable machine learning, so that users can understand how a machine learning algorithm arrives at a specific output. Bayesian Rule Lists, an interpretable machine learning algorithm, offers a favorable accuracy-interpretability trade-off compared to other interpretable machine learning algorithms. Additionally, with the amount of data collected today, there is a great deal of potentially sensitive data we could learn from, such as medical and criminal records. To do so, however, we must guarantee a degree of privacy on the dataset; differential privacy has become the standard for such private data analysis. In this paper, we propose a differentially private algorithm for Bayesian Rule Lists. We first break down the original Bayesian Rule List algorithm into three main components: frequent itemset mining, rule list sampling, and point estimate computation. We then review the literature to understand these components and ways to privatize them. Thereafter, we compute the necessary sensitivities for all subroutines and run experiments on the resulting differentially private algorithm to gauge its utility. Results show that the proposed algorithm outputs rule lists with good accuracy and decent interpretability.
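    As an illustration of how the frequent itemset mining component could be privatized, the sketch below applies the Laplace mechanism to itemset support counts before thresholding. This is a common technique assumed here for illustration, not the paper's specific algorithm; the per-query budget split and the sensitivity of 1 per support count (adding or removing one record changes a count by at most 1) are stated assumptions.

```python
# Illustrative sketch: Laplace-noised support counts for private itemset mining.
import random

def noisy_support(transactions, itemset, epsilon):
    """Support count of `itemset` with Laplace noise of scale 1/epsilon."""
    true_count = sum(1 for t in transactions if itemset <= t)
    # Difference of two Exp(epsilon) variables is Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

def private_frequent_itemsets(transactions, candidates, epsilon, min_support):
    """Keep candidate itemsets whose noisy support clears the threshold."""
    # Naive budget split: each candidate's count query gets an equal share.
    eps_each = epsilon / max(len(candidates), 1)
    return [c for c in candidates
            if noisy_support(transactions, c, eps_each) >= min_support]

# Hypothetical usage: transactions are sets of items.
data = [{"a", "b"}, {"a"}, {"a", "b", "c"}, {"b", "c"}]
print(private_frequent_itemsets(data,
                                [frozenset({"a"}), frozenset({"a", "b"})],
                                epsilon=1.0, min_support=2))
```

    The noise scale grows as the privacy budget shrinks, which is one source of the accuracy loss the paper's experiments would need to quantify.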

    Towards Interpretable Explanations for Transfer Learning in Sequential Tasks

    People increasingly rely on machine learning (ML) to make intelligent decisions. However, ML results are often difficult to interpret, and the algorithms do not support interaction to solicit clarification or explanation. In this paper, we highlight an emerging research area of interpretable explanations for transfer learning in sequential tasks, in which an agent must explain how it learns a new task given prior, common knowledge. The goal is to enhance a user’s ability to trust and use the system output and to enable iterative feedback for improving the system. We review prior work in probabilistic systems, sequential decision-making, interpretable explanations, transfer learning, and interactive machine learning, and identify an intersection that deserves further research focus. We believe that developing adaptive, transparent learning models will build the foundation for better human-machine systems in applications for elder care, education, and health care.