
    The Intuitive Appeal of Explainable Machines

    Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties. Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible. In most cases, intuition serves as the unacknowledged bridge between a descriptive account and a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model's development, not just explanations of the model itself.

    Is There a Conservative Case Against Racial Profiling?

    This article analyzes contentions that a politically conservative case can be made against racial profiling.

    Improving the Classification of Multiple Disorders with Problem Decomposition

    Differential diagnosis of multiple disorders is a challenging problem in clinical medicine. According to the divide-and-conquer principle, this problem can be handled more effectively by decomposing it into a number of simpler sub-problems, each solved separately. We demonstrate the advantages of this approach using abductive network classifiers on the 6-class standard dermatology dataset. Three problem decomposition scenarios are investigated, including class decomposition and two hierarchical approaches based on clinical practice and class separability properties. Two-stage classification schemes based on hierarchical decomposition boost the classification accuracy from 91% for the single-classifier monolithic approach to 99%, matching the theoretical upper limit reported in the literature for the accuracy of classifying the dataset. Such models are also simpler, achieving up to 47% reduction in the number of input variables required, thus reducing the cost and improving the convenience of performing the medical diagnostic tests required. Automatic selection of only relevant inputs by the simpler abductive network models synthesized provides greater insight into the diagnosis problem and the diagnostic value of various disease markers. The problem decomposition approach helps plan more efficient diagnostic tests and provides improved support for the decision-making process. Findings are compared with established guidelines of clinical practice, results of data analysis, and outcomes of previous informatics-based studies on the dataset.
    Keywords: Classifiers, Abductive Networks, Neural Networks, Problem Decomposition, Divide and Conquer, Classification Accuracy, Data Reduction, Modular Networks, Medical Diagnosis, Multiple Disorders, Dermatology.
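    The two-stage scheme described above can be sketched in a few lines. This is a minimal illustration, not the paper's abductive (GMDH) networks or the actual dermatology dataset: it substitutes scikit-learn logistic regression on synthetic 6-class data, with an assumed coarse grouping of classes {0,1,2} vs {3,4,5} standing in for the clinically motivated hierarchy.

```python
# Hedged sketch of two-stage hierarchical (divide-and-conquer) classification.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                           n_classes=6, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: classify into two coarse super-groups (classes {0,1,2} vs {3,4,5}).
def group(labels):
    return (labels >= 3).astype(int)

stage1 = LogisticRegression(max_iter=1000).fit(X_tr, group(y_tr))

# Stage 2: one specialist classifier per super-group, trained only on its classes.
specialists = {}
for g in (0, 1):
    mask = group(y_tr) == g
    specialists[g] = LogisticRegression(max_iter=1000).fit(X_tr[mask], y_tr[mask])

# Predict: route each sample through stage 1, then through the matching specialist.
g_pred = stage1.predict(X_te)
y_pred = np.array([specialists[g].predict(x.reshape(1, -1))[0]
                   for g, x in zip(g_pred, X_te)])
accuracy = (y_pred == y_te).mean()
```

    Each stage-2 specialist sees a simpler sub-problem (fewer classes), which is what allows the decomposed models in the paper to use fewer input variables than the monolithic classifier.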

    Logistic model tree extraction from artificial neural networks

    Artificial neural networks (ANNs) are a powerful and widely used pattern recognition technique. However, they remain “black boxes”, giving no explanation for the decisions they make. This paper presents a new algorithm for extracting a logistic model tree (LMT) from a neural network, which gives a symbolic representation of the knowledge hidden within the ANN. Landwehr’s LMTs are based on standard decision trees, but the terminal nodes are replaced with logistic regression functions. This paper reports the results of an empirical evaluation that compares the new decision tree extraction algorithm with Quinlan’s C4.5 and ExTree. The evaluation used 12 standard benchmark datasets from the University of California, Irvine machine-learning repository. The results of this evaluation demonstrate that the new algorithm produces decision trees that have higher accuracy and higher fidelity than decision trees created by both C4.5 and ExTree.
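    The core idea, training a surrogate tree on the network's own outputs and scoring its "fidelity" to the network, can be sketched as follows. This is not the paper's LMT extraction algorithm: scikit-learn has no logistic model tree, so a plain DecisionTreeClassifier stands in, and the Iris dataset is used purely for illustration.

```python
# Hedged sketch of pedagogical tree extraction from a neural network:
# fit a surrogate decision tree to the network's predictions and measure
# fidelity (agreement with the network) separately from accuracy.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
ann = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                    random_state=0).fit(X, y)

# The surrogate is trained on the ANN's outputs, not the true labels,
# so it approximates the knowledge hidden inside the black box.
ann_labels = ann.predict(X)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, ann_labels)

fidelity = (tree.predict(X) == ann_labels).mean()   # agreement with the ANN
accuracy = (tree.predict(X) == y).mean()            # agreement with true labels
```

    The fidelity/accuracy distinction matters: a surrogate can be faithful to the network while both are wrong, which is why the evaluation in the paper reports the two measures separately.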

    Kin term patterns and language families

    The anthropologist G. P. Murdock found a strong correlation between the kin term patterns (feature-values) for the feature “sibling” and language families. This finding, important for language classification, has not been pursued further. In particular, it has not yet been tested whether the kin term pattern domain as a whole, including the patterns for other features (“grandparents”, “uncles”, “aunts”, “nephews and nieces”, etc.), is sufficient to demarcate all language families from one another. This paper presents a large-scale computational profiling of all language families in terms of their kin term patterns. The most significant findings are: (i) that language families can be quite neatly differentiated on the basis of their kin term patterns, and these patterns may therefore be considered strong indicators of genetic affiliation; and (ii) that the kin term patterns for the features “nephews and nieces (= siblings’ children)”, “siblings”, and “siblings-in-law”, i.e. all features involving the idea of siblings, are the best predictors of genetic affiliation, as they appear significantly more frequently in the profiles than any other feature.
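    The mechanics of such profiling can be illustrated with a toy example: each language becomes a vector of categorical pattern labels, and family membership is predicted from pattern overlap. The language names, feature names, and pattern labels below are invented placeholders, not Murdock's actual codings.

```python
# Toy sketch of profiling languages by kin term patterns (all data hypothetical).
profiles = {
    # language: (family, {kin feature: pattern label})
    "lang_a": ("family_1", {"siblings": "P1", "uncles": "P2", "nieces": "P1"}),
    "lang_b": ("family_1", {"siblings": "P1", "uncles": "P3", "nieces": "P1"}),
    "lang_c": ("family_2", {"siblings": "P4", "uncles": "P2", "nieces": "P5"}),
    "lang_d": ("family_2", {"siblings": "P4", "uncles": "P3", "nieces": "P5"}),
}

def overlap(p, q):
    """Number of features on which two languages share a kin term pattern."""
    return sum(p[f] == q[f] for f in p)

def predict_family(name):
    """Assign a language the family of its best-matching other language."""
    _, pat = profiles[name]
    best = max((n for n in profiles if n != name),
               key=lambda n: overlap(pat, profiles[n][1]))
    return profiles[best][0]

predictions = {n: predict_family(n) for n in profiles}
```

    In this toy data the sibling-related features carry the family signal, mirroring finding (ii) of the paper: features that involve siblings discriminate between families, while the “uncles” patterns cut across them.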

    DCNFIS: Deep Convolutional Neuro-Fuzzy Inference System

    A key challenge in eXplainable Artificial Intelligence is the well-known tradeoff between the transparency of an algorithm (i.e., how easily a human can directly understand the algorithm, as opposed to receiving a post-hoc explanation) and its accuracy. We report on the design of a new deep network that achieves improved transparency without sacrificing accuracy. We design a deep convolutional neuro-fuzzy inference system (DCNFIS) by hybridizing fuzzy logic and deep learning models and show that DCNFIS performs as accurately as three existing convolutional neural networks on four well-known datasets. We furthermore show that DCNFIS outperforms state-of-the-art deep fuzzy systems. We then exploit the transparency of fuzzy logic by deriving explanations, in the form of saliency maps, from the fuzzy rules encoded in DCNFIS. We investigate the properties of these explanations in greater depth using the Fashion-MNIST dataset.
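    The fuzzy-rule layer at the heart of such a system can be sketched with NumPy. This is not the DCNFIS architecture (which hybridizes fuzzy rules with convolutional feature extractors); it only illustrates the generic mechanism an ANFIS-style classifier uses: Gaussian memberships, product t-norm firing strengths, and class scores accumulated from rule votes. All sizes and parameter values are arbitrary.

```python
# Minimal numpy sketch of a fuzzy-rule classification layer.
import numpy as np

rng = np.random.default_rng(0)
n_rules, n_features, n_classes = 4, 3, 2

centers = rng.normal(size=(n_rules, n_features))  # one Gaussian per rule/feature
widths = np.full((n_rules, n_features), 1.0)
rule_class = np.array([0, 0, 1, 1])               # each rule votes for one class

def fuzzy_classify(x):
    # Membership of x in each rule's antecedent, feature by feature.
    mu = np.exp(-((x - centers) ** 2) / (2 * widths ** 2))
    firing = mu.prod(axis=1)                      # product t-norm per rule
    firing = firing / firing.sum()                # normalized firing strengths
    scores = np.zeros(n_classes)
    for r, c in enumerate(rule_class):
        scores[c] += firing[r]                    # rules vote for their class
    return scores

scores = fuzzy_classify(rng.normal(size=n_features))
```

    Because each prediction decomposes into per-rule firing strengths, one can trace a decision back to the rules (and in DCNFIS, onward to saliency maps over the input) rather than treating the model as a black box.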