72,950 research outputs found
The Intuitive Appeal of Explainable Machines
Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties. Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible. In most cases, intuition serves as the unacknowledged bridge between a descriptive account and a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model's development, not just explanations of the model itself.
Improving Palliative Care with Deep Learning
Improving the quality of end-of-life care for hospitalized patients is a priority for healthcare organizations. Studies have shown that physicians tend to overestimate prognoses, which in combination with treatment inertia results in a mismatch between patients' wishes and the care they actually receive at the end of life. We describe a method to address this problem using deep learning and Electronic Health Record (EHR) data, which is currently being piloted, with Institutional Review Board approval, at an academic medical center. The EHR data of admitted patients are automatically evaluated by an algorithm, which brings patients who are likely to benefit from palliative care services to the attention of the Palliative Care team. The algorithm is a deep neural network trained on EHR data from previous years to predict all-cause 3-12 month mortality, used as a proxy for identifying patients who could benefit from palliative care. Our predictions enable the Palliative Care team to take a proactive approach to reaching out to such patients, rather than relying on referrals from treating physicians or conducting time-consuming chart reviews of all patients. We also present a novel interpretation technique that we use to provide explanations of the model's predictions.
Comment: IEEE International Conference on Bioinformatics and Biomedicine 2017
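As a rough illustration of the modeling setup described in this abstract, here is a minimal, hypothetical PyTorch sketch: a feed-forward network over aggregated EHR feature vectors, trained against a binary 3-12 month all-cause mortality label. The feature dimensionality, architecture, training loop, and flagging threshold are all assumptions for illustration, not the authors' actual pilot model.

```python
# Hypothetical sketch (not the paper's model): a feed-forward classifier
# over aggregated EHR feature vectors, predicting all-cause 3-12 month
# mortality as a proxy label for palliative-care benefit.
import torch
import torch.nn as nn

N_FEATURES = 10_000  # assumed: one column per clinical code (dx/rx/proc)

model = nn.Sequential(
    nn.Linear(N_FEATURES, 512), nn.ReLU(), nn.Dropout(0.4),
    nn.Linear(512, 512), nn.ReLU(), nn.Dropout(0.4),
    nn.Linear(512, 1),  # logit for P(death within 3-12 months)
)
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Stand-ins for real training data built from prior years' EHR records.
x = torch.randn(32, N_FEATURES)
y = torch.randint(0, 2, (32, 1)).float()

for _ in range(10):  # illustrative training loop
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

# At deployment time, admitted patients whose predicted probability
# exceeds a threshold would be surfaced to the Palliative Care team.
prob = torch.sigmoid(model(x)).detach()
flagged = (prob > 0.9).nonzero()  # indices of flagged patients
```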
PerfXplain: Debugging MapReduce Job Performance
While users today have access to many tools that assist in performing large-scale data analysis tasks, understanding the performance characteristics of their parallel computations, such as MapReduce jobs, remains difficult. We present PerfXplain, a system that enables users to ask questions about the relative performance (i.e., runtimes) of pairs of MapReduce jobs. PerfXplain provides a new query language for articulating performance queries and an algorithm for generating explanations from a log of past MapReduce job executions. We formally define the notion of an explanation, together with three metrics (relevance, precision, and generality) that measure explanation quality. We present an explanation-generation algorithm based on techniques related to decision-tree building. We evaluate the approach on a log of past executions on Amazon EC2 and show that it can generate high-quality explanations, outperforming two naive explanation-generation methods.
Comment: VLDB 2012
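To make the decision-tree idea concrete, here is a minimal, hypothetical sketch using scikit-learn: each pair of logged jobs is described by differences in their configurations, labeled by their runtime relationship, and the predicates along a tree path are read off as a candidate explanation. The column names and the toy log are assumptions; this does not implement PerfXplain's query language or its relevance, precision, and generality metrics.

```python
# Hypothetical sketch of the decision-tree idea behind PerfXplain:
# describe each logged job pair by configuration deltas, label the pair
# by its runtime relationship, and read a root-to-leaf path as an
# explanation. Schema and data are illustrative assumptions.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# One row per job pair: config differences plus whether job B was slower.
pairs = pd.DataFrame({
    "delta_input_gb":  [0.1, 40.0, 0.2, 35.0, 0.0, 42.0],
    "delta_reducers":  [0,   -8,   1,   -6,   0,   -7],
    "same_cluster":    [1,    1,   1,    0,   1,    0],
    "b_slower_than_a": [0,    1,   0,    1,   0,    1],
})
X = pairs.drop(columns="b_slower_than_a")
y = pairs["b_slower_than_a"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The predicates along a path play the role of an explanation, e.g.
# "B was slower because its input grew and it used fewer reducers".
print(export_text(tree, feature_names=list(X.columns)))
```

The tree serves as a compact, human-readable summary of which configuration differences discriminate slow pairs from fast ones; pruning its depth trades precision of the explanation against generality, in the spirit of the metrics the abstract names.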