Explainable AI for Interpretable Credit Scoring
With the ever-growing achievements in Artificial Intelligence (AI) and the
recent boosted enthusiasm in Financial Technology (FinTech), applications such
as credit scoring have gained substantial academic interest. Credit scoring
helps financial experts make better decisions regarding whether or not to
accept a loan application, such that loans with a high probability of default
are not accepted. Apart from the noisy and highly imbalanced data challenges
faced by such credit scoring models, recent regulations such as the `right to
explanation' introduced by the General Data Protection Regulation (GDPR) and
the Equal Credit Opportunity Act (ECOA) have added the need for model
interpretability to ensure that algorithmic decisions are understandable and
coherent. An interesting concept that has been recently introduced is
eXplainable AI (XAI), which focuses on making black-box models more
interpretable. In this work, we present a credit scoring model that is both
accurate and interpretable. For classification, state-of-the-art performance on
the Home Equity Line of Credit (HELOC) and Lending Club (LC) Datasets is
achieved using the Extreme Gradient Boosting (XGBoost) model. The model is then
further enhanced with a 360-degree explanation framework, which provides
different explanations (i.e. global, local feature-based and local
instance-based) that are required by different people in different situations.
Evaluation through the use of functionally-grounded, application-grounded and
human-grounded analysis shows that the explanations provided are simple and
consistent, and satisfy the six predetermined hypotheses testing for
correctness, effectiveness, ease of understanding, sufficiency of detail and
trustworthiness.
Comment: 19 pages, David C. Wyld et al. (Eds): ACITY, DPPR, VLSI, WeST, DSA, CNDC, IoTE, AIAA, NLPTA - 202
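As a minimal illustration of the local feature-based explanations the abstract mentions (not the paper's XGBoost implementation, where attributions come from the tree ensemble), an additive scoring model makes per-feature contributions directly readable. The feature names, weights, and applicant record below are illustrative assumptions, not taken from the HELOC or Lending Club datasets:

```python
# Toy sketch: a local feature-based explanation for an additive scoring model.
# Feature names, weights, and the applicant record are illustrative
# assumptions, not values from the paper or its datasets.

def score(weights, bias, x):
    """Linear credit score: higher means the application looks safer."""
    return bias + sum(w * v for w, v in zip(weights, x))

def local_explanation(weights, x, names):
    """Per-feature contribution to the score (an additive attribution)."""
    contribs = {n: w * v for n, w, v in zip(names, weights, x)}
    # Rank by absolute impact so the biggest drivers of the decision come first.
    return sorted(contribs.items(), key=lambda kv: -abs(kv[1]))

names = ["income", "debt_ratio", "late_payments"]
weights = [0.8, -1.5, -2.0]
applicant = [1.2, 0.4, 1.0]  # one hypothetical applicant

print(score(weights, 0.5, applicant))
print(local_explanation(weights, applicant, names))
```

For a linear model this additive decomposition is exact; for tree ensembles like XGBoost the same idea underlies path-based and SHAP-style attributions.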
Capturing Users' Reality: A Novel Approach to Generate Coherent Counterfactual Explanations
The opacity of Artificial Intelligence (AI) systems is a major impediment to their deployment. Explainable AI (XAI) methods that automatically generate counterfactual explanations for AI decisions can increase users' trust in AI systems. Coherence is an essential property of explanations but is not yet addressed sufficiently by existing XAI methods. We design a novel optimization-based approach to generate coherent counterfactual explanations, which is applicable to numerical, categorical, and mixed data. We demonstrate the approach in a realistic setting and assess its efficacy in a human-grounded evaluation. Results suggest that our approach produces explanations that are perceived as coherent as well as suitable to explain the factual situation.
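To make the idea of optimization-based counterfactual search concrete, here is a minimal greedy sketch on numerical data: starting from a rejected instance, it takes small coordinate steps until the model's decision flips. The scoring model, step size, and instance are illustrative assumptions, not the paper's method (which also handles categorical and mixed data and optimizes for coherence):

```python
# Toy sketch of optimization-based counterfactual search on numerical data.
# The linear model, step size, and instance are illustrative assumptions.

def score(x):
    """Hypothetical linear credit model; decision = accept iff score >= 0."""
    w = [1.0, -2.0]
    return sum(wi * xi for wi, xi in zip(w, x))

def _moved(x, i, d):
    """Return a copy of x with coordinate i shifted by d."""
    y = list(x)
    y[i] += d
    return y

def counterfactual(x, step=0.1, max_iters=500):
    """Greedy coordinate search for a nearby instance with a flipped decision."""
    original = score(x) >= 0
    sign = -1.0 if original else 1.0  # direction the score must move to flip
    cur = list(x)
    for _ in range(max_iters):
        if (score(cur) >= 0) != original:
            return cur  # decision flipped: cur is the counterfactual
        # Take the single-coordinate step that moves the score most toward a flip.
        i, d = max(
            ((i, d) for i in range(len(cur)) for d in (step, -step)),
            key=lambda m: sign * score(_moved(cur, *m)),
        )
        cur = _moved(cur, i, d)
    return None  # no counterfactual found within the budget

rejected = [0.5, 0.5]            # score = -0.5, i.e. rejected
print(counterfactual(rejected))  # a nearby accepted instance
```

Keeping the perturbation small is what makes the result an actionable explanation ("had your debt ratio been slightly lower, the loan would have been accepted"); real methods add distance and plausibility terms to the objective for exactly this reason.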
Explainable Information Retrieval: A Survey
Explainable information retrieval is an emerging research area aiming to make
transparent and trustworthy information retrieval systems. Given the increasing
use of complex machine learning models in search systems, explainability is
essential in building and auditing responsible information retrieval models.
This survey fills a vital gap in the otherwise topically diverse literature of
explainable information retrieval. It categorizes and discusses recent
explainability methods developed for different application domains in
information retrieval, providing a common framework and unifying perspectives.
In addition, it reflects on the common concern of evaluating explanations and
highlights open challenges and opportunities.
Comment: 35 pages, 10 figures. Under review