8 research outputs found

    Automated reasoning for explainable artificial intelligence

    Reasoning and learning have been considered fundamental features of intelligence ever since the dawn of the field of artificial intelligence, leading to the development of the research areas of automated reasoning and machine learning. This short paper is a non-technical position statement that aims to prompt a discussion of the relationship between automated reasoning and machine learning, and more generally between automated reasoning and artificial intelligence. We suggest that the emergence of the new paradigm of XAI, which stands for eXplainable Artificial Intelligence, is an opportunity to rethink these relationships, and that XAI may offer a grand challenge for future research on automated reasoning.

    Potentials and Challenges of Artificial Intelligence in Financial Technologies

    Artificial Intelligence (AI) has made disruptive progress in recent years, becoming a key technology across industries. In particular, AI offers novel, distinctive opportunities for intelligent services in financial technology companies (FinTechs). Given these opportunities and their associated benefits, however, the question arises why FinTechs fail to leverage the full potential of AI. Drawing on existing literature, this paper elaborates on the opportunities and risks associated with AI in the financial sector. It makes two key contributions: first, we identify the challenges discussed in the literature to demonstrate the need for explainable AI; second, we reveal the lack of guidance for applying explainable AI in FinTechs. We derive recommendations for research, policy, and practice, and argue for the further elaboration of legal frameworks for the responsible use of AI.

    Analyzing Machine Learning Models for Credit Scoring with Explainable AI and Optimizing Investment Decisions

    This paper examines two related questions concerning explainable AI (XAI) practices. Machine learning (ML) is increasingly important in financial services such as pre-approval, credit underwriting, investments, and various front-end and back-end activities. ML models can automatically detect non-linearities and interactions in training data, facilitating faster and more accurate credit decisions. However, such models are opaque and hard to explain, while transparency and explainability are critical for establishing a trustworthy technology. The study compares various machine learning models, including single classifiers (logistic regression, decision trees, LDA, QDA), homogeneous ensembles (AdaBoost, Random Forest), and sequential neural networks. The results indicate that ensemble classifiers and neural networks outperform the single classifiers. In addition, two advanced post-hoc, model-agnostic explainability techniques, LIME and SHAP, are used to assess ML-based credit scoring models on the open-access datasets offered by the US-based P2P lending platform Lending Club. We also use machine learning algorithms to develop new investment models and to explore portfolio strategies that can maximize profitability while minimizing risk.
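    To make the described workflow concrete, here is a minimal sketch of such a pipeline: fitting a few of the listed classifiers on a credit dataset and explaining a single prediction post hoc with LIME. This is an illustration under assumptions, not the authors' actual setup; the file name "lending_club.csv", the "default" label column, and the hyperparameters are placeholders.

    ```python
    # Minimal sketch: compare credit-scoring classifiers, then explain one
    # prediction with a post-hoc, model-agnostic technique (LIME).
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from lime.lime_tabular import LimeTabularExplainer

    df = pd.read_csv("lending_club.csv")                 # hypothetical pre-cleaned extract
    X, y = df.drop(columns=["default"]), df["default"]   # 1 = default, 0 = fully paid
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    models = {
        "logit": LogisticRegression(max_iter=1000),
        "adaboost": AdaBoostClassifier(random_state=0),
        "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(f"{name}: test AUC = {auc:.3f}")           # ensembles typically lead

    # Local explanation of one applicant's score; LIME fits an interpretable
    # surrogate around the instance and reports per-feature weights.
    explainer = LimeTabularExplainer(
        X_tr.values, feature_names=list(X.columns),
        class_names=["paid", "default"], mode="classification",
    )
    exp = explainer.explain_instance(
        X_te.values[0], models["random_forest"].predict_proba, num_features=8
    )
    print(exp.as_list())                                 # feature -> local weight
    ```

    SHAP would slot into the same pipeline as the second post-hoc technique; it attributes the prediction to features via Shapley values rather than a local surrogate model.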

    Explainable AI for Interpretable Credit Scoring

    With the ever-growing achievements in Artificial Intelligence (AI) and the recent surge of enthusiasm in Financial Technology (FinTech), applications such as credit scoring have gained substantial academic interest. Credit scoring helps financial experts decide whether or not to accept a loan application, so that loans with a high probability of default are not accepted. Apart from the noisy and highly imbalanced data challenges faced by such credit scoring models, recent regulations such as the 'right to explanation' introduced by the General Data Protection Regulation (GDPR) and the Equal Credit Opportunity Act (ECOA) have added the need for model interpretability, to ensure that algorithmic decisions are understandable and coherent. An interesting concept recently introduced to address this is eXplainable AI (XAI), which focuses on making black-box models more interpretable. In this work, we present a credit scoring model that is both accurate and interpretable. For classification, state-of-the-art performance on the Home Equity Line of Credit (HELOC) and Lending Club (LC) datasets is achieved using the Extreme Gradient Boosting (XGBoost) model. The model is then further enhanced with a 360-degree explanation framework, which provides the different explanations (global, local feature-based, and local instance-based) required by different people in different situations. Evaluation through functionally-grounded, application-grounded, and human-grounded analysis shows that the explanations provided are simple and consistent, and satisfy the six predetermined hypotheses testing for correctness, effectiveness, easy understanding, detail sufficiency, and trustworthiness.
    Comment: 19 pages. David C. Wyld et al. (Eds): ACITY, DPPR, VLSI, WeST, DSA, CNDC, IoTE, AIAA, NLPTA - 202
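    The three explanation levels mentioned can be illustrated with a short sketch built on XGBoost and SHAP. The paper's own 360-degree framework may use different components; the file name "heloc.csv", the "RiskPerformance" label, and the neighbor-based stand-in for instance-based explanations below are assumptions for illustration only.

    ```python
    # Minimal sketch of three explanation levels for an XGBoost credit model:
    # global, local feature-based, and local instance-based.
    import pandas as pd
    import shap
    from xgboost import XGBClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import NearestNeighbors

    df = pd.read_csv("heloc.csv")                        # hypothetical HELOC extract
    X = df.drop(columns=["RiskPerformance"])
    y = (df["RiskPerformance"] == "Bad").astype(int)     # 1 = likely default
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    model = XGBClassifier(n_estimators=400, max_depth=4, learning_rate=0.05)
    model.fit(X_tr, y_tr)

    explainer = shap.TreeExplainer(model)
    shap_values = explainer(X_te)

    # Global: which features drive the model's decisions overall.
    shap.plots.bar(shap_values)

    # Local feature-based: why applicant i received this particular score.
    i = 0
    shap.plots.waterfall(shap_values[i])

    # Local instance-based (illustrative stand-in): similar past applicants
    # and their observed outcomes, retrieved by nearest-neighbor search.
    nn = NearestNeighbors(n_neighbors=3).fit(X_tr)
    _, idx = nn.kneighbors(X_te.iloc[[i]])
    print(X_tr.iloc[idx[0]].assign(outcome=y_tr.iloc[idx[0]].values))
    ```

    The split mirrors the abstract's point that different audiences need different views: a regulator may want the global picture, a loan officer the per-applicant attribution, and an applicant a comparison with similar cases.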
