
    Explainable Artificial Intelligence Methods in FinTech Applications

    The increasing amount of available data and access to high-performance computing allow companies to use complex Machine Learning (ML) models, so-called "black-box" models, in their decision-making processes. These "black-box" models typically show higher predictive accuracy than linear models on complex data sets; however, this improved accuracy comes at the cost of explanatory power. Opening the black box and making model predictions explainable is the subject of the research area of Explainable Artificial Intelligence (XAI). Using black-box models also raises practical and ethical issues, especially in critical industries such as finance, and the explainability of models is therefore increasingly becoming a focus for regulators. Applying XAI methods to ML models makes their predictions explainable and hence enables the application of ML models in the financial industry, where they increase predictive accuracy and support the different stakeholders in their decision-making processes. This thesis consists of five chapters: a general introduction, a chapter on conclusions and future research, and three separate chapters covering the underlying papers. Chapter 1 proposes an XAI method for credit risk management, in particular for measuring the risks associated with borrowing through peer-to-peer lending platforms. The method applies correlation networks to Shapley values, so that model predictions are grouped according to the similarity of their underlying explanations. Chapter 2 develops an alternative XAI method based on the Lorenz Zonoid approach. The new method is statistically normalised and can therefore serve as a standard for the application of Artificial Intelligence (AI) in credit risk management. The novel "Shapley-Lorenz" approach can facilitate the validation of model results and supports the decision whether a model is sufficiently explained. In Chapter 3, an XAI method is applied to assess the impact of financial and non-financial factors on a firm's ex-ante cost of capital, a measure that reflects investors' perceptions of a firm's risk appetite. A combination of two explanatory tools, Shapley values and the Lorenz model selection approach, enabled the identification of the most important features and a reduction of the set of independent features. This allowed a substantial simplification of the model without a statistically significant decrease in predictive accuracy.
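The grouping idea described in Chapter 1 — clustering predictions by the similarity of their Shapley explanations via a correlation network — can be sketched roughly as follows. This is a minimal illustration, not the thesis's actual method: the Shapley matrix here is randomly generated stand-in data (in practice it would come from an XAI tool), and the choice of average linkage and two clusters is an assumption for demonstration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical Shapley value matrix: one row per prediction,
# one column per feature (stand-in for real XAI output).
rng = np.random.default_rng(0)
shapley = rng.normal(size=(10, 4))

# Correlation network: similarity between two predictions is the
# correlation of their Shapley explanation vectors.
corr = np.corrcoef(shapley)          # 10 x 10 similarity matrix
dist = 1.0 - corr                    # convert similarity to distance
np.fill_diagonal(dist, 0.0)
dist = (dist + dist.T) / 2.0         # enforce exact symmetry

# Group predictions whose explanations are most similar.
Z = linkage(squareform(dist, checks=False), method="average")
clusters = fcluster(Z, t=2, criterion="maxclust")
print(clusters)                      # one cluster label per prediction
```

Predictions landing in the same cluster are explained by similar feature attributions, which is what makes the grouping useful for credit risk review.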

    Artificial intelligence in credit scoring : digitalization in the banking landscape in Germany

    AI is rapidly transforming markets and challenging old business models. This dissertation examines the AI-readiness of German banks, specifically in credit scoring. For this purpose, three different types of data were collected. A literature review shed light on the current credit system in Germany and, for comparison, in China. Furthermore, expert interviews disclosed the potential opportunities and risks of AI-driven credit assessments. A quantitative survey complemented the expert opinions with those of potential users. The results indicated that overall AI readiness in the German credit sector is relatively low. Experts suggested that the drivers for adopting this technology are risk optimisation and cost reduction. The main barrier identified as complicating implementation stems from regulatory requirements. While advancements are low, the collected customer data showed that most survey participants would agree to an AI-driven creditworthiness assessment. A scenario analysis combined all collected insights and demonstrated potential future developments. From a management perspective, German banks need to be faster in their technological transformation in order not to lose competitiveness in the future.

    Artificial Intelligence and Bank Soundness: Between the Devil and the Deep Blue Sea - Part 2

    Banks have experienced chronic weaknesses as well as frequent crises over the years. As bank failures are costly and affect global economies, banks are constantly under intense scrutiny by regulators, making banking the most highly regulated industry in the world today. As banks grow into the 21st-century framework, they need to embrace Artificial Intelligence (AI) not only to provide personalised, world-class service to their large customer bases but, most importantly, to survive. The chapter provides a taxonomy of bank soundness in the face of AI through the lens of CAMELS: C (Capital), A (Asset), M (Management), E (Earnings), L (Liquidity), S (Sensitivity). The taxonomy partitions the challenges from the main strands of CAMELS into distinct AI categories — 1 (C), 4 (A), 17 (M), 8 (E), 1 (L), and 2 (S) — that banks and regulatory teams need to consider in evaluating AI use in banks. Although AI offers numerous opportunities for banks to operate more efficiently and effectively, banks also need to give assurance that AI does 'no harm' to stakeholders. Posing many unresolved questions, it seems that banks are trapped between the devil and the deep blue sea for now.
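The CAMELS partition above can be encoded as a simple lookup, e.g. for tallying how many AI challenge categories fall under each dimension. The dictionary below only restates the counts given in the abstract; the variable names are illustrative, not from the chapter.

```python
# Challenge-category counts per CAMELS dimension, as reported in the chapter.
camels_ai_challenges = {
    "Capital": 1,
    "Asset": 4,
    "Management": 17,
    "Earnings": 8,
    "Liquidity": 1,
    "Sensitivity": 2,
}

total = sum(camels_ai_challenges.values())
print(total)  # 33 challenge categories in total
```

The concentration under Management (17 of 33) reflects where most AI-related evaluation effort would land.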

    AI Lending and ECOA: Avoiding Accidental Discrimination


    The AI Revolution: Opportunities and Challenges for the Finance Sector

    This report examines Artificial Intelligence (AI) in the financial sector, outlining its potential to revolutionise the industry and identifying its challenges. It underscores the criticality of a well-rounded understanding of AI, its capabilities, and its implications to effectively leverage its potential while mitigating associated risks. The potential of AI extends from augmenting existing operations to paving the way for novel applications in the finance sector. The application of AI in the financial sector is transforming the industry. Its use spans areas from customer service enhancements, fraud detection, and risk management to credit assessments and high-frequency trading. However, along with these benefits, AI also presents several challenges, including issues related to transparency, interpretability, fairness, accountability, and trustworthiness. The use of AI in the financial sector further raises critical questions about data privacy and security. A further issue identified in this report is the systemic risk that AI can introduce to the financial sector: being prone to errors, AI can exacerbate existing systemic risks, potentially leading to financial crises. Regulation is crucial to harnessing the benefits of AI while mitigating its potential risks. Despite the global recognition of this need, there remains a lack of clear guidelines or legislation for AI use in finance. This report discusses key principles that could guide the formation of effective AI regulation in the financial sector, including the need for a risk-based approach, the inclusion of ethical considerations, and the importance of maintaining a balance between innovation and consumer protection. The report provides recommendations for academia, the finance industry, and regulators.

    Artificial Intelligence, Machine Learning, and Bias in Finance: Toward Responsible Innovation

    According to some futurists, the automation of financial markets will substitute increasingly sophisticated, objective, analytical, model-based assessments of, for example, a borrower's creditworthiness for direct human evaluations irrevocably tainted by bias and subject to the cognitive limits of the human brain. However, even if such advances do occur, they may violate other legal principles.