Artificial Intelligence and Bank Soundness: Between the Devil and the Deep Blue Sea - Part 2
Banks have experienced chronic weaknesses as well as frequent crises over the years. Because bank failures are costly and affect global economies, banks are under constant, intense scrutiny by regulators, making banking the most highly regulated industry in the world today. As banks move into the 21st century, they need to embrace Artificial Intelligence (AI) not only to provide personalized, world-class service to their large customer bases but, most importantly, to survive. The chapter provides a taxonomy of bank soundness in the face of AI through the lens of CAMELS: C (Capital), A (Asset), M (Management), E (Earnings), L (Liquidity), S (Sensitivity). The taxonomy partitions the challenges of AI along the main strands of CAMELS into 1 (C), 4 (A), 17 (M), 8 (E), 1 (L), and 2 (S) distinct categories that banks and regulatory teams need to consider when evaluating AI use in banks. Although AI offers numerous opportunities for banks to operate more efficiently and effectively, banks also need to give assurance that AI does no harm to stakeholders. Posing many unresolved questions, it seems that banks are trapped between the devil and the deep blue sea for now.
Artificial Intelligence and Bank Soundness: A Done Deal? - Part 1
Bank soundness plays a crucial role in determining economic prosperity. As such, banks are under intense scrutiny to make wise decisions that enhance bank stability. Artificial Intelligence (AI) plays a significant role in changing the way banks operate and serve their customers, and banks are becoming more modern and relevant in people's lives as a result. The most significant contribution of AI is that it provides a lifeline for banks' survival. The chapter provides a taxonomy of bank soundness in the face of AI through the lens of CAMELS: C (Capital), A (Asset), M (Management), E (Earnings), L (Liquidity), S (Sensitivity). The taxonomy partitions the opportunities of AI along the main strands of CAMELS into 1 (C), 6 (A), 17 (M), 16 (E), 3 (L), and 6 (S) distinct categories. It is highly evident that banks will soon become extinct if they do not embed AI into their operations; as such, AI is a done deal for banks. Yet whether AI will contribute to bank soundness remains to be seen.
The AI Revolution: Opportunities and Challenges for the Finance Sector
This report examines Artificial Intelligence (AI) in the financial sector,
outlining its potential to revolutionise the industry and identifying its
challenges. It underscores the criticality of a well-rounded understanding of
AI, its capabilities, and its implications to effectively leverage its
potential while mitigating associated risks. The potential of AI extends
extends from augmenting existing operations to paving the way for novel
applications in the finance sector. The application of AI in the financial
sector is transforming the industry. Its use spans areas from customer service
enhancements, fraud detection, and risk management to credit assessments and
high-frequency trading. However, along with these benefits, AI also presents
several challenges. These include issues related to transparency,
interpretability, fairness, accountability, and trustworthiness. The use of AI
in the financial sector further raises critical questions about data privacy
and security. A further issue identified in this report is the systemic risk
that AI can introduce to the financial sector. Being prone to errors, AI can
exacerbate existing systemic risks, potentially leading to financial crises.
Regulation is crucial to harnessing the benefits of AI while mitigating its
potential risks. Despite the global recognition of this need, there remains a
lack of clear guidelines or legislation for AI use in finance. This report
discusses key principles that could guide the formation of effective AI
regulation in the financial sector, including the need for a risk-based
approach, the inclusion of ethical considerations, and the importance of
maintaining a balance between innovation and consumer protection. The report
provides recommendations for academia, the finance industry, and regulators.
Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions
As systems based on opaque Artificial Intelligence (AI) continue to flourish
in diverse real-world applications, understanding these black box models has
become paramount. In response, Explainable AI (XAI) has emerged as a field of
research with practical and ethical benefits across various domains. This paper
not only highlights the advancements in XAI and its application in real-world
scenarios but also addresses the ongoing challenges within XAI, emphasizing the
need for broader perspectives and collaborative efforts. We bring together
experts from diverse fields to identify open problems, striving to synchronize
research agendas and accelerate XAI in practical applications. By fostering
collaborative discussion and interdisciplinary cooperation, we aim to propel
XAI forward, contributing to its continued success. Our goal is to put forward
a comprehensive proposal for advancing XAI. To achieve this goal, we present a
manifesto of 27 open problems categorized into nine categories. These
challenges encapsulate the complexities and nuances of XAI and offer a road map
for future research. For each problem, we provide promising research directions
in the hope of harnessing the collective intelligence of interested
stakeholders.
Causal-Aware Generative Imputation for Automated Underwriting
Underwriting is an important process in insurance, concerned with accepting individuals into an insurance policy with tolerable claim risk. It is a tedious process that relies on underwriters' domain knowledge and experience, and is therefore labor intensive and prone to error. Machine learning models have recently been applied to automate the underwriting process, easing the burden on underwriters and improving underwriting accuracy. However, the observational data used for underwriting modelling is high dimensional, sparse, and incomplete, owing to the dynamically evolving nature (e.g., upgrades) of business information systems. Simply applying traditional supervised learning methods, e.g., logistic regression or gradient boosting, to such highly incomplete data usually leads to unsatisfactory underwriting results, so practical data imputation is required to improve training quality. In this paper, rather than choosing off-the-shelf solutions for this complex data-missing problem, we propose an innovative Generative Adversarial Nets (GAN) framework that captures the missing pattern from a causal perspective. Specifically, we design a structural causal model to learn the causal relations underlying the missing pattern of the data. We then devise a Causality-aware Generative network (CaGen) that uses the learned causal relationships as a prior to generate missing values, and corrects the imputed values via adversarial learning. We also show that CaGen significantly improves underwriting prediction in real-world insurance applications.
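The mask-based imputation setting behind GAN imputers can be sketched in a few lines. The snippet below is only a minimal illustration of how observed entries are kept while a generator fills the missing ones; the column-mean "generator" is a hypothetical stand-in, not the CaGen architecture described in the paper, and all data values here are made up.

```python
import numpy as np

# Toy data: rows are applicants, columns are features; NaN marks missing.
x = np.array([[1.0, np.nan],
              [3.0, 4.0],
              [np.nan, 8.0]])
mask = (~np.isnan(x)).astype(float)   # 1 = observed, 0 = missing

def generator(x_obs):
    # Placeholder generator: per-column mean of the observed values.
    # A GAN-based imputer (e.g. CaGen) would instead sample from a
    # learned neural generator conditioned on the data and the mask.
    return np.where(np.isnan(x_obs), np.nanmean(x_obs, axis=0), x_obs)

# Imputed matrix: keep observed entries, fill the rest from the generator.
x_hat = mask * np.nan_to_num(x) + (1 - mask) * generator(x)
```

A discriminator trained to tell observed from imputed entries would then push the generator's fills toward the observed data distribution, which is the adversarial-correction step the abstract refers to.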
The Intuitive Appeal of Explainable Machines
Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties. Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible. In most cases, intuition serves as the unacknowledged bridge between a descriptive account and a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model's development, not just explanations of the model itself.
Belief rule-base expert system with multilayer tree structure for complex problems modeling
The belief rule-base (BRB) expert system is one of the recognized and fast-growing approaches to modeling complex problems. However, the conventional BRB suffers from the combinatorial explosion problem, since the number of rules in a BRB grows exponentially with the number of attributes in complex problems, even though many alternative techniques have been explored to downsize the BRB. Motivated by this challenge, this paper introduces a multilayer tree structure (MTS) for the first time to define a hierarchical BRB, known as MTS-BRB. MTS-BRB is able to overcome the combinatorial explosion problem of the conventional BRB. Thereafter, additional modeling, inferencing, and learning procedures are proposed to create a self-organized MTS-BRB expert system. To demonstrate the development process and benefits of the MTS-BRB expert system, case studies including benchmark classification datasets and research and development (R&D) project risk assessment have been conducted. The comparative results show that, in terms of modeling effectiveness and/or prediction accuracy, the MTS-BRB expert system surpasses various existing approaches, as well as traditional fuzzy-system-related and machine-learning-related methodologies.
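The core BRB idea, matching an input against referential values, activating rules, and aggregating their belief distributions, can be illustrated with a toy sketch. This uses a simplified weighted-sum aggregation rather than the full evidential reasoning algorithm that BRB and MTS-BRB systems employ, and every rule and number below is invented for illustration.

```python
import numpy as np

# Toy belief rule base over one antecedent attribute with two
# referential values ("low" = 0.0, "high" = 1.0). Each rule carries a
# belief distribution over two consequent grades ("safe", "risky").
referential_values = np.array([0.0, 1.0])
rule_beliefs = np.array([[0.9, 0.1],   # IF attribute is "low"  THEN mostly safe
                         [0.2, 0.8]])  # IF attribute is "high" THEN mostly risky

def infer(x):
    # Matching degree of the input to each referential value
    # (a standard linear transformation for a 1-D attribute).
    alpha = np.clip(1.0 - np.abs(x - referential_values), 0.0, 1.0)
    w = alpha / alpha.sum()            # normalized rule activation weights
    # Weighted-sum aggregation of the activated rules' consequents;
    # a full BRB system would combine them with evidential reasoning.
    return w @ rule_beliefs
```

For example, `infer(0.25)` activates the "low" rule with weight 0.75 and the "high" rule with weight 0.25, yielding roughly [0.725, 0.275] over (safe, risky). The combinatorial explosion the abstract mentions appears as soon as rules must enumerate the cross-product of referential values across many attributes, which is what the hierarchical MTS decomposition avoids.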