    A credit risk model with small sample data based on G-XGBoost

    Existing credit risk models, e.g., the scoring card and Extreme Gradient Boosting (XGBoost), usually place requirements on the size of the modeling sample. A small sample may yield a trained model that neither achieves the expected accuracy nor distinguishes risks well. On the other hand, data acquisition can be difficult and restricted by data protection regulations. In view of this dilemma, this paper applies Generative Adversarial Nets (GAN) to the construction of a credit risk model for small and micro enterprises (SMEs) and proposes a novel training method, G-XGBoost, based on the XGBoost model. A few batches of real data are selected to train the GAN. When the generative network reaches Nash equilibrium, it is used to generate pseudo data with the same distribution as the real data. The pseudo data are then combined with the real data to form an amplified sample set, which is used to train XGBoost for credit risk prediction. The feasibility and advantages of the G-XGBoost model are demonstrated by comparison with the XGBoost model.
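
    A minimal sketch of the pipeline this abstract describes, in Python with PyTorch and xgboost: train a small GAN on the real sample, draw pseudo records from the generator, and fit XGBoost on the amplified set. Training one GAN per class, the network sizes, epoch count, and augmentation ratio are all illustrative assumptions, not the paper's settings.

    # Sketch of the G-XGBoost idea: train a GAN on the small real sample,
    # draw pseudo records from the generator, and fit XGBoost on the
    # amplified sample set. Hyperparameters are illustrative assumptions.
    import numpy as np
    import torch
    import torch.nn as nn
    from xgboost import XGBClassifier

    NOISE_DIM = 16

    def make_mlp(in_dim, out_dim, out_act):
        return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                             nn.Linear(64, out_dim), out_act)

    def train_gan(X, epochs=500):
        # Assumes features are pre-scaled to [-1, 1] to match the Tanh output.
        X_t = torch.tensor(X, dtype=torch.float32)
        G = make_mlp(NOISE_DIM, X.shape[1], nn.Tanh())   # generator
        D = make_mlp(X.shape[1], 1, nn.Sigmoid())        # discriminator
        opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
        opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
        bce = nn.BCELoss()
        ones = torch.ones(len(X_t), 1)
        zeros = torch.zeros(len(X_t), 1)
        for _ in range(epochs):
            fake = G(torch.randn(len(X_t), NOISE_DIM))
            # Discriminator step: push real toward 1, fake toward 0.
            opt_d.zero_grad()
            loss_d = bce(D(X_t), ones) + bce(D(fake.detach()), zeros)
            loss_d.backward()
            opt_d.step()
            # Generator step: try to make the discriminator call fakes real.
            opt_g.zero_grad()
            loss_g = bce(D(fake), ones)
            loss_g.backward()
            opt_g.step()
        return G

    def g_xgboost(X_real, y_real, n_pseudo_per_class=200):
        parts_X, parts_y = [X_real], [y_real]
        for cls in np.unique(y_real):
            G = train_gan(X_real[y_real == cls])         # one GAN per class
            z = torch.randn(n_pseudo_per_class, NOISE_DIM)
            parts_X.append(G(z).detach().numpy())
            parts_y.append(np.full(n_pseudo_per_class, cls))
        # Amplified sample set = real data + generated pseudo data.
        model = XGBClassifier(n_estimators=200, max_depth=4)
        return model.fit(np.vstack(parts_X), np.concatenate(parts_y))

    In practice the adversarial training would be run until the discriminator can no longer separate real from generated records (the Nash-equilibrium condition the abstract refers to), rather than for a fixed epoch count.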

    Machine learning for personal credit evaluation: A systematic review

    Information is a key asset for business growth and innovation in today's world. The problem that arises is a lack of understanding of knowledge quality properties, which leads to the development of inefficient knowledge-intensive systems; yet knowledge cannot be shared effectively without effective knowledge-intensive systems. Given this situation, the authors analyze the benefits and argue that machine learning can benefit knowledge management and that machine learning algorithms can further improve knowledge-intensive systems. The review also shows that machine learning is very helpful from a practical point of view: it not only improves knowledge-intensive systems but has powerful theoretical and practical implementations that can open up new areas of research. The objective is a comprehensive and systematic review of the literature published between 2018 and 2022; these studies were extracted from several critically important academic sources, with a total of 73 articles selected. The findings also open up possible research areas for machine learning in knowledge management to generate a competitive advantage in financial institutions.

    Towards algorithm auditing: managing legal, ethical and technological risks of AI, ML and associated algorithms

    Business reliance on algorithms is becoming ubiquitous, and companies are increasingly concerned about their algorithms causing major financial or reputational damage. High-profile cases include Google’s AI algorithm for photo classification mistakenly labelling a black couple as gorillas in 2015 (Gebru 2020 In The Oxford handbook of ethics of AI, pp. 251–269), Microsoft’s AI chatbot Tay that spread racist, sexist and antisemitic speech on Twitter (now X) (Wolf et al. 2017 ACM Sigcas Comput. Soc. 47, 54–64 (doi:10.1145/3144592.3144598)), and Amazon’s AI recruiting tool being scrapped after showing bias against women. In response, governments are legislating and imposing bans, regulators are fining companies, and the judiciary is discussing potentially making algorithms artificial ‘persons’ in law. As with financial audits, governments, business and society will require algorithm audits: formal assurance that algorithms are legal, ethical and safe. A new industry is envisaged: auditing and assurance of algorithms (cf. data privacy), with the remit to professionalize and industrialize AI, ML and associated algorithms. The stakeholders range from those working on policy/regulation to industry practitioners and developers. We also anticipate that the nature and scope of the auditing levels and framework presented will inform those interested in systems of governance and compliance with regulation/standards. Our goal in this article is to survey the key areas necessary to perform auditing and assurance and to instigate debate in this novel area of research and practice.

    Algorithm Auditing: Managing the Legal, Ethical, and Technological Risks of Artificial Intelligence, Machine Learning, and Associated Algorithms

    Algorithms are becoming ubiquitous. However, companies are increasingly alarmed about their algorithms causing major financial or reputational damage. A new industry is envisaged: auditing and assurance of algorithms, with the remit to validate artificial intelligence, machine learning, and associated algorithms.

    Cybersecurity Technologies for Protecting Social Medical Data in Public Healthcare Environments

    The growing digitization of healthcare systems has made safeguarding sensitive social medical data a crucial priority. The primary objective of this study is to utilize sophisticated cybersecurity technologies, particularly machine learning (ML) algorithms, to improve the security of Electronic Health Records (EHR) in public healthcare settings. The proposed approach presents an innovative technique that merges the advantages of the isolation forest and Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithms (IF-DBSCAN) for anomaly detection, achieving an accuracy rate of 0.968. The study examines the difficulties presented by the distinct characteristics of healthcare data, which include both medical and social information. The inadequacy of conventional security measures has necessitated the incorporation of sophisticated machine learning algorithms to detect abnormal patterns that may indicate potential security breaches. The hybrid model, which combines isolation forest and DBSCAN, seeks to overcome the constraints of current anomaly detection techniques by offering a resilient and precise solution specifically designed for the healthcare domain. The isolation forest is highly proficient at isolating anomalies by leveraging the inherent attributes of normal data, whereas DBSCAN is adept at detecting clusters and outliers within densely populated data regions. The integration of these two algorithms is anticipated to augment the overall anomaly detection capabilities, thereby strengthening the cybersecurity stance of healthcare systems. The proposed method is subjected to thorough evaluation using real-world datasets obtained from public healthcare environments. The accuracy rate of 0.968 demonstrates the effectiveness of the hybrid approach in accurately differentiating between normal and anomalous activities in EHR data. The research makes a valuable contribution to the field of cybersecurity in healthcare and also tackles the increasing concerns related to the privacy and reliability of social medical data. This research introduces an innovative method for protecting social medical data in public healthcare settings, utilizing a sophisticated combination of isolation forest and DBSCAN to detect anomalies. The method's high accuracy in the evaluation highlights its potential to greatly improve cybersecurity in healthcare systems, thereby guaranteeing the confidentiality and integrity of sensitive patient information. DOI: https://doi.org/10.52710/seejph.48
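
    A minimal sketch of the hybrid flagging idea in Python with scikit-learn, assuming a numeric feature matrix built from EHR access records; the union rule for combining the two detectors and every hyperparameter below are illustrative assumptions, since the abstract does not specify them.

    # Hybrid IF-DBSCAN anomaly flagging: Isolation Forest isolates global
    # outliers, DBSCAN marks points outside dense clusters as noise (-1).
    # The union rule and all hyperparameters are illustrative assumptions.
    from sklearn.preprocessing import StandardScaler
    from sklearn.ensemble import IsolationForest
    from sklearn.cluster import DBSCAN

    def hybrid_anomaly_flags(X, contamination=0.03, eps=0.8, min_samples=10):
        X_std = StandardScaler().fit_transform(X)
        # Isolation Forest: fit_predict returns -1 for anomalous records.
        if_labels = IsolationForest(contamination=contamination,
                                    random_state=0).fit_predict(X_std)
        # DBSCAN: label -1 marks noise points belonging to no dense cluster.
        db_labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X_std)
        # Flag a record if either detector considers it anomalous.
        return (if_labels == -1) | (db_labels == -1)

    An intersection rule (both detectors must agree) would trade recall for precision; which combination the authors use is not stated in the abstract, so the union above is only one plausible reading.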

    "General Conclusions: From Crisis to A Global Political Economy of Freedom"

    In this chapter I sum up the basic problems for a new theory of 21st-century financial crises in light of the Asian and other subsequent crises. My conclusion is that there are indeed deep structural causes at work in the global markets that affect the political economy of countries and regions. Methodologically, new concepts, models and theories are constructed, at least partially, to conduct further meaningful empirical work leading to relevant policy conclusions. This book belongs to the beginning of intellectual efforts in this direction. Political economic analyses at the country level, CGE modeling within a new theoretical framework, and a neural network approach to learning in a bounded rationality framework point to a role for reforms at the state, firm and regional level. A new type of institutional analysis, called the 'extended panda's thumb approach', leads to the recommendation that path-dependent hybrid structures need to be constructed at the local, national, regional and global level, leading to a new global financial architecture for the prevention and, if prevention fails, the management of financial crises.

    The Role of the Financial Services Authority in Handling Online Money Loan Offers by Information Technology-Based Joint Funding Services (LPBBTI)

    The Financial Services Authority is a financial services supervisory institution that has the authority to regulate, supervise, examine, and investigate financial services institutions, including online money loans offered by information technology-based joint funding services. The research problem is formulated as follows: (1) What is the role of the Financial Services Authority in handling online money loan offers by information technology-based joint funding services? (2) What efforts has the Financial Services Authority made in handling online money loan offers by information technology-based joint funding services? The research is normative legal research using the descriptive analysis method and draws on primary, secondary, and tertiary legal materials. The results of this study can be concluded as follows: (1) The role of the Financial Services Authority in handling online money loan offers by information technology-based joint funding services is to issue regulations in order to regulate and supervise. (2) The efforts made by the Financial Services Authority in handling such offers are conducting education, urging the public to always check the legality of online loans when receiving offers, issuing new services to improve consumer protection, namely Consumer Support Technology (CST) in the form of the Chatbot CST, and coordinating with 11 ministries to form the Investment Alert Task Force (SWI) to eradicate illegal online loans that trouble and harm the community.

    The Illusion of the Perpetual Money Machine

    We argue that the present crisis and the economic stalling continuing since 2007 are rooted in the delusionary belief in policies based on a "perpetual money machine" type of thinking. We document strong evidence that, since the early 1980s, consumption has been increasingly funded by smaller savings, booming financial profits, wealth extracted from house price appreciation and explosive debt. This is in stark contrast with the productivity-fueled growth of the 1950s and 1960s. This transition, starting in the early 1980s, was further supported by a climate of deregulation and a massive growth in financial derivatives designed to spread and diversify risks globally. The result has been a succession of bubbles and crashes, including the worldwide stock market bubble and great crash of October 1987, the savings and loan crisis of the 1980s, the burst in 1991 of the enormous Japanese real estate and stock market bubbles, the emerging-market bubbles and crashes in 1994 and 1997, the LTCM crisis of 1998, the dotcom bubble bursting in 2000, the recent house price bubbles, the financialization bubble via special investment vehicles, the stock market bubble, the commodity and oil bubbles and the debt bubbles, all developing jointly and feeding on each other. Rather than still hoping that real wealth will come out of money creation, we need fundamentally new ways of thinking. In uncertain times, it is essential, more than ever, to think in scenarios: what can happen in the future, and what would be the effect on your wealth and capital? How can you protect against adverse scenarios? We thus end by examining the question "what can we do?" from the macro level, discussing the fundamental issue of incentives and of constructing and predicting scenarios as well as developing investment insights. (Comment: 27 pages, 18 figures; Notenstein Academy White Paper Series.)

    Customer centricity and product innovation in banking

    The digital transformation of "classic" banks is impossible without a reassessment of their previous strategic priorities. To fit into the "new normal", they must adjust their business and operating models to dynamically changing customer needs and experiences, the most authoritative critical parameter for "manufacturing" and offering products/services. At the same time, however, product innovation in banking faces a number of obstacles.

    Customer retention

    A research report submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, in partial fulfillment of the requirements for the degree of Master of Science in Engineering, Johannesburg, May 2018. The aim of this study is to model the probability of a customer attriting/defecting from a bank where, for example, the bank is not their preferred/primary bank for salary deposits. The termination of deposit inflow serves as the outcome parameter, and the random forest modelling technique was used to predict the outcome, with new data sources (transactional data) explored to add predictive power. The conventional logistic regression modelling technique was used to benchmark the random forest's results. It was found that the random forest model slightly overfit during the training process and loses predictive power on validation and out-of-training-period data. The random forest model, however, remains predictive and performs better than logistic regression at a cut-off probability of 20%.
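
    A minimal sketch of the benchmarking setup the report describes, in Python with scikit-learn, assuming a feature matrix X derived from transactional data and a binary attrition label y; the split, the hyperparameters, and the min_samples_leaf regularization are illustrative assumptions, not the report's specification.

    # Benchmark sketch: random forest vs. logistic regression for customer
    # attrition, classifying at the 20% probability cut-off used in the study.
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    CUTOFF = 0.20  # classify as attrition if P(attrite) >= 20%

    def benchmark(X, y):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.3, stratify=y, random_state=0)
        models = {
            "random_forest": RandomForestClassifier(
                n_estimators=300, min_samples_leaf=50, random_state=0),
            "logistic_regression": LogisticRegression(max_iter=1000),
        }
        for name, model in models.items():
            model.fit(X_tr, y_tr)
            proba = model.predict_proba(X_te)[:, 1]   # P(customer attrites)
            flagged = (proba >= CUTOFF).astype(int)   # apply the 20% cut-off
            print(name, "AUC:", round(roc_auc_score(y_te, proba), 3),
                  "flagged:", int(flagged.sum()))

    Raising min_samples_leaf (or capping tree depth) is a conventional lever against the slight overfitting the report observed in the random forest during training.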