103,027 research outputs found

    A Practical Approach to Credit Scoring

    Get PDF
    This paper proposes a DEA-based approach to credit scoring. Compared with conventional models such as multiple discriminant analysis, logistic regression analysis, and neural networks for business failure prediction, which require extra a priori information, this new approach requires only ex-post information to calculate credit scores. For the empirical evidence, the methodology was applied to current financial data of 1,061 externally audited manufacturing firms comprising the credit portfolio of one of the largest credit guarantee organizations in Korea. Using financial ratios, the methodology could synthesize a firm’s overall performance into a single financial credibility score. The empirical results were also validated by supporting analyses (regression analysis and discriminant analysis) and by testing the model’s discriminatory power against actual bankruptcy cases of 103 firms. In addition, we propose a practical credit rating method using the predicted DEA scores.
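
    The paper does not include code; the sketch below shows the standard input-oriented CCR formulation of DEA on which such scoring rests, with hypothetical financial ratios standing in for the Korean portfolio data. Each firm's score is the optimum of a small linear program; a score of 1 marks a firm on the efficient frontier.

        # Minimal DEA (input-oriented CCR) sketch for credit scoring.
        # All data below are illustrative, not the paper's sample.
        import numpy as np
        from scipy.optimize import linprog

        def dea_score(inputs, outputs, firm):
            """Efficiency score in (0, 1] of `firm` against all firms."""
            n = inputs.shape[0]
            # Decision variables: [theta, lambda_1 .. lambda_n]; minimise theta.
            c = np.zeros(n + 1)
            c[0] = 1.0
            A_ub, b_ub = [], []
            for i in range(inputs.shape[1]):   # sum_j lam_j*x_ij <= theta*x_i,firm
                A_ub.append(np.concatenate(([-inputs[firm, i]], inputs[:, i])))
                b_ub.append(0.0)
            for r in range(outputs.shape[1]):  # sum_j lam_j*y_rj >= y_r,firm
                A_ub.append(np.concatenate(([0.0], -outputs[:, r])))
                b_ub.append(-outputs[firm, r])
            res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                          bounds=[(None, None)] + [(0, None)] * n, method="highs")
            return res.fun

        # Toy data: 5 firms, 2 input ratios (e.g. leverage, cost) and 1 output ratio.
        X = np.array([[0.4, 0.6], [0.5, 0.5], [0.7, 0.8], [0.3, 0.9], [0.6, 0.4]])
        Y = np.array([[1.0], [0.9], [0.7], [0.8], [1.1]])
        print([round(dea_score(X, Y, j), 3) for j in range(len(X))])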

    Operational multidimensional character of the credit-scoring models, applicable to the BSE listed companies, section of equipments

    Get PDF
    This paper focuses on the two-sided practical application of credit-scoring models: corporate default prediction and the ranking of corporate economic performance. Taking as its database the 9 most representative companies listed on the Bucharest Stock Exchange within the equipment field, the case study covers the elaboration, using Linear Discriminant Analysis, of a credit-scoring model specifically adapted to the afore-mentioned sample of firms, and the use of this model both as a corporate default predictor and as a tool for ranking economic performance. Keywords: default valuation, default predictors, credit-scoring model, cluster analysis.
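
    As a companion to the abstract, here is a minimal sketch of a two-group credit-scoring model fitted with Linear Discriminant Analysis and used in both of the ways the paper describes: default prediction and performance ranking. The ratios and default labels are hypothetical, not the authors' BSE sample.

        # LDA credit-scoring sketch; data are made up for illustration.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(0)
        X = rng.normal(size=(9, 3))                 # e.g. liquidity, leverage, ROA
        y = np.array([0, 0, 0, 1, 1, 1, 0, 1, 0])   # 1 = default, 0 = non-default

        lda = LinearDiscriminantAnalysis().fit(X, y)
        z = X @ lda.coef_.ravel() + lda.intercept_[0]  # discriminant Z-score per firm

        pred = lda.predict(X)     # use 1: default predictor (thresholded score)
        ranking = np.argsort(z)   # use 2: performance hierarchy (low z = safer here)
        print(pred, ranking)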

    Learning Fair Scoring Functions: Bipartite Ranking under ROC-based Fairness Constraints

    Get PDF
    Many applications of AI involve scoring individuals using a learned function of their attributes. These predictive risk scores are then used to take decisions based on whether the score exceeds a certain threshold, which may vary depending on the context. The level of delegation granted to such systems in critical applications like credit lending and medical diagnosis will heavily depend on how questions of fairness can be answered. In this paper, we study fairness for the problem of learning scoring functions from binary labeled data, a classic learning task known as bipartite ranking. We argue that the functional nature of the ROC curve, the gold standard measure of ranking accuracy in this context, leads to several ways of formulating fairness constraints. We introduce general families of fairness definitions based on the AUC and on ROC curves, and show that our ROC-based constraints can be instantiated such that classifiers obtained by thresholding the scoring function satisfy classification fairness for a desired range of thresholds. We establish generalization bounds for scoring functions learned under such constraints, design practical learning algorithms, and show the relevance of our approach with numerical experiments on real and synthetic data. Comment: 35 pages, 13 figures, 6 tables
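
    One concrete instance of the AUC-based fairness notions the paper formalises is AUC parity across a sensitive attribute: the scoring function should rank positives above negatives equally well within each group. The sketch below (synthetic data, illustrative only) measures that gap; a constrained learner would keep it below a tolerance while maximising overall ranking accuracy.

        # Within-group AUC gap as a simple ROC-based fairness diagnostic.
        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(1)
        n = 1000
        group = rng.integers(0, 2, size=n)   # binary sensitive attribute
        y = rng.integers(0, 2, size=n)       # binary label
        score = y + rng.normal(scale=1.0 + 0.5 * group, size=n)  # noisier in group 1

        auc0 = roc_auc_score(y[group == 0], score[group == 0])
        auc1 = roc_auc_score(y[group == 1], score[group == 1])
        print(f"AUC(g0)={auc0:.3f}  AUC(g1)={auc1:.3f}  gap={abs(auc0 - auc1):.3f}")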

    The Effects of Different Scoring Methodologies on Item and Test Characteristics of Technology-Enhanced Items

    Get PDF
    Technology-enhanced (TE) item types have recently gained attention from educational test developers as a way to test constructs with higher fidelity. However, most research has focused on developing new TE item types, and less on best practices for scoring these new item types. The purpose of this study was to analyze the effect of adjusting the scoring strategies of TE items on item and test characteristics. Descriptive statistics as well as tests of statistical significance are reported where appropriate. Additionally, figures representing the differences in test information and fit across forms were created to help show the consistency of scoring effects. Results were consistent with prior research into differences between dichotomous and polytomous scoring strategies, and indicate that the two best strategies for scoring TE items are partial-credit scoring and testlet response theory; the worst approach is to score them as correct-only. The results of this study add to the research literature, as well as provide a practical guide for test developers when deciding which scoring strategy to use with new TE item development.
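
    The gap between the best and worst strategies reported above can be seen in miniature below: correct-only (dichotomous) scoring collapses a multi-part TE item to 0/1, while partial-credit scoring preserves the gradations a polytomous IRT model can exploit. The response matrix is invented for illustration.

        # Correct-only vs. partial-credit scoring of a 4-part TE item.
        import numpy as np

        # Each row: one examinee's four scorable parts (1 = part correct).
        responses = np.array([[1, 1, 1, 1],
                              [1, 1, 1, 0],
                              [1, 0, 1, 0],
                              [0, 0, 0, 0]])

        correct_only = responses.all(axis=1).astype(int)  # 1 only if fully correct
        partial_credit = responses.sum(axis=1)            # 0..4 points
        print(correct_only)    # [1 0 0 0] -- discards partial knowledge
        print(partial_credit)  # [4 3 2 0] -- retains it for polytomous models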

    Testing the use of grammar: Beyond grammatical accuracy

    Get PDF
    Open access to this Ɓódź University Press publication was financed under the project „DoskonaƂoƛć naukowa kluczem do doskonaƂoƛci ksztaƂcenia” ("Scientific excellence as the key to excellence in education"). The project is carried out with funds from the European Social Fund under the Operational Programme Knowledge Education Development; agreement no. POWER.03.05.00-00-Z092/17-00.

    Comment: Classifier Technology and the Illusion of Progress--Credit Scoring

    Full text link
    Comment on "Classifier Technology and the Illusion of Progress--Credit Scoring" [math.ST/0606441]. Comment: Published at http://dx.doi.org/10.1214/088342306000000051 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).

    Human Computation and Economics

    Get PDF
    This article is devoted to the economic aspects of Human Computation (HC) and to the prospects for HC in economics. Regarding the economic aspects of HC, it is first observed that much of what makes HC systems effective is economic in nature, suggesting that complexity be reconsidered as an “HC complexity” and the design of efficient HC systems as an “HC economics”. The article also points to the relevance of HC to the development of standard software and to the importance of competition in HC systems. Regarding HC in economics, it is first argued that markets can be seen as HC systems avant la lettre. Looking more closely at financial markets, the article then points to a speed differential between transactions and credit risk awareness that compromises the efficiency of financial markets. Finally, an HC-based credit risk rating is proposed that, by overcoming the aforementioned speed differential, holds promise for better-functioning financial markets.

    The Intuitive Appeal of Explainable Machines

    Get PDF
    Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties. Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible. In most cases, intuition serves as the unacknowledged bridge between a descriptive account and a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model’s development, not just explanations of the model itself.

    Forecasting creditworthiness in retail banking: a comparison of cascade correlation neural networks, CART and logistic regression scoring models

    Get PDF
    Research on modelling credit scoring systems, including their relevance to forecasting and decision making in the financial sector, has concentrated on developed countries, whilst developing countries have been largely neglected. The focus of our investigation is the Cameroonian commercial banking sector, with implications for fellow members of the Banque des Etats de L’Afrique Centrale (BEAC) family, which apply the same system. We investigate the approaches currently used to assess personal loans and construct appropriate scoring models. Three statistical scoring techniques are applied, namely Logistic Regression (LR), Classification and Regression Trees (CART) and Cascade Correlation Neural Networks (CCNN). To compare the performance of the various scoring models we use Average Correct Classification (ACC) rates, error rates, the ROC curve and the GINI coefficient as evaluation criteria. The results demonstrate that the rate of default cases can be reduced from 15.69% under the current system to 3.34% under the best scoring model, namely CART. The predictive capabilities of all three models are rated at least very good using the GINI coefficient, and excellent using the ROC curve for both CART and CCNN. It should be emphasised that in terms of prediction rate, CCNN is superior to the other techniques investigated in this paper. A sensitivity analysis of the variables identifies the borrower’s account functioning, previous occupation, guarantees, car ownership, and loan purpose as key variables in the forecasting and decision-making process that lies at the heart of overall credit policy.
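
    For readers unfamiliar with the evaluation criteria named above, the sketch below applies them to one of the three techniques, a logistic-regression scorer, on synthetic loan data (a stand-in for the Cameroonian portfolio); the GINI coefficient is recovered from the AUC as GINI = 2*AUC - 1.

        # ACC rate, ROC AUC and GINI for a logistic-regression scoring model.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import accuracy_score, roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(2)
        X = rng.normal(size=(500, 5))   # e.g. account activity, guarantees, ...
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = LogisticRegression().fit(X_tr, y_tr)

        acc = accuracy_score(y_te, model.predict(X_te))            # ACC rate
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]) # ROC AUC
        print(f"ACC={acc:.3f}  AUC={auc:.3f}  GINI={2 * auc - 1:.3f}")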