
    Would credit scoring work for Islamic finance? A neural network approach

    Purpose – The main aim of this paper is to determine whether the decision-making process of Islamic finance houses in the UK can be improved through the use of credit scoring modeling techniques as opposed to the currently used judgmental approaches. Subsidiary aims are to identify how scoring models can reclassify accepted applicants who are later considered to have bad credit and how many of the rejected applicants are later considered to have good credit, and to highlight the significant variables that are crucial in accepting and rejecting applicants, which can further aid the decision-making process. Design/methodology/approach – A real data-set of 487 applicants is used, consisting of 336 accepted and 151 rejected credit applications made to an Islamic finance house in the UK. In order to build the proposed scoring models, the data-set is divided into training and hold-out sub-sets: 70 percent of the applicants form the training sub-set, used to build the scoring models, and 30 percent form the hold-out sub-set, used to test the models' predictive capabilities. Three statistical modeling techniques, namely Discriminant Analysis (DA), Logistic Regression (LR) and Multi-layer Perceptron (MP) neural networks, are used to build the proposed scoring models. Findings – Our findings reveal that the LR model has the highest Correct Classification (CC) rate on the training sub-set, whereas MP outperforms the other techniques and has the highest CC rate on the hold-out sub-set. MP also outperforms the other techniques in predicting the rejected credit applications and has the lowest Misclassification Cost (MC). In addition, results from the MP models show that monthly expenses, age and marital status are the key factors affecting the decision-making process. Research limitations/implications – Although our sample is small and restricted to one Islamic finance house in the UK, the results are robust. Future research could enlarge the sample in the UK and also internationally, allowing cultural differences to be identified. The results indicate that scoring models can be of great benefit to Islamic finance houses in their decision-making processes for accepting and rejecting new credit applications, and can thus improve their efficiency and effectiveness. Originality/value – Our contribution is the first to apply credit scoring modeling techniques in Islamic finance. In building a scoring model, our application also takes a different approach by using accepted and rejected credit applications instead of good and bad credit histories; this identifies the opportunity costs of misclassifying credit applications as rejected.
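    As a rough illustration of the kind of pipeline described above, the sketch below fits LR and MP (multi-layer perceptron) scoring models on a 70/30 training/hold-out split and compares their correct classification rates. The file name, column names, and use of scikit-learn are illustrative assumptions, not the authors' implementation.

    ```python
    # Illustrative sketch only: the file name, column names, and scikit-learn
    # estimators are assumptions, not the paper's actual data or models.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Hypothetical applicant data: accepted = 1, rejected = 0.
    df = pd.read_csv("applicants.csv")
    X = df[["monthly_expenses", "age", "marital_status"]]  # assumed predictors, already numeric
    y = df["accepted"]

    # 70 percent training / 30 percent hold-out split, as in the paper.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.30, random_state=42, stratify=y)

    models = {
        "LR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
        "MP": make_pipeline(StandardScaler(),
                            MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                                          random_state=42)),
    }

    for name, model in models.items():
        model.fit(X_train, y_train)
        cc_train = accuracy_score(y_train, model.predict(X_train))
        cc_holdout = accuracy_score(y_test, model.predict(X_test))
        print(f"{name}: CC rate train = {cc_train:.3f}, hold-out = {cc_holdout:.3f}")
    ```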

    Feature selection in a credit scoring model

    This article belongs to the Special Issue "Mathematics and Mathematical Physics Applied to Financial Markets". The paper proposes different classification algorithms (logistic regression, support vector machine, K-nearest neighbors, and random forest) in order to identify which candidates are likely to default in a credit scoring model. Three different feature selection methods are used in order to mitigate the overfitting caused by the curse of dimensionality in these classification algorithms: one filter method (Chi-squared tests and correlation coefficients) and two wrapper methods (forward stepwise selection and backward stepwise selection). The performance of these three methods is discussed using two measures, the mean absolute error and the number of selected features. The methodology is applied to a valuable credit database from Taiwan. The results suggest that forward stepwise selection yields superior performance for each of the classification algorithms used. The conclusions obtained are related to those in the literature, and their managerial implications are analyzed.
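    As a hedged sketch of one of the wrapper methods discussed above, forward stepwise selection, the snippet below wraps scikit-learn's SequentialFeatureSelector around a logistic regression on synthetic data; the dataset and parameter choices are illustrative assumptions, not the paper's setup.

    ```python
    # Illustrative sketch of forward stepwise (sequential) feature selection.
    # The synthetic data and parameter choices are assumptions, not the paper's setup.
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Synthetic credit-like data: 30 features, only 5 of which are informative.
    X, y = make_classification(n_samples=2000, n_features=30, n_informative=5,
                               random_state=0)

    base = LogisticRegression(max_iter=1000)

    # Greedily add one feature at a time, keeping the set that maximizes
    # cross-validated performance of the wrapped classifier.
    selector = SequentialFeatureSelector(base, direction="forward",
                                         n_features_to_select=5, cv=5)
    selector.fit(X, y)

    selected = selector.get_support(indices=True)
    score = cross_val_score(base, X[:, selected], y, cv=5).mean()
    print("Selected feature indices:", selected)
    print(f"Cross-validated accuracy with selected features: {score:.3f}")
    ```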

    An insight into the experimental design for credit risk and corporate bankruptcy prediction systems

    In recent years, the finance and business communities have shown an increasing interest in any application tool related to the prediction of credit and bankruptcy risk, probably due to the need for more robust decision-making systems capable of managing and analyzing complex data. As a result, plentiful techniques have been developed with the aim of producing accurate prediction models able to tackle these issues. However, the design of experiments to assess and compare these models has attracted little attention so far, even though it plays an important role in validating and supporting the theoretical evidence of performance. The experimental design should be done carefully for the results to hold significance; otherwise, it might be a potential source of misleading and contradictory conclusions about the benefits of using a particular prediction system. In this work, we review more than 140 papers published in refereed journals within the period 2000–2013, putting the emphasis on the bases of the experimental design in credit scoring and bankruptcy prediction applications. We provide some caveats and guidelines for the usage of databases, data splitting methods, performance evaluation metrics and hypothesis testing procedures in order to converge on a systematic, consistent validation standard. This work has been partially supported by the Mexican Science and Technology Council (CONACYT-Mexico) through a Postdoctoral Fellowship [223351], the Spanish Ministry of Economy under grant TIN2013-46522-P and the Generalitat Valenciana under grant PROMETEOII/2014/062.
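    As a hedged sketch of the kind of validation protocol the review advocates (an explicit data splitting scheme, a stated evaluation metric, and a hypothesis test when comparing prediction systems), the snippet below compares two classifiers under stratified 10-fold cross-validation and a Wilcoxon signed-rank test; the synthetic data and the particular test are illustrative assumptions.

    ```python
    # Illustrative validation protocol: stratified k-fold CV plus a paired
    # significance test. Dataset and test choice are assumptions for the sketch.
    from scipy.stats import wilcoxon
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    X, y = make_classification(n_samples=1500, n_features=20, random_state=1)

    # The same folds are used for both models so the per-fold scores are paired.
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
    scores_lr = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                                cv=cv, scoring="roc_auc")
    scores_rf = cross_val_score(RandomForestClassifier(random_state=1), X, y,
                                cv=cv, scoring="roc_auc")

    # Paired non-parametric test over the per-fold AUC scores.
    stat, p_value = wilcoxon(scores_lr, scores_rf)
    print(f"LR AUC: {scores_lr.mean():.3f} +/- {scores_lr.std():.3f}")
    print(f"RF AUC: {scores_rf.mean():.3f} +/- {scores_rf.std():.3f}")
    print(f"Wilcoxon signed-rank p-value: {p_value:.4f}")
    ```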

    Data Mining of Smart WiFi Thermostats to Develop Multiple Zonal Dynamic Energy and Comfort Models of a Residential Building

    Smart WiFi thermostats have gained an increasing foothold in the residential building market. The data emerging from these thermostats is transmitted to the cloud, and companies like Nest and Emerson Climate Technologies are attempting to use this data to add value for their customers. This overarching theme establishes the foundation for this research, which seeks to utilize WiFi thermostat data from the Emerson Climate Technologies Helix test house to develop a dynamic model that predicts real-time cooling demand and then to apply this model to running ‘what-if’ thermostat scheduling scenarios, with the ultimate goal of reducing energy use in the residence or responding to high-demand events. The Helix residence, with two thermostat-controlled zones, one for each floor, sits in a temperature- and humidity-controlled external environment that can be controlled to simulate conditions ranging from the hottest to the coldest climates. A Design of Experiments approach was used to establish the data needed for the model; the control variables in the experiments included levels for the exterior environmental schedule and levels for the interior setpoint schedules for both zones. Simply put, this enabled data collection for constant or cyclical exterior environmental conditions and for constant or scheduled interior setpoint conditions, not necessarily the same for each floor. From this data, a regression tree approach (Random Forest) was used to develop models to predict the room temperature as measured by each thermostat, as well as the cooling status for each zone. The models developed achieved R2 values greater than 0.95 when applied to validation data (i.e., data not employed in training the model). The models were then utilized for various ‘what-if’ scenarios, of which two were considered. The first looked at the possibility of using the model to estimate comfort during a demand response event, e.g., when the grid manager calls for demand reduction and the heat pump providing cooling is powered off for some time. The second scenario sought to quantify the cooling savings from using a higher thermostat setpoint during simulated non-occupied periods and for different exterior temperature schedules. The ‘what-if’ predictions are validated with experimental data, thus demonstrating the value of data-driven dynamic models developed solely from smart WiFi thermostat information.
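    As a rough illustration of the modeling step, the sketch below trains a random forest to predict the next room-temperature reading of one zone from thermostat and outdoor signals; the file name, column names, and one-step lag structure are assumptions, not the actual Helix data pipeline.

    ```python
    # Illustrative sketch: a random forest dynamic model of zone temperature.
    # Column names, file name, and the one-step lag structure are assumptions.
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import r2_score

    df = pd.read_csv("thermostat_log.csv")          # hypothetical time-series log
    df["room_temp_next"] = df["room_temp"].shift(-1)  # target: next reading
    df = df.dropna()

    features = ["room_temp", "setpoint", "outdoor_temp", "outdoor_humidity",
                "cooling_on"]                        # assumed available signals
    X, y = df[features], df["room_temp_next"]

    # Chronological split: train on the first 70% of the record, validate on the rest.
    split = int(0.7 * len(df))
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X.iloc[:split], y.iloc[:split])

    pred = model.predict(X.iloc[split:])
    print(f"Validation R^2: {r2_score(y.iloc[split:], pred):.3f}")
    ```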

    Modeling the Example Life-Cycle in an Online Classification Learner

    An online classification system maintained by a learner can be subject to latency and filtering of training examples, which can impact its classification accuracy, especially under concept drift. A life-cycle model is developed to provide a framework for studying this problem. Meta-data emerges from this model which, it is proposed, can enhance online learning systems. In particular, the definition of the time-stamp of an example, as currently used in the literature, is shown to be problematic and an alternative is proposed.

    A NOVEL SPLIT SELECTION OF A LOGISTIC REGRESSION TREE FOR THE CLASSIFICATION OF DATA WITH HETEROGENEOUS SUBGROUPS

    A logistic regression tree (LRT) is a hybrid machine learning method that combines a decision tree model and logistic regression models. An LRT recursively partitions the input data space through splitting and learns multiple logistic regression models, each optimized for its subpopulation. The split selection is a critical procedure for improving the predictive performance of the LRT. In this paper, we present a novel separability-based split selection method for the construction of an LRT. The separability measure, defined on the feature space of logistic regression models, evaluates the performance of potential child models without fitting them, and the optimal split is selected based on the results. Heterogeneous subgroups that have different class-separating patterns can be identified in the split process when they exist in the data. In addition, we compare the performance of our proposed method with benchmark algorithms through experiments on both synthetic and real-world datasets. The experimental results indicate the effectiveness and generality of our proposed method.
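    As a very rough sketch of the logistic regression tree idea only (not the paper's separability-based split selection, which scores candidate splits without fitting the child models), the snippet below recursively splits the data at feature medians and keeps the split whose fitted child logistic models give the lowest combined log-loss; every detail here is an illustrative assumption.

    ```python
    # Minimal logistic regression tree sketch. The split criterion here (fit the
    # child models and compare total log-loss) is a simplification, NOT the
    # paper's separability measure, which avoids fitting candidate child models.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import log_loss


    def fit_lrt(X, y, depth=0, max_depth=2, min_leaf=50):
        """Recursively partition the data and fit a logistic model per region."""
        node = {"model": LogisticRegression(max_iter=1000).fit(X, y), "split": None}
        if depth >= max_depth or len(y) < 2 * min_leaf or len(np.unique(y)) < 2:
            return node

        best_loss, best_split = np.inf, None
        for j in range(X.shape[1]):                 # candidate split: median of feature j
            thr = np.median(X[:, j])
            left, right = X[:, j] <= thr, X[:, j] > thr
            if left.sum() < min_leaf or right.sum() < min_leaf:
                continue
            if len(np.unique(y[left])) < 2 or len(np.unique(y[right])) < 2:
                continue
            loss = 0.0
            for mask in (left, right):              # total log-loss of fitted children
                m = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])
                loss += log_loss(y[mask], m.predict_proba(X[mask])[:, 1]) * mask.sum()
            if loss < best_loss:
                best_loss, best_split = loss, (j, thr)

        if best_split is not None:
            j, thr = best_split
            node["split"] = (j, thr)
            node["left"] = fit_lrt(X[X[:, j] <= thr], y[X[:, j] <= thr],
                                   depth + 1, max_depth, min_leaf)
            node["right"] = fit_lrt(X[X[:, j] > thr], y[X[:, j] > thr],
                                    depth + 1, max_depth, min_leaf)
        return node


    def predict_proba(node, x):
        """Route a single sample to its leaf and use that leaf's logistic model."""
        while node["split"] is not None:
            j, thr = node["split"]
            node = node["left"] if x[j] <= thr else node["right"]
        return node["model"].predict_proba(x.reshape(1, -1))[0, 1]
    ```

    A tree fitted with fit_lrt on NumPy arrays of binary-labeled data can then score new samples with predict_proba, using one leaf-level logistic model per region of the input space.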

    Towards Learning Representations in Visual Computing Tasks

    The performance of most visual computing tasks depends on the quality of the features extracted from the raw data. An insightful feature representation increases the performance of many learning algorithms by exposing the underlying explanatory factors of the output for the unobserved input. A good representation should also handle anomalies in the data, such as missing samples and noisy input caused by undesired, external factors of variation, and it should reduce data redundancy. Over the years, many feature extraction processes have been invented to produce good representations of raw images and videos. These processes can be categorized into three groups. The first group contains processes that are hand-crafted for a specific task; hand-engineering features requires the knowledge of domain experts and manual labor, but the feature extraction process is interpretable and explainable. The next group contains the latent-feature extraction processes. While the original features lie in a high-dimensional space, the relevant factors for a task often lie on a lower-dimensional manifold. Latent-feature extraction employs hidden variables to expose the underlying data properties that cannot be directly measured from the input, and latent features impose a specific structure, such as sparsity or low rank, on the derived representation through sophisticated optimization techniques. The last category is that of deep features, obtained by passing raw input data with minimal pre-processing through a deep network whose parameters are computed by iteratively minimizing a task-based loss. In this dissertation, I present four pieces of work where I create and learn suitable data representations. The first task employs hand-crafted features to perform clinically relevant retrieval of diabetic retinopathy images. The second task uses latent features to perform content-adaptive image enhancement. The third task ranks a pair of images based on their aesthetic quality. The goal of the last task is to capture localized image artifacts in small datasets with patch-level labels. For both of these tasks, I propose novel deep architectures and show significant improvement over the previous state-of-the-art approaches. A suitable combination of feature representations augmented with an appropriate learning approach can increase performance for most visual computing tasks.
    Dissertation/Thesis. Doctoral Dissertation, Computer Science, 201